diff --git a/admin/jenkins/ci_build_mac.scala b/admin/jenkins/ci_build_mac.scala
--- a/admin/jenkins/ci_build_mac.scala
+++ b/admin/jenkins/ci_build_mac.scala
@@ -1,24 +1,24 @@
object profile extends isabelle.CI_Profile
{
import isabelle._
val afp = Path.explode("$ISABELLE_HOME/afp")
val afp_thys = afp + Path.explode("thys")
- override def threads = 4
- override def jobs = 2
+ override def threads = 8
+ override def jobs = 1
def include = List(afp_thys)
def select = Nil
def pre_hook(args: List[String]) =
println(s"Build for AFP id ${hg_id(afp)}")
def post_hook(results: Build.Results) = {}
def selection = Sessions.Selection(
all_sessions = true,
exclude_sessions = List("HOL-Proofs", "HOL-ODE-Numerics", "Linear_Programming"),
exclude_session_groups = List("slow"))
}
diff --git a/doc/editors/new-entry-checkin.md b/doc/editors/new-entry-checkin.md
--- a/doc/editors/new-entry-checkin.md
+++ b/doc/editors/new-entry-checkin.md
@@ -1,95 +1,95 @@
New Submissions (editors only)
------------------------------
**Mercurial Setup**
As an editor you have at least two working copies of the repository: the
current release branch and the development version.
- Start by making a directory `~/afp` where the different branches
will go.
- To set up the release version, in that directory do (fill in 20XX)
hg clone ssh://hg@bitbucket.org/isa-afp/afp-20XX release
- for development
hg clone ssh://hg@bitbucket.org/isa-afp/afp-devel devel
You might need to set up ssh keys on Bitbucket for this to work. This can
be done under "Manage account/SSH Keys".
New submissions, changes to the web site and to admin scripts go into
afp/release. Gerwin merges these into the development version within a
day or two.
Maintenance and changes to existing submissions all occur in afp/devel
and only become properly public with the next Isabelle release (until
then they are available only as public development tar.gz's).
**New Submissions**
Everything happens in the release branch `afp/release`.
1. unpack tar file and move new entry to `afp/release/thys`
2. make sure there is a `thys/entryname/ROOT` file and add `entryname`
   to `thys/ROOTS`. For the former see the template in
   `thys/Example-Submission/ROOT` (a minimal sketch also follows after
   this list). In particular the entry should be in chapter AFP, and group `(AFP)`, i.e.
chapter AFP
session foo (AFP) = bar +
3. to check, run in `afp/release/thys`
../admin/testall -c <name>
(be sure to have `ISABELLE_RELEASES` set to the path where Isabelle
releases are kept, e.g. `/home/proj/isabelle/`)
4. check license headers: if the authors want the code released under
LGPL instead of BSD, each file should mention "License: LGPL" in the
header.
5. `hg add` and `hg commit` the new submission
6. Enter data for author/abstract/index/etc in the file
`metadata/metadata`. Make sure that your editor uses UTF-8 encoding
for this file to preserve special characters. If the entry uses a
new topic or category, add it to metadata/topics (make sure there is
an empty line at the end of the file).
7. Generate the new web site by running `../admin/sitegen`.
8. Use `hg st` and `hg diff` to make sure the generated html makes
sense. The diff should be small and concern the new entry only.
9. `hg add` and `hg commit` the web site updates.
10. finally, when you are happy with everything, `hg push` all changes
to Bitbucket. The publish script will refuse to publish if the
changes aren't pushed.
11. to publish the changes to the web, run
../admin/publish <name>
- This will check out the Isabelle201X (=release) version of the
+ This will check out the Isabelle202X (=release) version of the
archive from Bitbucket, will run the session `name` to generate
HTML, produce a `.tar.gz` for the archive and for the entry, and
will update the web pages on the server. The script will ask before
it copies, so you can check locally if everything is as you want it.
12. That's it. Changes should show up at <http://isa-afp.org>
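
For orientation, a minimal `thys/entryname/ROOT` in the spirit of the
`Example-Submission` template might look roughly as follows (the session
name, parent session, theory and document file names are purely
illustrative; the actual template in `thys/Example-Submission/ROOT` is
authoritative):

    chapter AFP

    session Entryname (AFP) = HOL +
      options [timeout = 600]
      theories
        Entryname
      document_files
        "root.tex"
        "root.bib"

The essential parts from step 2 are the `chapter AFP` line and the
`(AFP)` session group; the parent session, options and theory list vary
per entry.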
**New submission in devel**
Although it is a condition of submission that the entry works with the
current stable Isabelle version, occasionally it happens that a
submission does not work with the stable version and cannot be
backported, but is important/good enough to include anyway. In this
case, we can't release the submission on the main web site yet, but we
can add it to the development version of the archive, such that it is at
least available to those who are working with the current Isabelle
development snapshot.
The check-in procedure is the same as for a normal release entry, apart
from the fact that everything happens in the devel directory instead of
the release directory and that the last step (publish) is omitted.
The authors of the entry should be notified that the entry will only
show up on the front page when the next Isabelle version is released.
diff --git a/metadata/metadata b/metadata/metadata
--- a/metadata/metadata
+++ b/metadata/metadata
@@ -1,9109 +1,9177 @@
[Arith_Prog_Rel_Primes]
title = Arithmetic progressions and relative primes
author = José Manuel Rodríguez Caballero <https://josephcmac.github.io/>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2020-02-01
notify = jose.manuel.rodriguez.caballero@ut.ee
abstract =
This article provides a formalization of the solution obtained by the
author of the Problem “ARITHMETIC PROGRESSIONS” from the
<a href="https://www.ocf.berkeley.edu/~wwu/riddles/putnam.shtml">
Putnam exam problems of 2002</a>. The statement of the problem is
as follows: For which integers <em>n</em> > 1 does the set of positive
integers less than and relatively prime to <em>n</em> constitute an
arithmetic progression?
[Complex_Geometry]
title = Complex Geometry
author = Filip Marić <http://www.matf.bg.ac.rs/~filip>, Danijela Simić <http://poincare.matf.bg.ac.rs/~danijela>
topic = Mathematics/Geometry
date = 2019-12-16
notify = danijela@matf.bg.ac.rs, filip@matf.bg.ac.rs, boutry@unistra.fr
abstract =
A formalization of geometry of complex numbers is presented.
Fundamental objects that are investigated are the complex plane
extended by a single infinite point, its objects (points, lines and
circles), and groups of transformations that act on them (e.g.,
inversions and Möbius transformations). Most objects are defined
algebraically, but correspondence with classical geometric definitions
is shown.
[Poincare_Disc]
title = Poincaré Disc Model
author = Danijela Simić <http://poincare.matf.bg.ac.rs/~danijela>, Filip Marić <http://www.matf.bg.ac.rs/~filip>, Pierre Boutry <mailto:boutry@unistra.fr>
topic = Mathematics/Geometry
date = 2019-12-16
notify = danijela@matf.bg.ac.rs, filip@matf.bg.ac.rs, boutry@unistra.fr
abstract =
We describe formalization of the Poincaré disc model of hyperbolic
geometry within the Isabelle/HOL proof assistant. The model is defined
within the extended complex plane (one-dimensional complex projective
space &#8450;P1), formalized in the AFP entry “Complex Geometry”.
Points, lines, congruence of pairs of points, betweenness of triples
of points, circles, and isometries are defined within the model. It is
shown that the model satisfies all of Tarski's axioms except
Euclid's axiom. It is shown that it satisfies its negation and
the limiting parallels axiom (which proves it to be a model of
hyperbolic geometry).
[Fourier]
title = Fourier Series
author = Lawrence C Paulson <https://www.cl.cam.ac.uk/~lp15/>
topic = Mathematics/Analysis
date = 2019-09-06
notify = lp15@cam.ac.uk
abstract =
This development formalises the square integrable functions over the
reals and the basics of Fourier series. It culminates with a proof
that every well-behaved periodic function can be approximated by a
Fourier series. The material is ported from HOL Light:
https://github.com/jrh13/hol-light/blob/master/100/fourier.ml
[Generic_Deriving]
title = Deriving generic class instances for datatypes
author = Jonas Rädle <mailto:jonas.raedle@gmail.com>, Lars Hupel <https://www21.in.tum.de/~hupel/>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2018-11-06
notify = jonas.raedle@gmail.com
abstract =
<p>We provide a framework for automatically deriving instances for
generic type classes. Our approach is inspired by Haskell's
<i>generic-deriving</i> package and Scala's
<i>shapeless</i> library. In addition to generating the
code for type class functions, we also attempt to automatically prove
type class laws for these instances. As of now, however, some manual
proofs are still required for recursive datatypes.</p>
<p>Note: There are already articles in the AFP that provide
automatic instantiation for a number of classes. Concretely, <a href="https://www.isa-afp.org/entries/Deriving.html">Deriving</a> allows the automatic instantiation of comparators, linear orders, equality, and hashing. <a href="https://www.isa-afp.org/entries/Show.html">Show</a> instantiates a Haskell-style <i>show</i> class.</p><p>Our approach works for arbitrary classes (with some Isabelle/HOL overhead for each class), but a smaller set of datatypes.</p>
[Partial_Order_Reduction]
title = Partial Order Reduction
author = Julian Brunner <http://www21.in.tum.de/~brunnerj/>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2018-06-05
notify = brunnerj@in.tum.de
abstract =
This entry provides a formalization of the abstract theory of ample
set partial order reduction. The formalization includes transition
systems with actions, trace theory, as well as basics on finite,
infinite, and lazy sequences. We also provide a basic framework for
static analysis on concurrent systems with respect to the ample set
condition.
[CakeML]
title = CakeML
author = Lars Hupel <https://www21.in.tum.de/~hupel/>, Yu Zhang <>
contributors = Johannes Åman Pohjola <>
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
date = 2018-03-12
notify = hupel@in.tum.de
abstract =
CakeML is a functional programming language with a proven-correct
compiler and runtime system. This entry contains an unofficial version
of the CakeML semantics that has been exported from the Lem
specifications to Isabelle. Additionally, there are some hand-written
theory files that adapt the exported code to Isabelle and port proofs
from the HOL4 formalization, e.g. termination and equivalence proofs.
[CakeML_Codegen]
title = A Verified Code Generator from Isabelle/HOL to CakeML
author = Lars Hupel <https://lars.hupel.info/>
-topic = Computer Science/Programming Languages/Compiling, Logic/Rewriting
+topic = Computer science/Programming languages/Compiling, Logic/Rewriting
date = 2019-07-08
notify = lars@hupel.info
abstract =
This entry contains the formalization that accompanies my PhD thesis
(see https://lars.hupel.info/research/codegen/). I develop a verified
compilation toolchain from executable specifications in Isabelle/HOL
to CakeML abstract syntax trees. This improves over the
state-of-the-art in Isabelle by providing a trustworthy procedure for
code generation.
[DiscretePricing]
title = Pricing in discrete financial models
author = Mnacho Echenim <http://lig-membres.imag.fr/mechenim/>
-topic = Mathematics/Probability Theory, Mathematics/Games and Economics
+topic = Mathematics/Probability theory, Mathematics/Games and economics
date = 2018-07-16
notify = mnacho.echenim@univ-grenoble-alpes.fr
abstract =
We have formalized the computation of fair prices for derivative
products in discrete financial models. As an application, we derive a
way to compute fair prices of derivative products in the
Cox-Ross-Rubinstein model of a financial market, thus completing the
work that was presented in this <a
href="https://hal.archives-ouvertes.fr/hal-01562944">paper</a>.
extra-history =
Change history:
[2019-05-12]:
Renamed discr_mkt predicate to stk_strict_subs and got rid of predicate A for a more natural definition of the type discrete_market;
renamed basic quantity processes for coherent notation;
renamed value_process into val_process and closing_value_process to cls_val_process;
relaxed hypothesis of lemma CRR_market_fair_price.
Added functions to price some basic options.
(revision 0b813a1a833f)<br>
[Pell]
title = Pell's Equation
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2018-06-23
notify = eberlm@in.tum.de
abstract =
<p> This article gives the basic theory of Pell's equation
<em>x</em><sup>2</sup> = 1 +
<em>D</em>&thinsp;<em>y</em><sup>2</sup>,
where
<em>D</em>&thinsp;&isin;&thinsp;&#8469; is
a parameter and <em>x</em>, <em>y</em> are
integer variables. </p> <p> The main result that is proven
is the following: If <em>D</em> is not a perfect square,
then there exists a <em>fundamental solution</em>
(<em>x</em><sub>0</sub>,
<em>y</em><sub>0</sub>) that is not the
trivial solution (1, 0) and which generates all other solutions
(<em>x</em>, <em>y</em>) in the sense that
there exists some
<em>n</em>&thinsp;&isin;&thinsp;&#8469;
such that |<em>x</em>| +
|<em>y</em>|&thinsp;&radic;<span
style="text-decoration:
overline"><em>D</em></span> =
(<em>x</em><sub>0</sub> +
<em>y</em><sub>0</sub>&thinsp;&radic;<span
style="text-decoration:
overline"><em>D</em></span>)<sup><em>n</em></sup>.
This also implies that the set of solutions is infinite, and it gives
us an explicit and executable characterisation of all the solutions.
</p> <p> Based on this, simple executable algorithms for
computing the fundamental solution and the infinite sequence of all
non-negative solutions are also provided. </p>
[WebAssembly]
title = WebAssembly
author = Conrad Watt <http://www.cl.cam.ac.uk/~caw77/>
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
date = 2018-04-29
notify = caw77@cam.ac.uk
abstract =
This is a mechanised specification of the WebAssembly language, drawn
mainly from the previously published paper formalisation of Haas et
al. Also included is a full proof of soundness of the type system,
together with a verified type checker and interpreter. We include only
a partial procedure for the extraction of the type checker and
interpreter here. For more details, please see our paper in CPP 2018.
[Knuth_Morris_Pratt]
title = The string search algorithm by Knuth, Morris and Pratt
author = Fabian Hellauer <mailto:hellauer@in.tum.de>, Peter Lammich <http://www21.in.tum.de/~lammich>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2017-12-18
notify = hellauer@in.tum.de, lammich@in.tum.de
abstract =
The Knuth-Morris-Pratt algorithm is often used to show that the
problem of finding a string <i>s</i> in a text
<i>t</i> can be solved deterministically in
<i>O(|s| + |t|)</i> time. We use the Isabelle
Refinement Framework to formulate and verify the algorithm. Via
refinement, we apply some optimisations and finally use the
<em>Sepref</em> tool to obtain executable code in
<em>Imperative/HOL</em>.
[Minkowskis_Theorem]
title = Minkowski's Theorem
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Geometry, Mathematics/Number Theory
+topic = Mathematics/Geometry, Mathematics/Number theory
date = 2017-07-13
notify = eberlm@in.tum.de
abstract =
<p>Minkowski's theorem relates a subset of
&#8477;<sup>n</sup>, the Lebesgue measure, and the
integer lattice &#8484;<sup>n</sup>: It states that
any convex subset of &#8477;<sup>n</sup> with volume
greater than 2<sup>n</sup> contains at least one lattice
point from &#8484;<sup>n</sup>\{0}, i.&thinsp;e. a
non-zero point with integer coefficients.</p> <p>A
related theorem which directly implies this is Blichfeldt's
theorem, which states that any subset of
&#8477;<sup>n</sup> with a volume greater than 1
contains two different points whose difference vector has integer
components.</p> <p>The entry contains a proof of both
theorems.</p>
[Name_Carrying_Type_Inference]
title = Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus
author = Michael Rawson <mailto:michaelrawson76@gmail.com>
-topic = Computer Science/Programming Languages/Type Systems
+topic = Computer science/Programming languages/Type systems
date = 2017-07-09
notify = mr644@cam.ac.uk, michaelrawson76@gmail.com
abstract =
I formalise a Church-style simply-typed
\(\lambda\)-calculus, extended with pairs, a unit value, and
projection functions, and show some metatheory of the calculus, such
as the subject reduction property. Particular attention is paid to the
treatment of names in the calculus. A nominal style of binding is
used, but I use a manual approach over Nominal Isabelle in order to
extract an executable type inference algorithm. More information can
be found in my <a
href="http://www.openthesis.org/documents/Verified-Metatheory-Type-Inference-Simply-603182.html">undergraduate
dissertation</a>.
[Propositional_Proof_Systems]
title = Propositional Proof Systems
author = Julius Michaelis <http://liftm.de>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
topic = Logic/Proof theory
date = 2017-06-21
notify = maintainafpppt@liftm.de
abstract =
We formalize a range of proof systems for classical propositional
logic (sequent calculus, natural deduction, Hilbert systems,
resolution) and prove the most important meta-theoretic results about
semantics and proofs: compactness, soundness, completeness,
translations between proof systems, cut-elimination, interpolation and
model existence.
[Optics]
title = Optics
author = Simon Foster <mailto:simon.foster@york.ac.uk>, Frank Zeyda <mailto:frank.zeyda@york.ac.uk>
-topic = Computer Science/Functional Programming, Mathematics/Algebra
+topic = Computer science/Functional programming, Mathematics/Algebra
date = 2017-05-25
notify = simon.foster@york.ac.uk
abstract =
Lenses provide an abstract interface for manipulating data types
through spatially-separated views. They are defined abstractly in
terms of two functions: <em>get</em>, which returns a value
from the source type, and <em>put</em>, which updates the
value. We mechanise the underlying theory of lenses, in terms of an
algebraic hierarchy of lenses, including well-behaved and very
well-behaved lenses, each lens class being characterised by a set of
lens laws. We also mechanise a lens algebra in Isabelle that enables
their composition and comparison, so as to allow construction of
complex lenses. This is accompanied by a large library of algebraic
laws. Moreover we also show how the lens classes can be applied by
instantiating them with a number of Isabelle data types.
extra-history =
Change history:
[2020-03-02]:
Added partial bijective and symmetric lenses.
Improved alphabet command generating additional lenses and results.
Several additional lens relations, including observational equivalence.
Additional theorems throughout.
Adaptations for Isabelle 2020.
(revision 44e2e5c)
[Game_Based_Crypto]
title = Game-based cryptography in HOL
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>, S. Reza Sefidgar <>, Bhargav Bhatt <mailto:bhargav.bhatt@inf.ethz.ch>
-topic = Computer Science/Security/Cryptography
+topic = Computer science/Security/Cryptography
date = 2017-05-05
notify = mail@andreas-lochbihler.de
abstract =
<p>In this AFP entry, we show how to specify game-based cryptographic
security notions and formally prove secure several cryptographic
constructions from the literature using the CryptHOL framework. Among
others, we formalise the notions of a random oracle, a pseudo-random
function, an unpredictable function, and of encryption schemes that are
indistinguishable under chosen plaintext and/or ciphertext attacks. We
prove the random-permutation/random-function switching lemma, security
of the Elgamal and hashed Elgamal public-key encryption scheme and
correctness and security of several constructions with pseudo-random
functions.
</p><p>Our proofs follow the game-hopping style advocated by
Shoup and Bellare and Rogaway, from which most of the examples have
been taken. We generalise some of their results such that they can be
reused in other proofs. Thanks to CryptHOL's integration with
Isabelle's parametricity infrastructure, many simple hops are easily
justified using the theory of representation independence.</p>
extra-history =
Change history:
[2018-09-28]:
added the CryptHOL tutorial for game-based cryptography
(revision 489a395764ae)
[Multi_Party_Computation]
title = Multi-Party Computation
author = David Aspinall <http://homepages.inf.ed.ac.uk/da/>, David Butler <mailto:dbutler@turing.ac.uk>
-topic = Computer Science/Security
+topic = Computer science/Security
date = 2019-05-09
notify = dbutler@turing.ac.uk
abstract =
We use CryptHOL to consider Multi-Party Computation (MPC) protocols.
MPC was first considered by Yao in 1983 and recent advances in
efficiency and an increased demand mean it is now deployed in the real
world. Security is considered using the real/ideal world paradigm. We
first define security in the semi-honest security setting where
parties are assumed not to deviate from the protocol transcript. In
this setting we prove multiple Oblivious Transfer (OT) protocols
secure and then show security for the gates of the GMW protocol. We
then define malicious security, a stronger notion of security
in which parties are assumed to be fully corrupted by an adversary. In
this setting we again consider OT, as it is a fundamental building
block of almost all MPC protocols.
[Sigma_Commit_Crypto]
title = Sigma Protocols and Commitment Schemes
author = David Butler <https://www.turing.ac.uk/people/doctoral-students/david-butler>, Andreas Lochbihler <http://www.andreas-lochbihler.de>
-topic = Computer Science/Security/Cryptography
+topic = Computer science/Security/Cryptography
date = 2019-10-07
notify = dbutler@turing.ac.uk
abstract =
We use CryptHOL to formalise commitment schemes and Sigma-protocols.
Both are widely used fundamental two party cryptographic primitives.
Security for commitment schemes is considered using game-based
definitions whereas the security of Sigma-protocols is considered
using both the game-based and simulation-based security paradigms. In
this work, we first define security for both primitives and then prove
secure multiple case studies: the Schnorr, Chaum-Pedersen and
Okamoto Sigma-protocols as well as a construction that allows for
compound (AND and OR statements) Sigma-protocols and the Pedersen and
Rivest commitment schemes. We also prove that commitment schemes can
be constructed from Sigma-protocols. We formalise this proof at an
abstract level, only assuming the existence of a Sigma-protocol;
consequently, the instantiations of this result for the concrete
Sigma-protocols we consider come for free.
[CryptHOL]
title = CryptHOL
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
-topic = Computer Science/Security/Cryptography, Computer Science/Functional Programming, Mathematics/Probability Theory
+topic = Computer science/Security/Cryptography, Computer science/Functional programming, Mathematics/Probability theory
date = 2017-05-05
notify = mail@andreas-lochbihler.de
abstract =
<p>CryptHOL provides a framework for formalising cryptographic arguments
in Isabelle/HOL. It shallowly embeds a probabilistic functional
programming language in higher order logic. The language features
monadic sequencing, recursion, random sampling, failures and failure
handling, and black-box access to oracles. Oracles are probabilistic
functions which maintain hidden state between different invocations.
All operators are defined in the new semantic domain of
generative probabilistic values, a codatatype. We derive proof rules for
the operators and establish a connection with the theory of relational
parametricity. Thus, the resulting proofs are trustworthy and
comprehensible, and the framework is extensible and widely applicable.
</p><p>
The framework is used in the accompanying AFP entry "Game-based
Cryptography in HOL". There, we show-case our framework by formalizing
different game-based proofs from the literature. This formalisation
continues the work described in the author's ESOP 2016 paper.</p>
[Constructive_Cryptography]
title = Constructive Cryptography in HOL
author = Andreas Lochbihler <http://www.andreas-lochbihler.de/>, S. Reza Sefidgar<>
-topic = Computer Science/Security/Cryptography, Mathematics/Probability Theory
+topic = Computer science/Security/Cryptography, Mathematics/Probability theory
date = 2018-12-17
notify = mail@andreas-lochbihler.de, reza.sefidgar@inf.ethz.ch
abstract =
Inspired by Abstract Cryptography, we extend CryptHOL, a framework for
formalizing game-based proofs, with an abstract model of Random
Systems and provide proof rules about their composition and equality.
This foundation facilitates the formalization of Constructive
Cryptography proofs, where the security of a cryptographic scheme is
realized as a special form of construction in which a complex random
system is built from simpler ones. This is a first step towards a
fully-featured compositional framework, similar to the Universal
Composability framework, that supports formalization of
simulation-based proofs.
[Probabilistic_While]
title = Probabilistic while loop
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
-topic = Computer Science/Functional Programming, Mathematics/Probability Theory, Computer Science/Algorithms
+topic = Computer science/Functional programming, Mathematics/Probability theory, Computer science/Algorithms
date = 2017-05-05
notify = mail@andreas-lochbihler.de
abstract =
This AFP entry defines a probabilistic while operator based on
sub-probability mass functions and formalises zero-one laws and variant
rules for probabilistic loop termination. As applications, we
implement probabilistic algorithms for the Bernoulli, geometric and
arbitrary uniform distributions that only use fair coin flips, and
prove them correct and terminating with probability 1.
extra-history =
Change history:
[2018-02-02]:
Added a proof that probabilistic conditioning can be implemented by repeated sampling.
(revision 305867c4e911)<br>
[Monad_Normalisation]
title = Monad normalisation
author = Joshua Schneider <>, Manuel Eberl <https://www21.in.tum.de/~eberlm>, Andreas Lochbihler <http://www.andreas-lochbihler.de>
-topic = Tools, Computer Science/Functional Programming, Logic/Rewriting
+topic = Tools, Computer science/Functional programming, Logic/Rewriting
date = 2017-05-05
notify = mail@andreas-lochbihler.de
abstract =
The usual monad laws can directly be used as rewrite rules for Isabelle’s
simplifier to normalise monadic HOL terms and decide equivalences.
In a commutative monad, however, the commutativity law is a
higher-order permutative rewrite rule that makes the simplifier loop.
This AFP entry implements a simproc that normalises monadic
expressions in commutative monads using ordered rewriting. The
simproc can also permute computations across control operators like if
and case.
[Monomorphic_Monad]
title = Effect polymorphism in higher-order logic
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
date = 2017-05-05
notify = mail@andreas-lochbihler.de
abstract =
The notion of a monad cannot be expressed within higher-order logic
(HOL) due to type system restrictions. We show that if a monad is used
with values of only one type, this notion can be formalised in HOL.
Based on this idea, we develop a library of effect specifications and
implementations of monads and monad transformers. Hence, we can
abstract over the concrete monad in HOL definitions and thus use the
same definition for different (combinations of) effects. We illustrate
the usefulness of effect polymorphism with a monadic interpreter for a
simple language.
extra-history =
Change history:
[2018-02-15]:
added further specifications and implementations of non-determinism;
more examples
(revision bc5399eea78e)<br>
[Constructor_Funs]
title = Constructor Functions
author = Lars Hupel <https://www21.in.tum.de/~hupel/>
topic = Tools
date = 2017-04-19
notify = hupel@in.tum.de
abstract =
Isabelle's code generator performs various adaptations for target
languages. Among others, constructor applications have to be fully
saturated. That means that for constructor calls occuring as arguments
to higher-order functions, synthetic lambdas have to be inserted. This
entry provides tooling to avoid this construction altogether by
introducing constructor functions.
[Lazy_Case]
title = Lazifying case constants
author = Lars Hupel <https://www21.in.tum.de/~hupel/>
topic = Tools
date = 2017-04-18
notify = hupel@in.tum.de
abstract =
Isabelle's code generator performs various adaptations for target
languages. Among others, case statements are printed as match
expressions. Internally, this is a sophisticated procedure, because in
HOL, case statements are represented as nested calls to the case
combinators as generated by the datatype package. Furthermore, the
procedure relies on laziness of match expressions in the target
language, i.e., that branches guarded by patterns that fail to match
are not evaluated. Similarly, <tt>if-then-else</tt> is
printed to the corresponding construct in the target language. This
entry provides tooling to replace these special cases in the code
generator by ignoring these target language features, instead printing
case expressions and <tt>if-then-else</tt> as functions.
[Dict_Construction]
title = Dictionary Construction
author = Lars Hupel <https://www21.in.tum.de/~hupel/>
topic = Tools
date = 2017-05-24
notify = hupel@in.tum.de
abstract =
Isabelle's code generator natively supports type classes. For
targets that do not have language support for classes and instances,
it performs the well-known dictionary translation, as described by
Haftmann and Nipkow. This translation happens outside the logic, i.e.,
there is no guarantee that it is correct, besides the pen-and-paper
proof. This work implements a certified dictionary translation that
produces new class-free constants and derives equality theorems.
[Higher_Order_Terms]
title = An Algebra for Higher-Order Terms
author = Lars Hupel <https://lars.hupel.info/>
contributors = Yu Zhang <>
-topic = Computer Science/Programming Languages/Lambda Calculi
+topic = Computer science/Programming languages/Lambda calculi
date = 2019-01-15
notify = lars@hupel.info
abstract =
In this formalization, I introduce a higher-order term algebra,
generalizing the notions of free variables, matching, and
substitution. The need arose from the work on a <a
href="http://dx.doi.org/10.1007/978-3-319-89884-1_35">verified
compiler from Isabelle to CakeML</a>. Terms can be thought of as
consisting of a generic (free variables, constants, application) and
a specific part. As example applications, this entry provides
instantiations for de-Bruijn terms, terms with named variables, and
<a
href="https://www.isa-afp.org/entries/Lambda_Free_RPOs.html">Blanchette’s
&lambda;-free higher-order terms</a>. Furthermore, I
implement translation functions between de-Bruijn terms and named
terms and prove their correctness.
[Subresultants]
title = Subresultants
author = Sebastiaan Joosten <mailto:sebastiaan.joosten@uibk.ac.at>, René Thiemann <mailto:rene.thiemann@uibk.ac.at>, Akihisa Yamada <mailto:akihisa.yamada@uibk.ac.at>
topic = Mathematics/Algebra
date = 2017-04-06
notify = rene.thiemann@uibk.ac.at
abstract =
We formalize the theory of subresultants and the subresultant
polynomial remainder sequence as described by Brown and Traub. As a
result, we obtain efficient certified algorithms for computing the
resultant and the greatest common divisor of polynomials.
[Comparison_Sort_Lower_Bound]
title = Lower bound on comparison-based sorting algorithms
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2017-03-15
notify = eberlm@in.tum.de
abstract =
<p>This article contains a formal proof of the well-known fact
that the number of comparisons that a comparison-based sorting algorithm
needs to perform to sort a list of length <em>n</em> is at
least <em>log<sub>2</sub>&nbsp;(n!)</em>
in the worst case, i.&thinsp;e.&nbsp;<em>Ω(n log
n)</em>.</p> <p>For this purpose, a shallow
embedding for comparison-based sorting algorithms is defined: a
sorting algorithm is a recursive datatype containing either a HOL
function or a query of a comparison oracle with a continuation
containing the remaining computation. This makes it possible to force
the algorithm to use only comparisons and to track the number of
comparisons made.</p>
[Quick_Sort_Cost]
title = The number of comparisons in QuickSort
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2017-03-15
notify = eberlm@in.tum.de
abstract =
<p>We give a formal proof of the well-known results about the
number of comparisons performed by two variants of QuickSort: first,
the expected number of comparisons of randomised QuickSort
(i.&thinsp;e.&nbsp;QuickSort with random pivot choice) is
<em>2&thinsp;(n+1)&thinsp;H<sub>n</sub> -
4&thinsp;n</em>, which is asymptotically equivalent to
<em>2&thinsp;n ln n</em>; second, the number of
comparisons performed by the classic non-randomised QuickSort has the
same distribution in the average case as the randomised one.</p>
[Random_BSTs]
title = Expected Shape of Random Binary Search Trees
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2017-04-04
notify = eberlm@in.tum.de
abstract =
<p>This entry contains proofs for the textbook results about the
distributions of the height and internal path length of random binary
search trees (BSTs), i.&thinsp;e. BSTs that are formed by taking
an empty BST and inserting elements from a fixed set in random
order.</p> <p>In particular, we prove a logarithmic upper
bound on the expected height and the <em>Θ(n log n)</em>
closed-form solution for the expected internal path length in terms of
the harmonic numbers. We also show how the internal path length
relates to the average-case cost of a lookup in a BST.</p>
[Randomised_BSTs]
title = Randomised Binary Search Trees
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2018-10-19
notify = eberlm@in.tum.de
abstract =
<p>This work is a formalisation of the Randomised Binary Search
Trees introduced by Martínez and Roura, including definitions and
correctness proofs.</p> <p>Like randomised treaps, they
are a probabilistic data structure that behaves exactly as if elements
were inserted into a non-balancing BST in random order. However,
unlike treaps, they only use discrete probability distributions, but
their use of randomness is more complicated.</p>
[E_Transcendental]
title = The Transcendence of e
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Analysis, Mathematics/Number Theory
+topic = Mathematics/Analysis, Mathematics/Number theory
date = 2017-01-12
notify = eberlm@in.tum.de
abstract =
<p>This work contains a proof that Euler's number e is transcendental. The
proof follows the standard approach of assuming that e is algebraic and
then using a specific integer polynomial to derive two inconsistent bounds,
leading to a contradiction.</p> <p>This kind of approach can be found in
many different sources; this formalisation mostly follows a <a href="http://planetmath.org/proofoflindemannweierstrasstheoremandthateandpiaretranscendental">PlanetMath article</a> by Roger Lipsett.</p>
[Pi_Transcendental]
title = The Transcendence of π
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2018-09-28
notify = eberlm@in.tum.de
abstract =
<p>This entry shows the transcendence of &pi; based on the
classic proof using the fundamental theorem of symmetric polynomials
first given by von Lindemann in 1882, but the formalisation mostly
follows the version by Niven. The proof reuses much of the machinery
developed in the AFP entry on the transcendence of
<em>e</em>.</p>
[DFS_Framework]
title = A Framework for Verifying Depth-First Search Algorithms
author = Peter Lammich <http://www21.in.tum.de/~lammich>, René Neumann <mailto:neumannr@in.tum.de>
notify = lammich@in.tum.de
date = 2016-07-05
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
abstract =
<p>
This entry presents a framework for the modular verification of
DFS-based algorithms, which is described in our [CPP-2015] paper. It
provides a generic DFS algorithm framework that can be parameterized
with user-defined actions on certain events (e.g. discovery of a new
node). It comes with an extensible library of invariants, which can
be used to derive invariants of a specific parameterization. Using
refinement techniques, efficient implementations of the algorithms can
easily be derived. Here, the framework comes with templates for a
recursive and a tail-recursive implementation, and also with several
templates for implementing the data structures required by the DFS
algorithm. Finally, this entry contains a set of re-usable DFS-based
algorithms, which illustrate the application of the framework.
</p><p>
[CPP-2015] Peter Lammich, René Neumann: A Framework for Verifying
Depth-First Search Algorithms. CPP 2015: 137-146</p>
[Flow_Networks]
title = Flow Networks and the Min-Cut-Max-Flow Theorem
author = Peter Lammich <http://www21.in.tum.de/~lammich>, S. Reza Sefidgar <>
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
date = 2017-06-01
notify = lammich@in.tum.de
abstract =
We present a formalization of flow networks and the Min-Cut-Max-Flow
theorem. Our formal proof closely follows a standard textbook proof,
and is accessible even without being an expert in Isabelle/HOL, the
interactive theorem prover used for the formalization.
[Prpu_Maxflow]
title = Formalizing Push-Relabel Algorithms
author = Peter Lammich <http://www21.in.tum.de/~lammich>, S. Reza Sefidgar <>
-topic = Computer Science/Algorithms/Graph, Mathematics/Graph Theory
+topic = Computer science/Algorithms/Graph, Mathematics/Graph theory
date = 2017-06-01
notify = lammich@in.tum.de
abstract =
We present a formalization of push-relabel algorithms for computing
the maximum flow in a network. We start with Goldberg et al.'s
generic push-relabel algorithm, for which we show correctness and
the time complexity bound of O(V^2E). We then derive the
relabel-to-front and FIFO implementation. Using stepwise refinement
techniques, we derive an efficient verified implementation. Our
formal proof of the abstract algorithms closely follows a standard
textbook proof. It is accessible even without being an expert in
Isabelle/HOL, the interactive theorem prover used for the
formalization.
[Buildings]
title = Chamber Complexes, Coxeter Systems, and Buildings
author = Jeremy Sylvestre <http://ualberta.ca/~jsylvest/>
notify = jeremy.sylvestre@ualberta.ca
date = 2016-07-01
topic = Mathematics/Algebra, Mathematics/Geometry
abstract =
We provide a basic formal framework for the theory of chamber
complexes and Coxeter systems, and for buildings as thick chamber
complexes endowed with a system of apartments. Along the way, we
develop some of the general theory of abstract simplicial complexes
and of groups (relying on the <i>group_add</i> class for the basics),
including free groups and group presentations, and their universal
properties. The main results verified are that the deletion condition
is both necessary and sufficient for a group with a set of generators
of order two to be a Coxeter system, and that the apartments in a
(thick) building are all uniformly Coxeter.
[Algebraic_VCs]
title = Program Construction and Verification Components Based on Kleene Algebra
author = Victor B. F. Gomes <mailto:victor.gomes@cl.cam.ac.uk>, Georg Struth <mailto:g.struth@sheffield.ac.uk>
notify = victor.gomes@cl.cam.ac.uk, g.struth@sheffield.ac.uk
date = 2016-06-18
topic = Mathematics/Algebra
abstract =
Variants of Kleene algebra support program construction and
verification by algebraic reasoning. This entry provides a
verification component for Hoare logic based on Kleene algebra with
tests, verification components for weakest preconditions and strongest
postconditions based on Kleene algebra with domain and a component for
step-wise refinement based on refinement Kleene algebra with tests. In
addition to these components for the partial correctness of while
programs, a verification component for total correctness based on
divergence Kleene algebras and one for the partial correctness of
recursive programs based on domain quantales are provided. Finally we
have integrated memory models for programs with pointers and a program
trace semantics into the weakest precondition component.
[C2KA_DistributedSystems]
title = Communicating Concurrent Kleene Algebra for Distributed Systems Specification
author = Maxime Buyse <mailto:maxime.buyse@polytechnique.edu>, Jason Jaskolka <https://carleton.ca/jaskolka/>
-topic = Computer Science/Automata and Formal Languages, Mathematics/Algebra
+topic = Computer science/Automata and formal languages, Mathematics/Algebra
date = 2019-08-06
notify = maxime.buyse@polytechnique.edu, jason.jaskolka@carleton.ca
abstract =
Communicating Concurrent Kleene Algebra (C²KA) is a mathematical
framework for capturing the communicating and concurrent behaviour of
agents in distributed systems. It extends Hoare et al.'s
Concurrent Kleene Algebra (CKA) with communication actions through the
notions of stimuli and shared environments. C²KA has applications in
studying system-level properties of distributed systems such as
safety, security, and reliability. In this work, we formalize results
about C²KA and its application for distributed systems specification.
We first formalize the stimulus structure and behaviour structure
(CKA). Next, we combine them to formalize C²KA and its properties.
Then, we formalize notions and properties related to the topology of
distributed systems and the potential for communication via stimuli
and via shared environments of agents, all within the algebraic
setting of C²KA.
[Card_Equiv_Relations]
title = Cardinality of Equivalence Relations
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
notify = lukas.bulwahn@gmail.com
date = 2016-05-24
topic = Mathematics/Combinatorics
abstract =
This entry provides formulae for counting the number of equivalence
relations and partial equivalence relations over a finite carrier set
with given cardinality. To count the number of equivalence relations,
we provide bijections between equivalence relations and set
partitions, and then transfer the main results of the two AFP entries,
Cardinality of Set Partitions and Spivey's Generalized Recurrence for
Bell Numbers, to theorems on equivalence relations. To count the
number of partial equivalence relations, we observe that counting
partial equivalence relations over a set A is equivalent to counting
all equivalence relations over all subsets of the set A. From this
observation and the results on equivalence relations, we show that the
cardinality of partial equivalence relations over a finite set of
cardinality n is equal to the (n+1)-th Bell number.
[Twelvefold_Way]
title = The Twelvefold Way
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
topic = Mathematics/Combinatorics
date = 2016-12-29
notify = lukas.bulwahn@gmail.com
abstract =
This entry provides all cardinality theorems of the Twelvefold Way.
The Twelvefold Way systematically classifies twelve related
combinatorial problems concerning two finite sets, which include
counting permutations, combinations, multisets, set partitions and
number partitions. This development builds upon the existing formal
developments with cardinality theorems for those structures. It
provides twelve bijections from the various structures to different
equivalence classes on finite functions, and hence, proves cardinality
formulae for these equivalence classes on finite functions.
[Chord_Segments]
title = Intersecting Chords Theorem
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
notify = lukas.bulwahn@gmail.com
date = 2016-10-11
topic = Mathematics/Geometry
abstract =
This entry provides a geometric proof of the intersecting chords
theorem. The theorem states that when two chords intersect each other
inside a circle, the products of their segments are equal. After a
short review of existing proofs in the literature, I decided to use a
proof approach that employs reasoning about lengths of line segments,
the orthogonality of two lines and the Pythagoras Law. Hence, one can
understand the formalized proof easily with the knowledge of a few
general geometric facts that are commonly taught in high-school. This
theorem is the 55th theorem of the Top 100 Theorems list.
[Category3]
title = Category Theory with Adjunctions and Limits
author = Eugene W. Stark <mailto:stark@cs.stonybrook.edu>
notify = stark@cs.stonybrook.edu
date = 2016-06-26
-topic = Mathematics/Category Theory
+topic = Mathematics/Category theory
abstract =
This article attempts to develop a usable framework for doing category
theory in Isabelle/HOL. Our point of view, which to some extent
differs from that of the previous AFP articles on the subject, is to
try to explore how category theory can be done efficaciously within
HOL, rather than trying to match exactly the way things are done using
a traditional approach. To this end, we define the notion of category
in an "object-free" style, in which a category is represented by a
single partial composition operation on arrows. This way of defining
categories provides some advantages in the context of HOL, including
the ability to avoid the use of records and the possibility of
defining functors and natural transformations simply as certain
functions on arrows, rather than as composite objects. We define
various constructions associated with the basic notions, including:
dual category, product category, functor category, discrete category,
free category, functor composition, and horizontal and vertical
composite of natural transformations. A "set category" locale is
defined that axiomatizes the notion "category of all sets at a type
and all functions between them," and a fairly extensive set of
properties of set categories is derived from the locale assumptions.
The notion of a set category is used to prove the Yoneda Lemma in a
general setting of a category equipped with a "hom embedding," which
maps arrows of the category to the "universe" of the set category. We
also give a treatment of adjunctions, defining adjunctions via left
and right adjoint functors, natural bijections between hom-sets, and
unit and counit natural transformations, and showing the equivalence
of these definitions. We also develop the theory of limits, including
representations of functors, diagrams and cones, and diagonal
functors. We show that right adjoint functors preserve limits, and
that limits can be constructed via products and equalizers. We
characterize the conditions under which limits exist in a set
category. We also examine the case of limits in a functor category,
ultimately culminating in a proof that the Yoneda embedding preserves
limits.
extra-history =
Change history:
[2018-05-29]:
Revised axioms for the category locale. Introduced notation for composition and "in hom".
(revision 8318366d4575)<br>
[2020-02-15]:
Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically.
Make other minor improvements throughout.
(revision a51840d36867)<br>
[MonoidalCategory]
title = Monoidal Categories
author = Eugene W. Stark <mailto:stark@cs.stonybrook.edu>
-topic = Mathematics/Category Theory
+topic = Mathematics/Category theory
date = 2017-05-04
notify = stark@cs.stonybrook.edu
abstract =
Building on the formalization of basic category theory set out in the
author's previous AFP article, the present article formalizes
some basic aspects of the theory of monoidal categories. Among the
notions defined here are monoidal category, monoidal functor, and
equivalence of monoidal categories. The main theorems formalized are
MacLane's coherence theorem and the constructions of the free
monoidal category and free strict monoidal category generated by a
given category. The coherence theorem is proved syntactically, using
a structurally recursive approach to reduction of terms that might
have some novel aspects. We also give proofs of some results given by
Etingof et al, which may prove useful in a formal setting. In
particular, we show that the left and right unitors need not be taken
as given data in the definition of monoidal category, nor does the
definition of monoidal functor need to take as given a specific
isomorphism expressing the preservation of the unit object. Our
definitions of monoidal category and monoidal functor are stated so as
to take advantage of the economy afforded by these facts.
extra-history =
Change history:
[2017-05-18]:
Integrated material from MonoidalCategory/Category3Adapter into Category3/ and deleted adapter.
(revision 015543cdd069)<br>
[2018-05-29]:
Modifications required due to 'Category3' changes. Introduced notation for "in hom".
(revision 8318366d4575)<br>
[2020-02-15]:
Cosmetic improvements.
(revision a51840d36867)<br>
[Card_Multisets]
title = Cardinality of Multisets
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
notify = lukas.bulwahn@gmail.com
date = 2016-06-26
topic = Mathematics/Combinatorics
abstract =
<p>This entry provides three lemmas to count the number of multisets
of a given size and finite carrier set. The first lemma provides a
cardinality formula assuming that the multiset's elements are chosen
from the given carrier set. The latter two lemmas provide formulas
assuming that the multiset's elements also cover the given carrier
set, i.e., each element of the carrier set occurs in the multiset at
least once.</p> <p>The proof of the first lemma uses the argument of
the recurrence relation for counting multisets. The proof of the
second lemma is straightforward, and the proof of the third lemma is
easily obtained using the first cardinality lemma. A challenge for the
formalization is the derivation of the required induction rule, which
is a special combination of the induction rules for finite sets and
natural numbers. The induction rule is derived by defining a suitable
inductive predicate and transforming the predicate's induction
rule.</p>
[Posix-Lexing]
title = POSIX Lexing with Derivatives of Regular Expressions
author = Fahad Ausaf <http://kcl.academia.edu/FahadAusaf>, Roy Dyckhoff <https://rd.host.cs.st-andrews.ac.uk>, Christian Urban <http://www.inf.kcl.ac.uk/staff/urbanc/>
notify = christian.urban@kcl.ac.uk
date = 2016-05-24
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract =
Brzozowski introduced the notion of derivatives for regular
expressions. They can be used for a very simple regular expression
matching algorithm. Sulzmann and Lu cleverly extended this algorithm
in order to deal with POSIX matching, which is the underlying
disambiguation strategy for regular expressions needed in lexers. In
this entry we give our inductive definition of what a POSIX value is
and show (i) that such a value is unique (for given regular expression
and string being matched) and (ii) that Sulzmann and Lu's algorithm
always generates such a value (provided that the regular expression
matches the string). We also prove the correctness of an optimised
version of the POSIX matching algorithm.
[LocalLexing]
title = Local Lexing
author = Steven Obua <mailto:steven@recursivemind.com>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2017-04-28
notify = steven@recursivemind.com
abstract =
This formalisation accompanies the paper <a
href="https://arxiv.org/abs/1702.03277">Local
Lexing</a> which introduces a novel parsing concept of the same
name. The paper also gives a high-level algorithm for local lexing as
an extension of Earley's algorithm. This formalisation proves the
algorithm to be correct with respect to its local lexing semantics. As
a special case, this formalisation thus also contains a proof of the
correctness of Earley's algorithm. The paper contains a short
outline of how this formalisation is organised.
[MFMC_Countable]
title = A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
date = 2016-05-09
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract =
This article formalises a proof of the maximum-flow minimal-cut
theorem for networks with countably many edges. A network is a
directed graph with non-negative real-valued edge labels and two
dedicated vertices, the source and the sink. A flow in a network
assigns non-negative real numbers to the edges such that for all
vertices except for the source and the sink, the sum of values on
incoming edges equals the sum of values on outgoing edges. A cut is a
subset of the vertices which contains the source, but not the sink.
Our theorem states that in every network, there is a flow and a cut
such that the flow saturates all the edges going out of the cut and is
zero on all the incoming edges. The proof is based on the paper
<em>The Max-Flow Min-Cut theorem for countable networks</em> by
Aharoni et al. Additionally, we prove a characterisation of the
lifting operation for relations on discrete probability distributions,
which leads to a concise proof of its distributivity over relation
composition.
notify = mail@andreas-lochbihler.de
extra-history =
Change history:
[2017-09-06]:
derive characterisation for the lifting operations on discrete distributions from finite version of the max-flow min-cut theorem
(revision a7a198f5bab0)<br>
[Liouville_Numbers]
title = Liouville numbers
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
date = 2015-12-28
-topic = Mathematics/Analysis, Mathematics/Number Theory
+topic = Mathematics/Analysis, Mathematics/Number theory
abstract =
<p>
Liouville numbers are a class of transcendental numbers that can be approximated
particularly well with rational numbers. Historically, they were the first
numbers whose transcendence was proven.
</p><p>
In this entry, we define the concept of Liouville numbers as well as the
standard construction to obtain Liouville numbers (including Liouville's
constant) and we prove their most important properties: irrationality and
transcendence.
</p><p>
The proof is very elementary and requires only standard arithmetic, the Mean
Value Theorem for polynomials, and the boundedness of polynomials on compact
intervals.
</p>
notify = eberlm@in.tum.de
[Triangle]
title = Basic Geometric Properties of Triangles
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
date = 2015-12-28
topic = Mathematics/Geometry
abstract =
<p>
This entry contains a definition of angles between vectors and between three
points. Building on this, we prove basic geometric properties of triangles, such
as the Isosceles Triangle Theorem, the Law of Sines and the Law of Cosines, that
the sum of the angles of a triangle is π, and the congruence theorems for
triangles.
</p><p>
The definitions and proofs were developed following those by John Harrison in
HOL Light. However, due to Isabelle's type class system, all definitions and
theorems in the Isabelle formalisation hold for all real inner product spaces.
</p>
notify = eberlm@in.tum.de
[Prime_Harmonic_Series]
title = The Divergence of the Prime Harmonic Series
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
date = 2015-12-28
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
abstract =
<p>
In this work, we prove the lower bound <span class="nobr">ln(H_n) -
ln(5/3)</span> for the
partial sum of the Prime Harmonic series and, based on this, the divergence of
the Prime Harmonic Series
<span class="nobr">∑[p&thinsp;prime]&thinsp;·&thinsp;1/p.</span>
</p><p>
The proof relies on the unique squarefree decomposition of natural numbers. This
is similar to Euler's original proof (which was highly informal and morally
questionable). Its advantage over proofs by contradiction, like the famous one
by Paul Erdős, is that it provides a relatively good lower bound for the partial
sums.
</p>
notify = eberlm@in.tum.de
[Descartes_Sign_Rule]
title = Descartes' Rule of Signs
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
date = 2015-12-28
topic = Mathematics/Analysis
abstract =
<p>
Descartes' Rule of Signs relates the number of positive real roots of a
polynomial with the number of sign changes in its coefficient sequence.
</p><p>
Our proof follows the simple inductive proof given by Rob Arthan, which was also
used by John Harrison in his HOL Light formalisation. We proved most of the
lemmas for arbitrary linearly-ordered integrity domains (e.g. integers,
rationals, reals); the main result, however, requires the intermediate value
theorem and was therefore only proven for real polynomials.
</p>
notify = eberlm@in.tum.de
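As a rough illustration of the statement (a Scala sketch, not part of the entry; the helper signChanges is hypothetical), the rule bounds the number of positive real roots by the number of sign changes in the coefficient sequence:
    // Sketch: count sign changes in a coefficient sequence (constant term first),
    // ignoring zero coefficients. Descartes' rule says the number of positive real
    // roots is at most this count and differs from it by an even number.
    def signChanges(coeffs: List[Int]): Int =
      coeffs.filter(_ != 0).sliding(2).count {
        case List(a, b) => a.sign != b.sign
        case _          => false
      }
    // Example: x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) has 3 sign changes and 3 positive roots.
    println(signChanges(List(-6, 11, -6, 1)))  // 3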
[Euler_MacLaurin]
title = The Euler–MacLaurin Formula
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
topic = Mathematics/Analysis
date = 2017-03-10
notify = eberlm@in.tum.de
abstract =
<p>The Euler-MacLaurin formula relates the value of a
discrete sum to that of the corresponding integral in terms of the
derivatives at the borders of the summation and a remainder term.
Since the remainder term is often very small as the summation bounds
grow, this can be used to compute asymptotic expansions for
sums.</p> <p>This entry contains a proof of this formula
for functions from the reals to an arbitrary Banach space. Two
variants of the formula are given: the standard textbook version and a
variant outlined in <em>Concrete Mathematics</em> that is
more useful for deriving asymptotic estimates.</p> <p>As
example applications, we use that formula to derive the full
asymptotic expansion of the harmonic numbers and the sum of inverse
squares.</p>
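For orientation only (generic textbook notation, which may differ from the statement in the entry), the standard form of the formula in LaTeX reads
    \sum_{i=m}^{n} f(i)
      = \int_{m}^{n} f(x)\,dx + \frac{f(m) + f(n)}{2}
        + \sum_{k=1}^{q} \frac{B_{2k}}{(2k)!} \bigl(f^{(2k-1)}(n) - f^{(2k-1)}(m)\bigr)
        + R_q
where the B_{2k} are Bernoulli numbers and R_q is the remainder term.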
[Card_Partitions]
title = Cardinality of Set Partitions
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
date = 2015-12-12
topic = Mathematics/Combinatorics
abstract =
The theory's main theorem states that the cardinality of set partitions of
size k on a carrier set of size n is expressed by Stirling numbers of the
second kind. In Isabelle, Stirling numbers of the second kind are defined
in the AFP entry `Discrete Summation` through their well-known recurrence
relation. The main theorem relates them to the alternative definition as
cardinality of set partitions. The proof follows the simple and short
explanation in Richard P. Stanley's `Enumerative Combinatorics: Volume 1`
and Wikipedia, and unravels the full details and implicit reasoning steps
of these explanations.
notify = lukas.bulwahn@gmail.com
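As a rough illustration (a Scala sketch, not taken from the entry; the helper stirling2 is hypothetical), the well-known recurrence for Stirling numbers of the second kind can be computed directly:
    // Sketch: S(n, k) = k * S(n-1, k) + S(n-1, k-1), the number of partitions of
    // an n-element set into k non-empty blocks.
    def stirling2(n: Int, k: Int): BigInt =
      if (n == 0 && k == 0) BigInt(1)
      else if (n == 0 || k == 0) BigInt(0)
      else BigInt(k) * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
    // Example: a 4-element set has 7 partitions into 2 blocks.
    println(stirling2(4, 2))  // 7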
[Card_Number_Partitions]
title = Cardinality of Number Partitions
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
date = 2016-01-14
topic = Mathematics/Combinatorics
abstract =
This entry provides a basic library for number partitions, defines the
two-argument partition function through its recurrence relation and relates
this partition function to the cardinality of number partitions. The main
proof shows that the recursively-defined partition function with arguments
n and k equals the cardinality of number partitions of n with exactly k parts.
The combinatorial proof follows the proof sketch of Theorem 2.4.1 in
Mazur's textbook `Combinatorics: A Guided Tour`. This entry can serve as a
starting point for various more intrinsic properties about number partitions,
the partition function and related recurrence relations.
notify = lukas.bulwahn@gmail.com
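As a rough illustration (a Scala sketch, not taken from the entry; the helper partitions is hypothetical), the usual recurrence for the two-argument partition function can be computed directly:
    // Sketch: p(n, k) = p(n-1, k-1) + p(n-k, k), the number of partitions of n
    // into exactly k parts.
    def partitions(n: Int, k: Int): BigInt =
      if (n == 0 && k == 0) BigInt(1)
      else if (n <= 0 || k <= 0 || k > n) BigInt(0)
      else partitions(n - 1, k - 1) + partitions(n - k, k)
    // Example: 5 = 4+1 = 3+2, so there are 2 partitions of 5 into exactly 2 parts.
    println(partitions(5, 2))  // 2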
[Multirelations]
title = Binary Multirelations
author = Hitoshi Furusawa <http://www.sci.kagoshima-u.ac.jp/~furusawa/>, Georg Struth <http://www.dcs.shef.ac.uk/~georg>
date = 2015-06-11
topic = Mathematics/Algebra
abstract =
Binary multirelations associate elements of a set with its subsets; hence
they are binary relations from a set to its power set. Applications include
alternating automata, models and logics for games, program semantics with
dual demonic and angelic nondeterministic choices and concurrent dynamic
logics. This proof document supports an arXiv article that formalises the
basic algebra of multirelations and proposes axiom systems for them,
ranging from weak bi-monoids to weak bi-quantales.
notify =
[Noninterference_Generic_Unwinding]
title = The Generic Unwinding Theorem for CSP Noninterference Security
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
date = 2015-06-11
-topic = Computer Science/Security, Computer Science/Concurrency/Process Calculi
+topic = Computer science/Security, Computer science/Concurrency/Process calculi
abstract =
<p>
The classical definition of noninterference security for a deterministic state
machine with outputs requires considering the outputs produced by machine
actions after any trace, i.e. any indefinitely long sequence of actions, of the
machine. In order to render the verification of the security of such a machine
more straightforward, there is a need for a sufficient condition for security
such that just individual actions, rather than unbounded sequences of actions,
have to be considered.
</p><p>
By extending previous results applying to transitive noninterference policies,
Rushby has proven an unwinding theorem that provides a sufficient condition of
this kind in the general case of a possibly intransitive policy. This condition
has to be satisfied by a generic function mapping security domains into
equivalence relations over machine states.
</p><p>
An analogous problem arises for CSP noninterference security, whose definition
requires considering any possible future, i.e. any indefinitely long sequence of
subsequent events and any indefinitely large set of refused events associated with
that sequence, for each process trace.
</p><p>
This paper provides a sufficient condition for CSP noninterference security,
which indeed requires considering just individual accepted and refused events
and applies to the general case of a possibly intransitive policy. This
condition follows Rushby's one for classical noninterference security, and has
to be satisfied by a generic function mapping security domains into equivalence
relations over process traces; hence its name, Generic Unwinding Theorem.
Variants of this theorem applying to deterministic processes and trace set
processes are also proven. Finally, the sufficient condition for security
expressed by the theorem is shown not to be a necessary condition as well, viz.
there exists a secure process such that no domain-relation map satisfying the
condition exists.
</p>
notify =
[Noninterference_Ipurge_Unwinding]
title = The Ipurge Unwinding Theorem for CSP Noninterference Security
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
date = 2015-06-11
-topic = Computer Science/Security
+topic = Computer science/Security
abstract =
<p>
The definition of noninterference security for Communicating Sequential
Processes requires considering any possible future, i.e. any indefinitely long
sequence of subsequent events and any indefinitely large set of refused events
associated with that sequence, for each process trace. In order to render the
verification of the security of a process more straightforward, there is a need
for a sufficient condition for security such that just individual accepted and
refused events, rather than unbounded sequences and sets of events, have to be
considered.
</p><p>
Of course, if such a sufficient condition were necessary as well, it would be
even more valuable, since it would permit proving not only that a process is
secure by verifying that the condition holds, but also that a process is not
secure by verifying that the condition fails to hold.
</p><p>
This paper provides a necessary and sufficient condition for CSP noninterference
security, which indeed requires considering just individual accepted and refused
events and applies to the general case of a possibly intransitive policy. This
condition follows Rushby's output consistency for deterministic state machines
with outputs, and has to be satisfied by a specific function mapping security
domains into equivalence relations over process traces. The definition of this
function makes use of an intransitive purge function following Rushby's one;
hence the name given to the condition, Ipurge Unwinding Theorem.
</p><p>
Furthermore, in accordance with Hoare's formal definition of deterministic
processes, it is shown that a process is deterministic just in case it is a
trace set process, i.e. it may be identified by means of a trace set alone,
matching the set of its traces, in place of a failures-divergences pair. Then,
variants of the Ipurge Unwinding Theorem are proven for deterministic processes
and trace set processes.
</p>
notify =
[List_Interleaving]
title = Reasoning about Lists via List Interleaving
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
date = 2015-06-11
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
<p>
Among the various mathematical tools introduced in his outstanding work on
Communicating Sequential Processes, Hoare has defined "interleaves" as the
predicate satisfied by any three lists such that the first list may be
split into sublists alternately extracted from the other two, whatever
criterion is used for extracting an item from either one list or the other
in each step.
</p><p>
This paper enriches Hoare's definition by identifying such a criterion with
the truth value of a predicate taking as inputs the head and the tail of
the first list. This enhanced "interleaves" predicate turns out to permit
the proof of equalities between lists without the need for induction.
Some rules that allow one to infer "interleaves" statements without induction,
particularly applying to the addition or removal of a prefix to the input
lists, are also proven. Finally, a stronger version of the predicate, named
"Interleaves", is shown to fulfil further rules applying to the addition or
removal of a suffix to the input lists.
</p>
notify =
[Residuated_Lattices]
title = Residuated Lattices
author = Victor B. F. Gomes <mailto:vborgesferreiragomes1@sheffield.ac.uk>, Georg Struth <mailto:g.struth@sheffield.ac.uk>
date = 2015-04-15
topic = Mathematics/Algebra
abstract =
The theory of residuated lattices, first proposed by Ward and Dilworth, is
formalised in Isabelle/HOL. This includes concepts of residuated functions;
their adjoints and conjugates. It also contains necessary and sufficient
conditions for the existence of these operations in an arbitrary lattice.
The mathematical components for residuated lattices are linked to the AFP
entry for relation algebra. In particular, we prove Jonsson and Tsinakis
conditions for a residuated boolean algebra to form a relation algebra.
notify = g.struth@sheffield.ac.uk
[ConcurrentGC]
title = Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO
author = Peter Gammie <http://peteg.org>, Tony Hosking <https://www.cs.purdue.edu/homes/hosking/>, Kai Engelhardt <>
date = 2015-04-13
-topic = Computer Science/Algorithms/Concurrent
+topic = Computer science/Algorithms/Concurrent
abstract =
<p>
We use ConcurrentIMP to model Schism, a state-of-the-art real-time
garbage collection scheme for weak memory, and show that it is safe
on x86-TSO.</p>
<p>
This development accompanies the PLDI 2015 paper of the same name.
</p>
notify = peteg42@gmail.com
[List_Update]
title = Analysis of List Update Algorithms
author = Maximilian P.L. Haslbeck <http://in.tum.de/~haslbema/>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2016-02-17
-topic = Computer Science/Algorithms/Online
+topic = Computer science/Algorithms/Online
abstract =
<p>
These theories formalize the quantitative analysis of a number of classical algorithms for the list update problem: 2-competitiveness of move-to-front, the lower bound of 2 for the competitiveness of deterministic list update algorithms and 1.6-competitiveness of the randomized COMB algorithm, the best randomized list update algorithm known to date.
The material is based on the first two chapters of <i>Online Computation
and Competitive Analysis</i> by Borodin and El-Yaniv.
</p>
<p>
For an informal description see the FSTTCS 2016 publication
<a href="http://www21.in.tum.de/~nipkow/pubs/fsttcs16.html">Verified Analysis of List Update Algorithms</a>
by Haslbeck and Nipkow.
</p>
notify = nipkow@in.tum.de
[ConcurrentIMP]
title = Concurrent IMP
author = Peter Gammie <http://peteg.org>
date = 2015-04-13
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
abstract =
ConcurrentIMP extends the small imperative language IMP with control
non-determinism and constructs for synchronous message passing.
notify = peteg42@gmail.com
[TortoiseHare]
title = The Tortoise and Hare Algorithm
author = Peter Gammie <http://peteg.org>
date = 2015-11-18
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
abstract = We formalize the Tortoise and Hare cycle-finding algorithm ascribed to Floyd by Knuth, and an improved version due to Brent.
notify = peteg42@gmail.com
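The basic idea of Floyd's algorithm, as a hedged Scala sketch (not the verified development; the helper floyd is hypothetical): the tortoise advances one step per round and the hare two, so they must meet inside the cycle.
    // Sketch: cycle detection on the sequence x0, f(x0), f(f(x0)), ...
    // Returns (mu, lambda): start index and length of the cycle.
    // Assumes the sequence is eventually periodic (e.g. a finite state space).
    def floyd[A](f: A => A, x0: A): (Int, Int) = {
      var tortoise = f(x0); var hare = f(f(x0))
      while (tortoise != hare) { tortoise = f(tortoise); hare = f(f(hare)) }
      var mu = 0; tortoise = x0
      while (tortoise != hare) { tortoise = f(tortoise); hare = f(hare); mu += 1 }
      var lambda = 1; hare = f(tortoise)
      while (tortoise != hare) { hare = f(hare); lambda += 1 }
      (mu, lambda)
    }
    println(floyd((x: Int) => (x * x + 1) % 255, 3))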
[UPF]
title = The Unified Policy Framework (UPF)
author = Achim D. Brucker <mailto:adbrucker@0x5f.org>, Lukas Brügger <mailto:lukas.a.bruegger@gmail.com>, Burkhart Wolff <mailto:wolff@lri.fr>
date = 2014-11-28
-topic = Computer Science/Security
+topic = Computer science/Security
abstract =
We present the Unified Policy Framework (UPF), a generic framework
for modelling security (access-control) policies. UPF emphasizes
the view that a policy is a policy decision function that grants or
denies access to resources, permissions, etc. In other words,
instead of modelling the relations of permitted or prohibited
requests directly, we model the concrete function that implements
the policy decision point in a system. In more detail, UPF is
based on the following four principles: 1) Functional representation
of policies, 2) No conflicts are possible, 3) Three-valued decision
type (allow, deny, undefined), 4) Output type containing more than just
the decision.
notify = adbrucker@0x5f.org, wolff@lri.fr, lukas.a.bruegger@gmail.com
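A much-simplified Scala sketch of the modelling idea (hypothetical types, not UPF's actual definitions): a policy is a partial decision function that may allow, deny, or stay undefined on a request, leaving undefined inputs to policy combinators.
    // Sketch: a decision either allows or denies and may carry further output;
    // None models "this policy does not decide on this input".
    sealed trait Decision[+B]
    case class Allow[B](out: B) extends Decision[B]
    case class Deny[B](out: B) extends Decision[B]
    case class Request(subject: String, action: String, resource: String)
    val filePolicy: Request => Option[Decision[Unit]] = {
      case Request(_, "read", r) if r.startsWith("/public/") => Some(Allow(()))
      case Request(_, "write", _)                            => Some(Deny(()))
      case _                                                 => None
    }
    println(filePolicy(Request("alice", "read", "/public/report.pdf")))  // Some(Allow(()))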
[UPF_Firewall]
title = Formal Network Models and Their Application to Firewall Policies
author = Achim D. Brucker <https://www.brucker.ch>, Lukas Brügger<>, Burkhart Wolff <https://www.lri.fr/~wolff/>
-topic = Computer Science/Security, Computer Science/Networks
+topic = Computer science/Security, Computer science/Networks
date = 2017-01-08
notify = adbrucker@0x5f.org
abstract =
We present a formal model of network protocols and their application
to modeling firewall policies. The formalization is based on the
Unified Policy Framework (UPF). The formalization was originally
developed for generating test cases for testing the security
configuration of actual firewalls and routers (middle-boxes) using
HOL-TestGen. Our work focuses on modeling application-level protocols
on top of TCP/IP.
[AODV]
title = Loop freedom of the (untimed) AODV routing protocol
author = Timothy Bourke <http://www.tbrk.org>, Peter Höfner <http://www.hoefner-online.de/>
date = 2014-10-23
-topic = Computer Science/Concurrency/Process Calculi
+topic = Computer science/Concurrency/Process calculi
abstract =
<p>
The Ad hoc On-demand Distance Vector (AODV) routing protocol allows
the nodes in a Mobile Ad hoc Network (MANET) or a Wireless Mesh
Network (WMN) to know where to forward data packets. Such a protocol
is ‘loop free’ if it never leads to routing decisions that forward
packets in circles.
</p><p>
This development mechanises an existing pen-and-paper proof of loop
freedom of AODV. The protocol is modelled in the Algebra of
Wireless Networks (AWN), which is the subject of an earlier paper
and AFP mechanization. The proof relies on a novel compositional
approach for lifting invariants to networks of nodes.
</p><p>
We exploit the mechanization to analyse several variants of AODV and
show that Isabelle/HOL can re-establish most proof obligations
automatically and identify exactly the steps that are no longer valid.
</p>
notify = tim@tbrk.org
[Show]
title = Haskell's Show Class in Isabelle/HOL
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <mailto:rene.thiemann@uibk.ac.at>
date = 2014-07-29
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
license = LGPL
abstract =
We implemented a type class for "to-string" functions, similar to
Haskell's Show class. Moreover, we provide instantiations for Isabelle/HOL's
standard types like bool, prod, sum, nats, ints, and rats. It is further
possible to automatically derive show functions for arbitrary user-defined
datatypes, similar to Haskell's "deriving Show".
extra-history =
Change history:
[2015-03-11]: Adapted development to new-style (BNF-based) datatypes.<br>
[2015-04-10]: Moved development for old-style datatypes into subdirectory
"Old_Datatype".<br>
notify = christian.sternagel@uibk.ac.at, rene.thiemann@uibk.ac.at
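A rough Scala analogue of the idea (not the Isabelle formalisation; names are hypothetical): a type class of "to-string" functions with instances for base types and a derived instance for pairs.
    // Sketch: a Show type class with a generic entry point and a pair instance.
    trait Show[A] { def show(a: A): String }
    implicit val showInt: Show[Int] = (a: Int) => a.toString
    implicit val showBool: Show[Boolean] = (b: Boolean) => b.toString
    implicit def showPair[A, B](implicit sa: Show[A], sb: Show[B]): Show[(A, B)] =
      (p: (A, B)) => s"(${sa.show(p._1)}, ${sb.show(p._2)})"
    def show[A](a: A)(implicit s: Show[A]): String = s.show(a)
    println(show((42, true)))  // (42, true)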
[Certification_Monads]
title = Certification Monads
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <mailto:rene.thiemann@uibk.ac.at>
date = 2014-10-03
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
abstract = This entry provides several monads intended for the development of stand-alone certifiers via code generation from Isabelle/HOL. More specifically, there are three flavors of error monads (the sum type, for the case where all monadic functions are total; an instance of the former, the so-called check monad, yielding either success without any further information or an error message; as well as a variant of the sum type that accommodates partial functions by providing an explicit bottom element) and a parser monad built on top. All of these monads are heavily used in the IsaFoR/CeTA project, which thus provides many examples of their usage.
notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at
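A hedged Scala sketch of the "check" flavour only (built on Either; names are hypothetical, not the entry's Isabelle definitions): a check either succeeds without further information or aborts with an error message, and checks compose monadically.
    // Sketch: Right(()) = success, Left(msg) = certification failure.
    def check(cond: Boolean, msg: => String): Either[String, Unit] =
      if (cond) Right(()) else Left(msg)
    // The first failing check aborts the whole certification run.
    def certify(steps: Int, bound: Int): Either[String, Unit] =
      for {
        _ <- check(steps >= 0, s"negative step count: $steps")
        _ <- check(steps <= bound, s"proof exceeds bound $bound")
      } yield ()
    println(certify(3, 10))   // Right(())
    println(certify(20, 10))  // Left(proof exceeds bound 10)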
[CISC-Kernel]
title = Formal Specification of a Generic Separation Kernel
author = Freek Verbeek <mailto:Freek.Verbeek@ou.nl>, Sergey Tverdyshev <mailto:stv@sysgo.com>, Oto Havle <mailto:oha@sysgo.com>, Holger Blasum <mailto:holger.blasum@sysgo.com>, Bruno Langenstein <mailto:langenstein@dfki.de>, Werner Stephan <mailto:stephan@dfki.de>, Yakoub Nemouchi <mailto:nemouchi@lri.fr>, Abderrahmane Feliachi <mailto:abderrahmane.feliachi@lri.fr>, Burkhart Wolff <mailto:wolff@lri.fr>, Julien Schmaltz <mailto:Julien.Schmaltz@ou.nl>
date = 2014-07-18
-topic = Computer Science/Security
+topic = Computer science/Security
abstract =
<p>Intransitive noninterference has been a widely studied topic in the last
few decades. Several well-established methodologies apply interactive
theorem proving to formulate a noninterference theorem over abstract
academic models. In joint work with several industrial and academic partners
throughout Europe, we are helping in the certification process of PikeOS, an
industrial separation kernel developed at SYSGO. In this process,
established theories could not be applied. We present a new generic model of
separation kernels and a new theory of intransitive noninterference. The
model is rich in detail, making it suitable for formal verification of
realistic and industrial systems such as PikeOS. Using a refinement-based
theorem proving approach, we ensure that proofs remain manageable.</p>
<p>
This document corresponds to the deliverable D31.1 of the EURO-MILS
Project <a href="http://www.euromils.eu">http://www.euromils.eu</a>.</p>
notify =
[pGCL]
title = pGCL for Isabelle
author = David Cock <mailto:david.cock@nicta.com.au>
date = 2014-07-13
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract =
<p>pGCL is both a programming language and a specification language that
incorporates both probabilistic and nondeterministic choice, in a unified
manner. Program verification is by refinement or annotation (or both), using
either Hoare triples, or weakest-precondition entailment, in the style of
GCL.</p>
<p> This package provides both a shallow embedding of the language
primitives, and an annotation and refinement framework. The generated
document includes a brief tutorial.</p>
notify =
[Noninterference_CSP]
title = Noninterference Security in Communicating Sequential Processes
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
date = 2014-05-23
-topic = Computer Science/Security
+topic = Computer science/Security
abstract =
<p>
An extension of classical noninterference security for deterministic
state machines, as introduced by Goguen and Meseguer and elegantly
formalized by Rushby, to nondeterministic systems should satisfy two
fundamental requirements: it should be based on a mathematically precise
theory of nondeterminism, and should be equivalent to (or at least not
weaker than) the classical notion in the degenerate deterministic case.
</p>
<p>
This paper proposes a definition of noninterference security applying
to Hoare's Communicating Sequential Processes (CSP) in the general case of
a possibly intransitive noninterference policy, and proves the
equivalence of this security property to classical noninterference
security for processes representing deterministic state machines.
</p>
<p>
Furthermore, McCullough's generalized noninterference security is shown
to be weaker than both the proposed notion of CSP noninterference security
for a generic process, and classical noninterference security for processes
representing deterministic state machines. This renders CSP noninterference
security preferable as an extension of classical noninterference security
to nondeterministic systems.
</p>
notify = pasquale.noce.lavoro@gmail.com
[Floyd_Warshall]
title = The Floyd-Warshall Algorithm for Shortest Paths
author = Simon Wimmer <http://in.tum.de/~wimmers>, Peter Lammich <http://www21.in.tum.de/~lammich>
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
date = 2017-05-08
notify = wimmers@in.tum.de
abstract =
The Floyd-Warshall algorithm [Flo62, Roy59, War62] is a classic
dynamic programming algorithm to compute the length of all shortest
paths between any two vertices in a graph (i.e. to solve the all-pairs
shortest path problem, or APSP for short). Given a representation of
the graph as a matrix of weights M, it computes another matrix M'
which represents a graph with the same path lengths and contains the
length of the shortest path between any two vertices i and j. This is
only possible if the graph does not contain any negative cycles.
However, in this case the Floyd-Warshall algorithm will detect the
situation by calculating a negative diagonal entry. This entry
includes a formalization of the algorithm and of these key properties.
The algorithm is refined to an efficient imperative version using the
Imperative Refinement Framework.
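As a plain functional illustration of the key properties described above (a Scala sketch, not the verified imperative development; helper names are hypothetical):
    // Sketch: d(i)(j) = None means "no path found yet"; the input is expected to
    // carry weight Some(0) on the diagonal. After the triple loop, d(i)(j) holds the
    // shortest path length, and a negative diagonal entry signals a negative cycle.
    def floydWarshall(w: Array[Array[Option[Long]]]): Array[Array[Option[Long]]] = {
      val n = w.length
      val d = Array.tabulate(n, n)((i, j) => w(i)(j))
      for (k <- 0 until n; i <- 0 until n; j <- 0 until n)
        (d(i)(k), d(k)(j)) match {
          case (Some(a), Some(b)) if d(i)(j).forall(a + b < _) => d(i)(j) = Some(a + b)
          case _ => ()
        }
      d
    }
    def hasNegativeCycle(d: Array[Array[Option[Long]]]): Boolean =
      d.indices.exists(i => d(i)(i).exists(_ < 0))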
[Roy_Floyd_Warshall]
title = Transitive closure according to Roy-Floyd-Warshall
author = Makarius Wenzel <>
date = 2014-05-23
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
abstract = This formulation of the Roy-Floyd-Warshall algorithm for the
transitive closure bypasses matrices and arrays, but uses a more direct
mathematical model with adjacency functions for immediate predecessors and
successors. This can be implemented efficiently in functional programming
languages and is particularly adequate for sparse relations.
notify =
[GPU_Kernel_PL]
title = Syntax and semantics of a GPU kernel programming language
author = John Wickerson <http://www.doc.ic.ac.uk/~jpw48>
date = 2014-04-03
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract =
This document accompanies the article "The Design and
Implementation of a Verification Technique for GPU Kernels"
by Adam Betts, Nathan Chong, Alastair F. Donaldson, Jeroen
Ketema, Shaz Qadeer, Paul Thomson and John Wickerson. It
formalises all of the definitions provided in Sections 3
and 4 of the article.
notify =
[AWN]
title = Mechanization of the Algebra for Wireless Networks (AWN)
author = Timothy Bourke <http://www.tbrk.org>
date = 2014-03-08
-topic = Computer Science/Concurrency/Process Calculi
+topic = Computer science/Concurrency/Process calculi
abstract =
<p>
AWN is a process algebra developed for modelling and analysing
protocols for Mobile Ad hoc Networks (MANETs) and Wireless Mesh
Networks (WMNs). AWN models comprise five distinct layers:
sequential processes, local parallel compositions, nodes, partial
networks, and complete networks.</p>
<p>
This development mechanises the original operational semantics of
AWN and introduces a variant 'open' operational semantics that
enables the compositional statement and proof of invariants across
distinct network nodes. It supports labels (for weakening
invariants) and (abstract) data state manipulations. A framework for
compositional invariant proofs is developed, including a tactic
(inv_cterms) for inductive invariant proofs of sequential processes,
lifting rules for the open versions of the higher layers, and a rule
for transferring lifted properties back to the standard semantics. A
notion of 'control terms' reduces proof obligations to the subset of
subterms that act directly (in contrast to operators for combining
terms and joining processes).</p>
notify = tim@tbrk.org
[Selection_Heap_Sort]
title = Verification of Selection and Heap Sort Using Locales
author = Danijela Petrovic <http://www.matf.bg.ac.rs/~danijela>
date = 2014-02-11
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
abstract =
Stepwise program refinement techniques can be used to simplify
program verification. Programs are better understood since their
main properties are clearly stated, and verification of rather
complex algorithms is reduced to proving simple statements
connecting successive program specifications. Additionally, it is
easy to analyze similar algorithms and to compare their properties
within a single formalization. Usually, formal analysis is not done
in an educational setting due to the complexity of verification and a lack
of tools and procedures to make comparison easy. Verification of an
algorithm should not only give a correctness proof, but also a better
understanding of the algorithm. If the verification is based on small
step program refinement, it can become simple enough to be
demonstrated within the university-level computer science
curriculum. In this paper we demonstrate this and give a formal
analysis of two well-known algorithms (Selection Sort and Heap Sort)
using the proof assistant Isabelle/HOL and program refinement
techniques.
notify =
[Real_Impl]
title = Implementing field extensions of the form Q[sqrt(b)]
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>
date = 2014-02-06
license = LGPL
topic = Mathematics/Analysis
abstract =
We apply data refinement to implement the real numbers, where we support all
numbers in the field extension Q[sqrt(b)], i.e., all numbers of the form p +
q * sqrt(b) for rational numbers p and q and some fixed natural number b. To
this end, we also developed algorithms to precisely compute roots of a
rational number, and to perform a factorization of natural numbers which
eliminates duplicate prime factors.
<p>
Our results have been used to certify termination proofs which involve
polynomial interpretations over the reals.
extra-history =
Change history:
[2014-07-11]: Moved NthRoot_Impl to Sqrt-Babylonian.
notify = rene.thiemann@uibk.ac.at
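The representation idea, as a hedged Scala sketch (not the entry's definitions; the entry works with exact rationals for p and q, Double is used here only to keep the sketch short):
    // Sketch: a value p + q * sqrt(b) for a fixed natural number b is stored as the
    // pair (p, q); addition and multiplication stay inside this representation.
    case class QSqrt(p: Double, q: Double, b: Int) {
      def +(that: QSqrt): QSqrt = { require(b == that.b); QSqrt(p + that.p, q + that.q, b) }
      def *(that: QSqrt): QSqrt = {
        require(b == that.b)
        // (p1 + q1*sqrt(b)) * (p2 + q2*sqrt(b)) = (p1*p2 + q1*q2*b) + (p1*q2 + q1*p2)*sqrt(b)
        QSqrt(p * that.p + q * that.q * b, p * that.q + q * that.p, b)
      }
    }
    // Example: (1 + sqrt(2))^2 = 3 + 2*sqrt(2)
    println(QSqrt(1, 1, 2) * QSqrt(1, 1, 2))  // QSqrt(3.0,2.0,2)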
[ShortestPath]
title = An Axiomatic Characterization of the Single-Source Shortest Path Problem
author = Christine Rizkallah <https://www.mpi-inf.mpg.de/~crizkall/>
date = 2013-05-22
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract = This theory is split into two sections. In the first section, we give a formal proof that a well-known axiomatic characterization of the single-source shortest path problem is correct. Namely, we prove that in a directed graph with a non-negative cost function on the edges the single-source shortest path function is the only function that satisfies a set of four axioms. In the second section, we give a formal proof of the correctness of an axiomatic characterization of the single-source shortest path problem for directed graphs with general cost functions. The axioms here are more involved because we have to account for potential negative cycles in the graph. The axioms are summarized in three Isabelle locales.
notify =
[Launchbury]
title = The Correctness of Launchbury's Natural Semantics for Lazy Evaluation
author = Joachim Breitner <http://pp.ipd.kit.edu/~breitner>
date = 2013-01-31
-topic = Computer Science/Programming Languages/Lambda Calculi, Computer Science/Semantics
+topic = Computer science/Programming languages/Lambda calculi, Computer science/Semantics
abstract = In his seminal paper "Natural Semantics for Lazy Evaluation", John Launchbury proves his semantics correct with respect to a denotational semantics, and outlines an adequacy proof. We have formalized both semantics and machine-checked the correctness proof, clarifying some details. Furthermore, we provide a new and more direct adequacy proof that does not require intermediate operational semantics.
extra-history =
Change history:
[2014-05-24]: Added the proof of adequacy, as well as simplified and improved the existing proofs. Adjusted abstract accordingly.
[2015-03-16]: Booleans and if-then-else added to syntax and semantics, making this entry suitable to be used by the entry "Call_Arity".
notify =
[Call_Arity]
title = The Safety of Call Arity
author = Joachim Breitner <http://pp.ipd.kit.edu/~breitner>
date = 2015-02-20
-topic = Computer Science/Programming Languages/Transformations
+topic = Computer science/Programming languages/Transformations
abstract =
We formalize the Call Arity analysis, as implemented in GHC, and prove
both functional correctness and, more interestingly, safety (i.e. the
transformation does not increase allocation).
<p>
We use syntax and the denotational semantics from the entry
"Launchbury", where we formalized Launchbury's natural semantics for
lazy evaluation.
<p>
The functional correctness of Call Arity is proved with regard to that
denotational semantics. The operational properties are shown with
regard to a small-step semantics akin to Sestoft's mark 1 machine,
which we prove to be equivalent to Launchbury's semantics.
<p>
We use Christian Urban's Nominal2 package to define our terms and make
use of Brian Huffman's HOLCF package for the domain-theoretical
aspects of the development.
extra-history =
Change history:
[2015-03-16]: This entry now builds on top of the Launchbury entry,
and the equivalency proof of the natural and the small-step semantics
was added.
notify =
[CCS]
title = CCS in nominal logic
author = Jesper Bengtson <http://www.itu.dk/people/jebe>
date = 2012-05-29
-topic = Computer Science/Concurrency/Process Calculi
+topic = Computer science/Concurrency/Process calculi
abstract = We formalise a large portion of CCS as described in Milner's book 'Communication and Concurrency' using the nominal datatype package in Isabelle. Our results include many of the standard theorems of bisimulation equivalence and congruence, for both weak and strong versions. One main goal of this formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterpart as possible.
<p>
This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.
notify =
[Pi_Calculus]
title = The pi-calculus in nominal logic
author = Jesper Bengtson <http://www.itu.dk/people/jebe>
date = 2012-05-29
-topic = Computer Science/Concurrency/Process Calculi
+topic = Computer science/Concurrency/Process calculi
abstract = We formalise the pi-calculus using the nominal datatype package, based on ideas from the nominal logic by Pitts et al., and demonstrate an implementation in Isabelle/HOL. The purpose is to derive powerful induction rules for the semantics in order to conduct machine checkable proofs, closely following the intuitive arguments found in manual proofs. In this way we have covered many of the standard theorems of bisimulation equivalence and congruence, both late and early, and both strong and weak in a uniform manner. We thus provide one of the most extensive formalisations of the pi-calculus ever done inside a theorem prover.
<p>
A significant gain in our formulation is that agents are identified up to alpha-equivalence, thereby greatly reducing the arguments about bound names. This is a normal strategy for manual proofs about the pi-calculus, but that kind of hand waving has previously been difficult to incorporate smoothly in an interactive theorem prover. We show how the nominal logic formalism and its support in Isabelle accomplishes this and thus significantly reduces the tedium of conducting completely formal proofs. This improves on previous work using weak higher order abstract syntax since we do not need extra assumptions to filter out exotic terms and can keep all arguments within a familiar first-order logic.
<p>
This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.
notify =
[Psi_Calculi]
title = Psi-calculi in Isabelle
author = Jesper Bengtson <http://www.itu.dk/people/jebe>
date = 2012-05-29
-topic = Computer Science/Concurrency/Process Calculi
+topic = Computer science/Concurrency/Process calculi
abstract = Psi-calculi are extensions of the pi-calculus, accommodating arbitrary nominal datatypes to represent not only data but also communication channels, assertions and conditions, giving it an expressive power beyond the applied pi-calculus and the concurrent constraint pi-calculus.
<p>
We have formalised psi-calculi in the interactive theorem prover Isabelle using its nominal datatype package. One distinctive feature is that the framework needs to treat binding sequences, as opposed to single binders, in an efficient way. While different methods for formalising single binder calculi have been proposed over the last decades, representations for such binding sequences are not very well explored.
<p>
The main effort in the formalisation is to keep the machine checked proofs as close to their pen-and-paper counterparts as possible. This includes treating all binding sequences as atomic elements, and creating custom induction and inversion rules that remove the bulk of manual alpha-conversions.
<p>
This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.
notify =
[Encodability_Process_Calculi]
title = Analysing and Comparing Encodability Criteria for Process Calculi
author = Kirstin Peters <mailto:kirstin.peters@tu-berlin.de>, Rob van Glabbeek <http://theory.stanford.edu/~rvg/>
date = 2015-08-10
-topic = Computer Science/Concurrency/Process Calculi
+topic = Computer science/Concurrency/Process calculi
abstract = Encodings or the proof of their absence are the main way to
compare process calculi. To analyse the quality of encodings and to rule out
trivial or meaningless encodings, they are augmented with quality
criteria. There exist many different criteria and different variants
of criteria for reasoning in different settings. This leads to
incomparable results. Moreover it is not always clear whether the criteria
used to obtain a result in a particular setting do indeed fit to this
setting. We show how to formally reason about and compare encodability
criteria by mapping them on requirements on a relation between source and
target terms that is induced by the encoding function. In particular we
analyse the common criteria full abstraction, operational correspondence,
divergence reflection, success sensitiveness, and respect of barbs; e.g. we
analyse the exact nature of the simulation relation (coupled simulation
versus bisimulation) that is induced by different variants of operational
correspondence. This way we reduce the problem of analysing or comparing
encodability criteria to the better understood problem of comparing
relations on processes.
notify = kirstin.peters@tu-berlin.de
[Circus]
title = Isabelle/Circus
author = Abderrahmane Feliachi <mailto:abderrahmane.feliachi@lri.fr>, Burkhart Wolff <mailto:wolff@lri.fr>, Marie-Claude Gaudel <mailto:mcg@lri.fr>
contributors = Makarius Wenzel <mailto:Makarius.wenzel@lri.fr>
date = 2012-05-27
-topic = Computer Science/Concurrency/Process Calculi, Computer Science/System Description Languages
+topic = Computer science/Concurrency/Process calculi, Computer science/System description languages
abstract = The Circus specification language combines elements for complex data and behavior specifications, using an integration of Z and CSP with a refinement calculus. Its semantics is based on Hoare and He's Unifying Theories of Programming (UTP). Isabelle/Circus is a formalization of the UTP and the Circus language in Isabelle/HOL. It contains proof rules and tactic support that allows for proofs of refinement for Circus processes (involving both data and behavioral aspects).
<p>
The Isabelle/Circus environment supports a syntax for the semantic definitions which is close to textbook presentations of Circus. This article contains an extended version of the corresponding VSTTE paper together with the complete formal development of its underlying commented theories.
extra-history =
Change history:
[2014-06-05]: More polishing, shorter proofs, added Circus syntax, added Makarius Wenzel as contributor.
notify =
[Dijkstra_Shortest_Path]
title = Dijkstra's Shortest Path Algorithm
author = Benedikt Nordhoff <mailto:b.n@wwu.de>, Peter Lammich <http://www21.in.tum.de/~lammich>
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
date = 2012-01-30
abstract = We implement and prove correct Dijkstra's algorithm for the
single source shortest path problem, conceived in 1956 by E. Dijkstra.
The algorithm is implemented using the data refinement framework for monadic,
nondeterministic programs. An efficient implementation is derived using data
structures from the Isabelle Collection Framework.
notify = lammich@in.tum.de
[Refine_Monadic]
title = Refinement for Monadic Programs
author = Peter Lammich <http://www21.in.tum.de/~lammich>
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
date = 2012-01-30
abstract = We provide a framework for program and data refinement in Isabelle/HOL.
The framework is based on a nondeterminism-monad with assertions, i.e.,
the monad carries a set of results or an assertion failure.
Recursion is expressed by fixed points. For convenience, we also provide
while and foreach combinators.
<p>
The framework provides tools to automate canonical tasks, such as
verification condition generation, finding appropriate data refinement relations,
and refining an executable program to a form that is accepted by the
Isabelle/HOL code generator.
<p>
This submission comes with a collection of examples and a user-guide,
illustrating the usage of the framework.
extra-history =
Change history:
[2012-04-23] Introduced ordered FOREACH loops<br>
[2012-06] New features:
REC_rule_arb and RECT_rule_arb allow for generalizing over variables.
prepare_code_thms - command extracts code equations for recursion combinators.<br>
[2012-07] New example: Nested DFS for emptiness check of Buchi-automata with witness.<br>
New feature:
fo_rule method to apply resolution using first-order matching. Useful for arg_conf, fun_cong.<br>
[2012-08] Adaptation to ICF v2.<br>
[2012-10-05] Adaptations to include support for Automatic Refinement Framework.<br>
[2013-09] This entry now depends on Automatic Refinement<br>
[2014-06] New feature: vc_solve method to solve verification conditions.
Maintenance changes: VCG-rules for nfoldli, improved setup for FOREACH-loops.<br>
[2014-07] Now defining recursion via flat domain. Dropped many single-valued prerequisites.
Changed notion of data refinement. In single-valued case, this matches the old notion.
In non-single valued case, the new notion allows for more convenient rules.
In particular, the new definitions allow for projecting away ghost variables as a refinement step.<br>
[2014-11] New features: le-or-fail relation (leof), modular reasoning about loop invariants.
notify = lammich@in.tum.de
[Refine_Imperative_HOL]
title = The Imperative Refinement Framework
author = Peter Lammich <http://www21.in.tum.de/~lammich>
notify = lammich@in.tum.de
date = 2016-08-08
-topic = Computer Science/Programming Languages/Transformations,Computer Science/Data Structures
+topic = Computer science/Programming languages/Transformations,Computer science/Data structures
abstract =
We present the Imperative Refinement Framework (IRF), a tool that
supports a stepwise refinement based approach to imperative programs.
This entry is based on the material we presented in [ITP-2015,
CPP-2016]. It uses the Monadic Refinement Framework as a frontend for
the specification of the abstract programs, and Imperative/HOL as a
backend to generate executable imperative programs. The IRF comes
with tool support to synthesize imperative programs from more
abstract, functional ones, using efficient imperative implementations
for the abstract data structures. This entry also includes the
Imperative Isabelle Collection Framework (IICF), which provides a
library of re-usable imperative collection data structures. Moreover,
this entry contains a quickstart guide and a reference manual, which
provide an introduction to using the IRF for Isabelle/HOL experts. It
also provides a collection of (partly commented) practical examples,
some highlights being Dijkstra's Algorithm, Nested-DFS, and a generic
worklist algorithm with subsumption. Finally, this entry contains
benchmark scripts that compare the runtime of some examples against
reference implementations of the algorithms in Java and C++.
[ITP-2015] Peter Lammich: Refinement to Imperative/HOL. ITP 2015:
253--269 [CPP-2016] Peter Lammich: Refinement based verification of
imperative data structures. CPP 2016: 27--36
[Automatic_Refinement]
title = Automatic Data Refinement
author = Peter Lammich <mailto:lammich@in.tum.de>
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
date = 2013-10-02
abstract = We present the Autoref tool for Isabelle/HOL, which automatically
refines algorithms specified over abstract concepts like maps
and sets to algorithms over concrete implementations like red-black-trees,
and produces a refinement theorem. It is based on ideas borrowed from
relational parametricity due to Reynolds and Wadler.
The tool allows for rapid prototyping of verified, executable algorithms.
Moreover, it can be configured to fine-tune the result to the user's needs.
Our tool is able to automatically instantiate generic algorithms, which
greatly simplifies the implementation of executable data structures.
<p>
This AFP-entry provides the basic tool, which is then used by the
Refinement and Collection Framework to provide automatic data refinement for
the nondeterminism monad and various collection data structures.
notify = lammich@in.tum.de
[EdmondsKarp_Maxflow]
title = Formalizing the Edmonds-Karp Algorithm
author = Peter Lammich <mailto:lammich@in.tum.de>, S. Reza Sefidgar<>
notify = lammich@in.tum.de
date = 2016-08-12
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
abstract =
We present a formalization of the Ford-Fulkerson method for computing
the maximum flow in a network. Our formal proof closely follows a
standard textbook proof, and is accessible even without being an
expert in Isabelle/HOL, the interactive theorem prover used for the
formalization. We then use stepwise refinement to obtain the
Edmonds-Karp algorithm, and formally prove a bound on its complexity.
Further refinement yields a verified implementation, whose execution
time compares well to an unverified reference implementation in Java.
This entry is based on our ITP-2016 paper with the same title.
[VerifyThis2018]
title = VerifyThis 2018 - Polished Isabelle Solutions
author = Peter Lammich <http://www21.in.tum.de/~lammich>, Simon Wimmer <http://in.tum.de/~wimmers>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2018-04-27
notify = lammich@in.tum.de
abstract =
<a
href="http://www.pm.inf.ethz.ch/research/verifythis.html">VerifyThis
2018</a> was a program verification competition associated with
ETAPS 2018. It was the 7th event in the VerifyThis competition series.
In this entry, we present polished and completed versions of our
solutions that we created during the competition.
[PseudoHoops]
title = Pseudo Hoops
author = George Georgescu <>, Laurentiu Leustean <>, Viorel Preoteasa <http://users.abo.fi/vpreotea/>
topic = Mathematics/Algebra
date = 2011-09-22
abstract = Pseudo-hoops are algebraic structures introduced by B. Bosbach under the name of complementary semigroups. In this formalization we prove some properties of pseudo-hoops and we define the basic concepts of filter and normal filter. The lattice of normal filters is isomorphic with the lattice of congruences of a pseudo-hoop. We also study some important classes of pseudo-hoops. Bounded Wajsberg pseudo-hoops are equivalent to pseudo-Wajsberg algebras and bounded basic pseudo-hoops are equivalent to pseudo-BL algebras. Some examples of pseudo-hoops are given in the last section of the formalization.
notify = viorel.preoteasa@aalto.fi
[MonoBoolTranAlgebra]
title = Algebra of Monotonic Boolean Transformers
author = Viorel Preoteasa <http://users.abo.fi/vpreotea/>
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
date = 2011-09-22
abstract = Algebras of imperative programming languages have been successful in reasoning about programs. In general an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We formalize here an algebra which can be used to model total correctness, refinement, demonic and angelic choice. The basic model of this algebra are monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself).
notify = viorel.preoteasa@aalto.fi
[LatticeProperties]
title = Lattice Properties
author = Viorel Preoteasa <http://users.abo.fi/vpreotea/>
topic = Mathematics/Order
date = 2011-09-22
abstract = This formalization introduces and collects some algebraic structures based on lattices and complete lattices for use in other developments. The structures introduced are modular lattices and lattice-ordered groups. In addition to the results proved for the new lattices, this formalization also introduces theorems about lattices and complete lattices in general.
extra-history =
Change history:
[2012-01-05]: Removed the theory about distributive complete lattices which is in the standard library now.
Added a theory about well founded and transitive relations and a result about fixpoints in complete lattices and well founded relations.
Moved the results about conjunctive and disjunctive functions to a new theory.
Removed the syntactic classes for inf and sup which are in the standard library now.
notify = viorel.preoteasa@aalto.fi
[Impossible_Geometry]
title = Proving the Impossibility of Trisecting an Angle and Doubling the Cube
author = Ralph Romanos <mailto:ralph.romanos@student.ecp.fr>, Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
topic = Mathematics/Algebra, Mathematics/Geometry
date = 2012-08-05
abstract = Squaring the circle, doubling the cube and trisecting an angle, using a compass and straightedge alone, are classic unsolved problems first posed by the ancient Greeks. All three problems were proved to be impossible in the 19th century. The following document presents the proof of the impossibility of solving the latter two problems using Isabelle/HOL, following a proof by Carrega. The proof uses elementary methods: no Galois theory or field extensions. The set of points constructible using a compass and straightedge is defined inductively. Radical expressions, which involve only square roots and arithmetic of rational numbers, are defined, and we find that all constructive points have radical coordinates. Finally, doubling the cube and trisecting certain angles requires solving certain cubic equations that can be proved to have no rational roots. The Isabelle proofs require a great many detailed calculations.
notify = ralph.romanos@student.ecp.fr, lp15@cam.ac.uk
[IP_Addresses]
title = IP Addresses
author = Cornelius Diekmann <http://net.in.tum.de/~diekmann>, Julius Michaelis <http://liftm.de>, Lars Hupel <https://www21.in.tum.de/~hupel/>
notify = diekmann@net.in.tum.de
date = 2016-06-28
-topic = Computer Science/Networks
+topic = Computer science/Networks
abstract =
This entry contains a definition of IP addresses and a library to work
with them. Generic IP addresses are modeled as machine words of
arbitrary length. Derived from this generic definition, IPv4 addresses
are 32-bit machine words and IPv6 addresses are 128-bit words.
Additionally, IPv4 addresses can be represented in dot-decimal
notation and IPv6 addresses in (compressed) colon-separated notation.
We support toString functions and parsers for both notations. Sets of
IP addresses can be represented with a netmask (e.g.
192.168.0.0/255.255.0.0) or in CIDR notation (e.g. 192.168.0.0/16). To
provide executable code for set operations on IP address ranges, the
library includes a datatype to work on arbitrary intervals of machine
words.
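For illustration (a Scala sketch with hypothetical helpers, not the entry's word-based definitions): an IPv4 address as an unsigned 32-bit value, dot-decimal parsing, and a CIDR membership test.
    // Sketch: parse dot-decimal notation into a 32-bit value held in a Long.
    def parseIPv4(s: String): Long =
      s.split('.').map(_.toLong).reduce((acc, octet) => (acc << 8) | octet)
    // An address is in network/prefixLen iff it agrees on the first prefixLen bits.
    def inCidr(addr: Long, network: Long, prefixLen: Int): Boolean = {
      val mask = if (prefixLen == 0) 0L else (0xFFFFFFFFL << (32 - prefixLen)) & 0xFFFFFFFFL
      (addr & mask) == (network & mask)
    }
    println(inCidr(parseIPv4("192.168.13.37"), parseIPv4("192.168.0.0"), 16))  // true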
[Simple_Firewall]
title = Simple Firewall
author = Cornelius Diekmann <http://net.in.tum.de/~diekmann>, Julius Michaelis <http://liftm.de>, Maximilian Haslbeck<http://cl-informatik.uibk.ac.at/users/mhaslbeck//>
notify = diekmann@net.in.tum.de, max.haslbeck@gmx.de
date = 2016-08-24
-topic = Computer Science/Networks
+topic = Computer science/Networks
abstract =
We present a simple model of a firewall. The firewall can accept or
drop a packet and can match on interfaces, IP addresses, protocol, and
ports. It was designed to feature nice mathematical properties: The
type of match expressions was carefully crafted such that the
conjunction of two match expressions is only one match expression.
This model is too simplistic to mirror all aspects of the real world.
In the upcoming entry "Iptables Semantics", we will translate the
Linux firewall iptables to this model. For a fixed service (e.g. ssh,
http), we provide an algorithm to compute an overview of the
firewall's filtering behavior. The algorithm computes minimal service
matrices, i.e. graphs which partition the complete IPv4 and IPv6
address space and visualize the allowed accesses between partitions.
For a detailed description, see
<a href="http://dl.ifip.org/db/conf/networking/networking2016/1570232858.pdf">Verified iptables Firewall
Analysis</a>, IFIP Networking 2016.
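A much-simplified Scala sketch of the closure property mentioned above (hypothetical fields, not the entry's match type): if every field constraint is an interval or a set, the conjunction of two match expressions is again a single match expression, computed field-wise.
    // Sketch: inclusive port ranges and a protocol set; empty ranges/sets simply match nothing.
    case class FwMatch(srcPorts: Range, dstPorts: Range, protocols: Set[Int])
    def conj(a: FwMatch, b: FwMatch): FwMatch = FwMatch(
      math.max(a.srcPorts.start, b.srcPorts.start) to math.min(a.srcPorts.end, b.srcPorts.end),
      math.max(a.dstPorts.start, b.dstPorts.start) to math.min(a.dstPorts.end, b.dstPorts.end),
      a.protocols intersect b.protocols
    )
    def matches(m: FwMatch, srcPort: Int, dstPort: Int, protocol: Int): Boolean =
      m.srcPorts.contains(srcPort) && m.dstPorts.contains(dstPort) && m.protocols.contains(protocol)
    // Example: "any TCP" conjoined with "TCP to port 22" is still one FwMatch value.
    println(conj(FwMatch(0 to 65535, 0 to 65535, Set(6)), FwMatch(1024 to 65535, 22 to 22, Set(6))))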
[Iptables_Semantics]
title = Iptables Semantics
author = Cornelius Diekmann <http://net.in.tum.de/~diekmann>, Lars Hupel <https://www21.in.tum.de/~hupel/>
notify = diekmann@net.in.tum.de, hupel@in.tum.de
date = 2016-09-09
-topic = Computer Science/Networks
+topic = Computer science/Networks
abstract =
We present a big step semantics of the filtering behavior of the
Linux/netfilter iptables firewall. We provide algorithms to simplify
complex iptables rulesets to a simple firewall model (cf. AFP entry <a
href="https://www.isa-afp.org/entries/Simple_Firewall.html">Simple_Firewall</a>)
and to verify spoofing protection of a ruleset.
Internally, we embed our semantics into ternary logic, ultimately
supporting every iptables match condition by abstracting over
unknowns. Using this AFP entry and all entries it depends on, we
created an easy-to-use, stand-alone Haskell tool called <a
href="http://iptables.isabelle.systems">fffuu</a>. The tool does not
require any input &mdash;except for the <tt>iptables-save</tt> dump of
the analyzed firewall&mdash; and presents interesting results about
the user's ruleset. Real-world firewall errors have been uncovered, and
the correctness of rulesets has been proved, with the help of
our tool.
[Routing]
title = Routing
author = Julius Michaelis <http://liftm.de>, Cornelius Diekmann <http://net.in.tum.de/~diekmann>
notify = afp@liftm.de
date = 2016-08-31
-topic = Computer Science/Networks
+topic = Computer science/Networks
abstract =
This entry contains definitions for routing with routing
tables/longest prefix matching. A routing table entry is modelled as
a record of a prefix match, a metric, an output port, and an optional
next hop. A routing table is a list of entries, sorted by prefix
length and metric. Additionally, a parser and serializer for the
output of the ip-route command, a function to create a relation from
output port to corresponding destination IP space, and a model of a
Linux-style router are included.
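For illustration (a Scala sketch with hypothetical types, not the entry's definitions): a routing table entry as a record and longest prefix matching over a table sorted by descending prefix length and metric, so the first matching entry wins.
    // Sketch: IPv4 destinations and networks are 32-bit values held in a Long.
    case class RouteEntry(network: Long, prefixLen: Int, metric: Int,
                          outPort: String, nextHop: Option[Long])
    def prefixMatches(e: RouteEntry, dst: Long): Boolean = {
      val mask = if (e.prefixLen == 0) 0L else (0xFFFFFFFFL << (32 - e.prefixLen)) & 0xFFFFFFFFL
      (dst & mask) == (e.network & mask)
    }
    // Assumes the table is already sorted by (descending prefix length, metric).
    def route(table: List[RouteEntry], dst: Long): Option[RouteEntry] =
      table.find(prefixMatches(_, dst))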
[KBPs]
title = Knowledge-based programs
author = Peter Gammie <http://peteg.org>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2011-05-17
abstract = Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour. Here we present a general scheme for compiling KBPs to executable automata with a proof of correctness in Isabelle/HOL. We develop the algorithm top-down, using Isabelle's locale mechanism to structure these proofs, and show that two classic examples can be synthesised using Isabelle's code generator.
extra-history =
Change history:
[2012-03-06]: Add some more views and revive the code generation.
notify = kleing@cse.unsw.edu.au
[Tarskis_Geometry]
title = The independence of Tarski's Euclidean axiom
author = T. J. M. Makarios <mailto:tjm1983@gmail.com>
topic = Mathematics/Geometry
date = 2012-10-30
abstract =
Tarski's axioms of plane geometry are formalized and, using the standard
real Cartesian model, shown to be consistent. A substantial theory of
the projective plane is developed. Building on this theory, the
Klein-Beltrami model of the hyperbolic plane is defined and shown to
satisfy all of Tarski's axioms except his Euclidean axiom; thus Tarski's
Euclidean axiom is shown to be independent of his other axioms of plane
geometry.
<p>
An earlier version of this work was the subject of the author's
<a href="http://researcharchive.vuw.ac.nz/handle/10063/2315">MSc thesis</a>,
which contains natural-language explanations of some of the
more interesting proofs.
notify = tjm1983@gmail.com
[General-Triangle]
title = The General Triangle Is Unique
author = Joachim Breitner <mailto:mail@joachim-breitner.de>
topic = Mathematics/Geometry
date = 2011-04-01
abstract = Some acute-angled triangles are special, e.g. right-angled or isosceles triangles. Some are not of this kind, but, without measuring angles, look as if they were. In that sense, there is exactly one general triangle. This well-known fact is proven here formally.
notify = mail@joachim-breitner.de
[LightweightJava]
title = Lightweight Java
author = Rok Strniša <http://rok.strnisa.com/lj/>, Matthew Parkinson <http://research.microsoft.com/people/mattpark/>
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
date = 2011-02-07
abstract = A fully-formalized and extensible minimal imperative fragment of Java.
notify = rok@strnisa.com
[Lower_Semicontinuous]
title = Lower Semicontinuous Functions
author = Bogdan Grechuk <mailto:grechukbogdan@yandex.ru>
topic = Mathematics/Analysis
date = 2011-01-08
abstract = We define the notions of lower and upper semicontinuity for functions from a metric space to the extended real line. We prove that a function is both lower and upper semicontinuous if and only if it is continuous. We also give several equivalent characterizations of lower semicontinuity. In particular, we prove that a function is lower semicontinuous if and only if its epigraph is a closed set. Also, we introduce the notion of the lower semicontinuous hull of an arbitrary function and prove its basic properties.
notify = hoelzl@in.tum.de
[RIPEMD-160-SPARK]
title = RIPEMD-160
author = Fabian Immler <mailto:immler@in.tum.de>
-topic = Computer Science/Programming Languages/Static Analysis
+topic = Computer science/Programming languages/Static analysis
date = 2011-01-10
abstract = This work presents a verification of an implementation in SPARK/ADA of the cryptographic hash-function RIPEMD-160. A functional specification of RIPEMD-160 is given in Isabelle/HOL. Proofs for the verification conditions generated by the static-analysis toolset of SPARK certify the functional correctness of the implementation.
extra-history =
Change history:
[2015-11-09]: Entry is now obsolete, moved to Isabelle distribution.
notify = immler@in.tum.de
[Regular-Sets]
title = Regular Sets and Expressions
author = Alexander Krauss <http://www.in.tum.de/~krauss>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
contributors = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2010-05-12
abstract = This is a library of constructions on regular expressions and languages. It provides the operations of concatenation, Kleene star and derivative on languages. Regular expressions and their meaning are defined. An executable equivalence checker for regular expressions is verified; it does not need automata but works directly on regular expressions. <i>By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, concatenation and (reflexive) transitive closure is obtained.</i> <P> Extended regular expressions with complement and intersection are also defined and an equivalence checker is provided.
extra-history =
Change history:
[2011-08-26]: Christian Urban added a theory about derivatives and partial derivatives of regular expressions<br>
[2012-05-10]: Tobias Nipkow added extended regular expressions<br>
[2012-05-10]: Tobias Nipkow added equivalence checking with partial derivatives
notify = nipkow@in.tum.de, krauss@in.tum.de, christian.urban@kcl.ac.uk
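The derivative operation at the heart of this entry is easy to illustrate outside Isabelle. The following is a minimal Haskell sketch (ad hoc names, not code from the entry): nullability plus Brzozowski derivatives give a matcher, and iterating derivatives is also what drives the verified equivalence checker.

    -- Minimal illustration of Brzozowski derivatives (sketch only).
    data RE c = Zero | One | Atom c | Plus (RE c) (RE c)
              | Times (RE c) (RE c) | Star (RE c)

    -- Does the language of the expression contain the empty word?
    nullable :: RE c -> Bool
    nullable Zero        = False
    nullable One         = True
    nullable (Atom _)    = False
    nullable (Plus r s)  = nullable r || nullable s
    nullable (Times r s) = nullable r && nullable s
    nullable (Star _)    = True

    -- Derivative of an expression with respect to the symbol a.
    deriv :: Eq c => c -> RE c -> RE c
    deriv _ Zero        = Zero
    deriv _ One         = Zero
    deriv a (Atom b)    = if a == b then One else Zero
    deriv a (Plus r s)  = Plus (deriv a r) (deriv a s)
    deriv a (Times r s)
      | nullable r      = Plus (Times (deriv a r) s) (deriv a s)
      | otherwise       = Times (deriv a r) s
    deriv a (Star r)    = Times (deriv a r) (Star r)

    -- A word is matched iff the iterated derivative is nullable.
    matches :: Eq c => RE c -> [c] -> Bool
    matches r = nullable . foldl (flip deriv) r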
[Regex_Equivalence]
title = Unified Decision Procedures for Regular Expression Equivalence
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>, Dmitriy Traytel <mailto:traytel@in.tum.de>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2014-01-30
abstract =
We formalize a unified framework for verified decision procedures for regular
expression equivalence. Five recently published formalizations of such
decision procedures (three based on derivatives, two on marked regular
expressions) can be obtained as instances of the framework. We discover that
the two approaches based on marked regular expressions, which were previously
thought to be the same, are different, and one seems to produce uniformly
smaller automata. The common framework makes it possible to compare the
performance of the different decision procedures in a meaningful way.
<a href="http://www21.in.tum.de/~nipkow/pubs/itp14.html">
The formalization is described in a paper of the same name presented at
Interactive Theorem Proving 2014</a>.
notify = nipkow@in.tum.de, traytel@in.tum.de
[MSO_Regex_Equivalence]
title = Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions
author = Dmitriy Traytel <mailto:traytel@in.tum.de>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
-topic = Computer Science/Automata and Formal Languages, Logic/General logic/Decidability of theories
+topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories
date = 2014-06-12
abstract =
Monadic second-order logic on finite words (MSO) is a decidable yet
expressive logic into which many decision problems can be encoded. Since MSO
formulas correspond to regular languages, equivalence of MSO formulas can be
reduced to the equivalence of some regular structures (e.g. automata). We
verify an executable decision procedure for MSO formulas that is not based
on automata but on regular expressions.
<p>
Decision procedures for regular expression equivalence have been formalized
before, usually based on Brzozowski derivatives. Yet, for a straightforward
embedding of MSO formulas into regular expressions an extension of regular
expressions with a projection operation is required. We prove total
correctness and completeness of an equivalence checker for regular
expressions extended in that way. We also define a language-preserving
translation of formulas into regular expressions with respect to two
different semantics of MSO.
<p>
The formalization is described in this <a href="http://www21.in.tum.de/~nipkow/pubs/icfp13.html">ICFP 2013 functional pearl</a>.
notify = traytel@in.tum.de, nipkow@in.tum.de
[Formula_Derivatives]
title = Derivatives of Logical Formulas
author = Dmitriy Traytel <http://www21.in.tum.de/~traytel>
-topic = Computer Science/Automata and Formal Languages, Logic/General logic/Decidability of theories
+topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories
date = 2015-05-28
abstract =
We formalize new decision procedures for WS1S, M2L(Str), and Presburger
Arithmetics. Formulas of these logics denote regular languages. Unlike
traditional decision procedures, we do <em>not</em> translate formulas into automata
(nor into regular expressions), at least not explicitly. Instead we devise
notions of derivatives (inspired by Brzozowski derivatives for regular
expressions) that operate on formulas directly and compute a syntactic
bisimulation using these derivatives. The treatment of Boolean connectives and
quantifiers is uniform for all mentioned logics and is abstracted into a
locale. This locale is then instantiated by different atomic formulas and their
derivatives (which may differ even for the same logic under different encodings
of interpretations as formal words).
<p>
The WS1S instance is described in the draft paper <a
href="https://people.inf.ethz.ch/trayteld/papers/csl15-ws1s_derivatives/index.html">A
Coalgebraic Decision Procedure for WS1S</a> by the author.
notify = traytel@in.tum.de
[Myhill-Nerode]
title = The Myhill-Nerode Theorem Based on Regular Expressions
author = Chunhan Wu <>, Xingyuan Zhang <>, Christian Urban <http://www.inf.kcl.ac.uk/staff/urbanc/>
contributors = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2011-08-26
abstract = There are many proofs of the Myhill-Nerode theorem using automata. In this library we give a proof entirely based on regular expressions, since regularity of languages can be conveniently defined using regular expressions (it is more painful in HOL to define regularity in terms of automata). We prove the first direction of the Myhill-Nerode theorem by solving equational systems that involve regular expressions. For the second direction we give two proofs: one using tagging-functions and another using partial derivatives. We also establish various closure properties of regular languages. Most details of the theories are described in our ITP 2011 paper.
notify = christian.urban@kcl.ac.uk
[Universal_Turing_Machine]
title = Universal Turing Machine
author = Jian Xu<>, Xingyuan Zhang<>, Christian Urban <https://nms.kcl.ac.uk/christian.urban/>, Sebastiaan J. C. Joosten <http://sjcjoosten.nl/>
-topic = Logic/Computability, Computer Science/Automata and Formal Languages
+topic = Logic/Computability, Computer science/Automata and formal languages
date = 2019-02-08
notify = sjcjoosten@gmail.com, christian.urban@kcl.ac.uk
abstract =
We formalise results from computability theory: recursive functions,
undecidability of the halting problem, and the existence of a
universal Turing machine. This formalisation is the AFP entry
corresponding to the paper Mechanising Turing Machines and Computability Theory
in Isabelle/HOL, ITP 2013.
[CYK]
title = A formalisation of the Cocke-Younger-Kasami algorithm
author = Maksym Bortin <mailto:Maksym.Bortin@nicta.com.au>
date = 2016-04-27
-topic = Computer Science/Algorithms, Computer Science/Automata and Formal Languages
+topic = Computer science/Algorithms, Computer science/Automata and formal languages
abstract =
The theory provides a formalisation of the Cocke-Younger-Kasami
algorithm (CYK for short), an approach to solving the word problem
for context-free languages. CYK decides if a word is in the
language generated by a context-free grammar in Chomsky normal form.
The formalized algorithm is executable.
notify = maksym.bortin@nicta.com.au
[Boolean_Expression_Checkers]
title = Boolean Expression Checkers
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2014-06-08
-topic = Computer Science/Algorithms, Logic/General logic/Mechanization of proofs
+topic = Computer science/Algorithms, Logic/General logic/Mechanization of proofs
abstract =
This entry provides executable checkers for the following properties of
boolean expressions: satisfiability, tautology and equivalence. Internally,
the checkers operate on binary decision trees and are reasonably efficient
(for purely functional algorithms).
extra-history =
Change history: [2015-09-23]: Salomon Sickert added an interface that does not require the usage of the Boolean formula datatype. Furthermore the general Mapping type is used instead of an association list.
notify = nipkow@in.tum.de
[Presburger-Automata]
title = Formalizing the Logic-Automaton Connection
author = Stefan Berghofer <http://www.in.tum.de/~berghofe>, Markus Reiter <>
date = 2009-12-03
-topic = Computer Science/Automata and Formal Languages, Logic/General logic/Decidability of theories
+topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories
abstract = This work presents a formalization of a library for automata on bit strings. It forms the basis of a reflection-based decision procedure for Presburger arithmetic, which is efficiently executable thanks to Isabelle's code generator. With this work, we therefore provide a mechanized proof of a well-known connection between logic and automata theory. The formalization is also described in a publication [TPHOLs 2009].
notify = berghofe@in.tum.de
[Functional-Automata]
title = Functional Automata
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2004-03-30
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract = This theory defines deterministic and nondeterministic automata in a functional representation: the transition function/relation and the finality predicate are just functions. Hence the state space may be infinite. It is shown how to convert regular expressions into such automata. A scanner (generator) is implemented with the help of functional automata: the scanner chops the input up into longest recognized substrings. Finally we also show how to convert a certain subclass of functional automata (essentially the finite deterministic ones) into regular sets.
notify = nipkow@in.tum.de
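The functional representation described above is compact; a rough Haskell analogue (ad hoc names, possibly infinite state space, not the entry's definitions) could look like this:

    -- A deterministic automaton as a pair of functions: next-state
    -- function and finality predicate (sketch only).
    data DA st sym = DA { next :: st -> sym -> st, final :: st -> Bool }

    -- Acceptance: fold the transition function over the input word.
    accepts :: DA st sym -> st -> [sym] -> Bool
    accepts da s = final da . foldl (next da) s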
[Statecharts]
title = Formalizing Statecharts using Hierarchical Automata
author = Steffen Helke <mailto:helke@cs.tu-berlin.de>, Florian Kammüller <mailto:flokam@cs.tu-berlin.de>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2010-08-08
abstract = We formalize in Isabelle/HOL the abstract syntax and a synchronous
step semantics for the specification language Statecharts. The formalization
is based on Hierarchical Automata which allow a structural decomposition of
Statecharts into Sequential Automata. To support the composition of
Statecharts, we introduce calculating operators to construct a Hierarchical
Automaton in a stepwise manner. Furthermore, we present a complete semantics
of Statecharts including a theory of data spaces, which enables the modelling
of racing effects. We also adapt CTL for
Statecharts to build a bridge for future combinations with model
checking. However, the main motivation of this work is to provide a sound and
complete basis for reasoning on Statecharts. As a central meta theorem we
prove that the well-formedness of a Statechart is preserved by the semantics.
notify = nipkow@in.tum.de
[Stuttering_Equivalence]
title = Stuttering Equivalence
author = Stephan Merz <http://www.loria.fr/~merz>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2012-05-07
abstract = <p>Two omega-sequences are stuttering equivalent if they differ only by finite repetitions of elements. Stuttering equivalence is a fundamental concept in the theory of concurrent and distributed systems. Notably, Lamport argues that refinement notions for such systems should be insensitive to finite stuttering. Peled and Wilke showed that all PLTL (propositional linear-time temporal logic) properties that are insensitive to stuttering equivalence can be expressed without the next-time operator. Stuttering equivalence is also important for certain verification techniques such as partial-order reduction for model checking.</p> <p>We formalize stuttering equivalence in Isabelle/HOL. Our development relies on the notion of stuttering sampling functions that may skip blocks of identical sequence elements. We also encode PLTL and prove the theorem due to Peled and Wilke.</p>
extra-history =
Change history:
[2013-01-31]: Added encoding of PLTL and proved Peled and Wilke's theorem. Adjusted abstract accordingly.
notify = Stephan.Merz@loria.fr
[Coinductive_Languages]
title = A Codatatype of Formal Languages
author = Dmitriy Traytel <mailto:traytel@in.tum.de>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2013-11-15
abstract = <p>We define formal languages as a codatatype of infinite trees
branching over the alphabet. Each node in such a tree indicates whether the
path to this node constitutes a word inside or outside of the language. This
codatatype is isomorphic to the set-of-lists representation of languages,
but caters for definitions by corecursion and proofs by coinduction.</p>
<p>Regular operations on languages are then defined by primitive corecursion.
A difficulty arises here, since the standard definitions of concatenation and
iteration from the coalgebraic literature are not primitively
corecursive: they require guardedness up-to union/concatenation.
Without support for up-to corecursion, these operations must be defined as a
composition of primitive ones (and proved equal to the standard
definitions). As an exercise in coinduction we also prove the axioms of
Kleene algebra for the defined regular operations.</p>
<p>Furthermore, a language for context-free grammars given by productions in
Greibach normal form and an initial nonterminal is constructed by primitive
corecursion, yielding an executable decision procedure for the word problem
without further ado.</p>
notify = traytel@in.tum.de
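The codatatype view can be mimicked in a lazy functional language. A hedged Haskell sketch (ad hoc names; branching over the alphabet is encoded as a function argument rather than as tree children): membership follows the word through the tree, and union is an instance of the primitive corecursion mentioned above.

    -- A language as an infinite tree: acceptance of the empty word plus
    -- one subtree per alphabet symbol.
    data Lang c = Lang { eps :: Bool, delta :: c -> Lang c }

    member :: [c] -> Lang c -> Bool
    member []       l = eps l
    member (c : cs) l = member cs (delta l c)

    -- Union by primitive corecursion: productive because each step
    -- immediately produces the next Lang constructor.
    union :: Lang c -> Lang c -> Lang c
    union l1 l2 = Lang (eps l1 || eps l2)
                       (\c -> union (delta l1 c) (delta l2 c))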
[Tree-Automata]
title = Tree Automata
author = Peter Lammich <http://www21.in.tum.de/~lammich>
date = 2009-11-25
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract = This work presents a machine-checked tree automata library for Standard-ML, OCaml and Haskell. The algorithms are efficient by using appropriate data structures like RB-trees. The available algorithms for non-deterministic automata include membership query, reduction, intersection, union, and emptiness check with computation of a witness for non-emptiness. The executable algorithms are derived from less-concrete, non-executable algorithms using data-refinement techniques. The concrete data structures are from the Isabelle Collections Framework. Moreover, this work contains a formalization of the class of tree-regular languages and its closure properties under set operations.
notify = peter.lammich@uni-muenster.de, nipkow@in.tum.de
[Depth-First-Search]
title = Depth First Search
author = Toshiaki Nishihara <>, Yasuhiko Minamide <>
date = 2004-06-24
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
abstract = Depth-first search of a graph is formalized with recdef. It is shown that it visits all of the reachable nodes from a given list of nodes. Executable ML code of depth-first search is obtained using the code generation feature of Isabelle/HOL.
notify = lp15@cam.ac.uk, krauss@in.tum.de
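As a point of comparison only (this is not the recdef formalisation itself), the property proved here, visiting exactly the nodes reachable from a given list of start nodes, corresponds to a work-list DFS such as:

    import qualified Data.Set as Set

    -- Depth-first search over a successor function; returns the set of
    -- nodes reachable from the given start nodes (illustrative sketch).
    dfs :: Ord a => (a -> [a]) -> [a] -> Set.Set a
    dfs succs = go Set.empty
      where
        go visited []       = visited
        go visited (x : xs)
          | x `Set.member` visited = go visited xs
          | otherwise              = go (Set.insert x visited) (succs x ++ xs)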
[FFT]
title = Fast Fourier Transform
author = Clemens Ballarin <http://www21.in.tum.de/~ballarin/>
date = 2005-10-12
-topic = Computer Science/Algorithms/Mathematical
+topic = Computer science/Algorithms/Mathematical
abstract = We formalise a functional implementation of the FFT algorithm over the complex numbers, and its inverse. Both are shown equivalent to the usual definitions of these operations through Vandermonde matrices. They are also shown to be inverse to each other; more precisely, composition of the inverse and the transformation yields the identity up to a scalar.
notify = ballarin@in.tum.de
[Gauss-Jordan-Elim-Fun]
title = Gauss-Jordan Elimination for Matrices Represented as Functions
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2011-08-19
-topic = Computer Science/Algorithms/Mathematical, Mathematics/Algebra
+topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra
abstract = This theory provides a compact formulation of Gauss-Jordan elimination for matrices represented as functions. Its distinctive feature is succinctness. It is not meant for large computations.
notify = nipkow@in.tum.de
[UpDown_Scheme]
title = Verification of the UpDown Scheme
author = Johannes Hölzl <mailto:hoelzl@in.tum.de>
date = 2015-01-28
-topic = Computer Science/Algorithms/Mathematical
+topic = Computer science/Algorithms/Mathematical
abstract =
The UpDown scheme is a recursive scheme used to compute the stiffness matrix
on a special form of sparse grids. Usually, when discretizing a Euclidean
space of dimension d we need O(n^d) points, for n points along each dimension.
Sparse grids are a hierarchical representation where the number of points is
reduced to O(n * log(n)^d). One disadvantage of such sparse grids is that the
algorithms now operate recursively in the dimensions and levels of the sparse grid.
<p>
The UpDown scheme allows us to compute the stiffness matrix on such a sparse
grid. The stiffness matrix represents the influence of each representation
function on the L^2 scalar product. For a detailed description see
Dirk Pflüger's PhD thesis. This formalization was developed as an
interdisciplinary project (IDP) at the Technische Universität München.
notify = hoelzl@in.tum.de
[GraphMarkingIBP]
title = Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement
author = Viorel Preoteasa <http://users.abo.fi/vpreotea/>, Ralph-Johan Back <http://users.abo.fi/Ralph-Johan.Back/>
date = 2010-05-28
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
abstract = The verification of the Deutsch-Schorr-Waite graph marking algorithm is used as a benchmark in many formalizations of pointer programs. The main purpose of this mechanization is to show how data refinement of invariant-based programs can be used in verifying practical algorithms. The verification starts with an abstract algorithm working on a graph given by a relation <i>next</i> on nodes. Gradually, the abstract program is refined into the Deutsch-Schorr-Waite graph marking algorithm, where only one bit of additional memory per graph node is used for marking.
extra-history =
Change history:
[2012-01-05]: Updated for the new definition of data refinement and the new syntax for demonic and angelic update statements
notify = viorel.preoteasa@aalto.fi
[Efficient-Mergesort]
title = Efficient Mergesort
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2011-11-09
author = Christian Sternagel <mailto:c.sternagel@gmail.com>
abstract = We provide a formalization of the mergesort algorithm as used in GHC's Data.List module, proving correctness and stability. Furthermore, experimental data suggests that generated (Haskell-)code for this algorithm is much faster than for previous algorithms available in the Isabelle distribution.
extra-history =
Change history:
[2012-10-24]:
Added reference to journal article.<br>
[2018-09-17]:
Added theory Efficient_Mergesort that works exclusively with the mutual
induction schemas generated by the function package.<br>
[2018-09-19]:
Added theory Mergesort_Complexity that proves an upper bound on the number of
comparisons that are required by mergesort.<br>
[2018-09-19]:
Theory Efficient_Mergesort replaces theory Efficient_Sort but keeps the old
name Efficient_Sort.
notify = c.sternagel@gmail.com
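For orientation, the flavour of algorithm verified here can be sketched in a few lines of Haskell. The sketch below is simplified (it starts from singleton runs instead of detecting ascending and descending runs, as GHC's Data.List and this entry do) and is not the entry's code.

    -- Bottom-up, stable mergesort (simplified sketch).
    mergeSort :: Ord a => [a] -> [a]
    mergeSort = mergeAll . map (: [])
      where
        mergeAll []   = []
        mergeAll [xs] = xs
        mergeAll xss  = mergeAll (mergePairs xss)

        mergePairs (xs : ys : rest) = merge xs ys : mergePairs rest
        mergePairs xss              = xss

        -- "<=" keeps elements from the left list first, giving stability.
        merge xs []                 = xs
        merge [] ys                 = ys
        merge (x : xs) (y : ys)
          | x <= y                  = x : merge xs (y : ys)
          | otherwise               = y : merge (x : xs) ys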
[SATSolverVerification]
title = Formal Verification of Modern SAT Solvers
author = Filip Marić <http://poincare.matf.bg.ac.rs/~filip/>
date = 2008-07-23
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
abstract = This document contains formal correctness proofs of modern SAT solvers. Following (Krstic et al, 2007) and (Nieuwenhuis et al., 2006), solvers are described using state-transition systems. Several different SAT solver descriptions are given and their partial correctness and termination is proved. These include: <ul> <li> a solver based on classical DPLL procedure (using only a backtrack-search with unit propagation),</li> <li> a very general solver with backjumping and learning (similar to the description given in (Nieuwenhuis et al., 2006)), and</li> <li> a solver with a specific conflict analysis algorithm (similar to the description given in (Krstic et al., 2007)).</li> </ul> Within the SAT solver correctness proofs, a large number of lemmas about propositional logic and CNF formulae are proved. This theory is self-contained and could be used for further exploring of properties of CNF based SAT algorithms.
notify =
[Transitive-Closure]
title = Executable Transitive Closures of Finite Relations
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
date = 2011-03-14
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <mailto:rene.thiemann@uibk.ac.at>
license = LGPL
abstract = We provide a generic work-list algorithm to compute the transitive closure of finite relations where only successors of newly detected states are generated. This algorithm is then instantiated for lists over arbitrary carriers and red black trees (which are faster but require a linear order on the carrier), respectively. Our formalization was performed as part of the IsaFoR/CeTA project where reflexive transitive closures of large tree automata have to be computed.
extra-history =
Change history:
[2014-09-04] added example simprocs in Finite_Transitive_Closure_Simprocs
notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at
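The work-list idea, generating successors only for newly detected elements, can be illustrated by the following hedged Haskell sketch (ad hoc names; the entry additionally parametrizes over the container, and the follow-up entry adds a subsumption relation):

    import qualified Data.Set as Set

    -- Elements reachable from the start elements under a successor
    -- function; each round expands only the freshly discovered states.
    reachable :: Ord a => (a -> [a]) -> [a] -> Set.Set a
    reachable succs start = go (Set.fromList start) (Set.fromList start)
      where
        go acc new
          | Set.null new = acc
          | otherwise    =
              let cand  = Set.fromList (concatMap succs (Set.toList new))
                  fresh = cand `Set.difference` acc
              in  go (acc `Set.union` fresh) fresh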
[Transitive-Closure-II]
title = Executable Transitive Closures
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
date = 2012-02-29
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>
license = LGPL
abstract =
<p>
We provide a generic work-list algorithm to compute the
(reflexive-)transitive closure of relations where only successors of newly
detected states are generated.
In contrast to our previous work, the relations do not have to be finite,
but each element must only have finitely many (indirect) successors.
Moreover, a subsumption relation can be used instead of pure equality.
An executable variant of the algorithm is available where the generic operations
are instantiated with list operations.
</p><p>
This formalization was performed as part of the IsaFoR/CeTA project,
and it has been used to certify size-change
termination proofs where large transitive closures have to be computed.
</p>
notify = rene.thiemann@uibk.ac.at
[MuchAdoAboutTwo]
title = Much Ado About Two
author = Sascha Böhme <http://www21.in.tum.de/~boehmes/>
date = 2007-11-06
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
abstract = This article is an Isabelle formalisation of a paper with the same title. In a similar way as Knuth's 0-1-principle for sorting algorithms, that paper develops a 0-1-2-principle for parallel prefix computations.
notify = boehmes@in.tum.de
[DiskPaxos]
title = Proving the Correctness of Disk Paxos
date = 2005-06-22
author = Mauro Jaskelioff <http://www.fceia.unr.edu.ar/~mauro/>, Stephan Merz <http://www.loria.fr/~merz>
-topic = Computer Science/Algorithms/Distributed
+topic = Computer science/Algorithms/Distributed
abstract = Disk Paxos is an algorithm for building arbitrary fault-tolerant distributed systems. The specification of Disk Paxos has been proved correct informally and tested using the TLC model checker, but up to now, it has never been fully formally verified. In this work we have formally verified its correctness using the Isabelle theorem prover and the HOL logic system, showing that Isabelle is a practical tool for verifying properties of TLA+ specifications.
notify = kleing@cse.unsw.edu.au
[GenClock]
title = Formalization of a Generalized Protocol for Clock Synchronization
author = Alwen Tiu <http://users.cecs.anu.edu.au/~tiu/>
date = 2005-06-24
-topic = Computer Science/Algorithms/Distributed
+topic = Computer science/Algorithms/Distributed
abstract = We formalize the generalized Byzantine fault-tolerant clock synchronization protocol of Schneider. This protocol abstracts from particular algorithms or implementations for clock synchronization. This abstraction includes several assumptions on the behaviors of physical clocks and on general properties of concrete algorithms/implementations. Based on these assumptions the correctness of the protocol is proved by Schneider. His proof was later verified by Shankar using the theorem prover EHDM (precursor to PVS). Our formalization in Isabelle/HOL is based on Shankar's formalization.
notify = kleing@cse.unsw.edu.au
[ClockSynchInst]
title = Instances of Schneider's generalized protocol of clock synchronization
author = Damián Barsotti <http://www.cs.famaf.unc.edu.ar/~damian/>
date = 2006-03-15
-topic = Computer Science/Algorithms/Distributed
+topic = Computer science/Algorithms/Distributed
abstract = F. B. Schneider ("Understanding protocols for Byzantine clock synchronization") generalizes a number of protocols for Byzantine fault-tolerant clock synchronization and presents a uniform proof for their correctness. In Schneider's schema, each processor maintains a local clock by periodically adjusting each value to one computed by a convergence function applied to the readings of all the clocks. Then, correctness of an algorithm, i.e. that the readings of two clocks at any time are within a fixed bound of each other, is based upon some conditions on the convergence function. To prove that a particular clock synchronization algorithm is correct it suffices to show that the convergence function used by the algorithm meets Schneider's conditions. Using the theorem prover Isabelle, we formalize the proofs that the convergence functions of two algorithms, namely, the Interactive Convergence Algorithm (ICA) of Lamport and Melliar-Smith and the Fault-tolerant Midpoint algorithm of Lundelius-Lynch, meet Schneider's conditions. Furthermore, we experiment on handling some parts of the proofs with fully automatic tools like ICS and CVC-lite. These theories are part of a joint work with Alwen Tiu and Leonor P. Nieto <a href="http://users.rsise.anu.edu.au/~tiu/clocksync.pdf">"Verification of Clock Synchronization Algorithms: Experiments on a combination of deductive tools"</a> in proceedings of AVOCS 2005. In this work the correctness of Schneider schema was also verified using Isabelle (entry <a href="GenClock.html">GenClock</a> in AFP).
notify = kleing@cse.unsw.edu.au
[Heard_Of]
title = Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model
date = 2012-07-27
author = Henri Debrat <mailto:henri.debrat@loria.fr>, Stephan Merz <http://www.loria.fr/~merz>
-topic = Computer Science/Algorithms/Distributed
+topic = Computer science/Algorithms/Distributed
abstract =
Distributed computing is inherently based on replication, promising
increased tolerance to failures of individual computing nodes or
communication channels. Realizing this promise, however, involves
quite subtle algorithmic mechanisms, and requires precise statements
about the kinds and numbers of faults that an algorithm tolerates (such
as process crashes, communication faults or corrupted values). The
landmark theorem due to Fischer, Lynch, and Paterson shows that it is
impossible to achieve Consensus among N asynchronously communicating
nodes in the presence of even a single permanent failure. Existing
solutions must rely on assumptions of "partial synchrony".
<p>
Indeed, there have been numerous misunderstandings on what exactly a given
algorithm is supposed to realize in what kinds of environments. Moreover, the
abundance of subtly different computational models complicates comparisons
between different algorithms. Charron-Bost and Schiper introduced the Heard-Of
model for representing algorithms and failure assumptions in a uniform
framework, simplifying comparisons between algorithms.
<p>
In this contribution, we represent the Heard-Of model in Isabelle/HOL. We define
two semantics of runs of algorithms with different units of atomicity and relate
these through a reduction theorem that allows us to verify algorithms in the
coarse-grained semantics (where proofs are easier) and infer their correctness
for the fine-grained one (which corresponds to actual executions). We
instantiate the framework by verifying six Consensus algorithms that differ in
the underlying algorithmic mechanisms and the kinds of faults they tolerate.
notify = Stephan.Merz@loria.fr
[Consensus_Refined]
title = Consensus Refined
date = 2015-03-18
author = Ognjen Maric <>, Christoph Sprenger <mailto:sprenger@inf.ethz.ch>
-topic = Computer Science/Algorithms/Distributed
+topic = Computer science/Algorithms/Distributed
abstract =
Algorithms for solving the consensus problem are fundamental to
distributed computing. Despite their brevity, their
ability to operate in concurrent, asynchronous and failure-prone
environments comes at the cost of complex and subtle
behaviors. Accordingly, understanding how they work and proving
their correctness is a non-trivial endeavor where abstraction
is immensely helpful.
Moreover, research on consensus has yielded a large number of
algorithms, many of which appear to share common algorithmic
ideas. A natural question is whether and how these similarities can
be distilled and described in a precise, unified way.
In this work, we combine stepwise refinement and
lockstep models to provide an abstract and unified
view of a sizeable family of consensus algorithms. Our models
provide insights into the design choices underlying the different
algorithms, and classify them based on those choices.
notify = sprenger@inf.ethz.ch
[Key_Agreement_Strong_Adversaries]
title = Refining Authenticated Key Agreement with Strong Adversaries
author = Joseph Lallemand <mailto:joseph.lallemand@loria.fr>, Christoph Sprenger <mailto:sprenger@inf.ethz.ch>
-topic = Computer Science/Security
+topic = Computer science/Security
license = LGPL
date = 2017-01-31
notify = joseph.lallemand@loria.fr, sprenger@inf.ethz.ch
abstract =
We develop a family of key agreement protocols that are correct by
construction. Our work substantially extends prior work on developing
security protocols by refinement. First, we strengthen the adversary
by allowing him to compromise different resources of protocol
participants, such as their long-term keys or their session keys. This
enables the systematic development of protocols that ensure strong
properties such as perfect forward secrecy. Second, we broaden the
class of protocols supported to include those with non-atomic keys and
equationally defined cryptographic operators. We use these extensions
to develop key agreement protocols including signed Diffie-Hellman and
the core of IKEv1 and SKEME.
[Security_Protocol_Refinement]
title = Developing Security Protocols by Refinement
author = Christoph Sprenger <mailto:sprenger@inf.ethz.ch>, Ivano Somaini<>
-topic = Computer Science/Security
+topic = Computer science/Security
license = LGPL
date = 2017-05-24
notify = sprenger@inf.ethz.ch
abstract =
We propose a development method for security protocols based on
stepwise refinement. Our refinement strategy transforms abstract
security goals into protocols that are secure when operating over an
insecure channel controlled by a Dolev-Yao-style intruder. As
intermediate levels of abstraction, we employ messageless guard
protocols and channel protocols communicating over channels with
security properties. These abstractions provide insights on why
protocols are secure and foster the development of families of
protocols sharing common structure and properties. We have implemented
our method in Isabelle/HOL and used it to develop different entity
authentication and key establishment protocols, including realistic
features such as key confirmation, replay caches, and encrypted
tickets. Our development highlights that guard protocols and channel
protocols provide fundamental abstractions for bridging the gap
between security properties and standard protocol descriptions based
on cryptographic messages. It also shows that our refinement approach
scales to protocols of nontrivial size and complexity.
[Abortable_Linearizable_Modules]
title = Abortable Linearizable Modules
author = Rachid Guerraoui <mailto:rachid.guerraoui@epfl.ch>, Viktor Kuncak <http://lara.epfl.ch/~kuncak/>, Giuliano Losa <mailto:giuliano.losa@epfl.ch>
date = 2012-03-01
-topic = Computer Science/Algorithms/Distributed
+topic = Computer science/Algorithms/Distributed
abstract =
We define the Abortable Linearizable Module automaton (ALM for short)
and prove its key composition property using the IOA theory of
HOLCF. The ALM is at the heart of the Speculative Linearizability
framework. This framework simplifies devising correct speculative
algorithms by enabling their decomposition into independent modules
that can be analyzed and proved correct in isolation. It is
particularly useful when working in a distributed environment, where
the need to tolerate faults and asynchrony has made current
monolithic protocols so intricate that it is no longer tractable to
check their correctness. Our theory contains a typical example of a
refinement proof in the I/O-automata framework of Lynch and Tuttle.
notify = giuliano@losa.fr, nipkow@in.tum.de
[Amortized_Complexity]
title = Amortized Complexity Verified
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2014-07-07
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
A framework for the analysis of the amortized complexity of functional
data structures is formalized in Isabelle/HOL and applied to a number of
standard examples and to the following non-trivial ones: skew heaps,
splay trees, splay heaps and pairing heaps.
<p>
A preliminary version of this work (without pairing heaps) is described
in a <a href="http://www21.in.tum.de/~nipkow/pubs/itp15.html">paper</a>
published in the proceedings of the conference on Interactive
Theorem Proving ITP 2015. An extended version of this publication
is available <a href="http://www21.in.tum.de/~nipkow/pubs/jfp16.html">here</a>.
extra-history =
Change history:
[2015-03-17]: Added pairing heaps by Hauke Brinkop.<br>
[2016-07-12]: Moved splay heaps from here to Splay_Tree<br>
[2016-07-14]: Moved pairing heaps from here to the new Pairing_Heap
notify = nipkow@in.tum.de
[Dynamic_Tables]
title = Parameterized Dynamic Tables
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2015-06-07
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
This article formalizes the amortized analysis of dynamic tables
parameterized with their minimal and maximal load factors and the
expansion and contraction factors.
<P>
A full description is found in a
<a href="http://www21.in.tum.de/~nipkow/pubs">companion paper</a>.
notify = nipkow@in.tum.de
[AVL-Trees]
title = AVL Trees
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>, Cornelia Pusch <>
date = 2004-03-19
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = Two formalizations of AVL trees with room for extensions. The first formalization is monolithic and shorter, the second one in two stages, longer and a bit simpler. The final implementation is the same. If you are interested in developing this further, please contact <tt>gerwin.klein@nicta.com.au</tt>.
extra-history =
Change history:
[2011-04-11]: Ondrej Kuncar added delete function
notify = kleing@cse.unsw.edu.au
[BDD]
title = BDD Normalisation
author = Veronika Ortner <>, Norbert Schirmer <>
date = 2008-02-29
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = We present the verification of the normalisation of a binary decision diagram (BDD). The normalisation follows the original algorithm presented by Bryant in 1986 and transforms an ordered BDD into a reduced, ordered and shared BDD. The verification is based on Hoare logic.
notify = kleing@cse.unsw.edu.au, norbert.schirmer@web.de
[BinarySearchTree]
title = Binary Search Trees
author = Viktor Kuncak <http://lara.epfl.ch/~kuncak/>
date = 2004-04-05
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = The correctness is shown of binary search tree operations (lookup, insert and remove) implementing a set. Two versions are given, for both structured and linear (tactic-style) proofs. An implementation of integer-indexed maps is also verified.
notify = lp15@cam.ac.uk
[Splay_Tree]
title = Splay Tree
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
notify = nipkow@in.tum.de
date = 2014-08-12
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
Splay trees are self-adjusting binary search trees which were invented by Sleator and Tarjan [JACM 1985].
This entry provides executable and verified functional splay trees
as well as the related splay heaps (due to Okasaki).
<p>
The amortized complexity of splay trees and heaps is analyzed in the AFP entry
<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.
extra-history =
Change history:
[2016-07-12]: Moved splay heaps here from Amortized_Complexity
[Root_Balanced_Tree]
title = Root-Balanced Tree
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
notify = nipkow@in.tum.de
date = 2017-08-20
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
<p>
Andersson introduced <em>general balanced trees</em>,
search trees based on the design principle of partial rebuilding:
perform update operations naively until the tree becomes too
unbalanced, at which point a whole subtree is rebalanced. This article
defines and analyzes a functional version of general balanced trees,
which we call <em>root-balanced trees</em>. Using a lightweight model
of execution time, amortized logarithmic complexity is verified in
the theorem prover Isabelle.
</p>
<p>
This is the Isabelle formalization of the material described in the APLAS 2017 article
<a href="http://www21.in.tum.de/~nipkow/pubs/aplas17.html">Verified Root-Balanced Trees</a>
by the same author, which also presents experimental results that show
competitiveness of root-balanced with AVL and red-black trees.
</p>
[Skew_Heap]
title = Skew Heap
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2014-08-13
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
Skew heaps are an amazingly simple and lightweight implementation of
priority queues. They were invented by Sleator and Tarjan [SIAM 1986]
and have logarithmic amortized complexity. This entry provides executable
and verified functional skew heaps.
<p>
The amortized complexity of skew heaps is analyzed in the AFP entry
<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.
notify = nipkow@in.tum.de
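To give an impression of the simplicity claimed above, here is a standard textbook rendering of skew heaps in Haskell (a sketch, not the entry's definitions): meld keeps the smaller root and unconditionally swaps its children.

    data Heap a = Empty | Node (Heap a) a (Heap a)

    -- Meld two skew heaps: keep the smaller root, swap its children.
    meld :: Ord a => Heap a -> Heap a -> Heap a
    meld Empty h = h
    meld h Empty = h
    meld h1@(Node l1 x1 r1) h2@(Node l2 x2 r2)
      | x1 <= x2  = Node (meld h2 r1) x1 l1
      | otherwise = Node (meld h1 r2) x2 l2

    insert :: Ord a => a -> Heap a -> Heap a
    insert x = meld (Node Empty x Empty)

    delMin :: Ord a => Heap a -> Maybe (a, Heap a)
    delMin Empty        = Nothing
    delMin (Node l x r) = Just (x, meld l r)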
[Pairing_Heap]
title = Pairing Heap
author = Hauke Brinkop <mailto:hauke.brinkop@googlemail.com>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2016-07-14
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
This library defines three different versions of pairing heaps: a
functional version of the original design based on binary
trees [Fredman et al. 1986], the version by Okasaki [1998] and
a modified version of the latter that is free of structural invariants.
<p>
The amortized complexity of pairing heaps is analyzed in the AFP article
<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.
extra-0 = Origin: This library was extracted from Amortized Complexity and extended.
notify = nipkow@in.tum.de
[Priority_Queue_Braun]
title = Priority Queues Based on Braun Trees
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2014-09-04
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
This entry verifies priority queues based on Braun trees. Insertion
and deletion take logarithmic time and preserve the balanced nature
of Braun trees. Two implementations of deletion are provided.
notify = nipkow@in.tum.de
extra-history =
Change history:
[2019-12-16]: Added theory Priority_Queue_Braun2 with second version of del_min
[Binomial-Queues]
title = Functional Binomial Queues
author = René Neumann <mailto:neumannr@in.tum.de>
date = 2010-10-28
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = Priority queues are an important data structure and efficient implementations of them are crucial. We implement a functional variant of binomial queues in Isabelle/HOL and show its functional correctness. A verification against an abstract reference specification of priority queues has also been attempted, but could not be achieved to the full extent.
notify = florian.haftmann@informatik.tu-muenchen.de
[Binomial-Heaps]
title = Binomial Heaps and Skew Binomial Heaps
author = Rene Meis <mailto:rene.meis@uni-muenster.de>, Finn Nielsen <mailto:finn.nielsen@uni-muenster.de>, Peter Lammich <http://www21.in.tum.de/~lammich>
date = 2010-10-28
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
We implement and prove correct binomial heaps and skew binomial heaps.
Both are data-structures for priority queues.
While binomial heaps have logarithmic <em>findMin</em>, <em>deleteMin</em>,
<em>insert</em>, and <em>meld</em> operations,
skew binomial heaps have constant time <em>findMin</em>, <em>insert</em>,
and <em>meld</em> operations, and only the <em>deleteMin</em>-operation is
logarithmic. This is achieved by using <em>skew links</em> to avoid
cascading linking on <em>insert</em>-operations, and <em>data-structural
bootstrapping</em> to get constant-time <em>findMin</em> and <em>meld</em>
operations. Our implementation follows the paper by Brodal and Okasaki.
notify = peter.lammich@uni-muenster.de
[Finger-Trees]
title = Finger Trees
author = Benedikt Nordhoff <mailto:b_nord01@uni-muenster.de>, Stefan Körner <mailto:s_koer03@uni-muenster.de>, Peter Lammich <http://www21.in.tum.de/~lammich>
date = 2010-10-28
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
We implement and prove correct 2-3 finger trees.
Finger trees are a general-purpose data structure that can be used to
efficiently implement other data structures, such as priority queues.
Intuitively, a finger tree is an annotated sequence, where the annotations are
elements of a monoid. Apart from operations to access the ends of the sequence,
the main operation is to split the sequence at the point where a
<em>monotone predicate</em> over the sum of the left part of the sequence
becomes true for the first time.
The implementation follows the paper of Hinze and Paterson.
The code generator can be used to get efficient, verified code.
notify = peter.lammich@uni-muenster.de
[Trie]
title = Trie
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2015-03-30
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
This article formalizes the ``trie'' data structure invented by
Fredkin [CACM 1960]. It also provides a specialization where the entries
in the trie are lists.
extra-0 =
Origin: This article was extracted from existing articles by the authors.
notify = nipkow@in.tum.de
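A rough idea of the data structure, in Haskell rather than Isabelle (ad hoc names, not the entry's definitions): every node carries an optional value plus a finite map from key symbols to subtries, so list-shaped keys are consumed one symbol per level.

    import qualified Data.Map as Map

    data Trie k v = Trie (Maybe v) (Map.Map k (Trie k v))

    empty :: Trie k v
    empty = Trie Nothing Map.empty

    lookupT :: Ord k => [k] -> Trie k v -> Maybe v
    lookupT []       (Trie v _) = v
    lookupT (k : ks) (Trie _ m) = Map.lookup k m >>= lookupT ks

    insertT :: Ord k => [k] -> v -> Trie k v -> Trie k v
    insertT []       v (Trie _ m) = Trie (Just v) m
    insertT (k : ks) v (Trie w m) =
      Trie w (Map.insert k (insertT ks v (Map.findWithDefault empty k m)) m)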
[FinFun]
title = Code Generation for Functions as Data
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
date = 2009-05-06
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = FinFuns are total functions that are constant except for a finite set of points, i.e. a generalisation of finite maps. They are formalised as a new type in Isabelle/HOL such that the code generator can handle equality tests and quantification on FinFuns. On the code output level, FinFuns are explicitly represented by constant functions and pointwise updates, similarly to associative lists. Inside the logic, they behave like ordinary functions with extensionality. Via the update/constant pattern, a recursion combinator and an induction rule for FinFuns allow for defining and reasoning about operators on FinFun that are also executable.
extra-history =
Change history:
[2010-08-13]:
new concept domain of a FinFun as a FinFun
(revision 34b3517cbc09)<br>
[2010-11-04]:
new conversion function from FinFun to list of elements in the domain
(revision 0c167102e6ed)<br>
[2012-03-07]:
replace sets as FinFuns by predicates as FinFuns because the set type constructor has been reintroduced
(revision b7aa87989f3a)
notify = nipkow@in.tum.de
[Collections]
title = Collections Framework
author = Peter Lammich <http://www21.in.tum.de/~lammich>
contributors = Andreas Lochbihler <http://www.andreas-lochbihler.de>, Thomas Tuerk <>
date = 2009-11-25
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = This development provides an efficient, extensible, machine checked collections framework. The library adopts the concepts of interface, implementation and generic algorithm from object-oriented programming and implements them in Isabelle/HOL. The framework features the use of data refinement techniques to refine an abstract specification (using high-level concepts like sets) to a more concrete implementation (using collection datastructures, like red-black-trees). The code-generator of Isabelle/HOL can be used to generate efficient code.
extra-history =
Change history:
[2010-10-08]: New Interfaces: OrderedSet, OrderedMap, List.
Fifo now implements list-interface: Function names changed: put/get --> enqueue/dequeue.
New Implementations: ArrayList, ArrayHashMap, ArrayHashSet, TrieMap, TrieSet.
Invariant-free datastructures: Invariant implicitly hidden in typedef.
Record-interfaces: All operations of an interface encapsulated as record.
Examples moved to examples subdirectory.<br>
[2010-12-01]: New Interfaces: Priority Queues, Annotated Lists. Implemented by finger trees, (skew) binomial queues.<br>
[2011-10-10]: SetSpec: Added operations: sng, isSng, bexists, size_abort, diff, filter, iterate_rule_insertP
MapSpec: Added operations: sng, isSng, iterate_rule_insertP, bexists, size, size_abort, restrict,
map_image_filter, map_value_image_filter
Some maintenance changes<br>
[2012-04-25]: New iterator foundation by Tuerk. Various maintenance changes.<br>
[2012-08]: Collections V2. New features: Polymorphic iterators. Generic algorithm instantiation where required. Naming scheme changed from xx_opname to xx.opname.
A compatibility file CollectionsV1 tries to simplify porting of existing theories, by providing old naming scheme and the old monomorphic iterator locales.<br>
[2013-09]: Added Generic Collection Framework based on Autoref. The GenCF provides: Arbitrary nesting, full integration with Autoref.<br>
[2014-06]: Maintenance changes to GenCF: Optimized inj_image on list_set. op_set_cart (Cartesian product). big-Union operation. atLeastLessThan - operation ({a..&lt;b})<br>
notify = lammich@in.tum.de
[Containers]
title = Light-weight Containers
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
contributors = René Thiemann <mailto:rene.thiemann@uibk.ac.at>
date = 2013-04-15
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
This development provides a framework for container types like sets and maps such that generated code implements these containers with different (efficient) data structures.
Thanks to type classes and refinement during code generation, this light-weight approach can seamlessly replace Isabelle's default setup for code generation.
Heuristics automatically pick one of the available data structures depending on the type of elements to be stored, but users can also choose on their own.
The extensible design permits adding more implementations at any time.
<p>
To support arbitrary nesting of sets, we define a linear order on sets based on a linear order of the elements and provide efficient implementations.
It even allows comparing complements with non-complements.
extra-history =
Change history:
[2013-07-11]: add pretty printing for sets (revision 7f3f52c5f5fa)<br>
[2013-09-20]:
provide generators for canonical type class instantiations
(revision 159f4401f4a8 by René Thiemann)<br>
[2014-07-08]: add support for going from partial functions to mappings (revision 7a6fc957e8ed)<br>
[2018-03-05]: add two application examples: depth-first search and 2SAT (revision e5e1a1da2411)
notify = mail@andreas-lochbihler.de
[FileRefinement]
title = File Refinement
author = Karen Zee <http://www.mit.edu/~kkz/>, Viktor Kuncak <http://lara.epfl.ch/~kuncak/>
date = 2004-12-09
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = These theories illustrate the verification of basic file operations (file creation, file read and file write) in the Isabelle theorem prover. We describe a file at two levels of abstraction: an abstract file represented as a resizable array, and a concrete file represented using data blocks.
notify = kkz@mit.edu
[Datatype_Order_Generator]
title = Generating linear orders for datatypes
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>
date = 2012-08-07
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
We provide a framework for registering automatic methods to derive
class instances of datatypes, as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.
<p>
We further implemented such automatic methods to derive (linear) orders or hash-functions which are
required in the Isabelle Collection Framework. Moreover, for the tactic of Huffman and Krauss to show that a
datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework.
<p>
Our formalization was performed as part of the <a href="http://cl-informatik.uibk.ac.at/software/ceta">IsaFoR/CeTA</a> project.
With our new tactic we could completely remove
tedious proofs for linear orders of two datatypes.
<p>
This development is aimed at datatypes generated by the "old_datatype"
command.
notify = rene.thiemann@uibk.ac.at
[Deriving]
title = Deriving class instances for datatypes
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <mailto:rene.thiemann@uibk.ac.at>
date = 2015-03-11
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
<p>We provide a framework for registering automatic methods
to derive class instances of datatypes,
as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.</p>
<p>We further implemented such automatic methods to derive comparators, linear orders, parametrizable equality functions,
and hash-functions which are required in the
Isabelle Collection Framework and the Container Framework.
Moreover, for the tactic of Blanchette to show that a datatype is countable, we implemented a
wrapper so that this tactic becomes accessible in our framework. All of the generators are based on
the infrastructure that is provided by the BNF-based datatype package.</p>
<p>Our formalization was performed as part of the <a href="http://cl-informatik.uibk.ac.at/software/ceta">IsaFoR/CeTA</a> project.
With our new tactics we could remove
several tedious proofs for (conditional) linear orders, and conditional equality operators
within IsaFoR and the Container Framework.</p>
notify = rene.thiemann@uibk.ac.at
[List-Index]
title = List Index
date = 2010-02-20
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = This theory provides functions for finding the index of an element in a list, by predicate and by value.
notify = nipkow@in.tum.de
[List-Infinite]
title = Infinite Lists
date = 2011-02-23
author = David Trachtenherz <>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = We introduce a theory of infinite lists in HOL formalized as functions over naturals (folder ListInf, theories ListInf and ListInf_Prefix). It also provides additional results for finite lists (theory ListInf/List2), natural numbers (folder CommonArith, esp. division/modulo, naturals with infinity), sets (folder CommonSet, esp. cutting/truncating sets, traversing sets of naturals).
notify = nipkow@in.tum.de
[Matrix]
title = Executable Matrix Operations on Matrices of Arbitrary Dimensions
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2010-06-17
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann>
license = LGPL
abstract =
We provide the operations of matrix addition, multiplication,
transposition, and matrix comparisons as executable functions over
ordered semirings. Moreover, it is proven that strongly normalizing
(monotone) orders can be lifted to strongly normalizing (monotone) orders
over matrices. We further show that the standard semirings over the
naturals, integers, and rationals, as well as the arctic semirings
satisfy the axioms that are required by our matrix theory. Our
formalization is part of the <a
href="http://cl-informatik.uibk.ac.at/software/ceta">CeTA</a> system
which contains several termination techniques. The provided theories have
been essential to formalize matrix-interpretations and arctic
interpretations.
extra-history =
Change history:
[2010-09-17]: Moved theory on arbitrary (ordered) semirings to Abstract Rewriting.
notify = rene.thiemann@uibk.ac.at, christian.sternagel@uibk.ac.at
[Matrix_Tensor]
title = Tensor Product of Matrices
-topic = Computer Science/Data Structures, Mathematics/Algebra
+topic = Computer science/Data structures, Mathematics/Algebra
date = 2016-01-18
author = T.V.H. Prathamesh <mailto:prathamesh@imsc.res.in>
abstract =
In this work, the Kronecker tensor product of matrices and the proofs of
some of its properties are formalized. Properties which have been formalized
include associativity of the tensor product and the mixed-product
property.
notify = prathamesh@imsc.res.in
[Huffman]
title = The Textbook Proof of Huffman's Algorithm
author = Jasmin Christian Blanchette <http://www21.in.tum.de/~blanchet>
date = 2008-10-15
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = Huffman's algorithm is a procedure for constructing a binary tree with minimum weighted path length. This report presents a formal proof of the correctness of Huffman's algorithm written using Isabelle/HOL. Our proof closely follows the sketches found in standard algorithms textbooks, uncovering a few snags in the process. Another distinguishing feature of our formalization is the use of custom induction rules to help Isabelle's automatic tactics, leading to very short proofs for most of the lemmas.
notify = jasmin.blanchette@gmail.com
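For readers who want the algorithm itself rather than the proof, the textbook procedure being verified amounts to the following Haskell sketch (ad hoc names, not the entry's Isabelle sources): repeatedly combine the two trees of least cached weight.

    import Data.List (sortOn)

    data Tree a = Leaf Int a | InnerNode Int (Tree a) (Tree a)

    weight :: Tree a -> Int
    weight (Leaf w _)        = w
    weight (InnerNode w _ _) = w

    -- Combine the two lightest trees until a single tree remains.
    huffman :: [Tree a] -> Tree a
    huffman [t] = t
    huffman ts  = case sortOn weight ts of
      t1 : t2 : rest ->
        huffman (InnerNode (weight t1 + weight t2) t1 t2 : rest)
      _              -> error "huffman: non-empty list of trees expected"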
[Partial_Function_MR]
title = Mutually Recursive Partial Functions
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
date = 2014-02-18
license = LGPL
abstract = We provide a wrapper around the partial-function command that supports mutual recursion.
notify = rene.thiemann@uibk.ac.at
[Lifting_Definition_Option]
title = Lifting Definition Option
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
date = 2014-10-13
license = LGPL
abstract =
We implemented a command that can be used to easily generate
elements of a restricted type <tt>{x :: 'a. P x}</tt>,
provided the definition is of the form
<tt>f ys = (if check ys then Some(generate ys :: 'a) else None)</tt> where
<tt>ys</tt> is a list of variables <tt>y1 ... yn</tt> and
<tt>check ys ==> P(generate ys)</tt> can be proved.
<p>
In principle, such a definition is also directly possible using the
<tt>lift_definition</tt> command. However, the resulting definition is not
suitable for code generation. To this end, we automated a more complex
construction by Joachim Breitner which is amenable to code generation, and
where the test <tt>check ys</tt> is only performed once. In the
automation, one auxiliary type is created, and Isabelle's lifting and
transfer packages are invoked several times.
notify = rene.thiemann@uibk.ac.at
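A loose Scala analogue of the definitional shape described above, assuming an invented restricted type SortedList and check; the entry itself generates Isabelle/HOL definitions, so this only conveys the idea that a single runtime check justifies constructing a value of the restricted type.
    object LiftingOptionSketch {
      // A "restricted type": lists known to be sorted. The constructor is only
      // reachable via the checking function below.
      final class SortedList private[LiftingOptionSketch] (val xs: List[Int])

      // Shape f ys = (if check ys then Some (generate ys) else None):
      // the check is performed exactly once before the value is wrapped.
      def fromList(ys: List[Int]): Option[SortedList] =
        if (ys == ys.sorted) Some(new SortedList(ys)) else None
    }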
[Coinductive]
title = Coinductive
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
contributors = Johannes Hölzl <mailto:hoelzl@in.tum.de>
date = 2010-02-12
abstract = This article collects formalisations of general-purpose coinductive data types and sets. Currently, it contains coinductive natural numbers, coinductive lists, i.e. lazy lists or streams, infinite streams, coinductive terminated lists, coinductive resumptions, a library of operations on coinductive lists, and a version of König's lemma as an application for coinductive lists.<br>The initial theory was contributed by Paulson and Wenzel. Extensions and other coinductive formalisations of general interest are welcome.
extra-history =
Change history:
[2010-06-10]:
coinductive lists: setup for quotient package
(revision 015574f3bf3c)<br>
[2010-06-28]:
new codatatype terminated lazy lists
(revision e12de475c558)<br>
[2010-08-04]:
terminated lazy lists: setup for quotient package;
more lemmas
(revision 6ead626f1d01)<br>
[2010-08-17]:
Koenig's lemma as an example application for coinductive lists
(revision f81ce373fa96)<br>
[2011-02-01]:
lazy implementation of coinductive (terminated) lists for the code generator
(revision 6034973dce83)<br>
[2011-07-20]:
new codatatype resumption
(revision 811364c776c7)<br>
[2012-06-27]:
new codatatype stream with operations (with contributions by Peter Gammie)
(revision dd789a56473c)<br>
[2013-03-13]:
construct codatatypes with the BNF package and adjust the definitions and proofs,
setup for lifting and transfer packages
(revision f593eda5b2c0)<br>
[2013-09-20]:
stream theory uses type and operations from HOL/BNF/Examples/Stream
(revision 692809b2b262)<br>
[2014-04-03]:
ccpo structure on codatatypes used to define ldrop, ldropWhile, lfilter, lconcat as least fixpoint;
ccpo topology on coinductive lists contributed by Johannes Hölzl;
added examples
(revision 23cd8156bd42)<br>
notify = mail@andreas-lochbihler.de
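As a rough analogy only (the entry lives in HOL, not Scala), the lazy, possibly infinite lists formalized here behave like Scala's LazyList; the snippet below is purely illustrative.
    object CoinductiveSketch {
      // An infinite "coinductive list" of natural numbers.
      val nats: LazyList[BigInt] = LazyList.iterate(BigInt(0))(_ + 1)

      // Consumers only force as much of the infinite list as they need.
      val firstSquares: List[BigInt] = nats.map(n => n * n).take(5).toList
    }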
[Stream-Fusion]
title = Stream Fusion
author = Brian Huffman <http://cs.pdx.edu/~brianh>
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
date = 2009-04-29
abstract = Stream Fusion is a system for removing intermediate list structures from Haskell programs; it consists of a Haskell library along with several compiler rewrite rules. (The library is available <a href="http://hackage.haskell.org/package/stream-fusion">online</a>.)<br><br>These theories contain a formalization of much of the Stream Fusion library in HOLCF. Lazy list and stream types are defined, along with coercions between the two types, as well as an equivalence relation for streams that generate the same list. List and stream versions of map, filter, foldr, enumFromTo, append, zipWith, and concatMap are defined, and the stream versions are shown to respect stream equivalence.
notify = brianh@cs.pdx.edu
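The representation underlying stream fusion can be sketched as follows (a rough Scala rendering of the general idea, not of the entry's HOLCF theories): a stream is a seed together with a step function, and list operations become non-recursive stream transformers, which is what makes fusion possible.
    object StreamFusionSketch {
      sealed trait Step[+S, +A]
      case object Done extends Step[Nothing, Nothing]
      final case class Skip[S](s: S) extends Step[S, Nothing]
      final case class Yield[S, A](a: A, s: S) extends Step[S, A]

      // A stream is an initial state together with a step function.
      final case class Stream[S, A](step: S => Step[S, A], init: S)

      def fromList[A](xs: List[A]): Stream[List[A], A] =
        Stream[List[A], A]({
          case Nil    => Done
          case h :: t => Yield(h, t)
        }, xs)

      // map is defined on the step function, without recursion over the stream.
      def mapS[S, A, B](f: A => B)(s: Stream[S, A]): Stream[S, B] =
        Stream[S, B](st => s.step(st) match {
          case Done         => Done
          case Skip(s1)     => Skip(s1)
          case Yield(a, s1) => Yield(f(a), s1)
        }, s.init)

      // Unfold a stream back into a list (assumes the stream is finite).
      def toList[S, A](s: Stream[S, A]): List[A] = {
        @annotation.tailrec
        def go(st: S, acc: List[A]): List[A] = s.step(st) match {
          case Done         => acc.reverse
          case Skip(s1)     => go(s1, acc)
          case Yield(a, s1) => go(s1, a :: acc)
        }
        go(s.init, Nil)
      }
    }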
[Tycon]
title = Type Constructor Classes and Monad Transformers
author = Brian Huffman <mailto:huffman@in.tum.de>
date = 2012-06-26
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
abstract =
These theories contain a formalization of first class type constructors
and axiomatic constructor classes for HOLCF. This work is described
in detail in the ICFP 2012 paper <i>Formal Verification of Monad
Transformers</i> by the author. The formalization is a revised and
updated version of earlier joint work with Matthews and White.
<P>
Based on the hierarchy of type classes in Haskell, we define classes
for functors, monads, monad-plus, etc. Each one includes all the
standard laws as axioms. We also provide a new user command,
tycondef, for defining new type constructors in HOLCF. Using tycondef,
we instantiate the type class hierarchy with various monads and monad
transformers.
notify = huffman@in.tum.de
[CoreC++]
title = CoreC++
author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php>
date = 2006-05-15
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract = We present an operational semantics and type safety proof for multiple inheritance in C++. The semantics models the behavior of method calls, field accesses, and two forms of casts in C++ class hierarchies. For explanations see the OOPSLA 2006 paper by Wasserrab, Nipkow, Snelting and Tip.
notify = nipkow@in.tum.de
[FeatherweightJava]
title = A Theory of Featherweight Java in Isabelle/HOL
author = J. Nathan Foster <http://www.cs.cornell.edu/~jnfoster/>, Dimitrios Vytiniotis <http://research.microsoft.com/en-us/people/dimitris/>
date = 2006-03-31
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract = We formalize the type system, small-step operational semantics, and type soundness proof for Featherweight Java, a simple object calculus, in Isabelle/HOL.
notify = kleing@cse.unsw.edu.au
[Jinja]
title = Jinja is not Java
author = Gerwin Klein <http://www.cse.unsw.edu.au/~kleing/>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2005-06-01
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract = We introduce Jinja, a Java-like programming language with a formal semantics designed to exhibit core features of the Java language architecture. Jinja is a compromise between realism of the language and tractability and clarity of the formal semantics. The following aspects are formalised: a big and a small step operational semantics for Jinja and a proof of their equivalence; a type system and a definite initialisation analysis; a type safety proof of the small step semantics; a virtual machine (JVM), its operational semantics and its type system; a type safety proof for the JVM; a bytecode verifier, i.e. data flow analyser for the JVM; a correctness proof of the bytecode verifier w.r.t. the type system; a compiler and a proof that it preserves semantics and well-typedness. The emphasis of this work is not on particular language features but on providing a unified model of the source language, the virtual machine and the compiler. The whole development has been carried out in the theorem prover Isabelle/HOL.
notify = kleing@cse.unsw.edu.au, nipkow@in.tum.de
[JinjaThreads]
title = Jinja with Threads
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
date = 2007-12-03
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract = We extend the Jinja source code semantics by Klein and Nipkow with Java-style arrays and threads. Concurrency is captured in a generic framework semantics for adding concurrency through interleaving to a sequential semantics, which features dynamic thread creation, inter-thread communication via shared memory, lock synchronisation and joins. Also, threads can suspend themselves and be notified by others. We instantiate the framework with the adapted versions of both Jinja source and byte code and show type safety for the multithreaded case. Equally, the compiler from source to byte code is extended, for which we prove weak bisimilarity between the source code small step semantics and the defensive Jinja virtual machine. On top of this, we formalise the JMM and show the DRF guarantee and consistency. For description of the different parts, see Lochbihler's papers at FOOL 2008, ESOP 2010, ITP 2011, and ESOP 2012.
extra-history =
Change history:
[2008-04-23]:
added bytecode formalisation with arrays and threads, added thread joins
(revision f74a8be156a7)<br>
[2009-04-27]:
added verified compiler from source code to bytecode;
encapsulate native methods in separate semantics
(revision e4f26541e58a)<br>
[2009-11-30]:
extended compiler correctness proof to infinite and deadlocking computations
(revision e50282397435)<br>
[2010-06-08]:
added thread interruption;
new abstract memory model with sequential consistency as implementation
(revision 0cb9e8dbd78d)<br>
[2010-06-28]:
new thread interruption model
(revision c0440d0a1177)<br>
[2010-10-15]:
preliminary version of the Java memory model for source code
(revision 02fee0ef3ca2)<br>
[2010-12-16]:
improved version of the Java memory model, also for bytecode;
executable scheduler for source code semantics
(revision 1f41c1842f5a)<br>
[2011-02-02]:
simplified code generator setup;
new random scheduler
(revision 3059dafd013f)<br>
[2011-07-21]:
new interruption model,
generalized JMM proof of DRF guarantee,
allow class Object to declare methods and fields,
simplified subtyping relation,
corrected division and modulo implementation
(revision 46e4181ed142)<br>
[2012-02-16]:
added example programs
(revision bf0b06c8913d)<br>
[2012-11-21]:
type safety proof for the Java memory model,
allow spurious wake-ups
(revision 76063d860ae0)<br>
[2013-05-16]:
support for non-deterministic memory allocators
(revision cc3344a49ced)<br>
[2017-10-20]:
add an atomic compare-and-swap operation for volatile fields
(revision a6189b1d6b30)<br>
notify = mail@andreas-lochbihler.de
[Locally-Nameless-Sigma]
title = Locally Nameless Sigma Calculus
author = Ludovic Henrio <mailto:Ludovic.Henrio@sophia.inria.fr>, Florian Kammüller <mailto:flokam@cs.tu-berlin.de>, Bianca Lutz <mailto:sowilo@cs.tu-berlin.de>, Henry Sudhof <mailto:hsudhof@cs.tu-berlin.de>
date = 2010-04-30
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract = We present a Theory of Objects based on the original functional sigma-calculus by Abadi and Cardelli but with an additional parameter to methods. We prove confluence of the operational semantics following the outline of Nipkow's proof of confluence for the lambda-calculus reusing his theory Commutation, a generic diamond lemma reduction. We furthermore formalize a simple type system for our sigma-calculus including a proof of type safety. The entire development uses the concept of Locally Nameless representation for binders. We reuse an earlier proof of confluence for a simpler sigma-calculus based on de Bruijn indices and lists to represent objects.
notify = nipkow@in.tum.de
+[Attack_Trees]
+title = Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems
+author = Florian Kammueller <http://www.cs.mdx.ac.uk/people/florian-kammueller/>
+topic = Computer science/Security
+date = 2020-04-27
+notify = florian.kammuller@gmail.com
+abstract =
+ In this article, we present a proof theory for Attack Trees. Attack
+ Trees are a well-established and useful model for the construction of
+ attacks on systems, since they allow a stepwise exploration of
+ high-level attacks in application scenarios. Using the expressiveness of
+ Higher Order Logic in Isabelle, we develop a generic
+ theory of Attack Trees with a state-based semantics based on Kripke
+ structures and CTL. The resulting framework
+ allows mechanically supported logical analysis of the meta-theory of
+ the Attack Tree proof calculus, while at the same time the developed
+ proof theory enables application to case studies. A central
+ correctness and completeness result proved in Isabelle establishes a
+ connection between the notion of Attack Tree validity and CTL. The
+ application is illustrated on the example of a healthcare IoT system
+ and GDPR compliance verification.
+
[AutoFocus-Stream]
title = AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics
author = David Trachtenherz <>
date = 2011-02-23
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract = We formalize the AutoFocus Semantics (a time-synchronous subset of the Focus formalism) as stream processing functions on finite and infinite message streams represented as finite/infinite lists. The formalization comprises both the conventional single-clocking semantics (uniform global clock for all components and communications channels) and its extension to multi-clocking semantics (internal execution clocking of a component may be a multiple of the external communication clocking). The semantics is defined by generic stream processing functions making it suitable for simulation/code generation in Isabelle/HOL. Furthermore, a number of AutoFocus semantics properties are formalized using definitions from the IntervalLogic theories.
notify = nipkow@in.tum.de
[FocusStreamsCaseStudies]
title = Stream Processing Components: Isabelle/HOL Formalisation and Case Studies
author = Maria Spichkova <mailto:maria.spichkova@rmit.edu.au>
date = 2013-11-14
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract = This set of theories presents an Isabelle/HOL formalisation of stream processing components introduced
in Focus,
a framework for formal specification and development of interactive systems.
This is an extended and updated version of the formalisation, which was
elaborated within the methodology "Focus on Isabelle".
In addition, we also applied the formalisation on three case studies
that cover different application areas: process control (Steam Boiler System),
data transmission (FlexRay communication protocol),
memory and processing components (Automotive-Gateway System).
notify = lp15@cam.ac.uk, maria.spichkova@rmit.edu.au
[Isabelle_Meta_Model]
title = A Meta-Model for the Isabelle API
author = Frédéric Tuong <mailto:tuong@users.gforge.inria.fr>, Burkhart Wolff <https://www.lri.fr/~wolff/>
date = 2015-09-16
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
abstract =
We represent a theory <i>of</i> (a fragment of) Isabelle/HOL <i>in</i>
Isabelle/HOL. The purpose of this exercise is to write packages for
domain-specific specifications such as class models, B-machines, ...,
and generally speaking, any domain-specific languages whose
abstract syntax can be defined by a HOL "datatype". On this basis, the
Isabelle code-generator can then be used to generate code for global
context transformations as well as tactic code.
<p>
Consequently the package is geared towards
parsing, printing and code-generation to the Isabelle API.
It is at the moment not sufficiently rich for doing meta theory on
Isabelle itself. Extensions in this direction are possible though.
<p>
Moreover, the chosen fragment is fairly rudimentary. However, it should be
easy to adapt to one's needs if a package is written on top of it.
The supported API contains types, terms, transformation of
global context like definitions and data-type declarations as well
as infrastructure for Isar-setups.
<p>
This theory is drawn from the
<a href="http://isa-afp.org/entries/Featherweight_OCL.html">Featherweight OCL</a>
project where
it is used to construct a package for object-oriented data-type theories
generated from UML class diagrams. The Featherweight OCL, for example, allows for
both the direct execution of compiled tactic code by the Isabelle API
as well as the generation of ".thy"-files for debugging purposes.
<p>
Gained experience from this project shows that the compiled code is sufficiently
efficient for practical purposes while being based on a formal <i>model</i>
on which properties of the package can be proven such as termination of certain
transformations, correctness, etc.
notify = tuong@users.gforge.inria.fr, wolff@lri.fr
[Clean]
title = Clean - An Abstract Imperative Programming Language and its Theory
author = Frédéric Tuong <https://www.lri.fr/~ftuong/>, Burkhart Wolff <https://www.lri.fr/~wolff/>
-topic = Computer Science/Programming Languages, Computer Science/Semantics
+topic = Computer science/Programming languages, Computer science/Semantics
date = 2019-10-04
notify = wolff@lri.fr, ftuong@lri.fr
abstract =
Clean is based on a simple, abstract execution model for an imperative
target language. “Abstract” is understood in contrast to “Concrete
Semantics”; alternatively, the term “shallow-style embedding” could be
used. It strives for a type-safe notion of program-variables, an
incremental construction of the typed state-space, support of
incremental verification, and open-world extensibility of new type
definitions being intertwined with the program definitions. Clean is
based on a “no-frills” state-exception monad with the usual
definitions of bind and unit for the compositional glue of state-based
computations. Clean offers conditionals and loops supporting C-like
control-flow operators such as break and return. The state-space
construction is based on the extensible record package. Direct
recursion of procedures is supported. Clean’s design strives for
extreme simplicity. It is geared towards symbolic execution and proven
correct verification tools. The underlying libraries of this package,
however, deliberately restrict themselves to the most elementary
infrastructure for these tasks. The package is intended to serve as a
demonstrator semantic backend for Isabelle/C, or for
test-generation techniques.
[PCF]
title = Logical Relations for PCF
author = Peter Gammie <mailto:peteg42@gmail.com>
date = 2012-07-01
-topic = Computer Science/Programming Languages/Lambda Calculi
+topic = Computer science/Programming languages/Lambda calculi
abstract = We apply Andy Pitts's methods of defining relations over domains to
several classical results in the literature. We show that the Y
combinator coincides with the domain-theoretic fixpoint operator,
that parallel-or and the Plotkin existential are not definable in
PCF, that the continuation semantics for PCF coincides with the
direct semantics, and that our domain-theoretic semantics for PCF is
adequate for reasoning about contextual equivalence in an
operational semantics. Our version of PCF is untyped and has both
strict and non-strict function abstractions. The development is
carried out in HOLCF.
notify = peteg42@gmail.com
[POPLmark-deBruijn]
title = POPLmark Challenge Via de Bruijn Indices
author = Stefan Berghofer <http://www.in.tum.de/~berghofe>
date = 2007-08-02
-topic = Computer Science/Programming Languages/Lambda Calculi
+topic = Computer science/Programming languages/Lambda calculi
abstract = We present a solution to the POPLmark challenge designed by Aydemir et al., which has as a goal the formalization of the meta-theory of System F<sub>&lt;:</sub>. The formalization is carried out in the theorem prover Isabelle/HOL using an encoding based on de Bruijn indices. We start with a relatively simple formalization covering only the basic features of System F<sub>&lt;:</sub>, and explain how it can be extended to also cover records and more advanced binding constructs.
notify = berghofe@in.tum.de
[Lam-ml-Normalization]
title = Strong Normalization of Moggi's Computational Metalanguage
author = Christian Doczkal <mailto:doczkal@ps.uni-saarland.de>
date = 2010-08-29
-topic = Computer Science/Programming Languages/Lambda Calculi
+topic = Computer science/Programming languages/Lambda calculi
abstract = Handling variable binding is one of the main difficulties in formal proofs. In this context, Moggi's computational metalanguage serves as an interesting case study. It features monadic types and a commuting conversion rule that rearranges the binding structure. Lindley and Stark have given an elegant proof of strong normalization for this calculus. The key construction in their proof is a notion of relational TT-lifting, using stacks of elimination contexts to obtain a Girard-Tait style logical relation. I give a formalization of their proof in Isabelle/HOL-Nominal with a particular emphasis on the treatment of bound variables.
notify = doczkal@ps.uni-saarland.de, nipkow@in.tum.de
[MiniML]
title = Mini ML
author = Wolfgang Naraschewski <>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2004-03-19
-topic = Computer Science/Programming Languages/Type Systems
+topic = Computer science/Programming languages/Type systems
abstract = This theory defines the type inference rules and the type inference algorithm <i>W</i> for MiniML (simply-typed lambda terms with <tt>let</tt>) due to Milner. It proves the soundness and completeness of <i>W</i> w.r.t. the rules.
notify = kleing@cse.unsw.edu.au
[Simpl]
title = A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment
author = Norbert Schirmer <>
date = 2008-02-29
-topic = Computer Science/Programming Languages/Language Definitions, Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Language definitions, Computer science/Programming languages/Logics
license = LGPL
abstract = We present the theory of Simpl, a sequential imperative programming language. We introduce its syntax, its semantics (big and small-step operational semantics) and Hoare logics for both partial as well as total correctness. We prove soundness and completeness of the Hoare logic. We integrate and automate the Hoare logic in Isabelle/HOL to obtain a practically usable verification environment for imperative programs. Simpl is independent of a concrete programming language but expressive enough to cover all common language features: mutually recursive procedures, abrupt termination and exceptions, runtime faults, local and global variables, pointers and heap, expressions with side effects, pointers to procedures, partial application and closures, dynamic method invocation and also unbounded nondeterminism.
notify = kleing@cse.unsw.edu.au, norbert.schirmer@web.de
[Separation_Algebra]
title = Separation Algebra
author = Gerwin Klein <mailto:kleing@cse.unsw.edu.au>, Rafal Kolanski <mailto:rafal.kolanski@nicta.com.au>, Andrew Boyton <mailto:andrew.boyton@nicta.com.au>
date = 2012-05-11
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
license = BSD
abstract = We present a generic type class implementation of separation algebra for Isabelle/HOL as well as lemmas and generic tactics which can be used directly for any instantiation of the type class. <P> The ex directory contains example instantiations that include structures such as a heap or virtual memory. <P> The abstract separation algebra is based upon "Abstract Separation Logic" by Calcagno et al. These theories are also the basis of the ITP 2012 rough diamond "Mechanised Separation Algebra" by the authors. <P> The aim of this work is to support and significantly reduce the effort for future separation logic developments in Isabelle/HOL by factoring out the part of separation logic that can be treated abstractly once and for all. This includes developing typical default rule sets for reasoning as well as automated tactic support for separation logic.
notify = kleing@cse.unsw.edu.au, rafal.kolanski@nicta.com.au
[Separation_Logic_Imperative_HOL]
title = A Separation Logic Framework for Imperative HOL
author = Peter Lammich <http://www21.in.tum.de/~lammich>, Rene Meis <mailto:rene.meis@uni-due.de>
date = 2012-11-14
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
license = BSD
abstract =
We provide a framework for separation-logic based correctness proofs of
Imperative HOL programs. Our framework comes with a set of proof methods to
automate canonical tasks such as verification condition generation and
frame inference. Moreover, we provide a set of examples that show the
applicability of our framework. The examples include algorithms on lists,
hash-tables, and union-find trees. We also provide abstract interfaces for
lists, maps, and sets that allow one to develop generic imperative algorithms
and to use data-refinement techniques.
<br>
As we target Imperative HOL, our programs can be translated to
efficiently executable code in various target languages, including
ML, OCaml, Haskell, and Scala.
notify = lammich@in.tum.de
[Inductive_Confidentiality]
title = Inductive Study of Confidentiality
author = Giampaolo Bella <http://www.dmi.unict.it/~giamp/>
date = 2012-05-02
-topic = Computer Science/Security
+topic = Computer science/Security
abstract = This document contains the full theory files accompanying article <i>Inductive Study of Confidentiality --- for Everyone</i> in <i>Formal Aspects of Computing</i>. They aim at an illustrative and didactic presentation of the Inductive Method of protocol analysis, focusing on the treatment of one of the main goals of security protocols: confidentiality against a threat model. The treatment of confidentiality, which in fact forms a key aspect of all protocol analysis tools, has been found cryptic by many learners of the Inductive Method, hence the motivation for this work. The theory files in this document guide the reader step by step towards design and proof of significant confidentiality theorems. These are developed against two threat models, the standard Dolev-Yao and a more audacious one, the General Attacker, which turns out to be particularly useful also for teaching purposes.
notify = giamp@dmi.unict.it
[Possibilistic_Noninterference]
title = Possibilistic Noninterference
author = Andrei Popescu <mailto:uuomul@yahoo.com>, Johannes Hölzl <mailto:hoelzl@in.tum.de>
date = 2012-09-10
-topic = Computer Science/Security, Computer Science/Programming Languages/Type Systems
+topic = Computer science/Security, Computer science/Programming languages/Type systems
abstract = We formalize a wide variety of Volpano/Smith-style noninterference
notions for a while language with parallel composition.
We systematize and classify these notions according to
compositionality w.r.t. the language constructs. Compositionality
yields sound syntactic criteria (a.k.a. type systems) in a uniform way.
<p>
An <a href="http://www21.in.tum.de/~nipkow/pubs/cpp12.html">article</a>
about these proofs is published in the proceedings
of the conference Certified Programs and Proofs 2012.
notify = hoelzl@in.tum.de
[SIFUM_Type_Systems]
title = A Formalization of Assumptions and Guarantees for Compositional Noninterference
author = Sylvia Grewe <mailto:grewe@cs.tu-darmstadt.de>, Heiko Mantel <mailto:mantel@mais.informatik.tu-darmstadt.de>, Daniel Schoepe <mailto:daniel@schoepe.org>
date = 2014-04-23
-topic = Computer Science/Security, Computer Science/Programming Languages/Type Systems
+topic = Computer science/Security, Computer science/Programming languages/Type systems
abstract = Research in information-flow security aims at developing methods to
identify undesired information leaks within programs from private
(high) sources to public (low) sinks. For a concurrent system, it is
desirable to have compositional analysis methods that allow for
analyzing each thread independently and that nevertheless guarantee
that the parallel composition of successfully analyzed threads
satisfies a global security guarantee. However, such a compositional
analysis should not be overly pessimistic about what an environment
might do with shared resources. Otherwise, the analysis will reject
many intuitively secure programs.
<p>
The paper "Assumptions and Guarantees for Compositional
Noninterference" by Mantel et. al. presents one solution for this problem:
an approach for compositionally reasoning about non-interference in
concurrent programs via rely-guarantee-style reasoning. We present an
Isabelle/HOL formalization of the concepts and proofs of this approach.
notify = grewe@cs.tu-darmstadt.de
[Dependent_SIFUM_Type_Systems]
title = A Dependent Security Type System for Concurrent Imperative Programs
author = Toby Murray <http://people.eng.unimelb.edu.au/tobym/>, Robert Sison <>, Edward Pierzchalski <>, Christine Rizkallah <https://www.mpi-inf.mpg.de/~crizkall/>
notify = toby.murray@unimelb.edu.au
date = 2016-06-25
-topic = Computer Science/Security, Computer Science/Programming Languages/Type Systems
+topic = Computer science/Security, Computer science/Programming languages/Type systems
abstract =
The paper "Compositional Verification and Refinement of Concurrent
Value-Dependent Noninterference" by Murray et al. (CSF 2016) presents
a dependent security type system for compositionally verifying a
value-dependent noninterference property, defined in (Murray, PLAS
2015), for concurrent programs. This development formalises that
security definition, the type system and its soundness proof, and
demonstrates its application on some small examples. It was derived
from the SIFUM_Type_Systems AFP entry by Sylvia Grewe, Heiko Mantel
and Daniel Schoepe, whose structure it inherits.
extra-history =
Change history:
[2016-08-19]:
Removed unused "stop" parameter and "stop_no_eval" assumption from the sifum_security locale.
(revision dbc482d36372)
[2016-09-27]:
Added security locale support for the imposition of requirements on the initial memory.
(revision cce4ceb74ddb)
[Dependent_SIFUM_Refinement]
title = Compositional Security-Preserving Refinement for Concurrent Imperative Programs
author = Toby Murray <http://people.eng.unimelb.edu.au/tobym/>, Robert Sison <>, Edward Pierzchalski <>, Christine Rizkallah <https://www.mpi-inf.mpg.de/~crizkall/>
notify = toby.murray@unimelb.edu.au
date = 2016-06-28
-topic = Computer Science/Security
+topic = Computer science/Security
abstract =
The paper "Compositional Verification and Refinement of Concurrent
Value-Dependent Noninterference" by Murray et al. (CSF 2016) presents
a compositional theory of refinement for a value-dependent
noninterference property, defined in (Murray, PLAS 2015), for
concurrent programs. This development formalises that refinement
theory, and demonstrates its application on some small examples.
extra-history =
Change history:
[2016-08-19]:
Removed unused "stop" parameters from the sifum_refinement locale.
(revision dbc482d36372)
[2016-09-02]:
TobyM extended "simple" refinement theory to be usable for all bisimulations.
(revision 547f31c25f60)
[Relational-Incorrectness-Logic]
title = An Under-Approximate Relational Logic
author = Toby Murray <https://people.eng.unimelb.edu.au/tobym/>
-topic = Computer Science/Programming Languages/Logics, Computer Science/Security
+topic = Computer science/Programming languages/Logics, Computer science/Security
date = 2020-03-12
notify = toby.murray@unimelb.edu.au
abstract =
Recently, authors have proposed under-approximate logics for reasoning
about programs. So far, all such logics have been confined to
reasoning about individual program behaviours. Yet there exist many
over-approximate relational logics for reasoning about pairs of
programs and relating their behaviours. We present the first
under-approximate relational logic, for the simple imperative language
IMP. We prove our logic is both sound and complete. Additionally, we
show how reasoning in this logic can be decomposed into non-relational
reasoning in an under-approximate Hoare logic, mirroring Beringer’s
result for over-approximate relational logics. We illustrate the
application of our logic on some small examples in which we provably
demonstrate the presence of insecurity.
[Strong_Security]
title = A Formalization of Strong Security
author = Sylvia Grewe <mailto:grewe@cs.tu-darmstadt.de>, Alexander Lux <mailto:lux@mais.informatik.tu-darmstadt.de>, Heiko Mantel <mailto:mantel@mais.informatik.tu-darmstadt.de>, Jens Sauer <mailto:sauer@mais.informatik.tu-darmstadt.de>
date = 2014-04-23
-topic = Computer Science/Security, Computer Science/Programming Languages/Type Systems
+topic = Computer science/Security, Computer science/Programming languages/Type systems
abstract = Research in information-flow security aims at developing methods to
identify undesired information leaks within programs from private
sources to public sinks. Noninterference captures this
intuition. Strong security from Sabelfeld and Sands
formalizes noninterference for concurrent systems.
<p>
We present an Isabelle/HOL formalization of strong security for
arbitrary security lattices (Sabelfeld and Sands use
a two-element security lattice in the original publication).
The formalization includes
compositionality proofs for strong security and a soundness proof
for a security type system that checks strong security for programs
in a simple while language with dynamic thread creation.
<p>
Our formalization of the security type system is abstract in the
language for expressions and in the semantic side conditions for
expressions. It can easily be instantiated with different syntactic
approximations for these side conditions. The soundness proof of
such an instantiation boils down to showing that these syntactic
approximations imply the semantic side conditions.
notify = grewe@cs.tu-darmstadt.de
[WHATandWHERE_Security]
title = A Formalization of Declassification with WHAT-and-WHERE-Security
author = Sylvia Grewe <mailto:grewe@cs.tu-darmstadt.de>, Alexander Lux <mailto:lux@mais.informatik.tu-darmstadt.de>, Heiko Mantel <mailto:mantel@mais.informatik.tu-darmstadt.de>, Jens Sauer <mailto:sauer@mais.informatik.tu-darmstadt.de>
date = 2014-04-23
-topic = Computer Science/Security, Computer Science/Programming Languages/Type Systems
+topic = Computer science/Security, Computer science/Programming languages/Type systems
abstract = Research in information-flow security aims at developing methods to
identify undesired information leaks within programs from private
sources to public sinks. Noninterference captures this intuition by
requiring that no information whatsoever flows from private sources
to public sinks. However, in practice this definition is often too
strict: Depending on the intuitive desired security policy, the
controlled declassification of certain private information (WHAT) at
certain points in the program (WHERE) might not result in an
undesired information leak.
<p>
We present an Isabelle/HOL formalization of such a security property
for controlled declassification, namely WHAT&WHERE-security from
"Scheduler-Independent Declassification" by Lux, Mantel, and Perner.
The formalization includes
compositionality proofs for WHAT&WHERE-security and a soundness proof
for a security type system that checks WHAT&WHERE-security for programs
in a simple while language with dynamic thread creation.
<p>
Our formalization of the security type system is abstract in the
language for expressions and in the semantic side conditions for
expressions. It can easily be instantiated with different syntactic
approximations for these side conditions. The soundness proof of
such an instantiation boils down to showing that these syntactic
approximations imply the semantic side conditions.
<p>
This Isabelle/HOL formalization uses theories from the entry
Strong Security.
notify = grewe@cs.tu-darmstadt.de
[VolpanoSmith]
title = A Correctness Proof for the Volpano/Smith Security Typing System
author = Gregor Snelting <http://pp.info.uni-karlsruhe.de/personhp/gregor_snelting.php>, Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php>
date = 2008-09-02
-topic = Computer Science/Programming Languages/Type Systems, Computer Science/Security
+topic = Computer science/Programming languages/Type systems, Computer science/Security
abstract = The Volpano/Smith/Irvine security type system requires that variables are annotated as high (secret) or low (public), and provides typing rules which guarantee that secret values cannot leak to public output ports. This property of a program is called confidentiality. For a simple while-language without threads, our proof shows that typeability in the Volpano/Smith system guarantees noninterference. Noninterference means that if two initial states for program execution are low-equivalent, then the final states are low-equivalent as well. This indeed implies that secret values cannot leak to public ports. The proof defines an abstract syntax and operational semantics for programs, formalizes noninterference, and then proceeds by rule induction on the operational semantics. The mathematically most intricate part is the treatment of implicit flows. Note that the Volpano/Smith system is not flow-sensitive and thus quite imprecise, resulting in false alarms. However, due to the correctness property, all potential breaks of confidentiality are discovered.
notify =
[Abstract-Hoare-Logics]
title = Abstract Hoare Logics
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2006-08-08
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
abstract = These theories describe Hoare logics for a number of imperative language constructs, from while-loops to mutually recursive procedures. Both partial and total correctness are treated. In particular, a proof system for total correctness of recursive procedures in the presence of unbounded nondeterminism is presented.
notify = nipkow@in.tum.de
[Stone_Algebras]
title = Stone Algebras
author = Walter Guttmann <http://www.cosc.canterbury.ac.nz/walter.guttmann/>
notify = walter.guttmann@canterbury.ac.nz
date = 2016-09-06
topic = Mathematics/Order
abstract =
A range of algebras between lattices and Boolean algebras generalise
the notion of a complement. We develop a hierarchy of these
pseudo-complemented algebras that includes Stone algebras.
Independently of this theory we study filters based on partial orders.
Both theories are combined to prove Chen and Grätzer's construction
theorem for Stone algebras. The latter involves extensive reasoning
about algebraic structures in addition to reasoning in algebraic
structures.
[Kleene_Algebra]
title = Kleene Algebra
author = Alasdair Armstrong <>, Georg Struth <http://staffwww.dcs.shef.ac.uk/people/G.Struth/>, Tjark Weber <http://user.it.uu.se/~tjawe125/>
date = 2013-01-15
-topic = Computer Science/Programming Languages/Logics, Computer Science/Automata and Formal Languages, Mathematics/Algebra
+topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra
abstract =
These files contain a formalisation of variants of Kleene algebras and
their most important models as axiomatic type classes in Isabelle/HOL.
Kleene algebras are foundational structures in computing with
applications ranging from automata and language theory to computational
modeling, program construction and verification.
<p>
We start with formalising dioids, which are additively idempotent
semirings, and expand them by axiomatisations of the Kleene star for
finite iteration and an omega operation for infinite iteration. We
show that powersets over a given monoid, (regular) languages, sets of
paths in a graph, sets of computation traces, binary relations and
formal power series form Kleene algebras, and consider further models
based on lattices, max-plus semirings and min-plus semirings. We also
demonstrate that dioids are closed under the formation of matrices
(proofs for Kleene algebras remain to be completed).
<p>
On the one hand we have aimed at a reference formalisation of variants
of Kleene algebras that covers a wide range of variants and the core
theorems in a structured and modular way and provides readable proofs
at text book level. On the other hand, we intend to use this algebraic
hierarchy and its models as a generic algebraic middle-layer from which
programming applications can quickly be explored, implemented and verified.
notify = g.struth@sheffield.ac.uk, tjark.weber@it.uu.se
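To convey the type-class style of the development (in illustrative Scala rather than the entry's Isabelle axiomatisations): a Kleene algebra can be captured as a trait, and the two-element Boolean algebra is one of its simplest models.
    object KleeneSketch {
      trait KleeneAlgebra[A] {
        def zero: A
        def one: A
        def plus(x: A, y: A): A
        def times(x: A, y: A): A
        def star(x: A): A
      }

      // The two-element model: plus is disjunction, times is conjunction,
      // and star is constantly one (1 + x * x^* = x^* holds trivially).
      val boolKleene: KleeneAlgebra[Boolean] = new KleeneAlgebra[Boolean] {
        val zero = false
        val one = true
        def plus(x: Boolean, y: Boolean) = x || y
        def times(x: Boolean, y: Boolean) = x && y
        def star(x: Boolean) = true
      }
    }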
[KAT_and_DRA]
title = Kleene Algebra with Tests and Demonic Refinement Algebras
author = Alasdair Armstrong <>, Victor B. F. Gomes <http://www.dcs.shef.ac.uk/~victor>, Georg Struth <http://www.dcs.shef.ac.uk/~georg>
date = 2014-01-23
-topic = Computer Science/Programming Languages/Logics, Computer Science/Automata and Formal Languages, Mathematics/Algebra
+topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra
abstract =
We formalise Kleene algebra with tests (KAT) and demonic refinement
algebra (DRA) in Isabelle/HOL. KAT is relevant for program verification
and correctness proofs in the partial correctness setting, while DRA
targets similar applications in the context of total correctness. Our
formalisation contains the two most important models of these algebras:
binary relations in the case of KAT and predicate transformers in the
case of DRA. In addition, we derive the inference rules for Hoare logic
in KAT and its relational model and present a simple formally verified
program verification tool prototype based on the algebraic approach.
notify = g.struth@dcs.shef.ac.uk
[KAD]
title = Kleene Algebras with Domain
author = Victor B. F. Gomes <http://www.dcs.shef.ac.uk/~victor>, Walter Guttmann <http://www.cosc.canterbury.ac.nz/walter.guttmann/>, Peter Höfner <http://www.hoefner-online.de/>, Georg Struth <http://www.dcs.shef.ac.uk/~georg>, Tjark Weber <http://user.it.uu.se/~tjawe125/>
date = 2016-04-12
-topic = Computer Science/Programming Languages/Logics, Computer Science/Automata and Formal Languages, Mathematics/Algebra
+topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra
abstract =
Kleene algebras with domain are Kleene algebras endowed with an
operation that maps each element of the algebra to its domain of
definition (or its complement) in abstract fashion. They form a simple
algebraic basis for Hoare logics, dynamic logics or predicate
transformer semantics. We formalise a modular hierarchy of algebras
with domain and antidomain (domain complement) operations in
Isabelle/HOL that ranges from domain and antidomain semigroups to
modal Kleene algebras and divergence Kleene algebras. We link these
algebras with models of binary relations and program traces. We
include some examples from modal logics, termination and program
analysis.
notify = walter.guttman@canterbury.ac.nz, g.struth@sheffield.ac.uk, tjark.weber@it.uu.se
[Regular_Algebras]
title = Regular Algebras
author = Simon Foster <http://www-users.cs.york.ac.uk/~simonf>, Georg Struth <http://www.dcs.shef.ac.uk/~georg>
date = 2014-05-21
-topic = Computer Science/Automata and Formal Languages, Mathematics/Algebra
+topic = Computer science/Automata and formal languages, Mathematics/Algebra
abstract =
Regular algebras axiomatise the equational theory of regular expressions as induced by
regular language identity. We use Isabelle/HOL for a detailed systematic study of regular
algebras given by Boffa, Conway, Kozen and Salomaa. We investigate the relationships between
these classes, formalise a soundness proof for the smallest class (Salomaa's) and obtain
completeness of the largest one (Boffa's) relative to a deep result by Krob. In addition
we provide a large collection of regular identities in the general setting of Boffa's axiom.
Our regular algebra hierarchy is orthogonal to the Kleene algebra hierarchy in the Archive
of Formal Proofs; we have not aimed at an integration for pragmatic reasons.
notify = simon.foster@york.ac.uk, g.struth@sheffield.ac.uk
[BytecodeLogicJmlTypes]
title = A Bytecode Logic for JML and Types
author = Lennart Beringer <>, Martin Hofmann <http://www.tcs.informatik.uni-muenchen.de/~mhofmann>
date = 2008-12-12
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
abstract = This document contains the Isabelle/HOL sources underlying the paper <i>A bytecode logic for JML and types</i> by Beringer and Hofmann, updated to Isabelle 2008. We present a program logic for a subset of sequential Java bytecode that is suitable for representing both, features found in high-level specification language JML as well as interpretations of high-level type systems. To this end, we introduce a fine-grained collection of assertions, including strong invariants, local annotations and VDM-reminiscent partial-correctness specifications. Thanks to a goal-oriented structure and interpretation of judgements, verification may proceed without recourse to an additional control flow analysis. The suitability for interpreting intensional type systems is illustrated by the proof-carrying-code style encoding of a type system for a first-order functional language which guarantees a constant upper bound on the number of objects allocated throughout an execution, be the execution terminating or non-terminating. Like the published paper, the formal development is restricted to a comparatively small subset of the JVML, lacking (among other features) exceptions, arrays, virtual methods, and static fields. This shortcoming has been overcome meanwhile, as our paper has formed the basis of the Mobius base logic, a program logic for the full sequential fragment of the JVML. Indeed, the present formalisation formed the basis of a subsequent formalisation of the Mobius base logic in the proof assistant Coq, which includes a proof of soundness with respect to the Bicolano operational semantics by Pichardie.
notify =
[DataRefinementIBP]
title = Semantics and Data Refinement of Invariant Based Programs
author = Viorel Preoteasa <http://users.abo.fi/vpreotea/>, Ralph-Johan Back <http://users.abo.fi/Ralph-Johan.Back/>
date = 2010-05-28
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
abstract = Invariant based programming is a technique for constructing correct programs by first identifying the basic situations (pre- and post-conditions and invariants) that can occur during the execution of the program, and then defining the transitions and proving that they preserve the invariants. Data refinement is a technique for building correct programs working on concrete datatypes as refinements of more abstract programs. In the theories presented here we formalize the predicate transformer semantics for invariant based programs and their data refinement.
extra-history =
Change history:
[2012-01-05]: Moved some general complete lattice properties to the AFP entry Lattice Properties.
Changed the definition of the data refinement relation to be more general and updated all corresponding theorems.
Added new syntax for demonic and angelic update statements.
notify = viorel.preoteasa@aalto.fi
[RefinementReactive]
title = Formalization of Refinement Calculus for Reactive Systems
author = Viorel Preoteasa <mailto:viorel.preoteasa@aalto.fi>
date = 2014-10-08
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
abstract =
We present a formalization of refinement calculus for reactive systems.
Refinement calculus is based on monotonic predicate transformers
(monotonic functions from sets of post-states to sets of pre-states),
and it is a powerful formalism for reasoning about imperative programs.
We model reactive systems as monotonic property transformers
that transform sets of output infinite sequences into sets of input
infinite sequences. Within this semantics we can model
refinement of reactive systems, (unbounded) angelic and
demonic nondeterminism, sequential composition, and
other semantic properties. We can model systems that may
fail for some inputs, and we can model compatibility of systems.
We can specify systems that have liveness properties using
linear temporal logic, and we can refine system specifications
into systems based on symbolic transition systems, suitable
for implementations.
notify = viorel.preoteasa@aalto.fi
[SIFPL]
title = Secure information flow and program logics
author = Lennart Beringer <>, Martin Hofmann <http://www.tcs.informatik.uni-muenchen.de/~mhofmann>
date = 2008-11-10
-topic = Computer Science/Programming Languages/Logics, Computer Science/Security
+topic = Computer science/Programming languages/Logics, Computer science/Security
abstract = We present interpretations of type systems for secure information flow in Hoare logic, complementing previous encodings in relational program logics. We first treat the imperative language IMP, extended by a simple procedure call mechanism. For this language we consider base-line non-interference in the style of Volpano et al. and the flow-sensitive type system by Hunt and Sands. In both cases, we show how typing derivations may be used to automatically generate proofs in the program logic that certify the absence of illicit flows. We then add instructions for object creation and manipulation, and derive appropriate proof rules for base-line non-interference. As a consequence of our work, standard verification technology may be used for verifying that a concrete program satisfies the non-interference property.<br><br>The present proof development represents an update of the formalisation underlying our paper [CSF 2007] and is intended to resolve any ambiguities that may be present in the paper.
notify = lennart.beringer@ifi.lmu.de
[TLA]
title = A Definitional Encoding of TLA* in Isabelle/HOL
author = Gudmund Grov <http://homepages.inf.ed.ac.uk/ggrov>, Stephan Merz <http://www.loria.fr/~merz>
date = 2011-11-19
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
abstract = We mechanise the logic TLA*
<a href="http://www.springerlink.com/content/ax3qk557qkdyt7n6/">[Merz 1999]</a>,
an extension of Lamport's Temporal Logic of Actions (TLA)
<a href="http://dl.acm.org/citation.cfm?doid=177492.177726">[Lamport 1994]</a>
for specifying and reasoning
about concurrent and reactive systems. Aiming at a framework for mechanising the verification of TLA (or TLA*) specifications, this contribution reuses
some elements from a previous axiomatic encoding of TLA in Isabelle/HOL
by the second author [Merz 1998], which has been part of the Isabelle
distribution. In contrast to that previous work, we give here a shallow,
definitional embedding, with the following highlights:
<ul>
<li>a theory of infinite sequences, including a formalisation of the concepts of stuttering invariance central to TLA and TLA*;
<li>a definition of the semantics of TLA*, which extends TLA by a mutually-recursive definition of formulas and pre-formulas, generalising TLA action formulas;
<li>a substantial set of derived proof rules, including the TLA* axioms and Lamport's proof rules for system verification;
<li>a set of examples illustrating the usage of Isabelle/TLA* for reasoning about systems.
</ul>
Note that this work is unrelated to the ongoing development of a proof system
for the specification language TLA+, which includes an encoding of TLA+ as a
new Isabelle object logic <a href="http://www.springerlink.com/content/354026160p14j175/">[Chaudhuri et al 2010]</a>.
notify = ggrov@inf.ed.ac.uk
[Compiling-Exceptions-Correctly]
title = Compiling Exceptions Correctly
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2004-07-09
-topic = Computer Science/Programming Languages/Compiling
+topic = Computer science/Programming languages/Compiling
abstract = An exception compilation scheme that dynamically creates and removes exception handler entries on the stack. A formalization of an article of the same name by <a href="http://www.cs.nott.ac.uk/~gmh/">Hutton</a> and Wright.
notify = nipkow@in.tum.de
[NormByEval]
title = Normalization by Evaluation
author = Klaus Aehlig <http://www.linta.de/~aehlig/>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2008-02-18
-topic = Computer Science/Programming Languages/Compiling
+topic = Computer science/Programming languages/Compiling
abstract = This article formalizes normalization by evaluation as implemented in Isabelle. Lambda calculus plus term rewriting is compiled into a functional program with pattern matching. It is proved that the result of a successful evaluation is a) correct, i.e. equivalent to the input, and b) in normal form.
notify = nipkow@in.tum.de
[Program-Conflict-Analysis]
title = Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors
-topic = Computer Science/Programming Languages/Static Analysis
+topic = Computer science/Programming languages/Static analysis
author = Peter Lammich <http://www21.in.tum.de/~lammich>, Markus Müller-Olm <http://cs.uni-muenster.de/u/mmo/>
date = 2007-12-14
abstract = In this work we formally verify the soundness and precision of a static program analysis that detects conflicts (e. g. data races) in programs with procedures, thread creation and monitors with the Isabelle theorem prover. As common in static program analysis, our program model abstracts guarded branching by nondeterministic branching, but completely interprets the call-/return behavior of procedures, synchronization by monitors, and thread creation. The analysis is based on the observation that all conflicts already occur in a class of particularly restricted schedules. These restricted schedules are suited to constraint-system-based program analysis. The formalization is based upon a flowgraph-based program model with an operational semantics as reference point.
notify = peter.lammich@uni-muenster.de
[Shivers-CFA]
title = Shivers' Control Flow Analysis
-topic = Computer Science/Programming Languages/Static Analysis
+topic = Computer science/Programming languages/Static analysis
author = Joachim Breitner <mailto:mail@joachim-breitner.de>
date = 2010-11-16
abstract =
In his dissertation, Olin Shivers introduces a concept of control flow graphs
for functional languages, provides an algorithm to statically derive a safe
approximation of the control flow graph and proves this algorithm correct. In
this research project, Shivers' algorithms and proofs are formalized
in the HOLCF extension of HOL.
notify = mail@joachim-breitner.de, nipkow@in.tum.de
[Slicing]
title = Towards Certified Slicing
author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php>
date = 2008-09-16
-topic = Computer Science/Programming Languages/Static Analysis
+topic = Computer science/Programming languages/Static analysis
abstract = Slicing is a widely-used technique with applications in e.g. compiler technology and software security. Thus verification of algorithms in these areas is often based on the correctness of slicing, which should ideally be proven independently of concrete programming languages and with the help of well-known verification tools such as proof assistants. As a first step in this direction, this contribution presents a framework for dynamic and static intraprocedural slicing based on control flow and program dependence graphs. Abstracting from concrete syntax, we base the framework on a graph representation of the program fulfilling certain structural and well-formedness properties.<br><br>The formalization consists of the basic framework (in subdirectory Basic/), the correctness proof for dynamic slicing (in subdirectory Dynamic/), the correctness proof for static intraprocedural slicing (in subdirectory StaticIntra/) and instantiations of the framework with a simple While language (in subdirectory While/) and the sophisticated object-oriented bytecode language of Jinja (in subdirectory JinjaVM/). For more information on the framework, see the TPHOLS 2008 paper by Wasserrab and Lochbihler and the PLAS 2009 paper by Wasserrab et al.
notify =
[HRB-Slicing]
title = Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer
author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php>
date = 2009-11-13
-topic = Computer Science/Programming Languages/Static Analysis
+topic = Computer science/Programming languages/Static analysis
abstract = After verifying <a href="Slicing.html">dynamic and static interprocedural slicing</a>, we present a modular framework for static interprocedural slicing. To this end, we formalized the standard two-phase slicer from Horwitz, Reps and Binkley (see their TOPLAS 12(1) 1990 paper) together with summary edges as presented by Reps et al. (see FSE 1994). The framework is again modular in the programming language by using an abstract CFG, defined via structural and well-formedness properties. Using a weak simulation between the original and sliced graph, we were able to prove the correctness of static interprocedural slicing. We also instantiate our framework with a simple While language with procedures. This shows that the chosen abstractions are indeed valid.
notify = nipkow@in.tum.de
[WorkerWrapper]
title = The Worker/Wrapper Transformation
author = Peter Gammie <http://peteg.org>
date = 2009-10-30
-topic = Computer Science/Programming Languages/Transformations
+topic = Computer science/Programming languages/Transformations
abstract = Gill and Hutton formalise the worker/wrapper transformation, building on the work of Launchbury and Peyton-Jones who developed it as a way of changing the type at which a recursive function operates. This development establishes the soundness of the technique and several examples of its use.
notify = peteg42@gmail.com, nipkow@in.tum.de
[JiveDataStoreModel]
title = Jive Data and Store Model
author = Nicole Rauch <mailto:rauch@informatik.uni-kl.de>, Norbert Schirmer <>
date = 2005-06-20
license = LGPL
-topic = Computer Science/Programming Languages/Misc
+topic = Computer science/Programming languages/Misc
abstract = This document presents the formalization of an object-oriented data and store model in Isabelle/HOL. This model is being used in the Java Interactive Verification Environment, Jive.
notify = kleing@cse.unsw.edu.au, schirmer@in.tum.de
[HotelKeyCards]
title = Hotel Key Card System
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2006-09-09
-topic = Computer Science/Security
+topic = Computer science/Security
abstract = Two models of an electronic hotel key card system are contrasted: a state based and a trace based one. Both are defined, verified, and proved equivalent in the theorem prover Isabelle/HOL. It is shown that if a guest follows a certain safety policy regarding her key cards, she can be sure that nobody but her can enter her room.
notify = nipkow@in.tum.de
[RSAPSS]
title = SHA1, RSA, PSS and more
author = Christina Lindenberg <>, Kai Wirt <>
date = 2005-05-02
-topic = Computer Science/Security/Cryptography
+topic = Computer science/Security/Cryptography
abstract = Formal verification is getting more and more important in computer science. However the state of the art formal verification methods in cryptography are very rudimentary. These theories are one step to provide a tool box allowing the use of formal methods in every aspect of cryptography. Moreover we present a proof of concept for the feasibility of verification techniques to a standard signature algorithm.
notify = nipkow@in.tum.de
[InformationFlowSlicing]
title = Information Flow Noninterference via Slicing
author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php>
date = 2010-03-23
-topic = Computer Science/Security
+topic = Computer science/Security
abstract =
<p>
In this contribution, we show how correctness proofs for <a
href="Slicing.html">intra-</a> and <a
href="HRB-Slicing.html">interprocedural slicing</a> can be used to prove
that slicing is able to guarantee information flow noninterference.
Moreover, we also illustrate how to lift the control flow graphs of the
respective frameworks such that they fulfil the additional assumptions
needed in the noninterference proofs. A detailed description of the
intraprocedural proof and its interplay with the slicing framework can be
found in the PLAS'09 paper by Wasserrab et al.
</p>
<p>
This entry contains the part for intra-procedural slicing. See entry
<a href="InformationFlowSlicing_Inter.html">InformationFlowSlicing_Inter</a>
for the inter-procedural part.
</p>
extra-history =
Change history:
[2016-06-10]: The original entry <a
href="InformationFlowSlicing.html">InformationFlowSlicing</a> contained both
the <a href="InformationFlowSlicing_Inter.html">inter-</a> and <a
href="InformationFlowSlicing.html">intra-procedural</a> cases; it was split
into two entries for easier maintenance.
notify =
[InformationFlowSlicing_Inter]
title = Inter-Procedural Information Flow Noninterference via Slicing
author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php>
date = 2010-03-23
-topic = Computer Science/Security
+topic = Computer science/Security
abstract =
<p>
In this contribution, we show how correctness proofs for <a
href="Slicing.html">intra-</a> and <a
href="HRB-Slicing.html">interprocedural slicing</a> can be used to prove
that slicing is able to guarantee information flow noninterference.
Moreover, we also illustrate how to lift the control flow graphs of the
respective frameworks such that they fulfil the additional assumptions
needed in the noninterference proofs. A detailed description of the
intraprocedural proof and its interplay with the slicing framework can be
found in the PLAS'09 paper by Wasserrab et al.
</p>
<p>
This entry contains the part for inter-procedural slicing. See entry
<a href="InformationFlowSlicing.html">InformationFlowSlicing</a>
for the intra-procedural part.
</p>
extra-history =
Change history:
[2016-06-10]: The original entry <a
href="InformationFlowSlicing.html">InformationFlowSlicing</a> contained both
the <a href="InformationFlowSlicing_Inter.html">inter-</a> and <a
href="InformationFlowSlicing.html">intra-procedural</a> cases; it was split
into two entries for easier maintenance.
notify =
[ComponentDependencies]
title = Formalisation and Analysis of Component Dependencies
author = Maria Spichkova <mailto:maria.spichkova@rmit.edu.au>
date = 2014-04-28
-topic = Computer Science/System Description Languages
+topic = Computer science/System description languages
abstract = This set of theories presents a formalisation in Isabelle/HOL of data dependencies between components. The approach allows one to analyse the system structure with a view to efficient checking: it aims at identifying, for a concrete system, which parts of the system are necessary to check a given property.
notify = maria.spichkova@rmit.edu.au
[Verified-Prover]
title = A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic
author = Tom Ridge <>
date = 2004-09-28
topic = Logic/General logic/Mechanization of proofs
abstract = Soundness and completeness for a system of first order logic are formally proved, building on James Margetson's formalization of work by Wainer and Wallen. The completeness proofs naturally suggest an algorithm to derive proofs. This algorithm, which can be implemented tail recursively, is formalized in Isabelle/HOL. The algorithm can be executed via the rewriting tactics of Isabelle. Alternatively, the definitions can be exported to OCaml, yielding a directly executable program.
notify = lp15@cam.ac.uk
[Completeness]
title = Completeness theorem
author = James Margetson <>, Tom Ridge <>
date = 2004-09-20
topic = Logic/Proof theory
abstract = The completeness of first-order logic is proved, following the first five pages of Wainer and Wallen's chapter of the book <i>Proof Theory</i> by Aczel et al., CUP, 1992. Their presentation of formulas allows the proofs to use symmetry arguments. Margetson formalized this theorem by early 2000. The Isar conversion is thanks to Tom Ridge. A paper describing the formalization is available <a href="Completeness-paper.pdf">[pdf]</a>.
notify = lp15@cam.ac.uk
[Ordinal]
title = Countable Ordinals
author = Brian Huffman <http://web.cecs.pdx.edu/~brianh/>
date = 2005-11-11
topic = Logic/Set theory
abstract = This development defines a well-ordered type of countable ordinals. It includes notions of continuous and normal functions, recursively defined functions over ordinals, least fixed-points, and derivatives. Much of ordinal arithmetic is formalized, including exponentials and logarithms. The development concludes with formalizations of Cantor Normal Form and Veblen hierarchies over normal functions.
notify = lcp@cl.cam.ac.uk
[Ordinals_and_Cardinals]
title = Ordinals and Cardinals
author = Andrei Popescu <>
date = 2009-09-01
topic = Logic/Set theory
abstract = We develop a basic theory of ordinals and cardinals in Isabelle/HOL, up to the point where some cardinality facts relevant for the ``working mathematician'' become available. Unlike in set theory, here we do not have at hand canonical notions of ordinal and cardinal. Therefore, here an ordinal is merely a well-order relation and a cardinal is an ordinal minimal w.r.t. order embedding on its field.
extra-history =
Change history:
[2012-09-25]: This entry has been discontinued because it is now part of the Isabelle distribution.
notify = uuomul@yahoo.com, nipkow@in.tum.de
[FOL-Fitting]
title = First-Order Logic According to Fitting
author = Stefan Berghofer <http://www.in.tum.de/~berghofe>
contributors = Asta Halkjær From <https://people.compute.dtu.dk/ahfrom/>
date = 2007-08-02
topic = Logic/General logic/Classical first-order logic
abstract = We present a formalization of parts of Melvin Fitting's book "First-Order Logic and Automated Theorem Proving". The formalization covers the syntax of first-order logic, its semantics, the model existence theorem, a natural deduction proof calculus together with a proof of correctness and completeness, as well as the Löwenheim-Skolem theorem.
extra-history =
Change history:
[2018-07-21]: Proved completeness theorem for open formulas. Proofs are now written in the declarative style. Enumeration of pairs and datatypes is automated using the Countable theory.
notify = berghofe@in.tum.de
[Epistemic_Logic]
title = Epistemic Logic
author = Asta Halkjær From <https://people.compute.dtu.dk/ahfrom/>
topic = Logic/General logic/Logics of knowledge and belief
date = 2018-10-29
notify = ahfrom@dtu.dk
abstract =
This work is a formalization of epistemic logic with countably many
agents. It includes proofs of soundness and completeness for the axiom
system K. The completeness proof is based on the textbook
"Reasoning About Knowledge" by Fagin, Halpern, Moses and
Vardi (MIT Press 1995).
[SequentInvertibility]
title = Invertibility in Sequent Calculi
author = Peter Chapman <>
date = 2009-08-28
topic = Logic/Proof theory
license = LGPL
abstract = The invertibility of the rules of a sequent calculus is important for guiding proof search and can be used in some formalised proofs of Cut admissibility. We present sufficient conditions for when a rule is invertible with respect to a calculus. We illustrate the conditions with examples. It must be noted that we give purely syntactic criteria; no guarantees are given as to the suitability of the rules.
notify = pc@cs.st-andrews.ac.uk, nipkow@in.tum.de
[LinearQuantifierElim]
title = Quantifier Elimination for Linear Arithmetic
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2008-01-11
topic = Logic/General logic/Decidability of theories
abstract = This article formalizes quantifier elimination procedures for dense linear orders, linear real arithmetic and Presburger arithmetic. In each case both a DNF-based non-elementary algorithm and one or more (doubly) exponential NNF-based algorithms are formalized, including the well-known algorithms by Ferrante and Rackoff and by Cooper. The NNF-based algorithms for dense linear orders are new but based on Ferrante and Rackoff and on an algorithm by Loos and Weispfenning which simulates infinitesimals. All algorithms are directly executable. In particular, they yield reflective quantifier elimination procedures for HOL itself. The formalization makes heavy use of locales and is therefore highly modular.
notify = nipkow@in.tum.de
[Nat-Interval-Logic]
title = Interval Temporal Logic on Natural Numbers
author = David Trachtenherz <>
date = 2011-02-23
-topic = Logic/General logic/Decidability of theories
+topic = Logic/General logic/Temporal logic
abstract = We introduce a theory of temporal logic operators using sets of natural numbers as time domain, formalized in a shallow embedding manner. The theory comprises special natural intervals (theory IL_Interval: open and closed intervals, continuous and modulo intervals, interval traversing results), operators for shifting intervals to left/right on the number axis as well as expanding/contracting intervals by constant factors (theory IL_IntervalOperators.thy), and ultimately definitions and results for unary and binary temporal operators on arbitrary natural sets (theory IL_TemporalOperators).
notify = nipkow@in.tum.de
[Recursion-Theory-I]
title = Recursion Theory I
author = Michael Nedzelsky <>
date = 2008-04-05
topic = Logic/Computability
abstract = This document presents the formalization of introductory material from recursion theory --- definitions and basic properties of primitive recursive functions, Cantor pairing function and computably enumerable sets (including a proof of existence of a one-complete computably enumerable set and a proof of Rice's theorem).
notify = MichaelNedzelsky@yandex.ru
[Free-Boolean-Algebra]
topic = Logic/General logic/Classical propositional logic
title = Free Boolean Algebra
author = Brian Huffman <http://web.cecs.pdx.edu/~brianh/>
date = 2010-03-29
abstract = This theory defines a type constructor representing the free Boolean algebra over a set of generators. Values of type (α)<i>formula</i> represent propositional formulas with uninterpreted variables from type α, ordered by implication. In addition to all the standard Boolean algebra operations, the library also provides a function for building homomorphisms to any other Boolean algebra type.
notify = brianh@cs.pdx.edu
[Sort_Encodings]
title = Sound and Complete Sort Encodings for First-Order Logic
author = Jasmin Christian Blanchette <http://www21.in.tum.de/~blanchet>, Andrei Popescu <http://www21.in.tum.de/~popescua>
date = 2013-06-27
topic = Logic/General logic/Mechanization of proofs
abstract =
This is a formalization of the soundness and completeness properties
for various efficient encodings of sorts in unsorted first-order logic
used by Isabelle's Sledgehammer tool.
<p>
Essentially, the encodings proceed as follows:
a many-sorted problem is decorated with (as few as possible) tags or
guards that make the problem monotonic; then sorts can be soundly
erased.
<p>
The development employs a formalization of many-sorted first-order logic
in clausal form (clauses, structures and the basic properties
of the satisfaction relation), which could be of interest as the starting
point for other formalizations of first-order logic metatheory.
notify = uuomul@yahoo.com
[Lambda_Free_RPOs]
title = Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms
author = Jasmin Christian Blanchette <mailto:jasmin.blanchette@gmail.com>, Uwe Waldmann <mailto:waldmann@mpi-inf.mpg.de>, Daniel Wand <mailto:dwand@mpi-inf.mpg.de>
date = 2016-09-23
topic = Logic/Rewriting
abstract = This Isabelle/HOL formalization defines recursive path orders (RPOs) for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard RPO on first-order terms also in the presence of currying, distinguishing it from previous work. An optimized variant is formalized as well. It appears promising as the basis of a higher-order superposition calculus.
notify = jasmin.blanchette@gmail.com
[Lambda_Free_KBOs]
title = Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms
author = Heiko Becker <mailto:hbecker@mpi-sws.org>, Jasmin Christian Blanchette <mailto:jasmin.blanchette@gmail.com>, Uwe Waldmann <mailto:waldmann@mpi-inf.mpg.de>, Daniel Wand <mailto:dwand@mpi-inf.mpg.de>
date = 2016-11-12
topic = Logic/Rewriting
abstract = This Isabelle/HOL formalization defines Knuth–Bendix orders for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard transfinite KBO with subterm coefficients on first-order terms. It appears promising as the basis of a higher-order superposition calculus.
notify = jasmin.blanchette@gmail.com
[Lambda_Free_EPO]
title = Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms
author = Alexander Bentkamp <https://www.cs.vu.nl/~abp290/>
topic = Logic/Rewriting
date = 2018-10-19
notify = a.bentkamp@vu.nl
abstract =
This Isabelle/HOL formalization defines the Embedding Path Order (EPO)
for higher-order terms without lambda-abstraction and proves many
useful properties about it. In contrast to the lambda-free recursive
path orders, it does not fully coincide with RPO on first-order terms,
but it is compatible with arbitrary higher-order contexts.
[Nested_Multisets_Ordinals]
title = Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals
author = Jasmin Christian Blanchette <mailto:jasmin.blanchette@gmail.com>, Mathias Fleury <mailto:fleury@mpi-inf.mpg.de>, Dmitriy Traytel <mailto:traytel@inf.ethz.ch>
date = 2016-11-12
topic = Logic/Rewriting
abstract = This Isabelle/HOL formalization introduces a nested multiset datatype and defines Dershowitz and Manna's nested multiset order. The order is proved well founded and linear. By removing one constructor, we transform the nested multisets into hereditary multisets. These are isomorphic to the syntactic ordinals—the ordinals can be recursively expressed in Cantor normal form. Addition, subtraction, multiplication, and linear orders are provided on this type.
notify = jasmin.blanchette@gmail.com
[Abstract-Rewriting]
title = Abstract Rewriting
topic = Logic/Rewriting
date = 2010-06-14
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann>
license = LGPL
abstract =
We present an Isabelle formalization of abstract rewriting (see, e.g.,
the book by Baader and Nipkow). First, we define standard relations like
<i>joinability</i>, <i>meetability</i>, <i>conversion</i>, etc. Then, we
formalize important properties of abstract rewrite systems, e.g.,
confluence and strong normalization. Our main focus is on strong
normalization, since this formalization is the basis of <a
href="http://cl-informatik.uibk.ac.at/software/ceta">CeTA</a> (which is
mainly about strong normalization of term rewrite systems). Hence lemmas
involving strong normalization constitute by far the biggest part of this
theory. One of those is Newman's lemma.
extra-history =
Change history:
[2010-09-17]: Added theories defining several (ordered)
semirings related to strong normalization and giving some standard
instances. <br>
[2013-10-16]: Generalized delta-orders from rationals to Archimedean fields.
notify = christian.sternagel@uibk.ac.at, rene.thiemann@uibk.ac.at
[First_Order_Terms]
title = First-Order Terms
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <http://cl-informatik.uibk.ac.at/users/thiemann/>
-topic = Logic/Rewriting, Computer Science/Algorithms
+topic = Logic/Rewriting, Computer science/Algorithms
license = LGPL
date = 2018-02-06
notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at
abstract =
We formalize basic results on first-order terms, including matching and a
first-order unification algorithm, as well as well-foundedness of the
subsumption order. This entry is part of the <i>Isabelle
Formalization of Rewriting</i> <a
href="http://cl-informatik.uibk.ac.at/isafor">IsaFoR</a>,
where first-order terms are omnipresent: the unification algorithm is
used to certify several confluence and termination techniques, like
critical-pair computation and dependency graph approximations; and the
subsumption order is a crucial ingredient for completion.
[Free-Groups]
title = Free Groups
author = Joachim Breitner <mailto:mail@joachim-breitner.de>
date = 2010-06-24
topic = Mathematics/Algebra
abstract =
Free Groups are, in a sense, the most generic kind of group. They
are defined over a set of generators with no additional relations in between
them. They play an important role in the definition of group presentations
and in other fields. This theory provides the definition of Free Group as
the set of fully canceled words in the generators. The universal property is
proven, as well as some isomorphism results about Free Groups.
extra-history =
Change history:
[2011-12-11]: Added the Ping Pong Lemma.
notify =
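As a reading aid (not part of the entry itself), here is a minimal Scala sketch of the word reduction underlying the "fully cancelled words" representation: generators are encoded as positive integers, their formal inverses as the corresponding negative integers, and a word is reduced by one left-to-right pass with a stack. The encoding and the name `reduce` are illustrative assumptions.

    // Sketch only: generators as positive Ints, inverses as negative Ints.
    def reduce(word: List[Int]): List[Int] =
      word.foldLeft(List.empty[Int]) {
        case (x :: rest, g) if x == -g => rest   // cancel adjacent inverse pair
        case (acc, g)                  => g :: acc
      }.reverse

    // Example: reduce(List(1, 2, -2, 3)) == List(1, 3)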
[CofGroups]
title = An Example of a Cofinitary Group in Isabelle/HOL
author = Bart Kastermans <http://kasterma.net>
date = 2009-08-04
topic = Mathematics/Algebra
abstract = We formalize the usual proof that the group generated by the function k -> k + 1 on the integers gives rise to a cofinitary group.
notify = nipkow@in.tum.de
[Group-Ring-Module]
title = Groups, Rings and Modules
author = Hidetsune Kobayashi <>, L. Chen <>, H. Murao <>
date = 2004-05-18
topic = Mathematics/Algebra
abstract = The theory of groups, rings and modules is developed to a great depth. Group theory results include Zassenhaus's theorem and the Jordan-Hölder theorem. The ring theory development includes ideals, quotient rings and the Chinese remainder theorem. The module development includes the Nakayama lemma, exact sequences and tensor products.
notify = lp15@cam.ac.uk
[Robbins-Conjecture]
title = A Complete Proof of the Robbins Conjecture
author = Matthew Wampler-Doty <>
date = 2010-05-22
topic = Mathematics/Algebra
abstract = This document gives a formalization of the proof of the Robbins conjecture, following A. Mann, <i>A Complete Proof of the Robbins Conjecture</i>, 2003.
notify = nipkow@in.tum.de
[Valuation]
title = Fundamental Properties of Valuation Theory and Hensel's Lemma
author = Hidetsune Kobayashi <>
date = 2007-08-08
topic = Mathematics/Algebra
abstract = Convergence with respect to a valuation is discussed as convergence of a Cauchy sequence. Cauchy sequences of polynomials are defined. They are used to formalize Hensel's lemma.
notify = lp15@cam.ac.uk
[Rank_Nullity_Theorem]
title = Rank-Nullity Theorem in Linear Algebra
author = Jose Divasón <http://www.unirioja.es/cu/jodivaso>, Jesús Aransay <http://www.unirioja.es/cu/jearansa>
topic = Mathematics/Algebra
date = 2013-01-16
abstract = In this contribution, we present some formalizations based on the HOL-Multivariate-Analysis session of Isabelle. Firstly, a generalization of several theorems of that library is presented. Secondly, some definitions and proofs involving Linear Algebra and the four fundamental subspaces of a matrix are shown. Finally, we present a proof of the result known in Linear Algebra as the ``Rank-Nullity Theorem'', which states that, given any linear map f from a finite dimensional vector space V to a vector space W, the dimension of V is equal to the sum of the dimension of the kernel of f (which is a subspace of V) and the dimension of the range of f (which is a subspace of W). The proof presented here is based on the one given by Sheldon Axler in his book <i>Linear Algebra Done Right</i>. As a corollary of the previous theorem, and taking advantage of the relationship between linear maps and matrices, we prove that, for every matrix A (which has an associated linear map between finite dimensional vector spaces), the sum of the dimensions of its null space and its column space (the latter being equal to the range of the linear map) equals the number of columns of A.
extra-history =
Change history:
[2014-07-14]: Added some generalizations that allow us to formalize the Rank-Nullity Theorem over finite dimensional vector spaces, instead of over the more particular euclidean spaces. Updated abstract.
notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es
[Affine_Arithmetic]
title = Affine Arithmetic
author = Fabian Immler <http://www21.in.tum.de/~immler>
date = 2014-02-07
topic = Mathematics/Analysis
abstract =
We give a formalization of affine forms as abstract representations of zonotopes.
We provide affine operations as well as overapproximations of some non-affine operations like multiplication and division.
Expressions involving those operations can automatically be turned into (executable) functions approximating the original
expression in affine arithmetic.
extra-history =
Change history:
[2015-01-31]: added algorithm for zonotope/hyperplane intersection<br>
[2017-09-20]: linear approximations for all symbols from the floatarith data
type
notify = immler@in.tum.de
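To illustrate the idea behind affine forms (a hand-written sketch, not the formalized development), a form x0 + sum_i xi * eps_i with noise symbols eps_i ranging over [-1, 1] supports exact addition and scaling and yields an interval overapproximation; the class and method names below are illustrative assumptions, and rounding errors are ignored.

    // Toy affine form; real affine arithmetic also tracks rounding errors.
    case class AffineForm(center: Double, partials: Map[Int, Double]) {
      def +(that: AffineForm): AffineForm =
        AffineForm(center + that.center,
          (partials.keySet ++ that.partials.keySet).map(i =>
            i -> (partials.getOrElse(i, 0.0) + that.partials.getOrElse(i, 0.0))).toMap)
      def scale(c: Double): AffineForm =
        AffineForm(c * center, partials.map { case (i, v) => i -> c * v })
      def enclosure: (Double, Double) = {        // enclosing interval
        val r = partials.values.map(math.abs).sum
        (center - r, center + r)
      }
    }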
[Laplace_Transform]
title = Laplace Transform
author = Fabian Immler <https://home.in.tum.de/~immler/>
topic = Mathematics/Analysis
date = 2019-08-14
notify = fimmler@cs.cmu.edu
abstract =
This entry formalizes the Laplace transform and concrete Laplace
transforms for arithmetic functions, frequency shift, integration and
(higher) differentiation in the time domain. It proves Lerch's
lemma and uniqueness of the Laplace transform for continuous
functions. In order to formalize the foundational assumptions, this
entry contains a formalization of piecewise continuous functions and
functions of exponential order.
[Cauchy]
title = Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality
author = Benjamin Porter <>
date = 2006-03-14
topic = Mathematics/Analysis
abstract = This document presents the mechanised proofs of two popular theorems attributed to Augustin Louis Cauchy - Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality.
notify = kleing@cse.unsw.edu.au
[Integration]
title = Integration theory and random variables
author = Stefan Richter <http://www-lti.informatik.rwth-aachen.de/~richter/>
date = 2004-11-19
topic = Mathematics/Analysis
abstract = Lebesgue-style integration plays a major role in advanced probability. We formalize concepts of elementary measure theory, real-valued random variables as Borel-measurable functions, and a stepwise inductive definition of the integral itself. All proofs are carried out in human readable style using the Isar language.
extra-note = Note: This article is of historical interest only. Lebesgue-style integration and probability theory are now available as part of the Isabelle/HOL distribution (directory Probability).
notify = richter@informatik.rwth-aachen.de, nipkow@in.tum.de, hoelzl@in.tum.de
[Ordinary_Differential_Equations]
title = Ordinary Differential Equations
author = Fabian Immler <http://www21.in.tum.de/~immler>, Johannes Hölzl <http://in.tum.de/~hoelzl>
topic = Mathematics/Analysis
date = 2012-04-26
abstract =
<p>Session Ordinary-Differential-Equations formalizes ordinary differential equations (ODEs) and initial value
problems. This work comprises proofs for local and global existence of unique solutions
(Picard-Lindelöf theorem). Moreover, it contains a formalization of the (continuous or even
differentiable) dependency of the flow on initial conditions as the <i>flow</i> of ODEs.</p>
<p>
Not in the generated document are the following sessions:
<ul>
<li> HOL-ODE-Numerics:
Rigorous numerical algorithms for computing enclosures of solutions based on Runge-Kutta methods
and affine arithmetic. Reachability analysis with splitting and reduction at hyperplanes.</li>
<li> HOL-ODE-Examples:
Applications of the numerical algorithms to concrete systems of ODEs.</li>
<li> Lorenz_C0, Lorenz_C1:
Verified algorithms for checking C1-information according to Tucker's proof,
computation of C0-information.</li>
</ul>
</p>
extra-history =
Change history:
[2014-02-13]: added an implementation of the Euler method based on affine arithmetic<br>
[2016-04-14]: added flow and variational equation<br>
[2016-08-03]: numerical algorithms for reachability analysis (using second-order Runge-Kutta methods, splitting, and reduction) implemented using Lammich's framework for automatic refinement<br>
[2017-09-20]: added Poincare map and propagation of variational equation in
reachability analysis, verified algorithms for C1-information and computations
for C0-information of the Lorenz attractor.
notify = immler@in.tum.de, hoelzl@in.tum.de
[Polynomials]
title = Executable Multivariate Polynomials
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann>, Alexander Maletzky <https://risc.jku.at/m/alexander-maletzky/>, Fabian Immler <http://www21.in.tum.de/~immler>, Florian Haftmann <http://isabelle.in.tum.de/~haftmann>, Andreas Lochbihler <http://www.andreas-lochbihler.de>, Alexander Bentkamp <mailto:bentkamp@gmail.com>
date = 2010-08-10
-topic = Mathematics/Analysis, Mathematics/Algebra, Computer Science/Algorithms/Mathematical
+topic = Mathematics/Analysis, Mathematics/Algebra, Computer science/Algorithms/Mathematical
license = LGPL
abstract =
We define multivariate polynomials over arbitrary (ordered) semirings in
combination with (executable) operations like addition, multiplication,
and substitution. We also define (weak) monotonicity of polynomials and
comparison of polynomials where we provide standard estimations like
absolute positiveness or the more recent approach of Neurauter, Zankl,
and Middeldorp. Moreover, it is proven that strongly normalizing
(monotone) orders can be lifted to strongly normalizing (monotone) orders
over polynomials. Our formalization was performed as part of the <a
href="http://cl-informatik.uibk.ac.at/software/ceta">IsaFoR/CeTA-system</a>
which contains several termination techniques. The provided theories have
been essential to formalize polynomial interpretations.
<p>
This formalization also contains an abstract representation as coefficient functions with finite
support and a type of power-products. If this type is ordered by a linear (term) ordering, various
additional notions, such as leading power-product, leading coefficient etc., are introduced as
well. Furthermore, a lot of generic properties of, and functions on, multivariate polynomials are
formalized, including the substitution and evaluation homomorphisms, embeddings of polynomial rings
into larger rings (i.e. with one additional indeterminate), homogenization and dehomogenization of
polynomials, and the canonical isomorphism between R[X,Y] and R[X][Y].
extra-history =
Change history:
[2010-09-17]: Moved theories on arbitrary (ordered) semirings to Abstract Rewriting.<br>
[2016-10-28]: Added abstract representation of polynomials and authors Maletzky/Immler.<br>
[2018-01-23]: Added authors Haftmann, Lochbihler after incorporating
their formalization of multivariate polynomials based on Polynomial mappings.
Moved material from Bentkamp's entry "Deep Learning".<br>
[2019-04-18]: Added material about polynomials whose power-products are represented themselves
by polynomial mappings.
notify = rene.thiemann@uibk.ac.at, christian.sternagel@uibk.ac.at, alexander.maletzky@risc.jku.at, immler@in.tum.de
[Sqrt_Babylonian]
title = Computing N-th Roots using the Babylonian Method
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>
date = 2013-01-03
topic = Mathematics/Analysis
license = LGPL
abstract =
We implement the Babylonian method to compute n-th roots of numbers.
We provide precise algorithms for naturals, integers and rationals, and
offer an approximation algorithm for square roots over linear ordered fields. Moreover, there
are precise algorithms to compute the floor and the ceiling of n-th roots.
extra-history =
Change history:
[2013-10-16]: Added algorithms to compute floor and ceiling of sqrt of integers.
[2014-07-11]: Moved NthRoot_Impl from Real-Impl to this entry.
notify = rene.thiemann@uibk.ac.at
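For orientation, a plain Scala sketch of the square-root case of the iteration (illustrative only; the AFP entry also covers general n-th roots, floors and ceilings, and approximation over ordered fields):

    // Babylonian/Newton iteration x_{k+1} = (x_k + n / x_k) / 2 on integers;
    // the loop stops at the floor of the square root.
    def isqrt(n: BigInt): BigInt = {
      require(n >= 0)
      if (n < 2) n
      else {
        var x: BigInt = n
        var y: BigInt = (x + n / x) / 2
        while (y < x) { x = y; y = (x + n / x) / 2 }
        x
      }
    }

    // Example: isqrt(BigInt(10).pow(20) + 1) == BigInt(10).pow(10)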
[Sturm_Sequences]
title = Sturm's Theorem
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
date = 2014-01-11
topic = Mathematics/Analysis
abstract = Sturm's Theorem states that polynomial sequences with certain
properties, so-called Sturm sequences, can be used to count the number
of real roots of a real polynomial. This work contains a proof of
Sturm's Theorem and code for constructing Sturm sequences efficiently.
It also provides the “sturm” proof method, which can decide certain
statements about the roots of real polynomials, such as “the polynomial
P has exactly n roots in the interval I” or “P(x) > Q(x) for all x
&#8712; &#8477;”.
notify = eberlm@in.tum.de
[Sturm_Tarski]
title = The Sturm-Tarski Theorem
author = Wenda Li <mailto:wl302@cam.ac.uk>
date = 2014-09-19
topic = Mathematics/Analysis
abstract = We have formalized the Sturm-Tarski theorem (also referred to as the Tarski theorem), which generalizes Sturm's theorem. Sturm's theorem is usually used as a way to count distinct real roots, while the Sturm-Tarski theorem forms the basis for Tarski's classic quantifier elimination for real closed fields.
notify = wl302@cam.ac.uk
[Markov_Models]
title = Markov Models
author = Johannes Hölzl <http://in.tum.de/~hoelzl>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2012-01-03
-topic = Mathematics/Probability Theory, Computer Science/Automata and Formal Languages
+topic = Mathematics/Probability theory, Computer science/Automata and formal languages
abstract = This is a formalization of Markov models in Isabelle/HOL. It
builds on Isabelle's probability theory. The available models are
currently Discrete-Time Markov Chains and extensions of them with
rewards.
<p>
As application of these models we formalize probabilistic model
checking of pCTL formulas, analysis of IPv4 address allocation in
ZeroConf and an analysis of the anonymity of the Crowds protocol.
<a href="http://arxiv.org/abs/1212.3870">See here for the corresponding paper.</a>
notify = hoelzl@in.tum.de
[Probabilistic_System_Zoo]
title = A Zoo of Probabilistic Systems
author = Johannes Hölzl <http://in.tum.de/~hoelzl>,
Andreas Lochbihler <http://www.andreas-lochbihler.de>,
Dmitriy Traytel <http://www21.in.tum.de/~traytel>
date = 2015-05-27
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract =
Numerous models of probabilistic systems are studied in the literature.
Coalgebra has been used to classify them into system types and compare their
expressiveness. We formalize the resulting hierarchy of probabilistic system
types by modeling the semantics of the different systems as codatatypes.
This approach yields simple and concise proofs, as bisimilarity coincides
with equality for codatatypes.
<p>
This work is described in detail in the ITP 2015 publication by the authors.
notify = traytel@in.tum.de
[Density_Compiler]
title = A Verified Compiler for Probability Density Functions
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>, Johannes Hölzl <http://in.tum.de/~hoelzl>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2014-10-09
-topic = Mathematics/Probability Theory, Computer Science/Programming Languages/Compiling
+topic = Mathematics/Probability theory, Computer science/Programming languages/Compiling
abstract =
<a href="https://doi.org/10.1007/978-3-642-36742-7_35">Bhat et al. [TACAS 2013]</a> developed an inductive compiler that computes
density functions for probability spaces described by programs in a
probabilistic functional language. In this work, we implement such a
compiler for a modified version of this language within the theorem prover
Isabelle and give a formal proof of its soundness w.r.t. the semantics of
the source and target language. Together with Isabelle's code generation
for inductive predicates, this yields a fully verified, executable density
compiler. The proof is done in two steps: First, an abstract compiler
working with abstract functions modelled directly in the theorem prover's
logic is defined and proved sound. Then, this compiler is refined to a
concrete version that returns a target-language expression.
<p>
An article with the same title and authors is published in the proceedings
of ESOP 2015.
A detailed presentation of this work can be found in the first author's
master's thesis.
notify = hoelzl@in.tum.de
[CAVA_Automata]
title = The CAVA Automata Library
author = Peter Lammich <http://www21.in.tum.de/~lammich>
date = 2014-05-28
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract =
We report on the graph and automata library that is used in the fully
verified LTL model checker CAVA.
As most components of CAVA use some type of graphs or automata, a common
automata library simplifies assembly of the components and reduces
redundancy.
<p>
The CAVA Automata Library provides a hierarchy of graph and automata
classes, together with some standard algorithms.
Its object-oriented design allows for sharing of algorithms, theorems,
and implementations between its classes, and also simplifies extensions
of the library.
Moreover, it is integrated into the Automatic Refinement Framework,
supporting automatic refinement of the abstract automata types to
efficient data structures.
<p>
Note that the CAVA Automata Library is work in progress. Currently, it
is very specifically tailored towards the requirements of the CAVA model
checker.
Nevertheless, the formalization techniques presented here allow an
extension of the library to a wider scope. Moreover, they are not
limited to graph libraries, but apply to class hierarchies in general.
<p>
The CAVA Automata Library is described in the paper: Peter Lammich, The
CAVA Automata Library, Isabelle Workshop 2014.
notify = lammich@in.tum.de
[LTL]
title = Linear Temporal Logic
author = Salomon Sickert <https://www7.in.tum.de/~sickert>
contributors = Benedikt Seidl <mailto:benedikt.seidl@tum.de>
date = 2016-03-01
-topic = Logic/General logic/Temporal logic, Computer Science/Automata and Formal Languages
+topic = Logic/General logic/Temporal logic, Computer science/Automata and formal languages
abstract =
This theory provides a formalisation of linear temporal logic (LTL)
and unifies previous formalisations within the AFP. This entry
establishes syntax and semantics for this logic and decouples it from
existing entries, yielding a common environment for theories reasoning
about LTL. Furthermore a parser written in SML and an executable
simplifier are provided.
extra-history =
Change history:
[2019-03-12]:
Support for additional operators, implementation of common equivalence relations,
definition of syntactic fragments of LTL and the minimal disjunctive normal form. <br>
notify = sickert@in.tum.de
[LTL_to_GBA]
title = Converting Linear-Time Temporal Logic to Generalized Büchi Automata
author = Alexander Schimpf <mailto:schimpfa@informatik.uni-freiburg.de>, Peter Lammich <http://www21.in.tum.de/~lammich>
date = 2014-05-28
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract =
We formalize linear-time temporal logic (LTL) and the algorithm by Gerth
et al. to convert LTL formulas to generalized Büchi automata.
We also formalize some syntactic rewrite rules that can be applied to
optimize the LTL formula before conversion.
Moreover, we integrate the Stuttering Equivalence AFP-Entry by Stefan
Merz, adapting the lemma that next-free LTL formulas cannot distinguish
between stuttering-equivalent runs to our setting.
<p>
We use the Isabelle Refinement and Collection framework, as well as the
Autoref tool, to obtain a refined version of our algorithm, from which
efficiently executable code can be extracted.
notify = lammich@in.tum.de
[Gabow_SCC]
title = Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm
author = Peter Lammich <http://www21.in.tum.de/~lammich>
date = 2014-05-28
-topic = Computer Science/Algorithms/Graph, Mathematics/Graph Theory
+topic = Computer science/Algorithms/Graph, Mathematics/Graph theory
abstract =
We present an Isabelle/HOL formalization of Gabow's algorithm for
finding the strongly connected components of a directed graph.
Using data refinement techniques, we extract efficient code that
performs comparable to a reference implementation in Java.
Our style of formalization allows for re-using large parts of the proofs
when defining variants of the algorithm. We demonstrate this by
verifying an algorithm for the emptiness check of generalized Büchi
automata, re-using most of the existing proofs.
notify = lammich@in.tum.de
[Promela]
title = Promela Formalization
author = René Neumann <mailto:rene.neumann@in.tum.de>
date = 2014-05-28
-topic = Computer Science/System Description Languages
+topic = Computer science/System description languages
abstract =
We present an executable formalization of the language Promela, the
description language for models of the model checker SPIN. This
formalization is part of the work for a completely verified model
checker (CAVA), but also serves as a useful (and executable!)
description of the semantics of the language itself, something that is
currently missing.
The formalization uses three steps: It takes an abstract syntax tree
generated from an SML parser, removes syntactic sugar and enriches it
with type information. This further gets translated into a transition
system, on which the semantic engine (read: successor function) operates.
notify =
[CAVA_LTL_Modelchecker]
title = A Fully Verified Executable LTL Model Checker
author = Javier Esparza <https://www7.in.tum.de/~esparza/>,
Peter Lammich <http://www21.in.tum.de/~lammich>,
René Neumann <mailto:rene.neumann@in.tum.de>,
Tobias Nipkow <http://www21.in.tum.de/~nipkow>,
Alexander Schimpf <mailto:schimpfa@informatik.uni-freiburg.de>,
Jan-Georg Smaus <http://www.irit.fr/~Jan-Georg.Smaus>
date = 2014-05-28
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract =
We present an LTL model checker whose code has been completely verified
using the Isabelle theorem prover. The checker consists of over 4000
lines of ML code. The code is produced using the Isabelle Refinement
Framework, which allows us to split its correctness proof into (1) the
proof of an abstract version of the checker, consisting of a few hundred
lines of ``formalized pseudocode'', and (2) a verified refinement step
in which mathematical sets and other abstract structures are replaced by
implementations of efficient structures like red-black trees and
functional arrays. This leads to a checker that,
while still slower than unverified checkers, can already be used as a
trusted reference implementation against which advanced implementations
can be tested.
<p>
An early version of this model checker is described in the
<a href="http://www21.in.tum.de/~nipkow/pubs/cav13.html">CAV 2013 paper</a>
with the same title.
notify = lammich@in.tum.de
[Fermat3_4]
title = Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples
author = Roelof Oosterhuis <>
date = 2007-08-12
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
abstract = This document presents the mechanised proofs of<ul><li>Fermat's Last Theorem for exponents 3 and 4 and</li><li>the parametrisation of Pythagorean Triples.</li></ul>
notify = nipkow@in.tum.de, roelofoosterhuis@gmail.com
[Perfect-Number-Thm]
title = Perfect Number Theorem
author = Mark Ijbema <mailto:ijbema@fmf.nl>
date = 2009-11-22
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
abstract = These theories present the mechanised proof of the Perfect Number Theorem.
notify = nipkow@in.tum.de
[SumSquares]
title = Sums of Two and Four Squares
author = Roelof Oosterhuis <>
date = 2007-08-12
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
abstract = This document presents the mechanised proofs of the following results:<ul><li>any prime number of the form 4m+1 can be written as the sum of two squares;</li><li>any natural number can be written as the sum of four squares</li></ul>
notify = nipkow@in.tum.de, roelofoosterhuis@gmail.com
[Lehmer]
title = Lehmer's Theorem
author = Simon Wimmer <mailto:simon.wimmer@tum.de>, Lars Noschinski <http://www21.in.tum.de/~noschinl/>
date = 2013-07-22
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
abstract = In 1927, Lehmer presented criteria for primality, based on the converse of Fermat's little theorem. This work formalizes the second criterion from Lehmer's paper, a necessary and sufficient condition for primality.
<p>
As a side product we formalize some properties of Euler's phi-function,
the notion of the order of an element of a group, and the cyclicity of the multiplicative group of a finite field.
notify = noschinl@gmail.com, simon.wimmer@tum.de
[Pratt_Certificate]
title = Pratt's Primality Certificates
author = Simon Wimmer <mailto:simon.wimmer@tum.de>, Lars Noschinski <http://www21.in.tum.de/~noschinl/>
date = 2013-07-22
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
abstract = In 1975, Pratt introduced a proof system for certifying primes. He showed that a number <i>p</i> is prime iff a primality certificate for <i>p</i> exists. By showing a logarithmic upper bound on the length of the certificates in the size of the prime number, he concluded that the decision problem for prime numbers is in NP. This work formalizes soundness and completeness of Pratt's proof system as well as an upper bound for the size of the certificate.
notify = noschinl@gmail.com, simon.wimmer@tum.de
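To make the certificate idea concrete, here is a hypothetical Scala sketch of a recursive checker; the data format and names are invented for illustration and differ from the entry's formalization.

    // A certificate for p: a witness a and certificates for the prime
    // factors of p - 1, listed with multiplicity.
    case class Cert(p: BigInt, a: BigInt, factors: List[Cert])

    def valid(c: Cert): Boolean =
      c.p == BigInt(2) || {
        val n = c.p - 1
        c.factors.map(_.p).product == n &&                 // factors cover p - 1
        c.a.modPow(n, c.p) == BigInt(1) &&                 // a^(p-1) = 1 (mod p)
        c.factors.map(_.p).distinct.forall(q =>
          c.a.modPow(n / q, c.p) != BigInt(1)) &&          // a^((p-1)/q) != 1 (mod p)
        c.factors.forall(valid)                            // factors certified recursively
      }

    // Example: valid(Cert(7, 3, List(Cert(2, 1, Nil),
    //                                Cert(3, 2, List(Cert(2, 1, Nil)))))) == true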
[Monad_Memo_DP]
title = Monadification, Memoization and Dynamic Programming
author = Simon Wimmer <http://home.in.tum.de/~wimmers/>, Shuwei Hu <mailto:shuwei.hu@tum.de>, Tobias Nipkow <http://www21.in.tum.de/~nipkow/>
-topic = Computer Science/Programming Languages/Transformations, Computer Science/Algorithms, Computer Science/Functional Programming
+topic = Computer science/Programming languages/Transformations, Computer science/Algorithms, Computer science/Functional programming
date = 2018-05-22
notify = wimmers@in.tum.de
abstract =
We present a lightweight framework for the automatic verified
(functional or imperative) memoization of recursive functions. Our
tool can turn a pure Isabelle/HOL function definition into a
monadified version in a state monad or the Imperative HOL heap monad,
and prove a correspondence theorem. We provide a variety of memory
implementations for the two types of monads. A number of simple
techniques allow us to achieve bottom-up computation and
space-efficient memoization. The framework’s utility is demonstrated
on a number of representative dynamic programming problems. A detailed
description of our work can be found in the accompanying paper [2].
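As a rough illustration of the kind of transformation the tool automates (done by hand here in plain Scala, with a mutable table standing in for the state or heap monad used by the entry; the names are illustrative):

    import scala.collection.mutable

    // Naive Fibonacci is exponential; threading a memo table through the
    // recursion gives the bottom-up, space-shared behaviour described above.
    def fibMemo(n: Int, memo: mutable.Map[Int, BigInt] = mutable.Map()): BigInt =
      memo.getOrElseUpdate(n,
        if (n < 2) BigInt(n) else fibMemo(n - 1, memo) + fibMemo(n - 2, memo))

    // Example: fibMemo(100) == BigInt("354224848179261915075")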
[Probabilistic_Timed_Automata]
title = Probabilistic Timed Automata
author = Simon Wimmer <http://in.tum.de/~wimmers>, Johannes Hölzl <http://home.in.tum.de/~hoelzl>
-topic = Mathematics/Probability Theory, Computer Science/Automata and Formal Languages
+topic = Mathematics/Probability theory, Computer science/Automata and formal languages
date = 2018-05-24
notify = wimmers@in.tum.de, hoelzl@in.tum.de
abstract =
We present a formalization of probabilistic timed automata (PTA) for
which we try to follow the formula MDP + TA = PTA as far as possible:
our work starts from our existing formalizations of Markov decision
processes (MDP) and timed automata (TA) and combines them modularly.
We prove the fundamental result for probabilistic timed automata: the
region construction that is known from timed automata carries over to
the probabilistic setting. In particular, this allows us to prove that
minimum and maximum reachability probabilities can be computed via a
reduction to MDP model checking, including the case where one wants to
disregard unrealizable behavior. Further information can be found in
our ITP paper [2].
[Hidden_Markov_Models]
title = Hidden Markov Models
author = Simon Wimmer <http://in.tum.de/~wimmers>
-topic = Mathematics/Probability Theory, Computer Science/Algorithms
+topic = Mathematics/Probability theory, Computer science/Algorithms
date = 2018-05-25
notify = wimmers@in.tum.de
abstract =
This entry contains a formalization of hidden Markov models [3] based
on Johannes Hölzl's formalization of discrete time Markov chains
[1]. The basic definitions are provided and the correctness of two
main (dynamic programming) algorithms for hidden Markov models is
proved: the forward algorithm for computing the likelihood of an
observed sequence, and the Viterbi algorithm for decoding the most
probable hidden state sequence. The Viterbi algorithm is made
executable including memoization. Hidden Markov models have various
applications in natural language processing. For an introduction see
Jurafsky and Martin [2].
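For readers unfamiliar with the algorithm, a small stand-alone Scala sketch of the Viterbi recursion (hypothetical names, plain probabilities without the log-space or scaling tricks needed for long observation sequences, and unrelated to the formalized version):

    // init, trans, emit: initial, transition and emission probabilities.
    // Returns a most probable hidden state sequence for the observations.
    def viterbi(obs: List[Int], states: Range,
                init: Int => Double, trans: (Int, Int) => Double,
                emit: (Int, Int) => Double): List[Int] = {
      require(obs.nonEmpty)
      // per state: (probability of best path ending here, that path reversed)
      val first = states.map(s => s -> (init(s) * emit(s, obs.head), List(s))).toMap
      val last = obs.tail.foldLeft(first) { (col, o) =>
        states.map { s =>
          val (q, (p, _)) = col.maxBy { case (q2, (p2, _)) => p2 * trans(q2, s) }
          s -> (p * trans(q, s) * emit(s, o), s :: col(q)._2)
        }.toMap
      }
      last.values.maxBy(_._1)._2.reverse
    }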
[ArrowImpossibilityGS]
title = Arrow and Gibbard-Satterthwaite
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2008-09-01
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
abstract = This article formalizes two proofs of Arrow's impossibility theorem due to Geanakoplos and derives the Gibbard-Satterthwaite theorem as a corollary. One formalization is based on utility functions, the other one on strict partial orders.<br><br>An article about these proofs is found <a href="http://www21.in.tum.de/~nipkow/pubs/arrow.html">here</a>.
notify = nipkow@in.tum.de
[SenSocialChoice]
title = Some classical results in Social Choice Theory
author = Peter Gammie <http://peteg.org>
date = 2008-11-09
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
abstract = Drawing on Sen's landmark work "Collective Choice and Social Welfare" (1970), this development proves Arrow's General Possibility Theorem, Sen's Liberal Paradox and May's Theorem in a general setting. The goal was to make precise the classical statements and proofs of these results, and to provide a foundation for more recent results such as the Gibbard-Satterthwaite and Duggan-Schwartz theorems.
notify = nipkow@in.tum.de
[Vickrey_Clarke_Groves]
title = VCG - Combinatorial Vickrey-Clarke-Groves Auctions
author = Marco B. Caminati <>, Manfred Kerber <http://www.cs.bham.ac.uk/~mmk>, Christoph Lange <mailto:math.semantic.web@gmail.com>, Colin Rowat <mailto:c.rowat@bham.ac.uk>
date = 2015-04-30
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
abstract =
A VCG auction (named after its inventors Vickrey, Clarke, and
Groves) is a generalization of the single-good, second price Vickrey
auction to the case of a combinatorial auction (multiple goods, from
which any participant can bid on each possible combination). We
formalize in this entry VCG auctions, including tie-breaking, and prove
that the functions for the allocation and the price determination are
well-defined. Furthermore we show that the allocation function
allocates goods only to participants, only goods in the auction are
allocated, and no good is allocated twice. We also show that the price
function is non-negative. These properties also hold for the
automatically extracted Scala code.
notify = mnfrd.krbr@gmail.com
[Topology]
title = Topology
author = Stefan Friedrich <>
date = 2004-04-26
topic = Mathematics/Topology
abstract = This entry contains two theories. The first, <tt>Topology</tt>, develops the basic notions of general topology. The second, which can be viewed as a demonstration of the first, is called <tt>LList_Topology</tt>. It develops the topology of lazy lists.
notify = lcp@cl.cam.ac.uk
[Knot_Theory]
title = Knot Theory
author = T.V.H. Prathamesh <mailto:prathamesh@imsc.res.in>
date = 2016-01-20
topic = Mathematics/Topology
abstract =
This work contains a formalization of some topics in knot theory.
The concepts that were formalized include definitions of tangles, links,
framed links and link/tangle equivalence. The formalization is based on a
formulation of links in terms of tangles. We further construct and prove the
invariance of the Bracket polynomial. The Bracket polynomial is an invariant of
framed links, closely related to the Jones polynomial. This is perhaps the first
attempt to formalize any aspect of knot theory in an interactive proof assistant.
notify = prathamesh@imsc.res.in
[Graph_Theory]
title = Graph Theory
author = Lars Noschinski <http://www21.in.tum.de/~noschinl/>
date = 2013-04-28
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract = This development provides a formalization of directed graphs, supporting (labelled) multi-edges and infinite graphs. A polymorphic edge type allows edges to be treated as pairs of vertices, if multi-edges are not required. Formalized properties are i.a. walks (and related concepts), connectedness and subgraphs and basic properties of isomorphisms.
<p>
This formalization is used to prove characterizations of Euler Trails, Shortest Paths and Kuratowski subgraphs.
notify = noschinl@gmail.com
[Planarity_Certificates]
title = Planarity Certificates
author = Lars Noschinski <http://www21.in.tum.de/~noschinl/>
date = 2015-11-11
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract =
This development provides a formalization of planarity based on
combinatorial maps and proves that Kuratowski's theorem implies
combinatorial planarity.
Moreover, it contains verified implementations of programs checking
certificates for planarity (i.e., a combinatorial map) or non-planarity
(i.e., a Kuratowski subgraph).
notify = noschinl@gmail.com
[Max-Card-Matching]
title = Maximum Cardinality Matching
author = Christine Rizkallah <https://www.mpi-inf.mpg.de/~crizkall/>
date = 2011-07-21
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract =
<p>
A <em>matching</em> in a graph <i>G</i> is a subset <i>M</i> of the
edges of <i>G</i> such that no two share an endpoint. A matching has maximum
cardinality if its cardinality is at least as large as that of any other
matching. An <em>odd-set cover</em> <i>OSC</i> of a graph <i>G</i> is a
labeling of the nodes of <i>G</i> with integers such that every edge of
<i>G</i> is either incident to a node labeled 1 or connects two nodes
labeled with the same number <i>i &ge; 2</i>.
</p><p>
This article proves Edmonds' theorem:<br>
Let <i>M</i> be a matching in a graph <i>G</i> and let <i>OSC</i> be an
odd-set cover of <i>G</i>.
For any <i>i &ge; 0</i>, let <var>n(i)</var> be the number of nodes
labeled <i>i</i>. If <i>|M| = n(1) +
&sum;<sub>i &ge; 2</sub>(n(i) div 2)</i>,
then <i>M</i> is a maximum cardinality matching.
</p>
notify = nipkow@in.tum.de
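A minimal Scala sketch of the arithmetic check in the theorem above; it verifies only the counting identity |M| = n(1) + sum over i &ge; 2 of (n(i) div 2), not that M is a matching or that the labelling is an odd-set cover, and the names are illustrative.

    // labels: node -> integer label assigned by the odd-set cover
    def certifiesMaximum(matchingSize: Int, labels: Map[Int, Int]): Boolean = {
      def n(i: Int): Int = labels.values.count(_ == i)
      matchingSize ==
        n(1) + labels.values.filter(_ >= 2).toList.distinct.map(n(_) / 2).sum
    }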
[Girth_Chromatic]
title = A Probabilistic Proof of the Girth-Chromatic Number Theorem
author = Lars Noschinski <http://www21.in.tum.de/~noschinl/>
date = 2012-02-06
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract = This work presents a formalization of the Girth-Chromatic number theorem in graph theory, stating that graphs with arbitrarily large girth and chromatic number exist. The proof uses the theory of Random Graphs to prove the existence with probabilistic arguments.
notify = noschinl@gmail.com
[Random_Graph_Subgraph_Threshold]
title = Properties of Random Graphs -- Subgraph Containment
author = Lars Hupel <mailto:hupel@in.tum.de>
date = 2014-02-13
-topic = Mathematics/Graph Theory, Mathematics/Probability Theory
+topic = Mathematics/Graph theory, Mathematics/Probability theory
abstract = Random graphs are graphs with a fixed number of vertices, where each edge is present with a fixed probability. We are interested in the probability that a random graph contains a certain pattern, for example a cycle or a clique. A very high edge probability gives rise to perhaps too many edges (which degrades performance for many algorithms), whereas a low edge probability might result in a disconnected graph. We prove a theorem about a threshold probability such that a higher edge probability will asymptotically almost surely produce a random graph with the desired subgraph.
notify = hupel@in.tum.de
[Flyspeck-Tame]
title = Flyspeck I: Tame Graphs
author = Gertrud Bauer <>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2006-05-22
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract =
These theories present the verified enumeration of <i>tame</i> plane graphs
as defined by Thomas C. Hales in his proof of the Kepler Conjecture in his
book <i>Dense Sphere Packings. A Blueprint for Formal Proofs.</i> [CUP 2012].
The values of the constants in the definition of tameness are identical to
those in the <a href="https://code.google.com/p/flyspeck/">Flyspeck project</a>.
The <a href="http://www21.in.tum.de/~nipkow/pubs/Flyspeck/">IJCAR 2006 paper by Nipkow, Bauer and Schultz</a> refers to the original version of Hales' proof,
the <a href="http://www21.in.tum.de/~nipkow/pubs/itp11.html">ITP 2011 paper by Nipkow</a> refers to the Blueprint version of the proof.
extra-history =
Change history:
[2010-11-02]: modified theories to reflect the modified definition of tameness in Hales' revised proof.<br>
[2014-07-03]: modified constants in def of tameness and Archive according to the final state of the Flyspeck proof.
notify = nipkow@in.tum.de
[Well_Quasi_Orders]
title = Well-Quasi-Orders
author = Christian Sternagel <mailto:c.sternagel@gmail.com>
date = 2012-04-13
topic = Mathematics/Combinatorics
abstract = Based on Isabelle/HOL's type class for preorders,
we introduce a type class for well-quasi-orders (wqo)
which is characterized by the absence of "bad" sequences
(our proofs are along the lines of the proof of Nash-Williams,
from which we also borrow terminology). Our main results are
instantiations for the product type, the list type, and a type of finite trees,
which (almost) directly follow from our proofs of (1) Dickson's Lemma, (2)
Higman's Lemma, and (3) Kruskal's Tree Theorem. More concretely:
<ul>
<li>If the sets A and B are wqo then their Cartesian product is wqo.</li>
<li>If the set A is wqo then the set of finite lists over A is wqo.</li>
<li>If the set A is wqo then the set of finite trees over A is wqo.</li>
</ul>
The research was funded by the Austrian Science Fund (FWF): J3202.
extra-history =
Change history:
[2012-06-11]: Added Kruskal's Tree Theorem.<br>
[2012-12-19]: New variant of Kruskal's tree theorem for terms (as opposed to
variadic terms, i.e., trees), plus finite version of the tree theorem as
corollary.<br>
[2013-05-16]: Simplified construction of minimal bad sequences.<br>
[2014-07-09]: Simplified proofs of Higman's lemma and Kruskal's tree theorem,
based on homogeneous sequences.<br>
[2016-01-03]: An alternative proof of Higman's lemma by open induction.<br>
[2017-06-08]: Proved (classical) equivalence to inductive definition of
almost-full relations according to the ITP 2012 paper "Stop When You Are
Almost-Full" by Vytiniotis, Coquand, and Wahlstedt.
notify = c.sternagel@gmail.com
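For orientation, the "bad sequence" characterization that the Well_Quasi_Orders abstract appeals to is, in its standard textbook form (the entry's exact formulation may differ),

\[
(A, \le) \text{ is a wqo} \iff \text{for every infinite sequence } a_1, a_2, \ldots \in A \text{ there are } i < j \text{ with } a_i \le a_j;
\]

a sequence admitting no such pair is called bad, so wqo means that no bad sequences exist.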
[Marriage]
title = Hall's Marriage Theorem
author = Dongchen Jiang <mailto:dongchenjiang@googlemail.com>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2010-12-17
topic = Mathematics/Combinatorics
abstract = Two proofs of Hall's Marriage Theorem: one due to Halmos and Vaughan, one due to Rado.
extra-history =
Change history:
[2011-09-09]: Added Rado's proof
notify = nipkow@in.tum.de
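As a reminder of the result behind the Marriage entry, the usual bipartite-graph statement of Hall's theorem reads as follows (the formalized version may instead use the set-family formulation):

\[
G = (X \uplus Y, E) \text{ has a matching saturating } X \iff \forall S \subseteq X.\ |N(S)| \ge |S|.
\]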
[Bondy]
title = Bondy's Theorem
author = Jeremy Avigad <http://www.andrew.cmu.edu/user/avigad/>, Stefan Hetzl <http://www.logic.at/people/hetzl/>
date = 2012-10-27
topic = Mathematics/Combinatorics
abstract = A proof of Bondy's theorem following B. Bollobás, Combinatorics, 1986, Cambridge University Press.
notify = avigad@cmu.edu, hetzl@logic.at
[Ramsey-Infinite]
title = Ramsey's theorem, infinitary version
author = Tom Ridge <>
date = 2004-09-20
topic = Mathematics/Combinatorics
abstract = This formalization of Ramsey's theorem (infinitary version) is taken from Boolos and Jeffrey, <i>Computability and Logic</i>, 3rd edition, Chapter 26. It differs slightly from the text in that it assumes a stronger hypothesis: in particular, the induction hypothesis is strengthened to hold for any infinite subset of the naturals. This avoids the rather peculiar mapping argument between <i>k<sub>j</sub></i> and <i>a<sub>ik<sub>j</sub></sub></i> on p. 263, which is unnecessary and slightly mars this really beautiful result.
notify = lp15@cam.ac.uk
[Derangements]
title = Derangements Formula
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
date = 2015-06-27
topic = Mathematics/Combinatorics
abstract =
The Derangements Formula describes the number of fixpoint-free permutations
as a closed formula. This theorem is the 88th theorem in a list of the
``<a href="http://www.cs.ru.nl/~freek/100/">Top 100 Mathematical Theorems</a>''.
notify = lukas.bulwahn@gmail.com
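For orientation, the closed formula referred to in the Derangements abstract is the standard one (the entry's exact statement may differ in presentation):

\[
D(n) = n! \sum_{k=0}^{n} \frac{(-1)^k}{k!}, \qquad \text{e.g. } D(4) = 24\Bigl(1 - 1 + \tfrac{1}{2} - \tfrac{1}{6} + \tfrac{1}{24}\Bigr) = 9.
\]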
[Euler_Partition]
title = Euler's Partition Theorem
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
date = 2015-11-19
topic = Mathematics/Combinatorics
abstract =
Euler's Partition Theorem states that the number of partitions with only
distinct parts is equal to the number of partitions with only odd parts.
The combinatorial proof follows John Harrison's HOL Light formalization.
This theorem is the 45th theorem of the Top 100 Theorems list.
notify = lukas.bulwahn@gmail.com
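A small worked instance of the equality stated in the Euler_Partition abstract, for n = 5 (illustrative only, not taken from the entry):

\[
\underbrace{5,\quad 4+1,\quad 3+2}_{\text{partitions into distinct parts}} \qquad\qquad \underbrace{5,\quad 3+1+1,\quad 1+1+1+1+1}_{\text{partitions into odd parts}}
\]

Both families have exactly three members.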
[Discrete_Summation]
title = Discrete Summation
author = Florian Haftmann <http://isabelle.in.tum.de/~haftmann>
contributors = Amine Chaieb <>
date = 2014-04-13
topic = Mathematics/Combinatorics
abstract = These theories introduce basic concepts and proofs about discrete summation: shifts, formal summation, falling factorials and Stirling numbers. As a proof of concept, a simple summation conversion is provided.
notify = florian.haftmann@informatik.tu-muenchen.de
[Open_Induction]
title = Open Induction
author = Mizuhito Ogawa <>, Christian Sternagel <mailto:c.sternagel@gmail.com>
date = 2012-11-02
topic = Mathematics/Combinatorics
abstract =
A proof of the open induction schema based on J.-C. Raoult, Proving open properties by induction, <i>Information Processing Letters</i> 29, 1988, pp.19-23.
<p>This research was supported by the Austrian Science Fund (FWF): J3202.</p>
notify = c.sternagel@gmail.com
[Category]
title = Category Theory to Yoneda's Lemma
author = Greg O'Keefe <http://users.rsise.anu.edu.au/~okeefe/>
date = 2005-04-21
-topic = Mathematics/Category Theory
+topic = Mathematics/Category theory
license = LGPL
abstract = This development proves Yoneda's lemma and aims to be readable by humans. It only defines what is needed for the lemma: categories, functors and natural transformations. Limits, adjunctions and other important concepts are not included.
extra-history =
Change history:
[2010-04-23]: The definition of the constant <tt>equinumerous</tt> was slightly too weak in the original submission and has been fixed in revision <a href="https://bitbucket.org/isa-afp/afp-devel/commits/8c2b5b3c995f">8c2b5b3c995f</a>.
notify = lcp@cl.cam.ac.uk
[Category2]
title = Category Theory
author = Alexander Katovsky <mailto:apk32@cam.ac.uk>
date = 2010-06-20
-topic = Mathematics/Category Theory
+topic = Mathematics/Category theory
abstract = This article presents a development of Category Theory in Isabelle/HOL. A Category is defined using records and locales. Functors and Natural Transformations are also defined. The main result that has been formalized is that the Yoneda functor is a full and faithful embedding. We also formalize the completeness of many sorted monadic equational logic. Extensive use is made of the HOLZF theory in both cases. For an informal description see <a href="http://www.srcf.ucam.org/~apk32/Isabelle/Category/Cat.pdf">here [pdf]</a>.
notify = alexander.katovsky@cantab.net
[FunWithFunctions]
title = Fun With Functions
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
date = 2008-08-26
topic = Mathematics/Misc
abstract = This is a collection of cute puzzles of the form ``Show that if a function satisfies the following constraints, it must be ...'' Please add further examples to this collection!
notify = nipkow@in.tum.de
[FunWithTilings]
title = Fun With Tilings
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>, Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
date = 2008-11-07
topic = Mathematics/Misc
abstract = Tilings are defined inductively. It is shown that one form of mutilated chess board cannot be tiled with dominoes, while another one can be tiled with L-shaped tiles. Please add further fun examples of this kind!
notify = nipkow@in.tum.de
[Lazy-Lists-II]
title = Lazy Lists II
author = Stefan Friedrich <>
date = 2004-04-26
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = This theory contains some useful extensions to the LList (lazy list) theory by <a href="http://www.cl.cam.ac.uk/~lp15/">Larry Paulson</a>, including finite, infinite, and positive llists over an alphabet, as well as the new constants take and drop and the prefix order of llists. Finally, the notions of safety and liveness in the sense of Alpern and Schneider (1985) are defined.
notify = lcp@cl.cam.ac.uk
[Ribbon_Proofs]
title = Ribbon Proofs
author = John Wickerson <>
date = 2013-01-19
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
abstract = This document concerns the theory of ribbon proofs: a diagrammatic proof system, based on separation logic, for verifying program correctness. We include the syntax, proof rules, and soundness results for two alternative formalisations of ribbon proofs. <p> Compared to traditional proof outlines, ribbon proofs emphasise the structure of a proof, so are intelligible and pedagogical. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they may be more scalable. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs.
notify =
[Koenigsberg_Friendship]
title = The Königsberg Bridge Problem and the Friendship Theorem
author = Wenda Li <mailto:wl302@cam.ac.uk>
date = 2013-07-19
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract = This development provides a formalization of undirected graphs and simple graphs, which are based on Benedikt Nordhoff and Peter Lammich's simple formalization of labelled directed graphs in the archive. Then, with our formalization of graphs, we show both necessary and sufficient conditions for Eulerian trails and circuits as well as the fact that the Königsberg Bridge Problem does not have a solution. In addition, we show the Friendship Theorem in simple graphs.
notify =
[Tree_Decomposition]
title = Tree Decomposition
author = Christoph Dittmann <http://logic.las.tu-berlin.de/Members/Dittmann/>
notify =
date = 2016-05-31
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
abstract =
We formalize tree decompositions and tree width in Isabelle/HOL,
proving that trees have treewidth 1. We also show that every edge of
a tree decomposition is a separation of the underlying graph. As an
application of this theorem we prove that complete graphs of size n
have treewidth n-1.
[Menger]
title = Menger's Theorem
author = Christoph Dittmann <mailto:isabelle@christoph-d.de>
-topic = Mathematics/Graph Theory
+topic = Mathematics/Graph theory
date = 2017-02-26
notify = isabelle@christoph-d.de
abstract =
We present a formalization of Menger's Theorem for directed and
undirected graphs in Isabelle/HOL. This well-known result shows that
if two non-adjacent distinct vertices u, v in a directed graph have no
separator smaller than n, then there exist n internally
vertex-disjoint paths from u to v. The version for undirected graphs
follows immediately because undirected graphs are a special case of
directed graphs.
[IEEE_Floating_Point]
title = A Formal Model of IEEE Floating Point Arithmetic
author = Lei Yu <mailto:ly271@cam.ac.uk>
contributors = Fabian Hellauer <mailto:hellauer@in.tum.de>, Fabian Immler <http://www21.in.tum.de/~immler>
date = 2013-07-27
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = This development provides a formal model of IEEE-754 floating-point arithmetic. This formalization, including formal specification of the standard and proofs of important properties of floating-point arithmetic, forms the foundation for verifying programs with floating-point computation. There is also a code generation setup for floats so that we can execute programs using this formalization in functional programming languages.
notify = lp15@cam.ac.uk, immler@in.tum.de
extra-history =
Change history:
[2017-09-25]: Added conversions from and to software floating point numbers
(by Fabian Hellauer and Fabian Immler).<br>
[2018-02-05]: 'Modernized' representation following the formalization in HOL4:
former "float_format" and predicate "is_valid" is now encoded in a type "('e, 'f) float" where
'e and 'f encode the size of exponent and fraction.
[Native_Word]
title = Native Word
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>
contributors = Peter Lammich <http://www21.in.tum.de/~lammich>
date = 2013-09-17
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract = This entry makes machine words and machine arithmetic available for code generation from Isabelle/HOL. It provides a common abstraction that hides the differences between the different target languages. The code generator maps these operations to the APIs of the target languages. Apart from that, we extend the available bit operations on types int and integer, and map them to the operations in the target languages.
extra-history =
Change history:
[2013-11-06]:
added conversion function between native words and characters
(revision fd23d9a7fe3a)<br>
[2014-03-31]:
added words of default size in the target language (by Peter Lammich)
(revision 25caf5065833)<br>
[2014-10-06]:
proper test setup with compilation and execution of tests in all target languages
(revision 5d7a1c9ae047)<br>
[2017-09-02]:
added 64-bit words (revision c89f86244e3c)<br>
[2018-07-15]:
added cast operators for default-size words (revision fc1f1fb8dd30)<br>
notify = mail@andreas-lochbihler.de
[XML]
title = XML
author = Christian Sternagel <mailto:c.sternagel@gmail.com>, René Thiemann <mailto:rene.thiemann@uibk.ac.at>
date = 2014-10-03
-topic = Computer Science/Functional Programming, Computer Science/Data Structures
+topic = Computer science/Functional programming, Computer science/Data structures
abstract =
This entry provides an XML library for Isabelle/HOL. This includes parsing
and pretty printing of XML trees as well as combinators for transforming XML
trees into arbitrary user-defined data. The main contribution of this entry is
an interface (fit for code generation) that allows for communication between
verified programs formalized in Isabelle/HOL and the outside world via XML.
This library was developed as part of the IsaFoR/CeTA project
to which we refer for examples of its usage.
notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at
[HereditarilyFinite]
title = The Hereditarily Finite Sets
author = Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
date = 2013-11-17
topic = Logic/Set theory
abstract = The theory of hereditarily finite sets is formalised, following
the <a href="http://journals.impan.gov.pl/dm/Inf/422-0-1.html">development</a> of Swierczkowski.
An HF set is a finite collection of other HF sets; they enjoy an induction principle
and satisfy all the axioms of ZF set theory apart from the axiom of infinity, which is negated.
All constructions that are possible in ZF set theory (Cartesian products, disjoint sums, natural numbers,
functions) without using infinite sets are possible here.
The definition of addition for the HF sets follows Kirby.
This development forms the foundation for the Isabelle proof of Gödel's incompleteness theorems,
which has been <a href="Incompleteness.html">formalised separately</a>.
extra-history =
Change history:
[2015-02-23]: Added the theory "Finitary" defining the class of types that can be embedded in hf, including int, char, option, list, etc.
notify = lp15@cam.ac.uk
[Incompleteness]
title = Gödel's Incompleteness Theorems
author = Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
date = 2013-11-17
topic = Logic/Proof theory
abstract = Gödel's two incompleteness theorems are formalised, following a careful <a href="http://journals.impan.gov.pl/dm/Inf/422-0-1.html">presentation</a> by Swierczkowski, in the theory of <a href="HereditarilyFinite.html">hereditarily finite sets</a>. This represents the first ever machine-assisted proof of the second incompleteness theorem. Compared with traditional formalisations using Peano arithmetic (see e.g. Boolos), coding is simpler, with no need to formalise the notion
of multiplication (let alone that of a prime number)
in the formalised calculus upon which the theorem is based.
However, other technical problems had to be solved in order to complete the argument.
notify = lp15@cam.ac.uk
[Finite_Automata_HF]
title = Finite Automata in Hereditarily Finite Set Theory
author = Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
date = 2015-02-05
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract = Finite Automata, both deterministic and non-deterministic, for regular languages.
The Myhill-Nerode Theorem. Closure under intersection, concatenation, etc.
Regular expressions define regular languages. Closure under reversal;
the powerset construction mapping NFAs to DFAs. Left and right languages; minimal DFAs.
Brzozowski's minimization algorithm. Uniqueness up to isomorphism of minimal DFAs.
notify = lp15@cam.ac.uk
[Decreasing-Diagrams]
title = Decreasing Diagrams
author = Harald Zankl <http://cl-informatik.uibk.ac.at/users/hzankl>
license = LGPL
date = 2013-11-01
topic = Logic/Rewriting
abstract = This theory contains a formalization of decreasing diagrams showing that any locally decreasing abstract rewrite system is confluent. We consider the valley (van Oostrom, TCS 1994) and the conversion version (van Oostrom, RTA 2008) and closely follow the original proofs. As an application we prove Newman's lemma.
notify = Harald.Zankl@uibk.ac.at
[Decreasing-Diagrams-II]
title = Decreasing Diagrams II
author = Bertram Felgenhauer <mailto:bertram.felgenhauer@uibk.ac.at>
license = LGPL
date = 2015-08-20
topic = Logic/Rewriting
abstract = This theory formalizes the commutation version of decreasing diagrams for Church-Rosser modulo. The proof follows Felgenhauer and van Oostrom (RTA 2013). The theory also provides important specializations, in particular van Oostrom’s conversion version (TCS 2008) of decreasing diagrams.
notify = bertram.felgenhauer@uibk.ac.at
[GoedelGod]
title = Gödel's God in Isabelle/HOL
author = Christoph Benzmüller <http://page.mi.fu-berlin.de/cbenzmueller/>, Bruno Woltzenlogel Paleo <http://www.logic.at/staff/bruno/>
date = 2013-11-12
topic = Logic/Philosophical aspects
abstract = Dana Scott's version of Gödel's proof of God's existence is formalized in quantified
modal logic KB (QML KB).
QML KB is modeled as a fragment of classical higher-order logic (HOL);
thus, the formalization is essentially a formalization in HOL.
notify = lp15@cam.ac.uk, c.benzmueller@fu-berlin.de
[Types_Tableaus_and_Goedels_God]
title = Types, Tableaus and Gödel’s God in Isabelle/HOL
author = David Fuenmayor <mailto:davfuenmayor@gmail.com>, Christoph Benzmüller <http://www.christoph-benzmueller.de>
topic = Logic/Philosophical aspects
date = 2017-05-01
notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com
abstract =
A computer-formalisation of the essential parts of Fitting's
textbook "Types, Tableaus and Gödel's God" in
Isabelle/HOL is presented. In particular, Fitting's (and
Anderson's) variant of the ontological argument is verified and
confirmed. This variant avoids the modal collapse, which has been
criticised as an undesirable side-effect of Kurt Gödel's (and
Dana Scott's) versions of the ontological argument.
Fitting's work employs an intensional higher-order modal
logic, which we shallowly embed here in classical higher-order logic.
We then utilize the embedded logic for the formalisation of
Fitting's argument. (See also the earlier AFP entry ``Gödel's God in Isabelle/HOL''.)
[GewirthPGCProof]
title = Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL
author = David Fuenmayor <mailto:davfuenmayor@gmail.com>, Christoph Benzmüller <http://christoph-benzmueller.de>
topic = Logic/Philosophical aspects
date = 2018-10-30
notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com
abstract =
An ambitious ethical theory ---Alan Gewirth's "Principle of
Generic Consistency"--- is encoded and analysed in Isabelle/HOL.
Gewirth's theory has stirred much attention in philosophy and
ethics and has been proposed as a potential means to bound the impact
of artificial general intelligence.
extra-history =
Change history:
[2019-04-09]:
added a proof for a stronger variant of the PGC and exemplary inferences
(revision 88182cb0a2f6)<br>
[Lowe_Ontological_Argument]
title = Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument
author = David Fuenmayor <mailto:davfuenmayor@gmail.com>, Christoph Benzmüller <http://www.christoph-benzmueller.de>
topic = Logic/Philosophical aspects
date = 2017-09-21
notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com
abstract =
Computers may help us to understand --not just verify-- philosophical
arguments. By utilizing modern proof assistants in an iterative
interpretive process, we can reconstruct and assess an argument by
fully formal means. Through the mechanization of a variant of St.
Anselm's ontological argument by E. J. Lowe, which is a
paradigmatic example of a natural-language argument with strong ties
to metaphysics and religion, we offer an ideal showcase for our
computer-assisted interpretive method.
[AnselmGod]
title = Anselm's God in Isabelle/HOL
author = Ben Blumson <https://philpapers.org/profile/805>
topic = Logic/Philosophical aspects
date = 2017-09-06
notify = benblumson@gmail.com
abstract =
Paul Oppenheimer and Edward Zalta's formalisation of
Anselm's ontological argument for the existence of God is
automated by embedding a free logic for definite descriptions within
Isabelle/HOL.
[Tail_Recursive_Functions]
title = A General Method for the Proof of Theorems on Tail-recursive Functions
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
date = 2013-12-01
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
abstract =
<p>
Tail-recursive function definitions are sometimes more straightforward than
alternatives, but proving theorems on them may be roundabout because of the
peculiar form of the resulting recursion induction rules.
</p><p>
This paper describes a proof method that provides a general solution to
this problem by means of suitable invariants over inductive sets, and
illustrates the application of this method by examining two case studies.
</p>
notify = pasquale.noce.lavoro@gmail.com
[CryptoBasedCompositionalProperties]
title = Compositional Properties of Crypto-Based Components
author = Maria Spichkova <mailto:maria.spichkova@rmit.edu.au>
date = 2014-01-11
-topic = Computer Science/Security
+topic = Computer science/Security
abstract = This paper presents an Isabelle/HOL set of theories which allows the specification of crypto-based components and the verification of their composition properties wrt. cryptographic aspects. We introduce a formalisation of the security property of data secrecy, the corresponding definitions and proofs. Please note that here we import the Isabelle/HOL theory ListExtras.thy, presented in the AFP entry FocusStreamsCaseStudies-AFP.
notify = maria.spichkova@rmit.edu.au
[Featherweight_OCL]
title = Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5
author = Achim D. Brucker <mailto:brucker@spamfence.net>, Frédéric Tuong <mailto:tuong@users.gforge.inria.fr>, Burkhart Wolff <mailto:wolff@lri.fr>
date = 2014-01-16
-topic = Computer Science/System Description Languages
+topic = Computer science/System description languages
abstract = The Unified Modeling Language (UML) is one of the few
modeling languages that is widely used in industry. While
UML is mostly known as a diagrammatic modeling language
(e.g., visualizing class models), it is complemented by a
textual language, called Object Constraint Language
(OCL). The current version of OCL is based on a four-valued
logic that turns UML into a formal language. Any type
comprises the elements "invalid" and "null" which are
propagated as strict and non-strict, respectively.
Unfortunately, the former semi-formal semantics of this
specification language, captured in the "Annex A" of the
OCL standard, leads to different interpretations of corner
cases. We formalize the core of OCL: denotational
definitions, a logical calculus and operational rules that
allow for the execution of OCL expressions by a mixture of
term rewriting and code compilation. Our formalization
reveals several inconsistencies and contradictions in the
current version of the OCL standard. Overall, this document
is intended to provide the basis for a machine-checked text
"Annex A" of the OCL standard targeting at tool
implementors.
extra-history =
Change history:
[2015-10-13]:
<a href="https://bitbucket.org/isa-afp/afp-devel/commits/ea3b38fc54d68535bcfafd40357b6ff8f1092057">afp-devel@ea3b38fc54d6</a> and
<a href="https://projects.brucker.ch/hol-testgen/log/trunk?rev=12148">hol-testgen@12148</a><br>
&nbsp;&nbsp;&nbsp;Update of Featherweight OCL including a change in the abstract.<br>
[2014-01-16]:
<a href="https://bitbucket.org/isa-afp/afp-devel/commits/9091ce05cb20d4ad3dc1961c18f1846d85e87f8e">afp-devel@9091ce05cb20</a> and
<a href="https://projects.brucker.ch/hol-testgen/log/trunk?rev=10241">hol-testgen@10241</a><br>
&nbsp;&nbsp;&nbsp;New Entry: Featherweight OCL
notify = brucker@spamfence.net, tuong@users.gforge.inria.fr, wolff@lri.fr
[Relation_Algebra]
title = Relation Algebra
author = Alasdair Armstrong <>,
Simon Foster <mailto:simon.foster@york.ac.uk>,
Georg Struth <http://staffwww.dcs.shef.ac.uk/people/G.Struth/>,
Tjark Weber <http://user.it.uu.se/~tjawe125/>
date = 2014-01-25
topic = Mathematics/Algebra
abstract = Tarski's algebra of binary relations is formalised along the lines of
the standard textbooks of Maddux and Schmidt and Ströhlein. This
includes relation-algebraic concepts such as subidentities, vectors and
a domain operation as well as various notions associated to functions.
Relation algebras are also expanded by a reflexive transitive closure
operation, and they are linked with Kleene algebras and models of binary
relations and Boolean matrices.
notify = g.struth@sheffield.ac.uk, tjark.weber@it.uu.se
[PSemigroupsConvolution]
title = Partial Semigroups and Convolution Algebras
author = Brijesh Dongol <mailto:brijesh.dongol@brunel.ac.uk>, Victor B. F. Gomes <mailto:victor.gomes@cl.cam.ac.uk>, Ian J. Hayes <mailto:ian.hayes@itee.uq.edu.au>, Georg Struth <mailto:g.struth@sheffield.ac.uk>
topic = Mathematics/Algebra
date = 2017-06-13
notify = g.struth@sheffield.ac.uk, victor.gomes@cl.cam.ac.uk
abstract =
Partial Semigroups are relevant to the foundations of quantum
mechanics and combinatorics as well as to interval and separation
logics. Convolution algebras can be understood either as algebras of
generalised binary modalities over ternary Kripke frames, in
particular over partial semigroups, or as algebras of quantale-valued
functions which are equipped with a convolution-style operation of
multiplication that is parametrised by a ternary relation. Convolution
algebras provide algebraic semantics for various substructural logics,
including categorial, relevance and linear logics, for separation
logic and for interval logics; they cover quantitative and qualitative
applications. These mathematical components for partial semigroups and
convolution algebras provide uniform foundations from which models of
computation based on relations, program traces or pomsets, and
verification components for separation or interval temporal logics can
be built with little effort.
[Secondary_Sylow]
title = Secondary Sylow Theorems
author = Jakob von Raumer <mailto:psxjv4@nottingham.ac.uk>
date = 2014-01-28
topic = Mathematics/Algebra
abstract = These theories extend the existing proof of the first Sylow theorem
(written by Florian Kammueller and L. C. Paulson) by what are often
called the second, third and fourth Sylow theorems. These theorems
state propositions about the number of Sylow p-subgroups of a group
and the fact that they are conjugate to each other. The proofs make
use of an implementation of group actions and their properties.
notify = psxjv4@nottingham.ac.uk
[Jordan_Hoelder]
title = The Jordan-Hölder Theorem
author = Jakob von Raumer <mailto:psxjv4@nottingham.ac.uk>
date = 2014-09-09
topic = Mathematics/Algebra
abstract = This submission contains theories that lead to a formalization of the proof of the Jordan-Hölder theorem about composition series of finite groups. The theories formalize the notions of isomorphism classes of groups, simple groups, normal series, composition series, maximal normal subgroups. Furthermore, they provide proofs of the second isomorphism theorem for groups, the characterization theorem for maximal normal subgroups as well as many useful lemmas about normal subgroups and factor groups. The proof is inspired by course notes of Stuart Rankin.
notify = psxjv4@nottingham.ac.uk
[Cayley_Hamilton]
title = The Cayley-Hamilton Theorem
author = Stephan Adelsberger <http://nm.wu.ac.at/nm/sadelsbe>,
Stefan Hetzl <http://www.logic.at/people/hetzl/>,
Florian Pollak <mailto:florian.pollak@gmail.com>
date = 2014-09-15
topic = Mathematics/Algebra
abstract =
This document contains a proof of the Cayley-Hamilton theorem
based on the development of matrices in HOL/Multivariate Analysis.
notify = stvienna@gmail.com
[Probabilistic_Noninterference]
title = Probabilistic Noninterference
author = Andrei Popescu <http://www21.in.tum.de/~popescua>, Johannes Hölzl <http://in.tum.de/~hoelzl>
date = 2014-03-11
-topic = Computer Science/Security
+topic = Computer science/Security
abstract = We formalize probabilistic noninterference for a multi-threaded language with uniform scheduling, where probabilistic behaviour comes from both the scheduler and the individual threads. We define notions of probabilistic noninterference in two variants: resumption-based and trace-based. For the resumption-based notions, we prove compositionality w.r.t. the language constructs and establish sound type-system-like syntactic criteria. This is a formalization of the mathematical development presented at CPP 2013 and CALCO 2013. It is the probabilistic variant of the Possibilistic Noninterference AFP entry.
notify = hoelzl@in.tum.de
[HyperCTL]
title = A shallow embedding of HyperCTL*
author = Markus N. Rabe <http://www.react.uni-saarland.de/people/rabe.html>, Peter Lammich <http://www21.in.tum.de/~lammich>, Andrei Popescu <http://www21.in.tum.de/~popescua>
date = 2014-04-16
-topic = Computer Science/Security, Logic/General logic/Temporal logic
+topic = Computer science/Security, Logic/General logic/Temporal logic
abstract = We formalize HyperCTL*, a temporal logic for expressing security properties. We
first define a shallow embedding of HyperCTL*, within which we prove inductive and coinductive
rules for the operators. Then we show that a HyperCTL* formula captures Goguen-Meseguer
noninterference, a landmark information flow property. We also define a deep embedding and
connect it to the shallow embedding by a denotational semantics, for which we prove sanity w.r.t.
dependence on the free variables. Finally, we show that under some finiteness assumptions about
the model, noninterference is given by a (finitary) syntactic formula.
notify = uuomul@yahoo.com
[Bounded_Deducibility_Security]
title = Bounded-Deducibility Security
author = Andrei Popescu <http://www21.in.tum.de/~popescua>, Peter Lammich <http://www21.in.tum.de/~lammich>
date = 2014-04-22
-topic = Computer Science/Security
+topic = Computer science/Security
abstract = This is a formalization of bounded-deducibility security (BD
security), a flexible notion of information-flow security applicable
to arbitrary input-output automata. It generalizes Sutherland's
classic notion of nondeducibility by factoring in declassification
bounds and triggers: whereas nondeducibility states that, in a
system, information cannot flow between specified sources and sinks,
BD security indicates upper bounds for the flow and triggers under
which these upper bounds are no longer guaranteed.
notify = uuomul@yahoo.com, lammich@in.tum.de
[Network_Security_Policy_Verification]
title = Network Security Policy Verification
author = Cornelius Diekmann <http://net.in.tum.de/~diekmann>
date = 2014-07-04
-topic = Computer Science/Security
+topic = Computer science/Security
abstract =
We present a unified theory for verifying network security policies.
A security policy is represented as a directed graph.
To check high-level security goals, security invariants over the policy are
expressed. We cover monotonic security invariants, i.e. prohibiting more does not harm
security. We provide the following contributions for the security invariant theory.
<ul>
<li>Secure auto-completion of scenario-specific knowledge, which eases usability.</li>
<li>Security violations can be repaired by tightening the policy iff the
security invariants hold for the deny-all policy.</li>
<li>An algorithm to compute a security policy.</li>
<li>A formalization of stateful connection semantics in network security mechanisms.</li>
<li>An algorithm to compute a secure stateful implementation of a policy.</li>
<li>An executable implementation of all the theory.</li>
<li>Examples, ranging from an aircraft cabin data network to the analysis
of a large real-world firewall.</li>
<li>More examples: A fully automated translation of high-level security goals to both
firewall and SDN configurations (see Examples/Distributed_WebApp.thy).</li>
</ul>
For a detailed description, see
<ul>
<li>C. Diekmann, A. Korsten, and G. Carle.
<a href="http://www.net.in.tum.de/fileadmin/bibtex/publications/papers/diekmann2015mansdnnfv.pdf">Demonstrating
topoS: Theorem-prover-based synthesis of secure network configurations.</a>
In 2nd International Workshop on Management of SDN and NFV Systems, manSDN/NFV, Barcelona, Spain, November 2015.</li>
<li>C. Diekmann, S.-A. Posselt, H. Niedermayer, H. Kinkelin, O. Hanka, and G. Carle.
<a href="http://www.net.in.tum.de/pub/diekmann/forte14.pdf">Verifying Security Policies using Host Attributes.</a>
In FORTE, 34th IFIP International Conference on Formal Techniques for Distributed Objects,
Components and Systems, Berlin, Germany, June 2014.</li>
<li>C. Diekmann, L. Hupel, and G. Carle. Directed Security Policies:
<a href="http://rvg.web.cse.unsw.edu.au/eptcs/paper.cgi?ESSS2014.3">A Stateful Network Implementation.</a>
In J. Pang and Y. Liu, editors, Engineering Safety and Security Systems,
volume 150 of Electronic Proceedings in Theoretical Computer Science,
pages 20-34, Singapore, May 2014. Open Publishing Association.</li>
</ul>
extra-history =
Change history:
[2015-04-14]:
Added Distributed WebApp example and improved graphviz visualization
(revision 4dde08ca2ab8)<br>
notify = diekmann@net.in.tum.de
[Abstract_Completeness]
title = Abstract Completeness
author = Jasmin Christian Blanchette <http://www21.in.tum.de/~blanchet>, Andrei Popescu <http://www21.in.tum.de/~popescua>, Dmitriy Traytel <http://www21.in.tum.de/~traytel>
date = 2014-04-16
topic = Logic/Proof theory
abstract = A formalization of an abstract property of possibly infinite derivation trees (modeled by a codatatype), representing the core of a proof (in Beth/Hintikka style) of the first-order logic completeness theorem, independent of the concrete syntax or inference rules. This work is described in detail in the IJCAR 2014 publication by the authors.
The abstract proof can be instantiated for a wide range of Gentzen and tableau systems as well as various flavors of FOL---e.g., with or without predicates, equality, or sorts. Here, we give only a toy example instantiation with classical propositional logic. A more serious instance---many-sorted FOL with equality---is described elsewhere [Blanchette and Popescu, FroCoS 2013].
notify = traytel@in.tum.de
[Pop_Refinement]
title = Pop-Refinement
author = Alessandro Coglio <http://www.kestrel.edu/~coglio>
date = 2014-07-03
-topic = Computer Science/Programming Languages/Misc
+topic = Computer science/Programming languages/Misc
abstract = Pop-refinement is an approach to stepwise refinement, carried out inside an interactive theorem prover by constructing a monotonically decreasing sequence of predicates over deeply embedded target programs. The sequence starts with a predicate that characterizes the possible implementations, and ends with a predicate that characterizes a unique program in explicit syntactic form. Pop-refinement enables more requirements (e.g. program-level and non-functional) to be captured in the initial specification and preserved through refinement. Security requirements expressed as hyperproperties (i.e. predicates over sets of traces) are always preserved by pop-refinement, unlike the popular notion of refinement as trace set inclusion. Two simple examples in Isabelle/HOL are presented, featuring program-level requirements, non-functional requirements, and hyperproperties.
notify = coglio@kestrel.edu
[VectorSpace]
title = Vector Spaces
author = Holden Lee <mailto:holdenl@princeton.edu>
date = 2014-08-29
topic = Mathematics/Algebra
abstract = This formalisation of basic linear algebra is based completely on locales, building off HOL-Algebra. It includes basic definitions: linear combinations, span, linear independence; linear transformations; interpretation of function spaces as vector spaces; the direct sum of vector spaces, sum of subspaces; the replacement theorem; existence of bases in finite-dimensional vector spaces, definition of dimension; the rank-nullity theorem. Some concepts are actually defined and proved for modules as they also apply there. Infinite-dimensional vector spaces are supported, but dimension is only supported for finite-dimensional vector spaces. The proofs are standard; the proofs of the replacement theorem and rank-nullity theorem roughly follow the presentation in Linear Algebra by Friedberg, Insel, and Spence. The rank-nullity theorem generalises the existing development in the Archive of Formal Proofs (originally using type classes, now using a mix of type classes and locales).
notify = holdenl@princeton.edu
[Special_Function_Bounds]
title = Real-Valued Special Functions: Upper and Lower Bounds
author = Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
date = 2014-08-29
topic = Mathematics/Analysis
abstract = This development proves upper and lower bounds for several familiar real-valued functions. For sin, cos, exp and sqrt, it defines and verifies infinite families of upper and lower bounds, mostly based on Taylor series expansions. For arctan, ln and exp, it verifies a finite collection of upper and lower bounds, originally obtained from the functions' continued fraction expansions using the computer algebra system Maple. A common theme in these proofs is to take the difference between a function and its approximation, which should be zero at one point, and then consider the sign of the derivative. The immediate purpose of this development is to verify axioms used by MetiTarski, an automatic theorem prover for real-valued special functions. Crucial to MetiTarski's operation is the provision of upper and lower bounds for each function of interest.
notify = lp15@cam.ac.uk
[Landau_Symbols]
title = Landau Symbols
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
date = 2015-07-14
topic = Mathematics/Analysis
abstract = This entry provides Landau symbols to describe and reason about the asymptotic growth of functions for sufficiently large inputs. A number of simplification procedures are provided for additional convenience: cancelling of dominated terms in sums under a Landau symbol, cancelling of common factors in products, and a decision procedure for Landau expressions containing products of powers of functions like x, ln(x), ln(ln(x)) etc.
notify = eberlm@in.tum.de
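For readers who want the informal reading of the Landau_Symbols entry, the classical definition of the big-O symbol is shown below; note that the entry itself works with a more general notion of "sufficiently large inputs", so this is only an approximation of its actual definitions:

\[
f \in O(g) \iff \exists c > 0.\ \exists x_0.\ \forall x \ge x_0.\ |f(x)| \le c\,|g(x)|.
\]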
[Error_Function]
title = The Error Function
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
topic = Mathematics/Analysis
date = 2018-02-06
notify = eberlm@in.tum.de
abstract =
<p> This entry provides the definitions and basic properties of
the complex and real error function erf and the complementary error
function erfc. Additionally, it gives their full asymptotic
expansions. </p>
[Akra_Bazzi]
title = The Akra-Bazzi theorem and the Master theorem
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
date = 2015-07-14
topic = Mathematics/Analysis
abstract = This article contains a formalisation of the Akra-Bazzi method
based on a proof by Leighton. It is a generalisation of the well-known
Master Theorem for analysing the complexity of Divide & Conquer algorithms.
We also include a generalised version of the Master theorem based on the
Akra-Bazzi theorem, which is easier to apply than the Akra-Bazzi theorem
itself.
<p>
Some proof methods that facilitate applying the Master theorem are also
included. For a more detailed explanation of the formalisation and the
proof methods, see the accompanying paper (publication forthcoming).
notify = eberlm@in.tum.de
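As a familiar illustration of the kind of recurrence the Akra_Bazzi entry's Master theorem handles, consider the merge-sort recurrence (a textbook example, not drawn from the entry):

\[
T(n) = 2\,T(n/2) + \Theta(n) \;\Longrightarrow\; T(n) \in \Theta(n \log n).
\]

The Akra-Bazzi theorem itself handles a considerably wider class of such divide-and-conquer recurrences.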
[Dirichlet_Series]
title = Dirichlet Series
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2017-10-12
notify = eberlm@in.tum.de
abstract =
This entry is a formalisation of much of Chapters 2, 3, and 11 of
Apostol's &ldquo;Introduction to Analytic Number
Theory&rdquo;. This includes: <ul> <li>Definitions and
basic properties for several number-theoretic functions (Euler's
&phi;, M&ouml;bius &mu;, Liouville's &lambda;,
the divisor function &sigma;, von Mangoldt's
&Lambda;)</li> <li>Executable code for most of these
functions, the most efficient implementations using the factoring
algorithm by Thiemann <i>et al.</i></li>
<li>Dirichlet products and formal Dirichlet series</li>
<li>Analytic results connecting convergent formal Dirichlet
series to complex functions</li> <li>Euler product
expansions</li> <li>Asymptotic estimates of
number-theoretic functions including the density of squarefree
integers and the average number of divisors of a natural
number</li> </ul> These results are useful as a basis for
developing more number-theoretic results, such as the Prime Number
Theorem.
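For orientation, the Dirichlet product (convolution) mentioned in the Dirichlet_Series abstract is the standard one, and its defining property with respect to Dirichlet series is (wherever both series converge absolutely, or formally):

\[
(f * g)(n) = \sum_{d \mid n} f(d)\, g\!\left(\tfrac{n}{d}\right), \qquad \left(\sum_{n \ge 1} \frac{f(n)}{n^s}\right)\!\left(\sum_{n \ge 1} \frac{g(n)}{n^s}\right) = \sum_{n \ge 1} \frac{(f * g)(n)}{n^s}.
\]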
[Gauss_Sums]
title = Gauss Sums and the Pólya–Vinogradov Inequality
author = Rodrigo Raya <https://people.epfl.ch/rodrigo.raya>, Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2019-12-10
notify = manuel.eberl@tum.de
abstract =
<p>This article provides a full formalisation of Chapter 8 of
Apostol's <em><a
href="https://www.springer.com/de/book/9780387901633">Introduction
to Analytic Number Theory</a></em>. Subjects that are
covered are:</p> <ul> <li>periodic arithmetic
functions and their finite Fourier series</li>
<li>(generalised) Ramanujan sums</li> <li>Gauss sums
and separable characters</li> <li>induced moduli and
primitive characters</li> <li>the
Pólya&ndash;Vinogradov inequality</li> </ul>
[Zeta_Function]
title = The Hurwitz and Riemann ζ Functions
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory, Mathematics/Analysis
+topic = Mathematics/Number theory, Mathematics/Analysis
date = 2017-10-12
notify = eberlm@in.tum.de
abstract =
<p>This entry builds upon the results about formal and analytic Dirichlet
series to define the Hurwitz &zeta; function &zeta;(<em>a</em>,<em>s</em>) and,
based on that, the Riemann &zeta; function &zeta;(<em>s</em>).
This is done by first defining them for &real;(<em>s</em>) > 1
and then successively extending the domain to the left using the
Euler&ndash;MacLaurin formula.</p>
<p>Apart from the most basic facts such as analyticity, the following
results are provided:</p>
<ul>
<li>the Stieltjes constants and the Laurent expansion of
&zeta;(<em>s</em>) at <em>s</em> = 1</li>
<li>the non-vanishing of &zeta;(<em>s</em>)
for &real;(<em>s</em>) &ge; 1</li>
<li>the relationship between &zeta;(<em>a</em>,<em>s</em>) and &Gamma;</li>
<li>the special values at negative integers and positive even integers</li>
<li>Hurwitz's formula and the reflection formula for &zeta;(<em>s</em>)</li>
<li>the <a href="https://arxiv.org/abs/math/0405478">
Hadjicostas&ndash;Chapman formula</a></li>
</ul>
<p>The entry also contains Euler's analytic proof of the infinitude of primes,
based on the fact that &zeta;(<i>s</i>) has a pole at <i>s</i> = 1.</p>
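For orientation, the initial definitions alluded to in the Zeta_Function abstract are the classical Dirichlet-series ones, valid for Re(s) > 1 and then continued analytically to the left as described above:

\[
\zeta(a, s) = \sum_{n \ge 0} \frac{1}{(n + a)^{s}}, \qquad \zeta(s) = \zeta(1, s) = \sum_{n \ge 1} \frac{1}{n^{s}}.
\]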
[Linear_Recurrences]
title = Linear Recurrences
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
topic = Mathematics/Analysis
date = 2017-10-12
notify = eberlm@in.tum.de
abstract =
<p> Linear recurrences with constant coefficients are an
interesting class of recurrence equations that can be solved
explicitly. The most famous example is certainly the Fibonacci
numbers with the equation <i>f</i>(<i>n</i>) =
<i>f</i>(<i>n</i>-1) +
<i>f</i>(<i>n</i> - 2) and the quite
non-obvious closed form
(<i>&phi;</i><sup><i>n</i></sup>
-
(-<i>&phi;</i>)<sup>-<i>n</i></sup>)
/ &radic;<span style="text-decoration:
overline">5</span> where &phi; is the golden ratio.
</p> <p> In this work, I build on existing tools in
Isabelle &ndash; such as formal power series and polynomial
factorisation algorithms &ndash; to develop a theory of these
recurrences and derive a fully executable solver for them that can be
exported to programming languages like Haskell. </p>
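To make the step from recurrence to closed form in the Linear_Recurrences abstract concrete, here is the standard characteristic-polynomial calculation for the Fibonacci example (a textbook derivation, not the entry's verified algorithm):

\[
f(n) = f(n-1) + f(n-2) \;\leadsto\; x^2 = x + 1 \;\Rightarrow\; x \in \{\varphi,\, -1/\varphi\},
\]
\[
f(n) = c_1 \varphi^{n} + c_2 (-1/\varphi)^{n}, \qquad f(0) = 0,\ f(1) = 1 \;\Rightarrow\; c_1 = -c_2 = \tfrac{1}{\sqrt{5}},
\]

which is exactly the closed form quoted in the abstract.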
[Cartan_FP]
title = The Cartan Fixed Point Theorems
author = Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
date = 2016-03-08
topic = Mathematics/Analysis
abstract =
The Cartan fixed point theorems concern the group of holomorphic
automorphisms on a connected open set of C<sup>n</sup>. Ciolli et al.
have formalised the one-dimensional case of these theorems in HOL
Light. This entry contains their proofs, ported to Isabelle/HOL. Thus
it addresses the authors' remark that "it would be important to write
a formal proof in a language that can be read by both humans and
machines".
notify = lp15@cam.ac.uk
[Gauss_Jordan]
title = Gauss-Jordan Algorithm and Its Applications
author = Jose Divasón <http://www.unirioja.es/cu/jodivaso>, Jesús Aransay <http://www.unirioja.es/cu/jearansa>
-topic = Computer Science/Algorithms/Mathematical
+topic = Computer science/Algorithms/Mathematical
date = 2014-09-03
abstract = The Gauss-Jordan algorithm states that any matrix over a field can be transformed by means of elementary row operations to a matrix in reduced row echelon form. The formalization is based on the Rank Nullity Theorem entry of the AFP and on the HOL-Multivariate-Analysis session of Isabelle, where matrices are represented as functions over finite types. We have set up the code generator to make this representation executable. In order to improve the performance, a refinement to immutable arrays has been carried out. We have formalized some of the applications of the Gauss-Jordan algorithm. Thanks to this development, the following facts can be computed over matrices whose elements belong to a field: ranks, determinants, inverses, bases and dimensions, and solutions of systems of linear equations. Code can be exported to SML and Haskell.
notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es
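A tiny generic example of the reduced row echelon form computed by the algorithm described in the Gauss_Jordan entry (illustrative only):

\[
\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 7 \end{pmatrix}
\;\xrightarrow{\;R_2 \to R_2 - 2R_1\;}\;
\begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 1 \end{pmatrix}
\;\xrightarrow{\;R_1 \to R_1 - 3R_2\;}\;
\begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]

from which, for instance, the rank (here 2) can be read off directly.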
[Echelon_Form]
title = Echelon Form
author = Jose Divasón <http://www.unirioja.es/cu/jodivaso>, Jesús Aransay <http://www.unirioja.es/cu/jearansa>
-topic = Computer Science/Algorithms/Mathematical, Mathematics/Algebra
+topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra
date = 2015-02-12
abstract = We formalize an algorithm to compute the Echelon Form of a matrix. We have proved its existence over Bézout domains and made it executable over Euclidean domains, such as the integer ring and the univariate polynomials over a field. This allows us to compute determinants, inverses and characteristic polynomials of matrices. The work is based on the HOL-Multivariate Analysis library, and on both the Gauss-Jordan and Cayley-Hamilton AFP entries. As a by-product, some algebraic structures have been implemented (principal ideal domains, Bézout domains...). The algorithm has been refined to immutable arrays and code can be generated to functional languages as well.
notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es
[QR_Decomposition]
title = QR Decomposition
author = Jose Divasón <http://www.unirioja.es/cu/jodivaso>, Jesús Aransay <http://www.unirioja.es/cu/jearansa>
-topic = Computer Science/Algorithms/Mathematical, Mathematics/Algebra
+topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra
date = 2015-02-12
abstract = QR decomposition is an algorithm to decompose a real matrix A into the product of two other matrices Q and R, where Q is orthogonal and R is invertible and upper triangular. The algorithm is useful for the least squares problem; i.e., the computation of the best approximation of an unsolvable system of linear equations. As a side-product, the Gram-Schmidt process has also been formalized. A refinement using immutable arrays is presented as well. The development relies, among others, on the AFP entry "Implementing field extensions of the form Q[sqrt(b)]" by René Thiemann, which allows execution of the algorithm using symbolic computations. Verified code can be generated and executed using floats as well.
extra-history =
Change history:
[2015-06-18]: The second part of the Fundamental Theorem of Linear Algebra has been generalized to more general inner product spaces.
notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es
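A small worked instance of the decomposition described in the QR_Decomposition abstract, obtained by Gram-Schmidt on the columns (illustrative only, not taken from the entry):

\[
\begin{pmatrix} 3 & 1 \\ 4 & 3 \end{pmatrix}
=
\underbrace{\begin{pmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{pmatrix}}_{Q \text{ orthogonal}}
\underbrace{\begin{pmatrix} 5 & 3 \\ 0 & 1 \end{pmatrix}}_{R \text{ upper triangular}},
\]

where the first column of Q is the normalised first column of A and R collects the norms and projection coefficients.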
[Hermite]
title = Hermite Normal Form
author = Jose Divasón <http://www.unirioja.es/cu/jodivaso>, Jesús Aransay <http://www.unirioja.es/cu/jearansa>
-topic = Computer Science/Algorithms/Mathematical, Mathematics/Algebra
+topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra
date = 2015-07-07
abstract = Hermite Normal Form is a canonical matrix analogue of Reduced Echelon Form, but involving matrices over more general rings. In this work we formalise an algorithm to compute the Hermite Normal Form of a matrix by means of elementary row operations, taking advantage of the Echelon Form AFP entry. We have proven the correctness of such an algorithm and refined it to immutable arrays. Furthermore, we have also formalised the uniqueness of the Hermite Normal Form of a matrix. Code can be exported and some examples of execution involving integer matrices and polynomial matrices are presented as well.
notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es
[Imperative_Insertion_Sort]
title = Imperative Insertion Sort
author = Christian Sternagel <mailto:c.sternagel@gmail.com>
date = 2014-09-25
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
abstract = The insertion sort algorithm of Cormen et al. (Introduction to Algorithms) is expressed in Imperative HOL and proved to be correct and terminating. For this purpose we also provide a theory about imperative loop constructs with accompanying induction/invariant rules for proving partial and total correctness. Furthermore, the formalized algorithm is fit for code generation.
notify = lp15@cam.ac.uk
[Stream_Fusion_Code]
title = Stream Fusion in HOL with Code Generation
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>, Alexandra Maximova <mailto:amaximov@student.ethz.ch>
date = 2014-10-10
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
abstract = Stream Fusion is a system for removing intermediate list data structures from functional programs, in particular Haskell. This entry adapts stream fusion to Isabelle/HOL and its code generator. We define stream types for finite and possibly infinite lists and stream versions for most of the fusible list functions in the theories List and Coinductive_List, and prove them correct with respect to the conversion functions between lists and streams. The Stream Fusion transformation itself is implemented as a simproc in the preprocessor of the code generator. [Brian Huffman's <a href="http://isa-afp.org/entries/Stream-Fusion.html">AFP entry</a> formalises stream fusion in HOLCF for the domain of lazy lists to prove the GHC compiler rewrite rules correct. In contrast, this work enables Isabelle's code generator to perform stream fusion itself. To that end, it covers both finite and coinductive lists from the HOL library and the Coinductive entry. The fusible list functions require specification and proof principles different from Huffman's.]
notify = mail@andreas-lochbihler.de
[Case_Labeling]
title = Generating Cases from Labeled Subgoals
author = Lars Noschinski <http://www21.in.tum.de/~noschinl/>
date = 2015-07-21
-topic = Tools, Computer Science/Programming Languages/Misc
+topic = Tools, Computer science/Programming languages/Misc
abstract =
Isabelle/Isar provides named cases to structure proofs. This article
contains an implementation of a proof method <tt>casify</tt>, which can
be used to easily extend proof tools with support for named cases. Such
a proof tool must produce labeled subgoals, which are then interpreted
by <tt>casify</tt>.
<p>
As examples, this work contains verification condition generators
producing named cases for three languages: The Hoare language from
<tt>HOL/Library</tt>, a monadic language for computations with failure
(inspired by the AutoCorres tool), and a language of conditional
expressions. These VCGs are demonstrated by a number of example programs.
notify = noschinl@gmail.com
[DPT-SAT-Solver]
title = A Fast SAT Solver for Isabelle in Standard ML
topic = Tools
author = Armin Heller <>
date = 2009-12-09
abstract = This contribution contains a fast SAT solver for Isabelle written in Standard ML. By loading the theory <tt>DPT_SAT_Solver</tt>, the SAT solver installs itself (under the name ``dptsat'') and certain Isabelle tools like Refute will start using it automatically. This is a port of the DPT (Decision Procedure Toolkit) SAT Solver written in OCaml.
notify = jasmin.blanchette@gmail.com
[Rep_Fin_Groups]
title = Representations of Finite Groups
topic = Mathematics/Algebra
author = Jeremy Sylvestre <http://ualberta.ca/~jsylvest/>
date = 2015-08-12
abstract = We provide a formal framework for the theory of representations of finite groups, as modules over the group ring. Along the way, we develop the general theory of groups (relying on the group_add class for the basics), modules, and vector spaces, to the extent required for the theory of group representations. We then provide formal proofs of several important introductory theorems in the subject, including Maschke's theorem, Schur's lemma, and Frobenius reciprocity. We also prove that every irreducible representation is isomorphic to a submodule of the group ring, leading to the fact that for a finite group there are only finitely many isomorphism classes of irreducible representations. In all of this, no restriction is made on the characteristic of the ring or field of scalars until the definition of a group representation, and then the only restriction made is that the characteristic must not divide the order of the group.
notify = jsylvest@ualberta.ca
[Noninterference_Inductive_Unwinding]
title = The Inductive Unwinding Theorem for CSP Noninterference Security
-topic = Computer Science/Security
+topic = Computer science/Security
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
date = 2015-08-18
abstract =
<p>
The necessary and sufficient condition for CSP noninterference security stated by the Ipurge Unwinding Theorem is expressed in terms of a pair of event lists varying over the set of process traces. This does not render it suitable for the subsequent application of rule induction in the case of a process defined inductively, since rule induction may rather be applied to a single variable ranging over an inductively defined set.
</p><p>
Starting from the Ipurge Unwinding Theorem, this paper derives a necessary and sufficient condition for CSP noninterference security that involves a single event list varying over the set of process traces, and is thus suitable for rule induction; hence its name, Inductive Unwinding Theorem. Similarly to the Ipurge Unwinding Theorem, the new theorem only requires considering individual accepted and refused events for each process trace, and applies to the general case of a possibly intransitive noninterference policy. Specific variants of this theorem are additionally proven for deterministic processes and trace set processes.
</p>
notify = pasquale.noce.lavoro@gmail.com
[Password_Authentication_Protocol]
title = Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
-topic = Computer Science/Security
+topic = Computer science/Security
date = 2017-01-03
notify = pasquale.noce.lavoro@gmail.com
abstract =
This paper constructs a formal model of a Diffie-Hellman
password-based authentication protocol between a user and a smart
card, and proves its security. The protocol provides for the dispatch
of the user's password to the smart card on a secure messaging
channel established by means of Password Authenticated Connection
Establishment (PACE), where the mapping method being used is Chip
Authentication Mapping. By applying and suitably extending
Paulson's Inductive Method, this paper proves that the protocol
establishes trustworthy secure messaging channels, preserves the
secrecy of users' passwords, and provides an effective mutual
authentication service. What is more, these security properties turn
out to hold independently of the secrecy of the PACE authentication
key.
[Jordan_Normal_Form]
title = Matrices, Jordan Normal Forms, and Spectral Radius Theory
topic = Mathematics/Algebra
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>, Akihisa Yamada <mailto:akihisa.yamada@uibk.ac.at>
contributors = Alexander Bentkamp <mailto:bentkamp@gmail.com>
date = 2015-08-21
abstract =
<p>
Matrix interpretations are useful as measure functions in termination proving. In order to use these interpretations also for complexity analysis, the growth rate of matrix powers has to be examined. Here, we formalized a central result of spectral radius theory, namely that the growth rate is polynomially bounded if and only if the spectral radius of a matrix is at most one.
</p><p>
To formally prove this result we first studied the growth rates of matrices in Jordan normal form, and proved that every complex matrix has a Jordan normal form, using a constructive proof via Schur decomposition.
</p><p>
The whole development is based on a new abstract type for matrices, which is also executable by a suitable setup of the code generator. It completely subsumes our former AFP-entry on executable matrices, and its main advantage is its close connection to the HMA-representation which allowed us to easily adapt existing proofs on determinants.
</p><p>
All the results have been applied to improve CeTA, our certifier to validate termination and complexity proof certificates.
</p>
extra-history =
Change history:
[2016-01-07]: Added Schur-decomposition, Gram-Schmidt orthogonalization, uniqueness of Jordan normal forms<br/>
[2018-04-17]: Integrated lemmas from deep-learning AFP-entry of Alexander Bentkamp
notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp
[LTL_to_DRA]
title = Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
author = Salomon Sickert <mailto:sickert@in.tum.de>
date = 2015-09-04
abstract = Recently, Javier Esparza and Jan Kretinsky proposed a new method directly translating linear temporal logic (LTL) formulas to deterministic (generalized) Rabin automata. Compared to the existing approaches of constructing a non-deterministic Buechi-automaton in the first step and then applying a determinization procedure (e.g. some variant of Safra's construction) in a second step, this new approach preserves a relation between the formula and the states of the resulting automaton. While the old approach produced a monolithic structure, the new method is compositional. Furthermore, in some cases the resulting automata are much smaller than the automata generated by existing approaches. In order to ensure the correctness of the construction, this entry contains a complete formalisation and verification of the translation. Furthermore, executable code is generated from this basis.
extra-history =
Change history:
[2015-09-23]: Enable code export for the eager unfolding optimisation and reduce running time of the generated tool. Moreover, add support for the mlton SML compiler.<br>
[2016-03-24]: Make use of the LTL entry and include the simplifier.
notify = sickert@in.tum.de
[Timed_Automata]
title = Timed Automata
author = Simon Wimmer <http://in.tum.de/~wimmers>
date = 2016-03-08
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
abstract =
Timed automata are a widely used formalism for modeling real-time
systems, which is employed in a class of successful model checkers
such as UPPAAL [LPY97], HyTech [HHWt97] or Kronos [Yov97]. This work
formalizes the theory for the subclass of diagonal-free timed
automata, which is sufficient to model many interesting problems. We
first define the basic concepts and semantics of diagonal-free timed
automata. Based on this, we prove two types of decidability results
for the language emptiness problem. The first is the classic result
of Alur and Dill [AD90, AD94], which uses a finite partitioning of
the state space into so-called `regions`. Our second result focuses
on an approach based on `Difference Bound Matrices (DBMs)`, which is
practically used by model checkers. We prove the correctness of the
basic forward analysis operations on DBMs. One of these operations is
the Floyd-Warshall algorithm for the all-pairs shortest paths problem.
To obtain a finite search space, a widening operation has to be used
for this kind of analysis. We use Patricia Bouyer's [Bou04] approach
to prove that this widening operation is correct in the sense that
DBM-based forward analysis in combination with the widening operation
also decides language emptiness. The interesting property of this
proof is that the first decidability result is reused to obtain the
second one.
notify = wimmers@in.tum.de
[Parity_Game]
title = Positional Determinacy of Parity Games
author = Christoph Dittmann <http://logic.las.tu-berlin.de/Members/Dittmann/>
date = 2015-11-02
-topic = Mathematics/Games and Economics, Mathematics/Graph Theory
+topic = Mathematics/Games and economics, Mathematics/Graph theory
abstract =
We present a formalization of parity games (a two-player game on
directed graphs) and a proof of their positional determinacy in
Isabelle/HOL. This proof works for both finite and infinite games.
notify =
[Ergodic_Theory]
title = Ergodic Theory
author = Sebastien Gouezel <mailto:sebastien.gouezel@univ-rennes1.fr>
date = 2015-12-01
-topic = Mathematics/Probability Theory
+topic = Mathematics/Probability theory
abstract = Ergodic theory is the branch of mathematics that studies the behaviour of measure preserving transformations, in finite or infinite measure. It interacts both with probability theory (mainly through measure theory) and with geometry as a lot of interesting examples are of geometric origin. We implement the first definitions and theorems of ergodic theory, including notably the Poincaré recurrence theorem for finite measure preserving systems (together with the notion of conservativity in general), induced maps, Kac's theorem, Birkhoff theorem (arguably the most important theorem in ergodic theory), and variations around it such as conservativity of the corresponding skew product, or Atkinson lemma.
notify = sebastien.gouezel@univ-rennes1.fr, hoelzl@in.tum.de
[Latin_Square]
title = Latin Square
author = Alexander Bentkamp <mailto:bentkamp@gmail.com>
date = 2015-12-02
topic = Mathematics/Combinatorics
abstract =
A Latin Square is an n x n table filled with integers from 1 to n where each number appears exactly once in each row and each column. A Latin Rectangle is a partially filled n x n table with r filled rows and n-r empty rows, such that each number appears at most once in each row and each column. The main result of this theory is that any Latin Rectangle can be completed to a Latin Square.
notify = bentkamp@gmail.com
[Deep_Learning]
title = Expressiveness of Deep Learning
author = Alexander Bentkamp <mailto:bentkamp@gmail.com>
date = 2016-11-10
-topic = Computer Science/Machine Learning, Mathematics/Analysis
+topic = Computer science/Machine learning, Mathematics/Analysis
abstract =
Deep learning has had a profound impact on computer science in recent years, with applications to search engines, image recognition and language processing, bioinformatics, and more. Recently, Cohen et al. provided theoretical evidence for the superiority of deep learning over shallow learning. This formalization of their work simplifies and generalizes the original proof, while working around the limitations of the Isabelle type system. To support the formalization, I developed reusable libraries of formalized mathematics, including results about the matrix rank, the Lebesgue measure, and multivariate polynomials, as well as a library for tensor analysis.
notify = bentkamp@gmail.com
[Applicative_Lifting]
title = Applicative Lifting
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>, Joshua Schneider <>
date = 2015-12-22
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
abstract = Applicative functors augment computations with effects by lifting function application to types which model the effects. As the structure of the computation cannot depend on the effects, applicative expressions can be analysed statically. This allows us to lift universally quantified equations to the effectful types, as observed by Hinze. Thus, equational reasoning over effectful computations can be reduced to pure types.
</p><p>
This entry provides a package for registering applicative functors and two proof methods for lifting of equations over applicative functors. The first method normalises applicative expressions according to the laws of applicative functors. This way, equations whose two sides contain the same list of variables can be lifted to every applicative functor.
</p><p>
To lift larger classes of equations, the second method exploits a number of additional properties (e.g., commutativity of effects) provided the properties have been declared for the concrete applicative functor at hand upon registration.
</p><p>
We declare several types from the Isabelle library as applicative functors and illustrate the use of the methods with two examples: the lifting of the arithmetic type class hierarchy to streams and the verification of a relabelling function on binary trees. We also formalise and verify the normalisation algorithm used by the first proof method.
</p>
extra-history =
Change history:
[2016-03-03]: added formalisation of lifting with combinators<br>
[2016-06-10]:
implemented automatic derivation of lifted combinator reductions;
support arbitrary lifted relations using relators;
improved compatibility with locale interpretation
(revision ec336f354f37)<br>
notify = mail@andreas-lochbihler.de
[Stern_Brocot]
title = The Stern-Brocot Tree
author = Peter Gammie <http://peteg.org>, Andreas Lochbihler <http://www.andreas-lochbihler.de>
date = 2015-12-22
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
abstract = The Stern-Brocot tree contains all rational numbers exactly once and in their lowest terms. We formalise the Stern-Brocot tree as a coinductive tree using recursive and iterative specifications, which we have proven equivalent, and show that it indeed contains all the numbers as stated. Following Hinze, we prove that the Stern-Brocot tree can be linearised looplessly into Stern's diatonic sequence (also known as Dijkstra's fusc function) and that it is a permutation of the Bird tree.
</p><p>
The reasoning stays at an abstract level by appealing to the uniqueness of solutions of guarded recursive equations and lifting algebraic laws point-wise to trees and streams using applicative functors.
</p>
notify = mail@andreas-lochbihler.de
[Algebraic_Numbers]
title = Algebraic Numbers in Isabelle/HOL
topic = Mathematics/Algebra
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>, Akihisa Yamada <mailto:akihisa.yamada@uibk.ac.at>, Sebastiaan Joosten <mailto:sebastiaan.joosten@uibk.ac.at>
date = 2015-12-22
abstract = Based on existing libraries for matrices, factorization of rational polynomials, and Sturm's theorem, we formalized algebraic numbers in Isabelle/HOL. Our development serves as an implementation for real and complex numbers, and it allows us to compute roots and to completely factorize real and complex polynomials, provided that all coefficients are rational numbers. Moreover, we provide two implementations to display algebraic numbers: an injective but expensive one, and a faster but approximative one.
</p><p>
To this end, we mechanized several results on resultants, which also required us to prove that polynomials over a unique factorization domain form again a unique factorization domain.
</p>
extra-history =
Change history:
[2016-01-29]: Split off Polynomial Interpolation and Polynomial Factorization<br>
[2017-04-16]: Use certified Berlekamp-Zassenhaus factorization, use subresultant algorithm for computing resultants, improved bisection algorithm
notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp, sebastiaan.joosten@uibk.ac.at
[Polynomial_Interpolation]
title = Polynomial Interpolation
topic = Mathematics/Algebra
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>, Akihisa Yamada <mailto:akihisa.yamada@uibk.ac.at>
date = 2016-01-29
abstract =
We formalized three algorithms for polynomial interpolation over arbitrary
fields: Lagrange's explicit expression, the recursive algorithm of Neville
and Aitken, and the Newton interpolation in combination with an efficient
implementation of divided differences. Variants of these algorithms for
integer polynomials are also available, where sometimes the interpolation
can fail; e.g., there is no linear integer polynomial <i>p</i> such that
<i>p(0) = 0</i> and <i>p(2) = 1</i>. Moreover, for the Newton interpolation
for integer polynomials, we proved that all intermediate results that are
computed during the algorithm must be integers. This allows for early
failure detection in the implementation. Finally, we proved the uniqueness
of polynomial interpolation.
<p>
The development also contains improved code equations to speed up the
division of integers in target languages.
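<p>
For illustration, in generic notation (not the identifiers used in the formalization): given
interpolation points $(x_0,y_0),\dots,(x_n,y_n)$ with pairwise distinct $x_i$, Lagrange's
explicit expression for the unique interpolating polynomial of degree at most $n$ is
$$p(x) = \sum_{i=0}^{n} y_i \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}.$$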
notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp
[Polynomial_Factorization]
title = Polynomial Factorization
topic = Mathematics/Algebra
author = René Thiemann <mailto:rene.thiemann@uibk.ac.at>, Akihisa Yamada <mailto:akihisa.yamada@uibk.ac.at>
date = 2016-01-29
abstract =
Based on existing libraries for polynomial interpolation and matrices,
we formalized several factorization algorithms for polynomials, including
Kronecker's algorithm for integer polynomials,
Yun's square-free factorization algorithm for field polynomials, and
Berlekamp's algorithm for polynomials over finite fields.
By combining the last one with Hensel's lifting,
we derive an efficient factorization algorithm for integer polynomials,
which is then lifted to rational polynomials by mechanizing Gauss' lemma.
Finally, we assembled a combined factorization algorithm for rational polynomials,
which combines all the mentioned algorithms and additionally uses the explicit formula for roots
of quadratic polynomials and a rational root test.
<p>
As side products, we developed division algorithms for polynomials over integral domains,
as well as primality-testing and prime-factorization algorithms for integers.
notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp
[Perron_Frobenius]
title = Perron-Frobenius Theorem for Spectral Radius Analysis
author = Jose Divasón <http://www.unirioja.es/cu/jodivaso>, Ondřej Kunčar <http://www21.in.tum.de/~kuncar/>, René Thiemann <mailto:rene.thiemann@uibk.ac.at>, Akihisa Yamada <mailto:akihisa.yamada@uibk.ac.at>
notify = rene.thiemann@uibk.ac.at
date = 2016-05-20
topic = Mathematics/Algebra
abstract =
<p>The spectral radius of a matrix A is the maximum norm of all
eigenvalues of A. In previous work we already formalized that for a
complex matrix A, the values in A<sup>n</sup> grow polynomially in n
if and only if the spectral radius is at most one. One problem with
the above characterization is the determination of all
<em>complex</em> eigenvalues. In case A contains only non-negative
real values, a simplification is possible with the help of the
Perron&ndash;Frobenius theorem, which tells us that it suffices to consider only
the <em>real</em> eigenvalues of A, i.e., applying Sturm's method can
decide the polynomial growth of A<sup>n</sup>. </p><p> We formalize
the Perron&ndash;Frobenius theorem based on a proof via Brouwer's fixpoint
theorem, which is available in the HOL multivariate analysis (HMA)
library. Since the results on the spectral radius are based on matrices
in the Jordan normal form (JNF) library, we further develop a
connection which allows us to easily transfer theorems between HMA and
JNF. With this connection we derive the combined result: if A is a
non-negative real matrix, and no real eigenvalue of A is strictly
larger than one, then A<sup>n</sup> is polynomially bounded in n. </p>
extra-history =
Change history:
[2017-10-18]:
added Perron-Frobenius theorem for irreducible matrices with generalization
(revision bda1f1ce8a1c)<br/>
[2018-05-17]:
prove conjecture of CPP'18 paper: Jordan blocks of spectral radius have maximum size
(revision ffdb3794e5d5)
[Stochastic_Matrices]
title = Stochastic Matrices and the Perron-Frobenius Theorem
author = René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann>
-topic = Mathematics/Algebra, Computer Science/Automata and Formal Languages
+topic = Mathematics/Algebra, Computer science/Automata and formal languages
date = 2017-11-22
notify = rene.thiemann@uibk.ac.at
abstract =
Stochastic matrices are a convenient way to model discrete-time and
finite state Markov chains. The Perron&ndash;Frobenius theorem
tells us something about the existence and uniqueness of non-negative
eigenvectors of a stochastic matrix. In this entry, we formalize
stochastic matrices, link the formalization to the existing AFP-entry
on Markov chains, and apply the Perron&ndash;Frobenius theorem to
prove that stationary distributions always exist, and they are unique
if the stochastic matrix is irreducible.
[Formal_SSA]
title = Verified Construction of Static Single Assignment Form
author = Sebastian Ullrich <mailto:sebasti@nullri.ch>, Denis Lohner <http://pp.ipd.kit.edu/person.php?id=88>
date = 2016-02-05
-topic = Computer Science/Algorithms, Computer Science/Programming Languages/Transformations
+topic = Computer science/Algorithms, Computer science/Programming languages/Transformations
abstract =
<p>
We define a functional variant of the static single assignment (SSA)
form construction algorithm described by <a
href="https://doi.org/10.1007/978-3-642-37051-9_6">Braun et al.</a>,
which combines simplicity and efficiency. The definition is based on a
general, abstract control flow graph representation using Isabelle locales.
</p>
<p>
We prove that the algorithm's output is semantically equivalent to the
input according to a small-step semantics, and that it is in minimal SSA
form for the common special case of reducible inputs. We then show the
satisfiability of the locale assumptions by giving instantiations for a
simple While language.
</p>
<p>
Furthermore, we use a generic instantiation based on typedefs in order
to extract OCaml code and replace the unverified SSA construction
algorithm of the <a href="https://doi.org/10.1145/2579080">CompCertSSA
project</a> with it.
</p>
<p>
A more detailed description of the verified SSA construction can be found
in the paper <a href="https://doi.org/10.1145/2892208.2892211">Verified
Construction of Static Single Assignment Form</a>, CC 2016.
</p>
notify = denis.lohner@kit.edu
[Minimal_SSA]
title = Minimal Static Single Assignment Form
author = Max Wagner <mailto:max@trollbu.de>, Denis Lohner <http://pp.ipd.kit.edu/person.php?id=88>
-topic = Computer Science/Programming Languages/Transformations
+topic = Computer science/Programming languages/Transformations
date = 2017-01-17
notify = denis.lohner@kit.edu
abstract =
<p>This formalization is an extension to <a
href="https://www.isa-afp.org/entries/Formal_SSA.html">"Verified
Construction of Static Single Assignment Form"</a>. In
their work, the authors have shown that <a
href="https://doi.org/10.1007/978-3-642-37051-9_6">Braun
et al.'s static single assignment (SSA) construction
algorithm</a> produces minimal SSA form for input programs with
a reducible control flow graph (CFG). However, Braun et al. also
proposed an extension to their algorithm that they claim produces
minimal SSA form even for irreducible CFGs.<br> In this
formalization we support that claim by giving a mechanized proof.
</p>
<p>As the extension of Braun et al.'s algorithm
aims at removing so-called redundant strongly connected components of
phi functions, we show that this suffices to guarantee minimality
according to <a href="https://doi.org/10.1145/115372.115320">Cytron et
al.</a>.</p>
[PropResPI]
title = Propositional Resolution and Prime Implicates Generation
author = Nicolas Peltier <http://membres-lig.imag.fr/peltier/>
notify = Nicolas.Peltier@imag.fr
date = 2016-03-11
topic = Logic/General logic/Mechanization of proofs
abstract =
We provide formal proofs in Isabelle-HOL (using mostly structured Isar
proofs) of the soundness and completeness of the Resolution rule in
propositional logic. The completeness proofs take into account the
usual redundancy elimination rules (tautology elimination and
subsumption), and several refinements of the Resolution rule are
considered: ordered resolution (with selection functions), positive
and negative resolution, semantic resolution and unit resolution (the
latter refinement is complete only for clause sets that are
Horn-renamable). We also define a concrete procedure for computing
saturated sets and establish its soundness and completeness. The
clause sets are not assumed to be finite, so that the results can be
applied to formulas obtained by grounding sets of first-order clauses
(however, a total ordering among atoms is assumed to be given).
Next, we show that the unrestricted Resolution rule is
deductive-complete, in the sense that it is able to generate all (prime)
implicates of any set of propositional clauses (i.e., all
entailment-minimal, non-valid, clausal consequences of the considered set). The
generation of prime implicates is an important problem, with many
applications in artificial intelligence and verification (for
abductive reasoning, knowledge compilation, diagnosis, debugging
etc.). We also show that implicates can be computed in an incremental
way, by fixing an ordering among all the atoms in the considered sets
and resolving upon these atoms one by one in the considered order
(with no backtracking). This feature is critical for the efficient
computation of prime implicates. Building on these results, we provide
a procedure for computing such implicates and establish its soundness
and completeness.
[SuperCalc]
title = A Variant of the Superposition Calculus
author = Nicolas Peltier <http://membres-lig.imag.fr/peltier/>
notify = Nicolas.Peltier@imag.fr
date = 2016-09-06
topic = Logic/Proof theory
abstract =
We provide a formalization of a variant of the superposition
calculus, together with formal proofs of soundness and refutational
completeness (w.r.t. the usual redundancy criteria based on clause
ordering). This version of the calculus uses all the standard
restrictions of the superposition rules, together with the following
refinement, inspired by the basic superposition calculus: each clause
is associated with a set of terms which are assumed to be in normal
form -- thus any application of the replacement rule on these terms is
blocked. The set is initially empty and terms may be added or removed
at each inference step. The set of terms that are assumed to be in
normal form includes any term introduced by previous unifiers as well
as any term occurring in the parent clauses at a position that is
smaller (according to some given ordering on positions) than a
previously replaced term. The standard superposition calculus
corresponds to the case where the set of irreducible terms is always
empty.
[Nominal2]
title = Nominal 2
author = Christian Urban <http://www.inf.kcl.ac.uk/staff/urbanc/>, Stefan Berghofer <http://www.in.tum.de/~berghofe>, Cezary Kaliszyk <http://cl-informatik.uibk.ac.at/users/cek/>
date = 2013-02-21
topic = Tools
abstract =
<p>Dealing with binders, renaming of bound variables, capture-avoiding
substitution, etc., is very often a major problem in formal
proofs, especially in proofs by structural and rule
induction. Nominal Isabelle is designed to make such proofs easy to
formalise: it provides an infrastructure for declaring nominal
datatypes (that is alpha-equivalence classes) and for defining
functions over them by structural recursion. It also provides
induction principles that have Barendregt’s variable convention
already built in.
</p><p>
This entry can be used as a more advanced replacement for
HOL/Nominal in the Isabelle distribution.
</p>
notify = christian.urban@kcl.ac.uk
[First_Welfare_Theorem]
title = Microeconomics and the First Welfare Theorem
author = Julian Parsert <mailto:julian.parsert@gmail.com>, Cezary Kaliszyk <http://cl-informatik.uibk.ac.at/users/cek/>
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
license = LGPL
date = 2017-09-01
notify = julian.parsert@uibk.ac.at, cezary.kaliszyk@uibk.ac.at
abstract =
Economic activity has always been a fundamental part of society. Due
to modern day politics, economic theory has gained even more influence
on our lives. Thus we want models and theories to be as precise as
possible. This can be achieved using certification with the help of
formal proof technology. Hence we will use Isabelle/HOL to construct
two economic models, that of the pure exchange economy and a
version of the Arrow-Debreu Model. We will prove that the
<i>First Theorem of Welfare Economics</i> holds within
both. The theorem is the mathematical formulation of Adam Smith's
famous <i>invisible hand</i> and states that a group of
self-interested and rational actors will eventually achieve an
efficient allocation of goods and services.
extra-history =
Change history:
[2018-06-17]: Added some lemmas and a theory file, also introduced Microeconomics folder.
<br>
[Noninterference_Sequential_Composition]
title = Conservation of CSP Noninterference Security under Sequential Composition
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
date = 2016-04-26
-topic = Computer Science/Security, Computer Science/Concurrency/Process Calculi
+topic = Computer science/Security, Computer science/Concurrency/Process calculi
abstract =
<p>In his outstanding work on Communicating Sequential Processes, Hoare
has defined two fundamental binary operations allowing one to compose the
input processes into another, typically more complex, process:
sequential composition and concurrent composition. Particularly, the
output of the former operation is a process that initially behaves
like the first operand, and then like the second operand once the
execution of the first one has terminated successfully, as long as it
does.</p>
<p>This paper formalizes Hoare's definition of sequential
composition and proves, in the general case of a possibly intransitive
policy, that CSP noninterference security is conserved under this
operation, provided that successful termination cannot be affected by
confidential events and cannot occur as an alternative to other events
in the traces of the first operand. Both of these assumptions are
shown, by means of counterexamples, to be necessary for the theorem to
hold.</p>
notify = pasquale.noce.lavoro@gmail.com
[Noninterference_Concurrent_Composition]
title = Conservation of CSP Noninterference Security under Concurrent Composition
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
notify = pasquale.noce.lavoro@gmail.com
date = 2016-06-13
-topic = Computer Science/Security, Computer Science/Concurrency/Process Calculi
+topic = Computer science/Security, Computer science/Concurrency/Process calculi
abstract =
<p>In his outstanding work on Communicating Sequential Processes,
Hoare has defined two fundamental binary operations allowing one to
compose the input processes into another, typically more complex,
process: sequential composition and concurrent composition.
Particularly, the output of the latter operation is a process in which
any event not shared by both operands can occur whenever the operand
that admits the event can engage in it, whereas any event shared by
both operands can occur just in case both can engage in it.</p>
<p>This paper formalizes Hoare's definition of concurrent composition
and proves, in the general case of a possibly intransitive policy,
that CSP noninterference security is conserved under this operation.
This result, along with the previous analogous one concerning
sequential composition, enables the construction of more and more
complex processes enforcing noninterference security by composing,
sequentially or concurrently, simpler secure processes, whose security
can in turn be proven using either the definition of security, or
unwinding theorems.</p>
[ROBDD]
title = Algorithms for Reduced Ordered Binary Decision Diagrams
author = Julius Michaelis <http://liftm.de>, Maximilian Haslbeck <http://cl-informatik.uibk.ac.at/users/mhaslbeck//>, Peter Lammich <http://www21.in.tum.de/~lammich>, Lars Hupel <https://www21.in.tum.de/~hupel/>
date = 2016-04-27
-topic = Computer Science/Algorithms, Computer Science/Data Structures
+topic = Computer science/Algorithms, Computer science/Data structures
abstract =
We present a verified and executable implementation of ROBDDs in
Isabelle/HOL. Our implementation relates pointer-based computation in
the Heap monad to operations on an abstract definition of boolean
functions. Internally, we implemented the if-then-else combinator in a
recursive fashion, following the Shannon decomposition of the argument
functions. The implementation mixes and adapts known techniques and is
built with efficiency in mind.
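As a reminder, in generic notation (not the identifiers used in the entry): the Shannon
decomposition of a boolean function $f$ with respect to a variable $x$ is
$$f = (x \wedge f|_{x=1}) \vee (\neg x \wedge f|_{x=0}),$$
and this is the recursion scheme that the recursive if-then-else construction follows.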
notify = bdd@liftm.de, haslbecm@in.tum.de
[No_FTL_observers]
title = No Faster-Than-Light Observers
author = Mike Stannett <mailto:m.stannett@sheffield.ac.uk>, István Németi <http://www.renyi.hu/~nemeti/>
date = 2016-04-28
topic = Mathematics/Physics
abstract =
We provide a formal proof within First Order Relativity Theory that no
observer can travel faster than the speed of light. Originally
reported in Stannett & Németi (2014) "Using Isabelle/HOL to verify
first-order relativity theory", Journal of Automated Reasoning 52(4),
pp. 361-378.
notify = m.stannett@sheffield.ac.uk
[Groebner_Bases]
title = Gröbner Bases Theory
author = Fabian Immler <http://www21.in.tum.de/~immler>, Alexander Maletzky <https://risc.jku.at/m/alexander-maletzky/>
date = 2016-05-02
-topic = Mathematics/Algebra, Computer Science/Algorithms/Mathematical
+topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical
abstract =
This formalization is concerned with the theory of Gröbner bases in
(commutative) multivariate polynomial rings over fields, originally
developed by Buchberger in his 1965 PhD thesis. Apart from the
statement and proof of the main theorem of the theory, the
formalization also implements Buchberger's algorithm for actually
computing Gröbner bases as a tail-recursive function, thus making it possible to
effectively decide ideal membership in finitely generated polynomial
ideals. Furthermore, all functions can be executed on a concrete
representation of multivariate polynomials as association lists.
extra-history =
Change history:
[2019-04-18]: Specialized Gröbner bases to less abstract representation of polynomials, where
power-products are represented as polynomial mappings.<br>
notify = alexander.maletzky@risc.jku.at
[Nullstellensatz]
title = Hilbert's Nullstellensatz
author = Alexander Maletzky <https://risc.jku.at/m/alexander-maletzky/>
topic = Mathematics/Algebra, Mathematics/Geometry
date = 2019-06-16
notify = alexander.maletzky@risc-software.at
abstract =
This entry formalizes Hilbert's Nullstellensatz, an important
theorem in algebraic geometry that can be viewed as the generalization
of the Fundamental Theorem of Algebra to multivariate polynomials: If
a set of (multivariate) polynomials over an algebraically closed field
has no common zero, then the ideal it generates is the entire
polynomial ring. The formalization proves several equivalent versions
of this celebrated theorem: the weak Nullstellensatz, the strong
Nullstellensatz (connecting algebraic varieties and radical ideals),
and the field-theoretic Nullstellensatz. The formalization follows
Chapter 4.1. of <a
href="https://link.springer.com/book/10.1007/978-0-387-35651-8">Ideals,
Varieties, and Algorithms</a> by Cox, Little and O'Shea.
[Bell_Numbers_Spivey]
title = Spivey's Generalized Recurrence for Bell Numbers
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
date = 2016-05-04
topic = Mathematics/Combinatorics
abstract =
This entry defines the Bell numbers as the cardinality of set partitions for
a carrier set of given size, and derives Spivey's generalized recurrence
relation for Bell numbers following his elegant and intuitive combinatorial
proof.
<p>
As the set construction for the combinatorial proof requires construction of
three intermediate structures, the main difficulty of the formalization is
handling the overall combinatorial argument in a structured way.
The introduced proof structure allows us to compose the combinatorial argument
from its subparts, and helps keep track of how the detailed proof steps are
related to the overall argument. To obtain this structure, this entry uses set
monad notation for the set construction's definition, introduces suitable
predicates and rules, and follows a repeating structure in its Isar proof.
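<p>
For reference, in standard notation with $S(n,k)$ denoting the Stirling numbers of the second
kind, Spivey's recurrence reads
$$B_{n+m} = \sum_{k=0}^{n} \sum_{j=0}^{m} \binom{m}{j}\, k^{m-j}\, S(n,k)\, B_j,$$
reflecting the combinatorial argument: partition the first $n$ elements into $k$ blocks, let
$j$ of the remaining $m$ elements form new blocks among themselves, and distribute the other
$m-j$ elements over the $k$ existing blocks.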
notify = lukas.bulwahn@gmail.com
[Randomised_Social_Choice]
title = Randomised Social Choice Theory
author = Manuel Eberl <mailto:eberlm@in.tum.de>
date = 2016-05-05
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
abstract =
This work contains a formalisation of basic Randomised Social Choice,
including Stochastic Dominance and Social Decision Schemes (SDSs)
along with some of their most important properties (Anonymity,
Neutrality, ex-post- and SD-Efficiency, SD-Strategy-Proofness) and two
particular SDSs – Random Dictatorship and Random Serial Dictatorship
(with proofs of the properties that they satisfy). Many important
properties of these concepts are also proven – such as the two
equivalent characterisations of Stochastic Dominance and the fact that
SD-efficiency of a lottery only depends on the support. The entry
also provides convenient commands to define Preference Profiles, prove
their well-formedness, and automatically derive restrictions that
sufficiently nice SDSs need to satisfy on the defined profiles.
Currently, the formalisation focuses on weak preferences and
Stochastic Dominance, but it should be easy to extend it to other
domains – such as strict preferences – or other lottery extensions –
such as Bilinear Dominance or Pairwise Comparison.
notify = eberlm@in.tum.de
[SDS_Impossibility]
title = The Incompatibility of SD-Efficiency and SD-Strategy-Proofness
author = Manuel Eberl <mailto:eberlm@in.tum.de>
date = 2016-05-04
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
abstract =
This formalisation contains the proof that there is no anonymous and
neutral Social Decision Scheme for at least four voters and
alternatives that fulfils both SD-Efficiency and
SD-Strategy-Proofness. The proof is a fully structured and quasi-human-readable
one. It was derived from the (unstructured) SMT proof of the case for
exactly four voters and alternatives by Brandl et al. Their proof
relies on an unverified translation of the original problem to SMT,
and the proof that lifts the argument for exactly four voters and
alternatives to the general case is also not machine-checked. In this
Isabelle proof, on the other hand, all of these steps are fully
proven and machine-checked. This is particularly important seeing as a
previously published informal proof of a weaker statement contained a
mistake in precisely this lifting step.
notify = eberlm@in.tum.de
[Median_Of_Medians_Selection]
title = The Median-of-Medians Selection Algorithm
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2017-12-21
notify = eberlm@in.tum.de
abstract =
<p>This entry provides an executable functional implementation
of the Median-of-Medians algorithm for selecting the
<em>k</em>-th smallest element of an unsorted list
deterministically in linear time. The size bounds for the recursive
call that lead to the linear upper bound on the run-time of the
algorithm are also proven. </p>
[Mason_Stothers]
title = The Mason–Stothers Theorem
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
topic = Mathematics/Algebra
date = 2017-12-21
notify = eberlm@in.tum.de
abstract =
<p>This article provides a formalisation of Snyder’s simple and
elegant proof of the Mason&ndash;Stothers theorem, which is the
polynomial analogue of the famous abc Conjecture for integers.
Remarkably, Snyder found this very elegant proof when he was still a
high-school student.</p> <p>In short, the statement of the
theorem is that three non-zero coprime polynomials
<em>A</em>, <em>B</em>, <em>C</em>
over a field which sum to 0 and do not all have vanishing derivatives
fulfil max{deg(<em>A</em>), deg(<em>B</em>),
deg(<em>C</em>)} < deg(rad(<em>ABC</em>))
where rad(<em>P</em>) denotes the
<em>radical</em> of <em>P</em>,
i.&thinsp;e. the product of all unique irreducible factors of
<em>P</em>.</p> <p>This theorem also implies a
kind of polynomial analogue of Fermat’s Last Theorem for polynomials:
except for trivial cases,
<em>A<sup>n</sup></em> +
<em>B<sup>n</sup></em> +
<em>C<sup>n</sup></em> = 0 implies
n&nbsp;&le;&nbsp;2 for coprime polynomials
<em>A</em>, <em>B</em>, <em>C</em>
over a field.</p>
[FLP]
title = A Constructive Proof for FLP
author = Benjamin Bisping <mailto:benjamin.bisping@campus.tu-berlin.de>, Paul-David Brodmann <mailto:p.brodmann@tu-berlin.de>, Tim Jungnickel <mailto:tim.jungnickel@tu-berlin.de>, Christina Rickmann <mailto:c.rickmann@tu-berlin.de>, Henning Seidler <mailto:henning.seidler@mailbox.tu-berlin.de>, Anke Stüber <mailto:anke.stueber@campus.tu-berlin.de>, Arno Wilhelm-Weidner <mailto:arno.wilhelm-weidner@tu-berlin.de>, Kirstin Peters <mailto:kirstin.peters@tu-berlin.de>, Uwe Nestmann <https://www.mtv.tu-berlin.de/nestmann/>
date = 2016-05-18
-topic = Computer Science/Concurrency
+topic = Computer science/Concurrency
abstract =
The impossibility of distributed consensus with one faulty process is
a result with important consequences for real world distributed
systems, e.g., commits in replicated databases. Since proofs are not
immune to faults and even plausible proofs with a profound formalism
can conclude wrong results, we validate the fundamental result named
FLP after Fischer, Lynch and Paterson.
We present a formalization of distributed systems
and the aforementioned consensus problem. Our proof is based on Hagen
Völzer's paper "A constructive proof for FLP". In addition to the
enhanced confidence in the validity of Völzer's proof, we fill in
the missing gaps to show the correctness in Isabelle/HOL. We clarify
the proof details and even prove fairness of the infinite execution
that contradicts consensus. Our Isabelle formalization can also be
reused for further proofs of properties of distributed systems.
notify = henning.seidler@mailbox.tu-berlin.de
[IMAP-CRDT]
title = The IMAP CmRDT
author = Tim Jungnickel <mailto:tim.jungnickel@tu-berlin.de>, Lennart Oldenburg <>, Matthias Loibl <>
-topic = Computer Science/Algorithms/Distributed, Computer Science/Data Structures
+topic = Computer science/Algorithms/Distributed, Computer science/Data structures
date = 2017-11-09
notify = tim.jungnickel@tu-berlin.de
abstract =
We provide our Isabelle/HOL formalization of a Conflict-free
Replicated Datatype for Internet Message Access Protocol commands.
We show that Strong Eventual Consistency (SEC) is guaranteed
by proving the commutativity of concurrent operations. We base our
formalization on the recently proposed "framework for
establishing Strong Eventual Consistency for Conflict-free Replicated
Datatypes" (AFP.CRDT) from Gomes et al. Hence, we provide an
additional example of how the recently proposed framework can be used
to design and prove CRDTs.
[Incredible_Proof_Machine]
title = The meta theory of the Incredible Proof Machine
author = Joachim Breitner <http://pp.ipd.kit.edu/~breitner>, Denis Lohner <http://pp.ipd.kit.edu/person.php?id=88>
date = 2016-05-20
topic = Logic/Proof theory
abstract =
The <a href="http://incredible.pm">Incredible Proof Machine</a> is an
interactive visual theorem prover which represents proofs as port
graphs. We model this proof representation in Isabelle, and prove that
it is just as powerful as natural deduction.
notify = mail@joachim-breitner.de
[Word_Lib]
title = Finite Machine Word Library
author = Joel Beeren<>, Matthew Fernandez<>, Xin Gao<>, Gerwin Klein <http://www.cse.unsw.edu.au/~kleing/>, Rafal Kolanski<>, Japheth Lim<>, Corey Lewis<>, Daniel Matichuk<>, Thomas Sewell<>
notify = kleing@unsw.edu.au
date = 2016-06-09
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
abstract =
This entry contains an extension to the Isabelle library for
fixed-width machine words. In particular, the entry adds quickcheck setup
for words, printing as hexadecimals, additional operations, reasoning
about alignment, signed words, enumerations of words, normalisation of
word numerals, and an extensive library of properties about generic
fixed-width words, as well as an instantiation of many of these to the
commonly used 32 and 64-bit bases.
[Catalan_Numbers]
title = Catalan Numbers
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
notify = eberlm@in.tum.de
date = 2016-06-21
topic = Mathematics/Combinatorics
abstract =
<p>In this work, we define the Catalan numbers <em>C<sub>n</sub></em>
and prove several equivalent definitions (including some closed-form
formulae). We also show one of their applications (counting the number
of binary trees of size <em>n</em>), prove the asymptotic growth
approximation <em>C<sub>n</sub> &sim; 4<sup>n</sup> / (&radic;<span
style="text-decoration: overline">&pi;</span> &middot;
n<sup>1.5</sup>)</em>, and provide reasonably efficient executable
code to compute them.</p> <p>The derivation of the closed-form
formulae uses algebraic manipulations of the ordinary generating
function of the Catalan numbers, and the asymptotic approximation is
then done using generalised binomial coefficients and the Gamma
function. Thanks to these highly non-elementary mathematical tools,
the proofs are very short and simple.</p>
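<p>For reference, one such closed form, in standard notation, is
$C_n = \frac{1}{n+1}\binom{2n}{n}$, giving $C_0,\dots,C_4 = 1, 1, 2, 5, 14$.</p>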
[Fisher_Yates]
title = Fisher–Yates shuffle
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
notify = eberlm@in.tum.de
date = 2016-09-30
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
abstract =
<p>This work defines and proves the correctness of the Fisher–Yates
algorithm for shuffling – i.e. producing a random permutation – of a
list. The algorithm proceeds by traversing the list and in
each step swapping the current element with a random element from the
remaining list.</p>
[Bertrands_Postulate]
title = Bertrand's postulate
author = Julian Biendarra<>, Manuel Eberl <https://www21.in.tum.de/~eberlm>
contributors = Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2017-01-17
notify = eberlm@in.tum.de
abstract =
<p>Bertrand's postulate is an early result on the
distribution of prime numbers: For every positive integer n, there
exists a prime number that lies strictly between n and 2n.
The proof is ported from John Harrison's formalisation
in HOL Light. It proceeds by first showing that the property is true
for all n greater than or equal to 600 and then showing that it also
holds for all n below 600 by case distinction. </p>
[Rewriting_Z]
title = The Z Property
author = Bertram Felgenhauer<>, Julian Nagele<>, Vincent van Oostrom<>, Christian Sternagel <mailto:c.sternagel@gmail.com>
notify = bertram.felgenhauer@uibk.ac.at, julian.nagele@uibk.ac.at, c.sternagel@gmail.com
date = 2016-06-30
topic = Logic/Rewriting
abstract =
We formalize the Z property introduced by Dehornoy and van Oostrom.
First we show that for any abstract rewrite system, Z implies
confluence. Then we give two examples of proofs using Z: confluence of
lambda-calculus with respect to beta-reduction and confluence of
combinatory logic.
[Resolution_FOL]
title = The Resolution Calculus for First-Order Logic
author = Anders Schlichtkrull <https://people.compute.dtu.dk/andschl/>
notify = andschl@dtu.dk
date = 2016-06-30
topic = Logic/General logic/Mechanization of proofs
abstract =
This theory is a formalization of the resolution calculus for
first-order logic. It is proven sound and complete. The soundness
proof uses the substitution lemma, which shows a correspondence
between substitutions and updates to an environment. The completeness
proof uses semantic trees, i.e. trees whose paths are partial Herbrand
interpretations. It employs Herbrand's theorem in a formulation which
states that an unsatisfiable set of clauses has a finite closed
semantic tree. It also uses the lifting lemma which lifts resolution
derivation steps from the ground world up to the first-order world.
The theory is presented in a paper in the Journal of Automated Reasoning
[Sch18] which extends a paper presented at the International Conference
on Interactive Theorem Proving [Sch16]. An earlier version was
presented in an MSc thesis [Sch15]. The formalization mostly follows
textbooks by Ben-Ari [BA12], Chang and Lee [CL73], and Leitsch [Lei97].
The theory is part of the IsaFoL project [IsaFoL]. <p>
<a name="Sch18"></a>[Sch18] Anders Schlichtkrull. "Formalization of the
Resolution Calculus for First-Order Logic". Journal of Automated
Reasoning, 2018.<br> <a name="Sch16"></a>[Sch16] Anders
Schlichtkrull. "Formalization of the Resolution Calculus for First-Order
Logic". In: ITP 2016. Vol. 9807. LNCS. Springer, 2016.<br>
<a name="Sch15"></a>[Sch15] Anders Schlichtkrull. <a href="https://people.compute.dtu.dk/andschl/Thesis.pdf">
"Formalization of Resolution Calculus in Isabelle"</a>.
<a href="https://people.compute.dtu.dk/andschl/Thesis.pdf">https://people.compute.dtu.dk/andschl/Thesis.pdf</a>.
MSc thesis. Technical University of Denmark, 2015.<br>
<a name="BA12"></a>[BA12] Mordechai Ben-Ari. <i>Mathematical Logic for
Computer Science</i>. 3rd. Springer, 2012.<br> <a
name="CL73"></a>[CL73] Chin-Liang Chang and Richard Char-Tung Lee.
<i>Symbolic Logic and Mechanical Theorem Proving</i>. 1st. Academic
Press, Inc., 1973.<br> <a name="Lei97"></a>[Lei97] Alexander
Leitsch. <i>The Resolution Calculus</i>. Texts in theoretical computer
science. Springer, 1997.<br> <a name="IsaFoL"></a>[IsaFoL]
IsaFoL authors. <a href="https://bitbucket.org/jasmin_blanchette/isafol">
IsaFoL: Isabelle Formalization of Logic</a>.
<a href="https://bitbucket.org/jasmin_blanchette/isafol">https://bitbucket.org/jasmin_blanchette/isafol</a>.
extra-history =
Change history:
[2018-01-24]: added several new versions of the soundness and completeness theorems as described in the paper [Sch18]. <br>
[2018-03-20]: added a concrete instance of the unification and completeness theorems using the First-Order Terms AFP-entry from IsaFoR as described in the papers [Sch16] and [Sch18].
[Surprise_Paradox]
title = Surprise Paradox
author = Joachim Breitner <http://pp.ipd.kit.edu/~breitner>
notify = mail@joachim-breitner.de
date = 2016-07-17
topic = Logic/Proof theory
abstract =
In 1964, Fitch showed that the paradox of the surprise hanging can be
resolved by showing that the judge’s verdict is inconsistent. His
formalization builds on Gödel’s coding of provability. In this
theory, we reproduce his proof in Isabelle, building on Paulson’s
formalisation of Gödel’s incompleteness theorems.
[Ptolemys_Theorem]
title = Ptolemy's Theorem
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
notify = lukas.bulwahn@gmail.com
date = 2016-08-07
topic = Mathematics/Geometry
abstract =
This entry provides an analytic proof to Ptolemy's Theorem using
polar form transformation and trigonometric identities.
In this formalization, we use ideas from John Harrison's HOL Light
formalization and the proof sketch on the Wikipedia entry of Ptolemy's Theorem.
This theorem is the 95th theorem of the Top 100 Theorems list.
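For reference, the statement in generic notation: for a quadrilateral $ABCD$ inscribed in a
circle, $|AC| \cdot |BD| = |AB| \cdot |CD| + |BC| \cdot |AD|$, i.e. the product of the
diagonals equals the sum of the products of the two pairs of opposite sides.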
[Falling_Factorial_Sum]
title = The Falling Factorial of a Sum
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
topic = Mathematics/Combinatorics
date = 2017-12-22
notify = lukas.bulwahn@gmail.com
abstract =
This entry shows that the falling factorial of a sum can be computed
with an expression using binomial coefficients and the falling
factorial of its summands. The entry provides three different proofs:
a combinatorial proof, an induction proof and an algebraic proof using
the Vandermonde identity. The three formalizations try to follow
their informal presentations from a Mathematics Stack Exchange page as
closely as possible. The induction and algebraic formalizations end up
being very close to their informal presentations, whereas the
combinatorial proof first requires the introduction of list
interleavings, and significantly more detail than its informal
presentation.
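In standard falling-factorial notation, $x^{\underline{n}} = x(x-1)\cdots(x-n+1)$, the
identity in question is
$$(x+y)^{\underline{n}} = \sum_{k=0}^{n} \binom{n}{k}\, x^{\underline{k}}\, y^{\underline{n-k}}.$$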
[InfPathElimination]
title = Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths
author = Romain Aissat<>, Frederic Voisin<>, Burkhart Wolff <mailto:wolff@lri.fr>
notify = wolff@lri.fr
date = 2016-08-18
-topic = Computer Science/Programming Languages/Static Analysis
+topic = Computer science/Programming languages/Static analysis
abstract =
TRACER is a tool for verifying safety properties of sequential C
programs. TRACER attempts to build a finite symbolic execution
graph which over-approximates the set of all concrete reachable states
and the set of feasible paths. We present an abstract framework for
TRACER and similar CEGAR-like systems. The framework provides 1) a
graph-transformation-based method for reducing the feasible paths in
control-flow graphs, 2) a model for symbolic execution, subsumption,
predicate abstraction and invariant generation. In this framework we
formally prove two key properties: correct construction of the
symbolic states and preservation of feasible paths. The framework
focuses on core operations, leaving it to concrete prototypes to “fit in”
heuristics for combining them. The accompanying paper (published in
ITP 2016) can be found at
https://www.lri.fr/~wolff/papers/conf/2016-itp-InfPathsNSE.pdf.
[Stirling_Formula]
title = Stirling's formula
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
notify = eberlm@in.tum.de
date = 2016-09-01
topic = Mathematics/Analysis
abstract =
- This work contains a proof of Stirling's formula both for the
- factorial n! &sim; &radic;<span style="text-decoration:
- overline">2&pi;n</span> (n/e)<sup>n</sup> on natural numbers and the
- real Gamma function &Gamma;(x) &sim; &radic;<span
- style="text-decoration: overline">2&pi;/x</span> (x/e)<sup>x</sup>.
- The proof is based on work by <a
- href="http://www.maths.lancs.ac.uk/~jameson/stirlgamma.pdf">Graham
- Jameson</a>.
+ <p>This work contains a proof of Stirling's formula both for the factorial $n! \sim \sqrt{2\pi n} (n/e)^n$ on natural numbers and the real
+ Gamma function $\Gamma(x)\sim \sqrt{2\pi/x} (x/e)^x$. The proof is based on work by <a
+ href="http://www.maths.lancs.ac.uk/~jameson/stirlgamma.pdf">Graham Jameson</a>.</p>
+ <p>This is then extended to the full asymptotic expansion
+ $$\log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi) + \sum_{k=1}^{n-1} \frac{B_{k+1}}{k(k+1)} z^{-k}\\
+ {} - \frac{1}{n} \int_0^\infty B_n([t])(t + z)^{-n}\,\text{d}t$$
+ uniformly for all complex $z\neq 0$ in the cone $\text{arg}(z)\leq \alpha$ for any $\alpha\in(0,\pi)$, with which the above asymptotic
+ relation for &Gamma; is also extended to complex arguments.</p>
[Lp]
title = Lp spaces
author = Sebastien Gouezel <http://www.math.sciences.univ-nantes.fr/~gouezel/>
notify = sebastien.gouezel@univ-rennes1.fr
date = 2016-10-05
topic = Mathematics/Analysis
abstract =
Lp is the space of functions whose p-th power is integrable. It is one of the most fundamental Banach spaces that is used in analysis and probability. We develop a framework for function spaces, and then implement the Lp spaces in this framework using the existing integration theory in Isabelle/HOL. Our development contains most fundamental properties of Lp spaces, notably the Hölder and Minkowski inequalities, completeness of Lp, duality, stability under almost sure convergence, multiplication of functions in Lp and Lq, stability under conditional expectation.
[Berlekamp_Zassenhaus]
title = The Factorization Algorithm of Berlekamp and Zassenhaus
author = Jose Divasón <http://www.unirioja.es/cu/jodivaso>, Sebastiaan Joosten <mailto:sebastiaan.joosten@uibk.ac.at>, René Thiemann <mailto:rene.thiemann@uibk.ac.at>, Akihisa Yamada <mailto:akihisa.yamada@uibk.ac.at>
notify = rene.thiemann@uibk.ac.at
date = 2016-10-14
topic = Mathematics/Algebra
abstract =
<p>We formalize the Berlekamp-Zassenhaus algorithm for factoring
square-free integer polynomials in Isabelle/HOL. We further adapt an
existing formalization of Yun’s square-free factorization algorithm to
integer polynomials, and thus provide an efficient and certified
factorization algorithm for arbitrary univariate polynomials.
</p>
<p>The algorithm first performs a factorization in the prime field GF(p) and
then performs computations in the integer ring modulo p^k, where both
p and k are determined at runtime. Since a natural modeling of these
structures via dependent types is not possible in Isabelle/HOL, we
formalize the whole algorithm using Isabelle’s recent addition of
local type definitions.
</p>
<p>Through experiments we verify that our algorithm factors polynomials of degree
100 within seconds.
</p>
[Allen_Calculus]
title = Allen's Interval Calculus
author = Fadoua Ghourabi <>
notify = fadouaghourabi@gmail.com
date = 2016-09-29
topic = Logic/General logic/Temporal logic, Mathematics/Order
abstract =
Allen’s interval calculus is a qualitative temporal representation of
time events. Allen introduced 13 binary relations that describe all
the possible arrangements between two events, i.e. intervals with
non-zero finite length. The compositions are pertinent to
reasoning about knowledge of time. In particular, a consistency
problem of relation constraints is commonly solved with a guideline
from these compositions. We formalize the relations together with an
axiomatic system. We prove the validity of the 169 compositions of
these relations. We also define nests as the sets of intervals that
share a meeting point. We prove that nests give the ordering
properties of points without introducing a new datatype for points.
[1] J.F. Allen. Maintaining Knowledge about Temporal Intervals. In
Commun. ACM, volume 26, pages 832–843, 1983. [2] J. F. Allen and P. J.
Hayes. A Common-sense Theory of Time. In Proceedings of the 9th
International Joint Conference on Artificial Intelligence (IJCAI’85),
pages 528–531, 1985.
[Source_Coding_Theorem]
title = Source Coding Theorem
author = Quentin Hibon <mailto:qh225@cl.cam.ac.uk>, Lawrence C. Paulson <mailto:lp15@cam.ac.uk>
notify = qh225@cl.cam.ac.uk
date = 2016-10-19
-topic = Mathematics/Probability Theory
+topic = Mathematics/Probability theory
abstract =
This document contains a proof of the necessary condition on the code
rate of a source code, namely that this code rate is bounded by the
entropy of the source. This represents one half of Shannon's source
coding theorem, which is itself an equivalence.
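In symbols, using generic notation: for a binary uniquely decodable code, the expected
codeword length $L$ and the source entropy $H$ (in bits) satisfy $L \geq H$.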
[Buffons_Needle]
title = Buffon's Needle Problem
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Probability Theory, Mathematics/Geometry
+topic = Mathematics/Probability theory, Mathematics/Geometry
date = 2017-06-06
notify = eberlm@in.tum.de
abstract =
In the 18th century, Georges-Louis Leclerc, Comte de Buffon posed and
later solved the following problem, which is often called the first
problem ever solved in geometric probability: Given a floor divided
into vertical strips of the same width, what is the probability that a
needle thrown onto the floor randomly will cross two strips? This
entry formally defines the problem in the case where the needle's
position is chosen uniformly at random in a single strip around the
origin (which is equivalent to larger arrangements due to symmetry).
It then provides proofs of the simple solution in the case where the
needle's length is no greater than the width of the strips and
the more complicated solution in the opposite case.
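In the simple case, the answer in generic notation: for a needle of length $l$ and strips of
width $d$ with $l \leq d$, the probability of crossing a strip boundary is $2l/(\pi d)$.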
[SPARCv8]
title = A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor
author = Zhe Hou <mailto:zhe.hou@ntu.edu.sg>, David Sanan <mailto:sanan@ntu.edu.sg>, Alwen Tiu <mailto:ATiu@ntu.edu.sg>, Yang Liu <mailto:yangliu@ntu.edu.sg>
notify = zhe.hou@ntu.edu.sg, sanan@ntu.edu.sg
date = 2016-10-19
-topic = Computer Science/Security, Computer Science/Hardware
+topic = Computer science/Security, Computer science/Hardware
abstract =
We formalise the SPARCv8 instruction set architecture (ISA) which is
used in processors such as LEON3. Our formalisation can be specialised
to any SPARCv8 CPU; here we use LEON3 as a running example. Our model
covers the operational semantics for all the instructions in the
integer unit of the SPARCv8 architecture and it supports Isabelle code
export, which effectively turns the Isabelle model into a SPARCv8 CPU
simulator. We prove the language-based non-interference property for
the LEON3 processor. Our model is based on a deterministic monad, which
is a modified version of the non-deterministic monad from NICTA/l4v.
[Separata]
title = Separata: Isabelle tactics for Separation Algebra
author = Zhe Hou <mailto:zhe.hou@ntu.edu.sg>, David Sanan <mailto:sanan@ntu.edu.sg>, Alwen Tiu <mailto:ATiu@ntu.edu.sg>, Rajeev Gore <mailto:rajeev.gore@anu.edu.au>, Ranald Clouston <mailto:ranald.clouston@cs.au.dk>
notify = zhe.hou@ntu.edu.sg
date = 2016-11-16
-topic = Computer Science/Programming Languages/Logics, Tools
+topic = Computer science/Programming languages/Logics, Tools
abstract =
We bring the labelled sequent calculus $LS_{PASL}$ for propositional
abstract separation logic to Isabelle. The tactics given here are
directly applied on an extension of the Separation Algebra in the AFP.
In addition to the cancellative separation algebra, we further
consider some useful properties in the heap model of separation logic,
such as indivisible unit, disjointness, and cross-split. The tactics
are essentially a proof search procedure for the calculus $LS_{PASL}$.
We wrap the tactics in an Isabelle method called separata, and give a
few examples of separation logic formulae which are provable by
separata.
[LOFT]
title = LOFT — Verified Migration of Linux Firewalls to SDN
author = Julius Michaelis <http://liftm.de>, Cornelius Diekmann <http://net.in.tum.de/~diekmann>
notify = isabelleopenflow@liftm.de
date = 2016-10-21
-topic = Computer Science/Networks
+topic = Computer science/Networks
abstract =
We present LOFT — Linux firewall OpenFlow Translator, a system that
transforms the main routing table and FORWARD chain of iptables of a
Linux-based firewall into a set of static OpenFlow rules. Our
implementation is verified against a model of a simplified Linux-based
router and we can directly show how much of the original functionality
is preserved.
[Stable_Matching]
title = Stable Matching
author = Peter Gammie <http://peteg.org>
notify = peteg42@gmail.com
date = 2016-10-24
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
abstract =
We mechanize proofs of several results from the matching with
contracts literature, which generalize those of the classical
two-sided matching scenarios that go by the name of stable marriage.
Our focus is on game theoretic issues. Along the way we develop
executable algorithms for computing optimal stable matches.
[Modal_Logics_for_NTS]
title = Modal Logics for Nominal Transition Systems
author = Tjark Weber <mailto:tjark.weber@it.uu.se>, Lars-Henrik Eriksson <mailto:lhe@it.uu.se>, Joachim Parrow <mailto:joachim.parrow@it.uu.se>, Johannes Borgström <mailto:johannes.borgstrom@it.uu.se>, Ramunas Gutkovas <mailto:ramunas.gutkovas@it.uu.se>
notify = tjark.weber@it.uu.se
date = 2016-10-25
-topic = Computer Science/Concurrency/Process Calculi, Logic/General logic/Modal logic
+topic = Computer science/Concurrency/Process calculi, Logic/General logic/Modal logic
abstract =
We formalize a uniform semantic substrate for a wide variety of
process calculi where states and action labels can be from arbitrary
nominal sets. A Hennessy-Milner logic for these systems is defined,
and proved adequate for bisimulation equivalence. A main novelty is
the construction of an infinitary nominal data type to model formulas
with (finitely supported) infinite conjunctions and actions that may
contain binding names. The logic is generalized to treat different
bisimulation variants such as early, late and open in a systematic
way.
extra-history =
Change history:
[2017-01-29]:
Formalization of weak bisimilarity added
(revision c87cc2057d9c)
[Abs_Int_ITP2012]
title = Abstract Interpretation of Annotated Commands
author = Tobias Nipkow <http://www21.in.tum.de/~nipkow>
notify = nipkow@in.tum.de
date = 2016-11-23
-topic = Computer Science/Programming Languages/Static Analysis
+topic = Computer science/Programming languages/Static analysis
abstract =
This is the Isabelle formalization of the material described in the
eponymous <a href="https://doi.org/10.1007/978-3-642-32347-8_9">ITP 2012 paper</a>.
It develops a generic abstract interpreter for a
while-language, including widening and narrowing. The collecting
semantics and the abstract interpreter operate on annotated commands:
the program is represented as a syntax tree with the semantic
information directly embedded, without auxiliary labels. The aim of
the formalization is simplicity, not efficiency or
precision. This is motivated by the inclusion of the material in a
theorem prover based course on semantics. A similar (but more
polished) development is covered in the book
<a href="https://doi.org/10.1007/978-3-319-10542-0">Concrete Semantics</a>.
[Complx]
title = COMPLX: A Verification Framework for Concurrent Imperative Programs
author = Sidney Amani<>, June Andronick<>, Maksym Bortin<>, Corey Lewis<>, Christine Rizkallah<>, Joseph Tuong<>
notify = sidney.amani@data61.csiro.au, corey.lewis@data61.csiro.au
date = 2016-11-29
-topic = Computer Science/Programming Languages/Logics, Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Logics, Computer science/Programming languages/Language definitions
abstract =
We propose a concurrency reasoning framework for imperative programs,
based on the Owicki-Gries (OG) foundational shared-variable
concurrency method. Our framework combines the approaches of
Hoare-Parallel, a formalisation of OG in Isabelle/HOL for a simple
while-language, and Simpl, a generic imperative language embedded in
Isabelle/HOL, allowing formal reasoning on C programs. We define the
Complx language, extending the syntax and semantics of Simpl with
support for parallel composition and synchronisation. We additionally
define an OG logic, which we prove sound w.r.t. the semantics, and a
verification condition generator, both supporting involved low-level
imperative constructs such as function calls and abrupt termination.
We illustrate our framework on an example that features exceptions,
guards and function calls. We aim to then target concurrent operating
systems, such as the interruptible eChronos embedded operating system
for which we already have a model-level OG proof using Hoare-Parallel.
extra-history =
Change history:
[2017-01-13]:
Improve VCG for nested parallels and sequential sections
(revision 30739dbc3dcb)
[Paraconsistency]
title = Paraconsistency
author = Anders Schlichtkrull <https://people.compute.dtu.dk/andschl/>, Jørgen Villadsen <https://people.compute.dtu.dk/jovi/>
topic = Logic/General logic/Paraconsistent logics
date = 2016-12-07
notify = andschl@dtu.dk, jovi@dtu.dk
abstract =
Paraconsistency is about handling inconsistency in a coherent way. In
classical and intuitionistic logic everything follows from an
inconsistent theory. A paraconsistent logic avoids the explosion.
Quite a few applications in computer science and engineering are
discussed in the Intelligent Systems Reference Library Volume 110:
Towards Paraconsistent Engineering (Springer 2016). We formalize a
paraconsistent many-valued logic that we motivated and described in a
special issue on logical approaches to paraconsistency (Journal of
Applied Non-Classical Logics 2005). We limit ourselves to the
propositional fragment of the higher-order logic. The logic is based
on so-called key equalities and has a countably infinite number of
truth values. We prove theorems in the logic using the definition of
validity. We verify truth tables and also counterexamples for
non-theorems. We prove meta-theorems about the logic and finally we
investigate a case study.
[Proof_Strategy_Language]
title = Proof Strategy Language
author = Yutaka Nagashima<>
topic = Tools
date = 2016-12-20
notify = Yutaka.Nagashima@data61.csiro.au
abstract =
Isabelle includes various automatic tools for finding proofs under
certain conditions. However, for each conjecture, knowing which
automation to use, and how to tweak its parameters, is currently
labour intensive. We have developed a language, PSL, designed to
capture high level proof strategies. PSL offloads the construction of
human-readable fast-to-replay proof scripts to automatic search,
making use of search-time information about each conjecture. Our
preliminary evaluations show that PSL reduces the labour cost of
interactive theorem proving. This submission contains the
implementation of PSL and an example theory file, Example.thy, showing
how to write proof strategies in PSL.
[Concurrent_Ref_Alg]
title = Concurrent Refinement Algebra and Rely Quotients
author = Julian Fell <mailto:julian.fell@uq.net.au>, Ian J. Hayes <mailto:ian.hayes@itee.uq.edu.au>, Andrius Velykis <http://andrius.velykis.lt>
-topic = Computer Science/Concurrency
+topic = Computer science/Concurrency
date = 2016-12-30
notify = Ian.Hayes@itee.uq.edu.au
abstract =
The concurrent refinement algebra developed here is designed to
provide a foundation for rely/guarantee reasoning about concurrent
programs. The algebra builds on a complete lattice of commands by
providing sequential composition, parallel composition and a novel
weak conjunction operator. The weak conjunction operator coincides
with the lattice supremum provided its arguments are non-aborting,
but aborts if either of its arguments does. Weak conjunction provides an
abstract version of a guarantee condition as a guarantee process. We
distinguish between models that distribute sequential composition over
non-deterministic choice from the left (referred to as being
conjunctive in the refinement calculus literature) and those that
don't. Least and greatest fixed points of monotone functions are
provided to allow recursion and iteration operators to be added to the
language. Additional iteration laws are available for conjunctive
models. The rely quotient of processes <i>c</i> and
<i>i</i> is the process that, if executed in parallel with
<i>i</i>, implements <i>c</i>. It represents an
abstract version of a rely condition generalised to a process.
[FOL_Harrison]
title = First-Order Logic According to Harrison
author = Alexander Birch Jensen <https://people.compute.dtu.dk/aleje/>, Anders Schlichtkrull <https://people.compute.dtu.dk/andschl/>, Jørgen Villadsen <https://people.compute.dtu.dk/jovi/>
topic = Logic/General logic/Mechanization of proofs
date = 2017-01-01
notify = aleje@dtu.dk, andschl@dtu.dk, jovi@dtu.dk
abstract =
<p>We present a certified declarative first-order prover with equality
based on John Harrison's Handbook of Practical Logic and
Automated Reasoning, Cambridge University Press, 2009. ML code
reflection is used such that the entire prover can be executed within
Isabelle as a very simple interactive proof assistant. As examples we
consider Pelletier's problems 1-46.</p>
<p>Reference: Programming and Verifying a Declarative First-Order
Prover in Isabelle/HOL. Alexander Birch Jensen, John Bruntse Larsen,
Anders Schlichtkrull & Jørgen Villadsen. AI Communications 31:281-299
2018. <a href="https://content.iospress.com/articles/ai-communications/aic764">
https://content.iospress.com/articles/ai-communications/aic764</a></p>
<p>See also: Students' Proof Assistant (SPA).
<a href=https://github.com/logic-tools/spa>
https://github.com/logic-tools/spa</a></p>
extra-history =
Change history:
[2018-07-21]: Proof of Pelletier's problem 34 (Andrews's Challenge) thanks to Asta Halkjær From.
[Bernoulli]
title = Bernoulli Numbers
author = Lukas Bulwahn<mailto:lukas.bulwahn@gmail.com>, Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Analysis, Mathematics/Number Theory
+topic = Mathematics/Analysis, Mathematics/Number theory
date = 2017-01-24
notify = eberlm@in.tum.de
abstract =
<p>Bernoulli numbers were first discovered in the closed-form
expansion of the sum 1<sup>m</sup> +
2<sup>m</sup> + &hellip; + n<sup>m</sup>
for a fixed m and appear in many other places. This entry provides
three different definitions for them: a recursive one, an explicit
one, and one through their exponential generating function.</p>
<p>In addition, we prove some basic facts, e.g. their relation
to sums of powers of integers and that all odd Bernoulli numbers
except the first are zero, and some advanced facts like their
relationship to the Riemann zeta function on positive even
integers.</p>
<p>We also prove the correctness of the
Akiyama&ndash;Tanigawa algorithm for computing Bernoulli numbers
with reasonable efficiency, and we define the periodic Bernoulli
polynomials (which appear e.g. in the Euler&ndash;MacLaurin
summation formula and the expansion of the log-Gamma function) and
prove their basic properties.</p>
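As an illustration of the Akiyama&ndash;Tanigawa algorithm mentioned above, a minimal Python sketch is given below (illustrative only, not the entry's verified code; it uses exact rationals and the $B_1 = +1/2$ convention, which may differ from the formalisation):

  from fractions import Fraction

  def bernoulli(n):
      # Akiyama-Tanigawa scheme: start from 1/(m+1) and repeatedly take
      # weighted differences; a[0] ends up being the n-th Bernoulli number.
      a = [Fraction(1, m + 1) for m in range(n + 1)]
      for j in range(1, n + 1):
          for m in range(n + 1 - j):
              a[m] = (m + 1) * (a[m] - a[m + 1])
      return a[0]

  # bernoulli(4) == Fraction(-1, 30)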
[Stone_Relation_Algebras]
title = Stone Relation Algebras
author = Walter Guttmann <http://www.cosc.canterbury.ac.nz/walter.guttmann/>
topic = Mathematics/Algebra
date = 2017-02-07
notify = walter.guttmann@canterbury.ac.nz
abstract =
We develop Stone relation algebras, which generalise relation algebras
by replacing the underlying Boolean algebra structure with a Stone
algebra. We show that finite matrices over extended real numbers form
an instance. As a consequence, relation-algebraic concepts and methods
can be used for reasoning about weighted graphs. We also develop a
fixpoint calculus and apply it to compare different definitions of
reflexive-transitive closures in semirings.
[Stone_Kleene_Relation_Algebras]
title = Stone-Kleene Relation Algebras
author = Walter Guttmann <http://www.cosc.canterbury.ac.nz/walter.guttmann/>
topic = Mathematics/Algebra
date = 2017-07-06
notify = walter.guttmann@canterbury.ac.nz
abstract =
We develop Stone-Kleene relation algebras, which expand Stone relation
algebras with a Kleene star operation to describe reachability in
weighted graphs. Many properties of the Kleene star arise as a special
case of a more general theory of iteration based on Conway semirings
extended by simulation axioms. This includes several theorems
representing complex program transformations. We formally prove the
correctness of Conway's automata-based construction of the Kleene
star of a matrix. We prove numerous results useful for reasoning about
weighted graphs.
[Abstract_Soundness]
title = Abstract Soundness
author = Jasmin Christian Blanchette <mailto:jasmin.blanchette@gmail.com>, Andrei Popescu <mailto:uuomul@yahoo.com>, Dmitriy Traytel <mailto:traytel@inf.ethz.ch>
topic = Logic/Proof theory
date = 2017-02-10
notify = jasmin.blanchette@gmail.com
abstract =
A formalized coinductive account of the abstract development of
Brotherston, Gorogiannis, and Petersen [APLAS 2012], in a slightly
more general form since we work with arbitrary infinite proofs, which
may be acyclic. This work is described in detail in an article by the
authors, published in 2017 in the <em>Journal of Automated
Reasoning</em>. The abstract proof can be instantiated for
various formalisms, including first-order logic with inductive
predicates.
[Differential_Dynamic_Logic]
title = Differential Dynamic Logic
author = Brandon Bohrer <mailto:bbohrer@cs.cmu.edu>
-topic = Logic/General logic/Modal logic, Computer Science/Programming Languages/Logics
+topic = Logic/General logic/Modal logic, Computer science/Programming languages/Logics
date = 2017-02-13
notify = bbohrer@cs.cmu.edu
abstract =
We formalize differential dynamic logic, a logic for proving
properties of hybrid systems. The proof calculus in this formalization
is based on the uniform substitution principle. We show it is sound
with respect to our denotational semantics, which provides increased
confidence in the correctness of the KeYmaera X theorem prover based
on this calculus. As an application, we include a proof term checker
embedded in Isabelle/HOL with several example proofs. Published in:
Brandon Bohrer, Vincent Rahli, Ivana Vukotic, Marcus Völp, André
Platzer: Formally verified differential dynamic logic. CPP 2017.
[Elliptic_Curves_Group_Law]
title = The Group Law for Elliptic Curves
author = Stefan Berghofer <http://www.in.tum.de/~berghofe>
-topic = Computer Science/Security/Cryptography
+topic = Computer science/Security/Cryptography
date = 2017-02-28
notify = berghofe@in.tum.de
abstract =
We prove the group law for elliptic curves in Weierstrass form over
fields of characteristic greater than 2. In addition to affine
coordinates, we also formalize projective coordinates, which allow for
more efficient computations. By specializing the abstract
formalization to prime fields, we can apply the curve operations to
parameters used in standard security protocols.
[Example-Submission]
title = Example Submission
author = Gerwin Klein <http://www.cse.unsw.edu.au/~kleing/>
-topic = Mathematics/Analysis, Mathematics/Number Theory
+topic = Mathematics/Analysis, Mathematics/Number theory
date = 2004-02-25
notify = kleing@cse.unsw.edu.au
abstract =
<p>This is an example submission to the Archive of Formal Proofs. It shows
submission requirements and explains the structure of a simple typical
submission.</p>
<p>Note that you can use <em>HTML tags</em> and LaTeX formulae like
$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$ in the abstract. Display formulae like
$$ \int_0^1 x^{-x}\,\text{d}x = \sum_{n=1}^\infty n^{-n}$$
are also possible. Please read the
<a href="../submitting.html">submission guidelines</a> before using this.</p>
extra-no-index = no-index: true
[CRDT]
title = A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes
author = Victor B. F. Gomes <mailto:vb358@cam.ac.uk>, Martin Kleppmann<mailto:martin.kleppmann@cl.cam.ac.uk>, Dominic P. Mulligan<mailto:dominic.p.mulligan@googlemail.com>, Alastair R. Beresford<mailto:arb33@cam.ac.uk>
-topic = Computer Science/Algorithms/Distributed, Computer Science/Data Structures
+topic = Computer science/Algorithms/Distributed, Computer science/Data structures
date = 2017-07-07
notify = vb358@cam.ac.uk, dominic.p.mulligan@googlemail.com
abstract =
In this work, we focus on the correctness of Conflict-free Replicated
Data Types (CRDTs), a class of algorithms that provide strong eventual
consistency guarantees for replicated data. We develop a modular and
reusable framework for verifying the correctness of CRDT algorithms.
We avoid correctness issues that have dogged previous mechanised
proofs in this area by including a network model in our formalisation,
and proving that our theorems hold in all possible network behaviours.
Our axiomatic network model is a standard abstraction that accurately
reflects the behaviour of real-world computer networks. Moreover, we
identify an abstract convergence theorem, a property of order
relations, which provides a formal definition of strong eventual
consistency. We then obtain the first machine-checked correctness
theorems for three concrete CRDTs: the Replicated Growable Array, the
Observed-Remove Set, and an Increment-Decrement Counter.
[HOLCF-Prelude]
title = HOLCF-Prelude
author = Joachim Breitner<mailto:joachim@cis.upenn.edu>, Brian Huffman<>, Neil Mitchell<>, Christian Sternagel<mailto:c.sternagel@gmail.com>
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
date = 2017-07-15
notify = c.sternagel@gmail.com, joachim@cis.upenn.edu, hupel@in.tum.de
abstract =
The Isabelle/HOLCF-Prelude is a formalization of a large part of
Haskell's standard prelude in Isabelle/HOLCF. We use it to prove
the correctness of the Sieve of Eratosthenes, in its
self-referential implementation commonly used to showcase
Haskell's laziness; prove correctness of GHC's
"fold/build" rule and related rewrite rules; and certify a
number of hints suggested by HLint.
[Decl_Sem_Fun_PL]
title = Declarative Semantics for Functional Languages
author = Jeremy Siek <http://homes.soic.indiana.edu/jsiek/>
-topic = Computer Science/Programming Languages
+topic = Computer science/Programming languages
date = 2017-07-21
notify = jsiek@indiana.edu
abstract =
We present a semantics for an applied call-by-value lambda-calculus
that is compositional, extensional, and elementary. We present four
different views of the semantics: 1) as a relational (big-step)
semantics that is not operational but instead declarative, 2) as a
denotational semantics that does not use domain theory, 3) as a
non-deterministic interpreter, and 4) as a variant of the intersection
type systems of the Torino group. We prove that the semantics is
correct by showing that it is sound and complete with respect to
operational semantics on programs and that it is sound with respect to
contextual equivalence. We have not yet investigated whether it is
fully abstract. We demonstrate that this approach to semantics is
useful with three case studies. First, we use the semantics to prove
correctness of a compiler optimization that inlines function
application. Second, we adapt the semantics to the polymorphic
lambda-calculus extended with general recursion and prove semantic
type soundness. Third, we adapt the semantics to the call-by-value
lambda-calculus with mutable references.
<br>
The paper that accompanies these Isabelle theories is <a href="https://arxiv.org/abs/1707.03762">available on arXiv</a>.
[DynamicArchitectures]
title = Dynamic Architectures
author = Diego Marmsoler <http://marmsoler.com>
-topic = Computer Science/System Description Languages
+topic = Computer science/System description languages
date = 2017-07-28
notify = diego.marmsoler@tum.de
abstract =
The architecture of a system describes the system's overall
organization into components and connections between those components.
With the emergence of mobile computing, dynamic architectures have
become increasingly important. In such architectures, components may
appear or disappear, and connections may change over time. In the
following we mechanize a theory of dynamic architectures and verify
the soundness of a corresponding calculus. To this end, we first
formalize the notion of configuration traces as a model for dynamic
architectures. Then, the behavior of single components is formalized
in terms of behavior traces and an operator is introduced and studied
to extract the behavior of a single component out of a given
configuration trace. Then, behavior trace assertions are introduced as
a temporal specification technique to specify behavior of components.
Reasoning about component behavior in a dynamic context is formalized
in terms of a calculus for dynamic architectures. Finally, the
soundness of the calculus is verified by introducing an alternative
interpretation for behavior trace assertions over configuration traces
and proving the rules of the calculus. Since projection may lead to
finite as well as infinite behavior traces, they are formalized in
terms of coinductive lists. Thus, our theory is based on
Lochbihler's formalization of coinductive lists. The theory may
be applied to verify properties for dynamic architectures.
extra-history =
Change history:
[2018-06-07]: adding logical operators to specify configuration traces (revision 09178f08f050)<br>
[Stewart_Apollonius]
title = Stewart's Theorem and Apollonius' Theorem
author = Lukas Bulwahn <mailto:lukas.bulwahn@gmail.com>
topic = Mathematics/Geometry
date = 2017-07-31
notify = lukas.bulwahn@gmail.com
abstract =
This entry formalizes the two geometric theorems, Stewart's and
Apollonius' theorem. Stewart's Theorem relates the length of
a triangle's cevian to the lengths of the triangle's two
sides. Apollonius' Theorem is a specialisation of Stewart's
theorem, restricting the cevian to be the median. The proof applies
the law of cosines, some basic geometric facts about triangles and
then simply transforms the terms algebraically to yield the
conjectured relation. The formalization in Isabelle can closely follow
the informal proofs described in the Wikipedia articles of those two
theorems.
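For orientation (standard textbook notation, assumed here rather than taken from the entry): for a triangle with sides $a$, $b$, $c$ and a cevian of length $d$ to side $a$ that splits it into segments $m$ (adjacent to $c$) and $n$ (adjacent to $b$), Stewart's theorem states
$$ b^2 m + c^2 n = a\,(d^2 + mn), $$
and Apollonius' theorem is the special case $m = n = a/2$, giving $b^2 + c^2 = 2d^2 + a^2/2$.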
[LambdaMu]
title = The LambdaMu-calculus
author = Cristina Matache <mailto:cris.matache@gmail.com>, Victor B. F. Gomes <mailto:victorborgesfg@gmail.com>, Dominic P. Mulligan <mailto:dominic.p.mulligan@googlemail.com>
-topic = Computer Science/Programming Languages/Lambda Calculi, Logic/General logic/Lambda calculus
+topic = Computer science/Programming languages/Lambda calculi, Logic/General logic/Lambda calculus
date = 2017-08-16
notify = victorborgesfg@gmail.com, dominic.p.mulligan@googlemail.com
abstract =
The propositions-as-types correspondence is ordinarily presented as
linking the metatheory of typed λ-calculi and the proof theory of
intuitionistic logic. Griffin observed that this correspondence could
be extended to classical logic through the use of control operators.
This observation set off a flurry of further research, leading to the
development of Parigot's λμ-calculus. In this work, we formalise the λμ-
calculus in Isabelle/HOL and prove several metatheoretical properties
such as type preservation and progress.
[Orbit_Stabiliser]
title = Orbit-Stabiliser Theorem with Application to Rotational Symmetries
author = Jonas Rädle <mailto:jonas.raedle@tum.de>
topic = Mathematics/Algebra
date = 2017-08-20
notify = jonas.raedle@tum.de
abstract =
The Orbit-Stabiliser theorem is a basic result in the algebra of
groups that factors the order of a group into the sizes of its orbits
and stabilisers. We formalize the notion of a group action and the
related concepts of orbits and stabilisers. This allows us to prove
the orbit-stabiliser theorem. In the second part of this work, we
formalize the tetrahedral group and use the orbit-stabiliser theorem
to prove that there are twelve (orientation-preserving) rotations of
the tetrahedron.
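For orientation (the standard statement, assumed here rather than quoted from the entry): for a finite group $G$ acting on a set $X$ and any $x \in X$,
$$ |G| = |\mathrm{Orb}_G(x)| \cdot |\mathrm{Stab}_G(x)|. $$
For the rotation group of the tetrahedron acting on its four faces, an orbit of size 4 and a face stabiliser of size 3 give the twelve rotations mentioned above.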
[PLM]
title = Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL
author = Daniel Kirchner <mailto:daniel@ekpyron.org>
topic = Logic/Philosophical aspects
date = 2017-09-17
notify = daniel@ekpyron.org
abstract =
<p> We present an embedding of the second-order fragment of the
Theory of Abstract Objects as described in Edward Zalta's
upcoming work <a
href="https://mally.stanford.edu/principia.pdf">Principia
Logico-Metaphysica (PLM)</a> in the automated reasoning
framework Isabelle/HOL. The Theory of Abstract Objects is a
metaphysical theory that reifies property patterns, as they for
example occur in the abstract reasoning of mathematics, as
<b>abstract objects</b> and provides an axiomatic
framework that allows one to reason about these objects. It thereby serves
as a fundamental metaphysical theory that can be used to axiomatize
and describe a wide range of philosophical objects, such as Platonic
forms or Leibniz' concepts, and has the ambition to function as a
foundational theory of mathematics. The target theory of our embedding
as described in chapters 7-9 of PLM employs a modal relational type
theory as logical foundation for which a representation in functional
type theory is <a
href="https://mally.stanford.edu/Papers/rtt.pdf">known to
be challenging</a>. </p> <p> Nevertheless we arrive
at a functioning representation of the theory in the functional logic
of Isabelle/HOL based on a semantical representation of an Aczel-model
of the theory. Based on this representation we construct an
implementation of the deductive system of PLM which allows one to
automatically and interactively find and verify theorems of PLM.
</p> <p> Our work thereby supports the concept of shallow
semantical embeddings of logical systems in HOL as a universal tool
for logical reasoning <a
href="http://www.mi.fu-berlin.de/inf/groups/ag-ki/publications/Universal-Reasoning/1703_09620_pd.pdf">as
promoted by Christoph Benzm&uuml;ller</a>. </p>
<p> The most notable result of the presented work is the
discovery of a previously unknown paradox in the formulation of the
Theory of Abstract Objects. The embedding of the theory in
Isabelle/HOL played a vital part in this discovery. Furthermore it was
possible to immediately offer several options to modify the theory to
guarantee its consistency. Thereby our work could provide a
significant contribution to the development of a proper grounding for
object theory. </p>
[KD_Tree]
title = Multidimensional Binary Search Trees
author = Martin Rau<>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2019-05-30
notify = martin.rau@tum.de, mrtnrau@googlemail.com
abstract =
This entry provides a formalization of multidimensional binary trees,
also known as k-d trees. It includes a balanced build algorithm as
well as the nearest neighbor algorithm and the range search algorithm.
It is based on the papers <a
href="https://dl.acm.org/citation.cfm?doid=361002.361007">Multidimensional
binary search trees used for associative searching</a> and <a
href="https://dl.acm.org/citation.cfm?doid=355744.355745">
An Algorithm for Finding Best Matches in Logarithmic Expected
Time</a>.
extra-history =
Change history:
[2020-04-15]: Change representation of k-dimensional points from 'list' to
HOL-Analysis.Finite_Cartesian_Product 'vec'. Update proofs
to incorporate HOL-Analysis 'dist' and 'cbox' primitives.
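A minimal sketch of the balanced build step described in the abstract above (illustrative only; the entry itself represents points as HOL-Analysis vectors and its algorithms differ in detail):

  def build(points, depth=0):
      # balanced k-d tree: split on the median along a cycling axis
      if not points:
          return None
      k = len(points[0])
      axis = depth % k
      points = sorted(points, key=lambda p: p[axis])
      mid = len(points) // 2
      return (points[mid],
              build(points[:mid], depth + 1),
              build(points[mid + 1:], depth + 1))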
[Closest_Pair_Points]
title = Closest Pair of Points Algorithms
author = Martin Rau <mailto:martin.rau@tum.de>, Tobias Nipkow <http://www.in.tum.de/~nipkow>
-topic = Computer Science/Algorithms/Geometry
+topic = Computer science/Algorithms/Geometry
date = 2020-01-13
notify = martin.rau@tum.de, nipkow@in.tum.de
abstract =
This entry provides two related verified divide-and-conquer algorithms
solving the fundamental <em>Closest Pair of Points</em>
problem in Computational Geometry. Functional correctness and the
optimal running time of <em>O</em>(<em>n</em> log <em>n</em>) are
proved. Executable code is generated which is empirically competitive
with handwritten reference implementations.
extra-history =
Change history:
[2020-04-14]: Incorporate Time_Monad of the AFP entry Root_Balanced_Tree.
[Approximation_Algorithms]
title = Verified Approximation Algorithms
author = Robin Eßmann <mailto:robin.essmann@tum.de>, Tobias Nipkow <http://www.in.tum.de/~nipkow/>, Simon Robillard <https://simon-robillard.net/>
-topic = Computer Science/Algorithms/Approximation
+topic = Computer science/Algorithms/Approximation
date = 2020-01-16
notify = nipkow@in.tum.de
abstract =
We present the first formal verification of approximation algorithms
for NP-complete optimization problems: vertex cover, independent set,
load balancing, and bin packing. The proofs correct incompletenesses
in existing proofs and improve the approximation ratio in one case.
[Diophantine_Eqns_Lin_Hom]
title = Homogeneous Linear Diophantine Equations
author = Florian Messner <mailto:florian.g.messner@uibk.ac.at>, Julian Parsert <mailto:julian.parsert@gmail.com>, Jonas Schöpf <mailto:jonas.schoepf@uibk.ac.at>, Christian Sternagel <mailto:c.sternagel@gmail.com>
-topic = Computer Science/Algorithms/Mathematical, Mathematics/Number Theory, Tools
+topic = Computer science/Algorithms/Mathematical, Mathematics/Number theory, Tools
license = LGPL
date = 2017-10-14
notify = c.sternagel@gmail.com, julian.parsert@gmail.com
abstract =
We formalize the theory of homogeneous linear diophantine equations,
focusing on two main results: (1) an abstract characterization of
minimal complete sets of solutions, and (2) an algorithm computing
them. Both the characterization and the algorithm are based on
previous work by Huet. Our starting point is a simple but inefficient
variant of Huet's lexicographic algorithm incorporating improved
bounds due to Clausen and Fortenbacher. We proceed by proving its
soundness and completeness. Finally, we employ code equations to
obtain a reasonably efficient implementation. Thus, we provide a
formally verified solver for homogeneous linear diophantine equations.
[Winding_Number_Eval]
title = Evaluate Winding Numbers through Cauchy Indices
author = Wenda Li <https://www.cl.cam.ac.uk/~wl302/>
topic = Mathematics/Analysis
date = 2017-10-17
notify = wl302@cam.ac.uk, liwenda1990@hotmail.com
abstract =
In complex analysis, the winding number measures the number of times a
path (counterclockwise) winds around a point, while the Cauchy index
can approximate how the path winds. This entry provides a
formalisation of the Cauchy index, which is then shown to be related
to the winding number. In addition, this entry also offers a tactic
that enables users to evaluate the winding number by calculating
Cauchy indices.
[Count_Complex_Roots]
title = Count the Number of Complex Roots
author = Wenda Li <https://www.cl.cam.ac.uk/~wl302/>
topic = Mathematics/Analysis
date = 2017-10-17
notify = wl302@cam.ac.uk, liwenda1990@hotmail.com
abstract =
Based on evaluating Cauchy indices through remainder sequences, this
entry provides an effective procedure to count the number of complex
roots (with multiplicity) of a polynomial within a rectangle or a
half-plane. Potential applications of this entry include certified
complex root isolation (of a polynomial) and testing the Routh-Hurwitz
stability criterion (i.e., to check whether all the roots of some
characteristic polynomial have negative real parts).
[Buchi_Complementation]
title = Büchi Complementation
author = Julian Brunner <http://www21.in.tum.de/~brunnerj/>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2017-10-19
notify = brunnerj@in.tum.de
abstract =
This entry provides a verified implementation of rank-based Büchi
Complementation. The verification is done in three steps: <ol>
<li>Definition of odd rankings and proof that an automaton
rejects a word iff there exists an odd ranking for it.</li>
<li>Definition of the complement automaton and proof that it
accepts exactly those words for which there is an odd
ranking.</li> <li>Verified implementation of the
complement automaton using the Isabelle Collections
Framework.</li> </ol>
[Transition_Systems_and_Automata]
title = Transition Systems and Automata
author = Julian Brunner <http://www21.in.tum.de/~brunnerj/>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2017-10-19
notify = brunnerj@in.tum.de
abstract =
This entry provides a very abstract theory of transition systems that
can be instantiated to express various types of automata. A transition
system is typically instantiated by providing a set of initial states,
a predicate for enabled transitions, and a transition execution
function. From this, it defines the concepts of finite and infinite
paths as well as the set of reachable states, among other things. Many
useful theorems, from basic path manipulation rules to coinduction and
run construction rules, are proven in this abstract transition system
context. The library comes with instantiations for DFAs, NFAs, and
Büchi automata.
[Kuratowski_Closure_Complement]
title = The Kuratowski Closure-Complement Theorem
author = Peter Gammie <http://peteg.org>, Gianpaolo Gioiosa<>
topic = Mathematics/Topology
date = 2017-10-26
notify = peteg42@gmail.com
abstract =
We discuss a topological curiosity discovered by Kuratowski (1922):
the fact that the number of distinct operators on a topological space
generated by compositions of closure and complement never exceeds 14,
and is exactly 14 in the case of R. In addition, we prove a theorem
due to Chagrov (1982) that classifies topological spaces according to
the number of such operators they support.
[Hybrid_Multi_Lane_Spatial_Logic]
title = Hybrid Multi-Lane Spatial Logic
author = Sven Linker <mailto:s.linker@liverpool.ac.uk>
topic = Logic/General logic/Modal logic
date = 2017-11-06
notify = s.linker@liverpool.ac.uk
abstract =
We present a semantic embedding of a spatio-temporal multi-modal
logic, specifically defined to reason about motorway traffic, into
Isabelle/HOL. The semantic model is an abstraction of a motorway,
emphasising local spatial properties, and parameterised by the types
of sensors deployed in the vehicles. We use the logic to define
controller constraints to ensure safety, i.e., the absence of
collisions on the motorway. After proving safety with a restrictive
definition of sensors, we relax these assumptions and show how to
amend the controller constraints to still guarantee safety.
[Dirichlet_L]
title = Dirichlet L-Functions and Dirichlet's Theorem
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory, Mathematics/Algebra
+topic = Mathematics/Number theory, Mathematics/Algebra
date = 2017-12-21
notify = eberlm@in.tum.de
abstract =
<p>This article provides a formalisation of Dirichlet characters
and Dirichlet <em>L</em>-functions including proofs of
their basic properties &ndash; most notably their analyticity,
their areas of convergence, and their non-vanishing for &real;(s)
&ge; 1. All of this is built in a very high-level style using
Dirichlet series. The proof of the non-vanishing follows a very short
and elegant proof by Newman, which we attempt to reproduce faithfully
at a similar level of abstraction in Isabelle.</p> <p>This
also leads to a relatively short proof of Dirichlet’s Theorem, which
states that, if <em>h</em> and <em>n</em> are
coprime, there are infinitely many primes <em>p</em> with
<em>p</em> &equiv; <em>h</em> (mod
<em>n</em>).</p>
[Symmetric_Polynomials]
title = Symmetric Polynomials
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
topic = Mathematics/Algebra
date = 2018-09-25
notify = eberlm@in.tum.de
abstract =
<p>A symmetric polynomial is a polynomial in variables
<em>X</em><sub>1</sub>,&hellip;,<em>X</em><sub>n</sub>
that does not discriminate between its variables, i.&thinsp;e. it
is invariant under any permutation of them. These polynomials are
important in the study of the relationship between the coefficients of
a univariate polynomial and its roots in its algebraic
closure.</p> <p>This article provides a definition of
symmetric polynomials and the elementary symmetric polynomials
e<sub>1</sub>,&hellip;,e<sub>n</sub> and
proofs of their basic properties, including three notable
ones:</p> <ul> <li> Vieta's formula, which
gives an explicit expression for the <em>k</em>-th
coefficient of a univariate monic polynomial in terms of its roots
<em>x</em><sub>1</sub>,&hellip;,<em>x</em><sub>n</sub>,
namely
<em>c</em><sub><em>k</em></sub> = (-1)<sup><em>n</em>-<em>k</em></sup>&thinsp;e<sub><em>n</em>-<em>k</em></sub>(<em>x</em><sub>1</sub>,&hellip;,<em>x</em><sub>n</sub>).</li>
<li>Second, the Fundamental Theorem of Symmetric Polynomials,
which states that any symmetric polynomial is itself a uniquely
determined polynomial combination of the elementary symmetric
polynomials.</li> <li>Third, as a corollary of the
previous two, that given a polynomial over some ring
<em>R</em>, any symmetric polynomial combination of its
roots is also in <em>R</em> even when the roots are not.
</ul> <p> Both the symmetry property itself and the
witness for the Fundamental Theorem are executable. </p>
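To make Vieta's relation above concrete, here is a minimal Python sketch (an illustration only, not the executable witness mentioned in the entry) computing the elementary symmetric polynomials of given roots; the coefficient of <em>X</em><sup>k</sup> of the monic polynomial with those roots is then (-1)<sup>n-k</sup>&thinsp;e<sub>n-k</sub>:

  def elementary_symmetric(xs):
      # coeffs[k] = e_k(xs): expand prod_i (X + x_i) one factor at a time
      coeffs = [1]
      for x in xs:
          coeffs = [a + x * b for a, b in zip(coeffs + [0], [0] + coeffs)]
      return coeffs

  # elementary_symmetric([1, 2, 3]) == [1, 6, 11, 6]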
[Taylor_Models]
title = Taylor Models
author = Christoph Traut<>, Fabian Immler <http://www21.in.tum.de/~immler>
-topic = Computer Science/Algorithms/Mathematical, Computer Science/Data Structures, Mathematics/Analysis, Mathematics/Algebra
+topic = Computer science/Algorithms/Mathematical, Computer science/Data structures, Mathematics/Analysis, Mathematics/Algebra
date = 2018-01-08
notify = immler@in.tum.de
abstract =
We present a formally verified implementation of multivariate Taylor
models. Taylor models are a form of rigorous polynomial approximation,
consisting of an approximation polynomial based on Taylor expansions,
combined with a rigorous bound on the approximation error. Taylor
models were introduced as a tool to mitigate the dependency problem of
interval arithmetic. Our implementation automatically computes Taylor
models for the class of elementary functions, expressed by composition
of arithmetic operations and basic functions like exp, sin, or square
root.
[Green]
title = An Isabelle/HOL formalisation of Green's Theorem
author = Mohammad Abdulaziz <mailto:mohammad.abdulaziz8@gmail.com>, Lawrence C. Paulson <http://www.cl.cam.ac.uk/~lp15/>
topic = Mathematics/Analysis
date = 2018-01-11
notify = mohammad.abdulaziz8@gmail.com, lp15@cam.ac.uk
abstract =
We formalise a statement of Green’s theorem—the first formalisation to
our knowledge—in Isabelle/HOL. The theorem statement that we formalise
is enough for most applications, especially in physics and
engineering. Our formalisation is made possible by a novel proof that
avoids the ubiquitous line integral cancellation argument. This
eliminates the need to formalise orientations and region boundaries
explicitly with respect to the outwards-pointing normal vector.
Instead we appeal to a homological argument about equivalences between
paths.
[Gromov_Hyperbolicity]
title = Gromov Hyperbolicity
author = Sebastien Gouezel<>
topic = Mathematics/Geometry
date = 2018-01-16
notify = sebastien.gouezel@univ-rennes1.fr
abstract =
A geodesic metric space is Gromov hyperbolic if all its geodesic
triangles are thin, i.e., every side is contained in a fixed
thickening of the two other sides. While this definition looks
innocuous, it has proved extremely important and versatile in modern
geometry since its introduction by Gromov. We formalize the basic
classical properties of Gromov hyperbolic spaces, notably the Morse
lemma asserting that quasigeodesics are close to geodesics, the
invariance of hyperbolicity under quasi-isometries; we define and
study the Gromov boundary and its associated distance, and prove that
a quasi-isometry between Gromov hyperbolic spaces extends to a
homeomorphism of the boundaries. We also prove a less classical
theorem, by Bonk and Schramm, asserting that a Gromov hyperbolic space
embeds isometrically in a geodesic Gromov-hyperbolic space. As the
original proof uses a transfinite sequence of Cauchy completions, this
is an interesting formalization exercise. Along the way, we introduce
basic material on isometries, quasi-isometries, Lipschitz maps,
geodesic spaces, the Hausdorff distance, the Cauchy completion of a
metric space, and the exponential on extended real numbers.
[Ordered_Resolution_Prover]
title = Formalization of Bachmair and Ganzinger's Ordered Resolution Prover
author = Anders Schlichtkrull <https://people.compute.dtu.dk/andschl/>, Jasmin Christian Blanchette <mailto:j.c.blanchette@vu.nl>, Dmitriy Traytel <mailto:traytel@inf.ethz.ch>, Uwe Waldmann <mailto:uwe@mpi-inf.mpg.de>
topic = Logic/General logic/Mechanization of proofs
date = 2018-01-18
notify = andschl@dtu.dk, j.c.blanchette@vu.nl
abstract =
This Isabelle/HOL formalization covers Sections 2 to 4 of Bachmair and
Ganzinger's "Resolution Theorem Proving" chapter in the
<em>Handbook of Automated Reasoning</em>. This includes
soundness and completeness of unordered and ordered variants of ground
resolution with and without literal selection, the standard redundancy
criterion, a general framework for refutational theorem proving, and
soundness and completeness of an abstract first-order prover.
[BNF_Operations]
title = Operations on Bounded Natural Functors
author = Jasmin Christian Blanchette <mailto:jasmin.blanchette@gmail.com>, Andrei Popescu <mailto:uuomul@yahoo.com>, Dmitriy Traytel <mailto:traytel@inf.ethz.ch>
topic = Tools
date = 2017-12-19
notify = jasmin.blanchette@gmail.com,uuomul@yahoo.com,traytel@inf.ethz.ch
abstract =
This entry formalizes the closure property of bounded natural functors
(BNFs) under seven operations. These operations and the corresponding
proofs constitute the core of Isabelle's (co)datatype package. To
be close to the implemented tactics, the proofs are deliberately
formulated as detailed apply scripts. The (co)datatypes together with
(co)induction principles and (co)recursors are byproducts of the
fixpoint operations LFP and GFP. Composition of BNFs is subdivided
into four simpler operations: Compose, Kill, Lift, and Permute. The
N2M operation provides mutual (co)induction principles and
(co)recursors for nested (co)datatypes.
[LLL_Basis_Reduction]
title = A verified LLL algorithm
author = Ralph Bottesch <>, Jose Divasón <http://www.unirioja.es/cu/jodivaso/>, Maximilian Haslbeck <http://cl-informatik.uibk.ac.at/users/mhaslbeck/>, Sebastiaan Joosten <http://sjcjoosten.nl/>, René Thiemann <http://cl-informatik.uibk.ac.at/users/thiemann/>, Akihisa Yamada<>
-topic = Computer Science/Algorithms/Mathematical, Mathematics/Algebra
+topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra
date = 2018-02-02
notify = ralph.bottesch@uibk.ac.at, jose.divason@unirioja.es, maximilian.haslbeck@uibk.ac.at, s.j.c.joosten@utwente.nl, rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp
abstract =
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as
LLL algorithm, is an algorithm to find a basis with short, nearly
orthogonal vectors of an integer lattice. It can thus also be seen as
an approximation algorithm for the shortest vector problem (SVP), which
is NP-hard; the approximation quality depends solely on the dimension
of the lattice, not on the lattice itself. The
algorithm also possesses many applications in diverse fields of
computer science, from cryptanalysis to number theory, but it is
specially well-known since it was used to implement the first
polynomial-time algorithm to factor polynomials. In this work we
present the first mechanized soundness proof of the LLL algorithm to
compute short vectors in lattices. The formalization follows a
textbook by von zur Gathen and Gerhard.
extra-history =
Change history:
[2018-04-16]: Integrated formal complexity bounds (Haslbeck, Thiemann)
[2018-05-25]: Integrated much faster LLL implementation based on integer arithmetic (Bottesch, Haslbeck, Thiemann)
[LLL_Factorization]
title = A verified factorization algorithm for integer polynomials with polynomial complexity
author = Jose Divasón <http://www.unirioja.es/cu/jodivaso/>, Sebastiaan Joosten <http://sjcjoosten.nl/>, René Thiemann <http://cl-informatik.uibk.ac.at/users/thiemann/>, Akihisa Yamada <mailto:ayamada@trs.cm.is.nagoya-u.ac.jp>
topic = Mathematics/Algebra
date = 2018-02-06
notify = jose.divason@unirioja.es, s.j.c.joosten@utwente.nl, rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp
abstract =
Short vectors in lattices and factors of integer polynomials are
related. Each factor of an integer polynomial belongs to a certain
lattice. When factoring polynomials, the condition that we are looking
for an irreducible polynomial means that we must look for a small
element in a lattice, which can be done by a basis reduction
algorithm. In this development we formalize this connection and
thereby one main application of the LLL basis reduction algorithm: an
algorithm to factor square-free integer polynomials which runs in
polynomial time. The work is based on our previous
Berlekamp–Zassenhaus development, where the exponential reconstruction
phase has been replaced by the polynomial-time basis reduction
algorithm. Thanks to this formalization we found a serious flaw in a
textbook.
[Treaps]
title = Treaps
author = Maximilian Haslbeck <http://cl-informatik.uibk.ac.at/users/mhaslbeck/>, Manuel Eberl <https://www.in.tum.de/~eberlm>, Tobias Nipkow <https://www.in.tum.de/~nipkow>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2018-02-06
notify = eberlm@in.tum.de
abstract =
<p> A Treap is a binary tree whose nodes contain pairs
consisting of some payload and an associated priority. It must have
the search-tree property w.r.t. the payloads and the heap property
w.r.t. the priorities. Treaps are an interesting data structure that
is related to binary search trees (BSTs) in the following way: if one
forgets all the priorities of a treap, the resulting BST is exactly
the same as if one had inserted the elements into an empty BST in
order of ascending priority. This means that a treap behaves like a
BST where we can pretend the elements were inserted in a different
order from the one in which they were actually inserted. </p>
<p> In particular, by choosing these priorities at random upon
insertion of an element, we can pretend that we inserted the elements
in <em>random order</em>, so that the shape of the
resulting tree is that of a random BST no matter in what order we
insert the elements. This is the main result of this
formalisation.</p>
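The correspondence described above can be illustrated with a small Python sketch (a minimal illustration of the stated property, not the entry's formalisation): drawing a random priority for every key and inserting the keys into a plain BST in ascending priority order yields exactly the shape the treap would have.

  import random

  class Node:
      def __init__(self, key):
          self.key, self.left, self.right = key, None, None

  def bst_insert(root, key):
      # ordinary (unbalanced) binary-search-tree insertion
      if root is None:
          return Node(key)
      if key < root.key:
          root.left = bst_insert(root.left, key)
      else:
          root.right = bst_insert(root.right, key)
      return root

  def treap_shape(keys):
      # shape of the treap over 'keys' with uniformly random priorities
      prio = {k: random.random() for k in keys}
      root = None
      for k in sorted(keys, key=prio.get):
          root = bst_insert(root, k)
      return root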
[Skip_Lists]
title = Skip Lists
author = Max W. Haslbeck <http://cl-informatik.uibk.ac.at/users/mhaslbeck/>, Manuel Eberl <https://www21.in.tum.de/~eberlm/>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2020-01-09
notify = max.haslbeck@gmx.de
abstract =
<p> Skip lists are sorted linked lists enhanced with shortcuts
and are an alternative to binary search trees. A skip list consists
of multiple levels of sorted linked lists where a list on level n is a
subsequence of the list on level n − 1. In the ideal case, elements
are skipped in such a way that a lookup in a skip list takes O(log n)
time. In a randomised skip list the skipped elements are chosen
randomly. </p> <p> This entry contains formalized proofs
of the textbook results about the expected height and the expected
length of a search path in a randomised skip list. </p>
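For illustration only (a minimal sketch; the promotion probability 1/2 and the level cap are typical choices assumed here, not taken from the entry), the randomised level choice underlying these results can be written as:

  import random

  def random_level(p=0.5, max_level=32):
      # each element is promoted to the next level with probability p,
      # so its number of levels is geometrically distributed
      level = 1
      while random.random() < p and level < max_level:
          level += 1
      return level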
[Mersenne_Primes]
title = Mersenne primes and the Lucas–Lehmer test
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2020-01-17
notify = eberlm@in.tum.de
abstract =
<p>This article provides formal proofs of basic properties of
Mersenne numbers, i. e. numbers of the form
2<sup><em>n</em></sup> - 1, and especially of
Mersenne primes.</p> <p>In particular, an efficient,
verified, and executable version of the Lucas&ndash;Lehmer test is
developed. This test decides primality for Mersenne numbers in time
polynomial in <em>n</em>.</p>
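For orientation, a plain Python rendering of the Lucas&ndash;Lehmer recurrence is sketched below (an unverified illustration, not the entry's verified implementation; it is only meaningful for odd prime exponents p):

  def lucas_lehmer(p):
      # M_p = 2^p - 1 is prime iff s_{p-2} = 0, where
      # s_0 = 4 and s_{k+1} = s_k^2 - 2  (all modulo M_p)
      m = (1 << p) - 1
      s = 4
      for _ in range(p - 2):
          s = (s * s - 2) % m
      return s == 0

  # lucas_lehmer(5) is True, since 2^5 - 1 = 31 is prime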
[Hoare_Time]
title = Hoare Logics for Time Bounds
author = Maximilian P. L. Haslbeck <http://www.in.tum.de/~haslbema>, Tobias Nipkow <https://www.in.tum.de/~nipkow>
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
date = 2018-02-26
notify = haslbema@in.tum.de
abstract =
We study three different Hoare logics for reasoning about time bounds
of imperative programs and formalize them in Isabelle/HOL: a classical
Hoare like logic due to Nielson, a logic with potentials due to
Carbonneaux <i>et al.</i> and a <i>separation
logic</i> following work by Atkey, Charguéraud and Pottier.
These logics are formally shown to be sound and complete. Verification
condition generators are developed and are shown sound and complete
too. We also consider variants of the systems where we abstract from
multiplicative constants in the running time bounds, thus supporting a
big-O style of reasoning. Finally we compare the expressive power of
the three systems.
[Architectural_Design_Patterns]
title = A Theory of Architectural Design Patterns
author = Diego Marmsoler <http://marmsoler.com>
-topic = Computer Science/System Description Languages
+topic = Computer science/System description languages
date = 2018-03-01
notify = diego.marmsoler@tum.de
abstract =
The following document formalizes and verifies several architectural
design patterns. Each pattern specification is formalized in terms of
a locale where the locale assumptions correspond to the assumptions
which a pattern poses on an architecture. Thus, pattern specifications
may build on top of each other by interpreting the corresponding
locale. A pattern is verified using the framework provided by the AFP
entry Dynamic Architectures. Currently, the document consists of
formalizations of 4 different patterns: the singleton, the publisher
subscriber, the blackboard pattern, and the blockchain pattern.
Thereby, the publisher component of the publisher subscriber pattern
is modeled as an instance of the singleton pattern and the blackboard
pattern is modeled as an instance of the publisher subscriber pattern.
In general, this entry provides the first steps towards an overall
theory of architectural design patterns.
extra-history =
Change history:
[2018-05-25]: changing the major assumption for blockchain architectures from alternative minings to relative mining frequencies (revision 5043c5c71685)<br>
[2019-04-08]: adapting the terminology: honest instead of trusted, dishonest instead of untrusted (revision 7af3431a22ae)
[Weight_Balanced_Trees]
title = Weight-Balanced Trees
author = Tobias Nipkow <https://www.in.tum.de/~nipkow>, Stefan Dirix<>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2018-03-13
notify = nipkow@in.tum.de
abstract =
This theory provides a verified implementation of weight-balanced
trees following the work of <a
href="https://doi.org/10.1017/S0956796811000104">Hirai
and Yamamoto</a> who proved that all parameters in a certain
range are valid, i.e. guarantee that insertion and deletion preserve
weight-balance. Instead of a general theorem we provide parameterized
proofs of preservation of the invariant that work for many (all?)
valid parameters.
[Fishburn_Impossibility]
title = The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency
author = Felix Brandt <http://dss.in.tum.de/staff/brandt.html>, Manuel Eberl <https://www21.in.tum.de/~eberlm>, Christian Saile <http://dss.in.tum.de/staff/christian-saile.html>, Christian Stricker <http://dss.in.tum.de/staff/christian-stricker.html>
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
date = 2018-03-22
notify = eberlm@in.tum.de
abstract =
<p>This formalisation contains the proof that there is no
anonymous Social Choice Function for at least three agents and
alternatives that fulfils both Pareto-Efficiency and
Fishburn-Strategyproofness. It was derived from a proof of <a
href="http://dss.in.tum.de/files/brandt-research/stratset.pdf">Brandt
<em>et al.</em></a>, which relies on an unverified
translation of a fixed finite instance of the original problem to SAT.
This Isabelle proof contains a machine-checked version of both the
statement for exactly three agents and alternatives and the lifting to
the general case.</p>
[BNF_CC]
title = Bounded Natural Functors with Covariance and Contravariance
author = Andreas Lochbihler <http://www.andreas-lochbihler.de>, Joshua Schneider <mailto:joshua.schneider@inf.ethz.ch>
-topic = Computer Science/Functional Programming, Tools
+topic = Computer science/Functional programming, Tools
date = 2018-04-24
notify = mail@andreas-lochbihler.de, joshua.schneider@inf.ethz.ch
abstract =
Bounded natural functors (BNFs) provide a modular framework for the
construction of (co)datatypes in higher-order logic. Their functorial
operations, the mapper and relator, are restricted to a subset of the
parameters, namely those where recursion can take place. For certain
applications, such as free theorems, data refinement, quotients, and
generalised rewriting, it is desirable that these operations do not
ignore the other parameters. In this article, we formalise the
generalisation BNF<sub>CC</sub> that extends the mapper
and relator to covariant and contravariant parameters. We show that
<ol> <li> BNF<sub>CC</sub>s are closed under
functor composition and least and greatest fixpoints,</li>
<li> subtypes inherit the BNF<sub>CC</sub> structure
under conditions that generalise those for the BNF case,
and</li> <li> BNF<sub>CC</sub>s preserve
quotients under mild conditions.</li> </ol> These proofs
are carried out for abstract BNF<sub>CC</sub>s similar to
the AFP entry BNF Operations. In addition, we apply the
BNF<sub>CC</sub> theory to several concrete functors.
[Modular_Assembly_Kit_Security]
title = An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties
author = Oliver Bračevac <mailto:bracevac@st.informatik.tu-darmstadt.de>, Richard Gay <mailto:gay@mais.informatik.tu-darmstadt.de>, Sylvia Grewe <mailto:grewe@st.informatik.tu-darmstadt.de>, Heiko Mantel <mailto:mantel@mais.informatik.tu-darmstadt.de>, Henning Sudbrock <mailto:sudbrock@mais.informatik.tu-darmstadt.de>, Markus Tasch <mailto:tasch@mais.informatik.tu-darmstadt.de>
-topic = Computer Science/Security
+topic = Computer science/Security
date = 2018-05-07
notify = tasch@mais.informatik.tu-darmstadt.de
abstract =
The "Modular Assembly Kit for Security Properties" (MAKS) is
a framework for both the definition and verification of possibilistic
information-flow security properties at the specification-level. MAKS
supports the uniform representation of a wide range of possibilistic
information-flow properties and provides support for the verification
of such properties via unwinding results and compositionality results.
We provide a formalization of this framework in Isabelle/HOL.
[AxiomaticCategoryTheory]
title = Axiom Systems for Category Theory in Free Logic
author = Christoph Benzmüller <http://christoph-benzmueller.de>, Dana Scott <http://www.cs.cmu.edu/~scott/>
-topic = Mathematics/Category Theory
+topic = Mathematics/Category theory
date = 2018-05-23
notify = c.benzmueller@gmail.com
abstract =
This document provides a concise overview of the core results of our
previous work on the exploration of axiom systems for category
theory. Extending the previous studies
(http://arxiv.org/abs/1609.01493), we include one further axiomatic
theory in our experiments. This additional theory was suggested
by Mac Lane in 1948. We show that the axioms proposed by Mac Lane are
equivalent to the ones we studied before, which include an axiom set
suggested by Scott in the 1970s and another axiom set proposed by
Freyd and Scedrov in 1990, which we slightly modified to remedy a
minor technical issue.
[OpSets]
title = OpSets: Sequential Specifications for Replicated Datatypes
author = Martin Kleppmann <mailto:mk428@cl.cam.ac.uk>, Victor B. F. Gomes <mailto:vb358@cl.cam.ac.uk>, Dominic P. Mulligan <mailto:Dominic.Mulligan@arm.com>, Alastair R. Beresford <mailto:arb33@cl.cam.ac.uk>
-topic = Computer Science/Algorithms/Distributed, Computer Science/Data Structures
+topic = Computer science/Algorithms/Distributed, Computer science/Data structures
date = 2018-05-10
notify = vb358@cam.ac.uk
abstract =
We introduce OpSets, an executable framework for specifying and
reasoning about the semantics of replicated datatypes that provide
eventual consistency in a distributed system, and for mechanically
verifying algorithms that implement these datatypes. Our approach is
simple but expressive, allowing us to succinctly specify a variety of
abstract datatypes, including maps, sets, lists, text, graphs, trees,
and registers. Our datatypes are also composable, enabling the
construction of complex data structures. To demonstrate the utility of
OpSets for analysing replication algorithms, we highlight an important
correctness property for collaborative text editing that has
traditionally been overlooked; algorithms that do not satisfy this
property can exhibit awkward interleaving of text. We use OpSets to
specify this correctness property and prove that although one existing
replication algorithm satisfies this property, several other published
algorithms do not.
[Irrationality_J_Hancl]
title = Irrational Rapidly Convergent Series
author = Angeliki Koutsoukou-Argyraki <http://www.cl.cam.ac.uk/~ak2110/>, Wenda Li <http://www.cl.cam.ac.uk/~wl302/>
-topic = Mathematics/Number Theory, Mathematics/Analysis
+topic = Mathematics/Number theory, Mathematics/Analysis
date = 2018-05-23
notify = ak2110@cam.ac.uk, wl302@cam.ac.uk
abstract =
We formalize with Isabelle/HOL a proof of a theorem by J. Hancl asserting the
irrationality of the sum of a series consisting of rational numbers, built up
by sequences that fulfill certain properties. Even though the criterion is a
number theoretic result, the proof makes use only of analytical arguments. We
also formalize a corollary of the theorem for a specific series fulfilling the
assumptions of the theorem.
[Optimal_BST]
title = Optimal Binary Search Trees
author = Tobias Nipkow <https://www.in.tum.de/~nipkow>, Dániel Somogyi <>
-topic = Computer Science/Algorithms, Computer Science/Data Structures
+topic = Computer science/Algorithms, Computer science/Data structures
date = 2018-05-27
notify = nipkow@in.tum.de
abstract =
This article formalizes recursive algorithms for the construction
of optimal binary search trees given fixed access frequencies.
We follow Knuth (1971), Yao (1980) and Mehlhorn (1984).
The algorithms are memoized with the help of the AFP article
<a href="Monad_Memo_DP.html">Monadification, Memoization and Dynamic Programming</a>,
thus yielding dynamic programming algorithms.
[Projective_Geometry]
title = Projective Geometry
author = Anthony Bordg <https://sites.google.com/site/anthonybordg/>
topic = Mathematics/Geometry
date = 2018-06-14
notify = apdb3@cam.ac.uk
abstract =
We formalize the basics of projective geometry. In particular, we give
a proof of the so-called Hessenberg's theorem in projective plane
geometry. We also provide a proof of the so-called Desargues's
theorem based on an axiomatization of (higher) projective space
geometry using the notion of rank of a matroid. This last approach
allows us to handle incidence relations in a homogeneous way, dealing
only with points and without the need to talk explicitly about
lines, planes or any higher entity.
[Localization_Ring]
title = The Localization of a Commutative Ring
author = Anthony Bordg <https://sites.google.com/site/anthonybordg/>
topic = Mathematics/Algebra
date = 2018-06-14
notify = apdb3@cam.ac.uk
abstract =
We formalize the localization of a commutative ring R with respect to
a multiplicative subset (i.e. a submonoid of R seen as a
multiplicative monoid). This localization is itself a commutative ring
and we build the natural homomorphism of rings from R to its
localization.
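(Editor's aside, not part of the entry's metadata: an illustrative sketch of the standard textbook construction alluded to above, for a commutative ring $R$ and a multiplicative subset $S \subseteq R$; the entry's own Isabelle definitions may differ in detail.)

    \[ S^{-1}R \;=\; (R \times S)\,/\!\sim, \qquad
       (r_1, s_1) \sim (r_2, s_2) \;\Longleftrightarrow\; \exists t \in S.\; t\,(r_1 s_2 - r_2 s_1) = 0, \]
    \[ \frac{r_1}{s_1} + \frac{r_2}{s_2} = \frac{r_1 s_2 + r_2 s_1}{s_1 s_2}, \qquad
       \frac{r_1}{s_1} \cdot \frac{r_2}{s_2} = \frac{r_1 r_2}{s_1 s_2}, \qquad
       R \to S^{-1}R,\; r \mapsto \frac{r}{1}. \]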
[Minsky_Machines]
title = Minsky Machines
author = Bertram Felgenhauer<>
topic = Logic/Computability
date = 2018-08-14
notify = int-e@gmx.de
abstract =
<p> We formalize undecidability results for Minsky machines. To
this end, we also formalize recursive inseparability.
</p><p> We start by proving that Minsky machines can
compute arbitrary primitive recursive and recursive functions. We then
show that there is a deterministic Minsky machine with one argument
and two final states such that the set of inputs that are accepted in
one state is recursively inseparable from the set of inputs that are
accepted in the other state. </p><p> As a corollary, the
set of Minsky configurations that reach the first state but not the
second is recursively inseparable from the set of Minsky configurations
that reach the second state but not the first. In particular, both
these sets are undecidable. </p><p> We do
<em>not</em> prove that recursive functions can simulate
Minsky machines. </p>
[Neumann_Morgenstern_Utility]
title = Von-Neumann-Morgenstern Utility Theorem
author = Julian Parsert<mailto:julian.parsert@gmail.com>, Cezary Kaliszyk<http://cl-informatik.uibk.ac.at/users/cek/>
-topic = Mathematics/Games and Economics
+topic = Mathematics/Games and economics
license = LGPL
date = 2018-07-04
notify = julian.parsert@uibk.ac.at, cezary.kaliszyk@uibk.ac.at
abstract =
Utility functions form an essential part of game theory and economics.
In order to guarantee the existence of utility functions, sufficient
properties are usually assumed in an axiomatic manner. One
famous and very common set of such assumptions is that of expected
utility theory. Here, the rationality, continuity, and independence of
preferences are assumed. The von-Neumann-Morgenstern Utility theorem
shows that these assumptions are necessary and sufficient for an
expected utility function to exist. This theorem was proven by
Neumann and Morgenstern in ``Theory of Games and Economic
Behavior'' which is regarded as one of the most influential
works in game theory. The formalization includes formal definitions of
the underlying concepts including continuity and independence of
preferences.
[Simplex]
title = An Incremental Simplex Algorithm with Unsatisfiable Core Generation
author = Filip Marić <mailto:filip@matf.bg.ac.rs>, Mirko Spasić <mailto:mirko@matf.bg.ac.rs>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann/>
-topic = Computer Science/Algorithms/Optimization
+topic = Computer science/Algorithms/Optimization
date = 2018-08-24
notify = rene.thiemann@uibk.ac.at
abstract =
We present an Isabelle/HOL formalization and total correctness proof
for the incremental version of the Simplex algorithm which is used in
most state-of-the-art SMT solvers. It supports extraction of
satisfying assignments, extraction of minimal unsatisfiable cores, incremental
assertion of constraints and backtracking. The formalization relies on
stepwise program refinement, starting from a simple specification,
going through a number of refinement steps, and ending up in a fully
executable functional implementation. Symmetries present in the
algorithm are handled with special care.
[Budan_Fourier]
title = The Budan-Fourier Theorem and Counting Real Roots with Multiplicity
author = Wenda Li <https://www.cl.cam.ac.uk/~wl302/>
topic = Mathematics/Analysis
date = 2018-09-02
notify = wl302@cam.ac.uk, liwenda1990@hotmail.com
abstract =
This entry is mainly about counting and approximating real roots (of a
polynomial) with multiplicity. We have first formalised the
Budan-Fourier theorem: given a polynomial with real coefficients, we
can calculate sign variations on Fourier sequences to over-approximate
the number of real roots (counting multiplicity) within an interval.
When all roots are known to be real, the over-approximation becomes
tight: we can utilise this theorem to count real roots exactly. It is
also worth noting that Descartes' rule of signs is a direct
consequence of the Budan-Fourier theorem, and has been included in
this entry. In addition, we have extended the previously formalised
Sturm's theorem to count real roots with multiplicity, while the
original Sturm's theorem only counts distinct real roots.
Compared to the Budan-Fourier theorem, our extended Sturm's
theorem always counts roots exactly but may suffer from greater
computational cost.
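(Editor's aside, not part of the entry's metadata: the classical statement referred to above, in standard notation rather than the entry's Isabelle syntax. Here $V(x)$ denotes the number of sign variations in the Fourier sequence $p(x), p'(x), \dots, p^{(n)}(x)$ of a real polynomial $p$ of degree $n$.)

    \[ \#\{\,\text{roots of } p \text{ in } (a,b],\ \text{counted with multiplicity}\,\} \;\le\; V(a) - V(b), \]
    where the difference between the two sides is a non-negative even integer;
    when all roots of $p$ are real, the bound is attained.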
[Quaternions]
title = Quaternions
author = Lawrence C. Paulson <https://www.cl.cam.ac.uk/~lp15/>
topic = Mathematics/Algebra, Mathematics/Geometry
date = 2018-09-05
notify = lp15@cam.ac.uk
abstract =
This theory is inspired by the HOL Light development of quaternions,
but follows its own route. Quaternions are developed coinductively, as
in the existing formalisation of the complex numbers. Quaternions are
quickly shown to belong to the type classes of real normed division
algebras and real inner product spaces. They therefore inherit a
great body of facts involving algebraic laws, limits, continuity,
etc., which must be proved explicitly in the HOL Light version. The
development concludes with the geometric interpretation of the product
of imaginary quaternions.
[Octonions]
title = Octonions
author = Angeliki Koutsoukou-Argyraki <http://www.cl.cam.ac.uk/~ak2110/>
topic = Mathematics/Algebra, Mathematics/Geometry
date = 2018-09-14
notify = ak2110@cam.ac.uk
abstract =
We develop the basic theory of Octonions, including various identities
and properties of the octonions and of the octonionic product, a
description of 7D isometries and representations of orthogonal
transformations. To this end we first develop the theory of the vector
cross product in 7 dimensions. The development of the theory of
Octonions is inspired by that of the theory of Quaternions by Lawrence
Paulson. However, we do not work within the type class real_algebra_1
because the octonionic product is not associative.
[Aggregation_Algebras]
title = Aggregation Algebras
author = Walter Guttmann <http://www.cosc.canterbury.ac.nz/walter.guttmann/>
topic = Mathematics/Algebra
date = 2018-09-15
notify = walter.guttmann@canterbury.ac.nz
abstract =
We develop algebras for aggregation and minimisation for weight
matrices and for edge weights in graphs. We verify the correctness of
Prim's and Kruskal's minimum spanning tree algorithms based
on these algebras. We also show numerous instances of these algebras
based on linearly ordered commutative semigroups.
[Prime_Number_Theorem]
title = The Prime Number Theorem
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>, Lawrence C. Paulson <https://www.cl.cam.ac.uk/~lp15/>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2018-09-19
notify = eberlm@in.tum.de
abstract =
<p>This article provides a short proof of the Prime Number
Theorem in several equivalent forms, most notably
&pi;(<em>x</em>) ~ <em>x</em>/ln
<em>x</em> where &pi;(<em>x</em>) is the
number of primes no larger than <em>x</em>. It also
defines other basic number-theoretic functions related to primes like
Chebyshev's functions &thetasym; and &psi; and the
&ldquo;<em>n</em>-th prime number&rdquo; function
p<sub><em>n</em></sub>. We also show various
bounds on and relationships between these functions. Lastly, we
derive Mertens' First and Second Theorem, i.&thinsp;e.
&sum;<sub><em>p</em>&le;<em>x</em></sub>
ln <em>p</em>/<em>p</em> = ln
<em>x</em> + <em>O</em>(1) and
&sum;<sub><em>p</em>&le;<em>x</em></sub>
1/<em>p</em> = ln ln <em>x</em> + M +
<em>O</em>(1/ln <em>x</em>). We also give
explicit bounds for the remainder terms.</p> <p>The proof
of the Prime Number Theorem builds on a library of Dirichlet series
and analytic combinatorics. We essentially follow the presentation by
Newman. The core part of the proof is a Tauberian theorem for
Dirichlet series, which is proven using complex analysis and then used
to strengthen Mertens' First Theorem to
&sum;<sub><em>p</em>&le;<em>x</em></sub>
ln <em>p</em>/<em>p</em> = ln
<em>x</em> + c + <em>o</em>(1).</p>
<p>A variant of this proof has been formalised before by
Harrison in HOL Light, and formalisations of Selberg's elementary
proof exist both by Avigad <em>et al.</em> in Isabelle and
by Carneiro in Metamath. The advantage of the analytic proof is that,
while it requires more powerful mathematical tools, it is considerably
shorter and clearer. This article attempts to provide a short and
clear formalisation of all components of that proof using the full
range of mathematical machinery available in Isabelle, staying as
close as possible to Newman's simple paper proof.</p>
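(Editor's aside, not part of the entry's metadata: the main statements mentioned above, in standard notation.)

    \[ \pi(x) \sim \frac{x}{\ln x}, \qquad
       \sum_{p \le x} \frac{\ln p}{p} = \ln x + O(1), \qquad
       \sum_{p \le x} \frac{1}{p} = \ln \ln x + M + O\!\left(\frac{1}{\ln x}\right), \]
    with the strengthened form of Mertens' First Theorem obtained via the Tauberian argument being
    \[ \sum_{p \le x} \frac{\ln p}{p} = \ln x + c + o(1). \]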
[Signature_Groebner]
title = Signature-Based Gröbner Basis Algorithms
author = Alexander Maletzky <https://risc.jku.at/m/alexander-maletzky/>
-topic = Mathematics/Algebra, Computer Science/Algorithms/Mathematical
+topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical
date = 2018-09-20
notify = alexander.maletzky@risc.jku.at
abstract =
<p>This article formalizes signature-based algorithms for computing
Gr&ouml;bner bases. Such algorithms are, in general, superior to
other algorithms in terms of efficiency, and have not been formalized
in any proof assistant so far. The present development is both
generic, in the sense that most known variants of signature-based
algorithms are covered by it, and effectively executable on concrete
input thanks to Isabelle's code generator. Sample computations of
benchmark problems show that the verified implementation of
signature-based algorithms indeed outperforms the existing
implementation of Buchberger's algorithm in Isabelle/HOL.</p>
<p>Besides total correctness of the algorithms, the article also proves
that under certain conditions they a priori detect and avoid all
useless zero-reductions, and always return 'minimal' (in
some sense) Gr&ouml;bner bases if an input parameter is chosen in
the right way.</p><p>The formalization follows the recent survey article by
Eder and Faug&egrave;re.</p>
[Factored_Transition_System_Bounding]
title = Upper Bounding Diameters of State Spaces of Factored Transition Systems
author = Friedrich Kurz <>, Mohammad Abdulaziz <http://home.in.tum.de/~mansour/>
-topic = Computer Science/Automata and Formal Languages, Mathematics/Graph Theory
+topic = Computer science/Automata and formal languages, Mathematics/Graph theory
date = 2018-10-12
notify = friedrich.kurz@tum.de, mohammad.abdulaziz@in.tum.de
abstract =
A completeness threshold is required to guarantee the completeness of
planning as satisfiability, and bounded model checking of safety
properties. One valid completeness threshold is the diameter of the
underlying transition system. The diameter is the maximum element in
the set of lengths of all shortest paths between pairs of states. The
diameter is not calculated exactly in our setting, where the
transition system is succinctly described using a (propositionally)
factored representation. Rather, an upper bound on the diameter is
calculated compositionally, by bounding the diameters of small
abstract subsystems, and then composing those. We port a HOL4
formalisation of a compositional algorithm for computing a relatively
tight upper bound on the system diameter. This compositional algorithm
exploits acyclicity in the state space to achieve compositionality,
and it was introduced by Abdulaziz et al. The formalisation that we
port is described as a part of another paper by Abdulaziz et al. As a
part of this porting we developed a library about transition systems,
which shall be of use in future related mechanisation efforts.
[Smooth_Manifolds]
title = Smooth Manifolds
author = Fabian Immler <http://home.in.tum.de/~immler/>, Bohua Zhan <http://lcs.ios.ac.cn/~bzhan/>
topic = Mathematics/Analysis, Mathematics/Topology
date = 2018-10-22
notify = immler@in.tum.de, bzhan@ios.ac.cn
abstract =
We formalize the definition and basic properties of smooth manifolds
in Isabelle/HOL. Concepts covered include partition of unity, tangent
and cotangent spaces, and the fundamental theorem of path integrals.
We also examine some concrete manifolds such as spheres and projective
spaces. The formalization makes extensive use of the analysis and
linear algebra libraries in Isabelle/HOL, in particular its
“types-to-sets” mechanism.
[Matroids]
title = Matroids
author = Jonas Keinholz<>
topic = Mathematics/Combinatorics
date = 2018-11-16
notify = eberlm@in.tum.de
abstract =
<p>This article defines the combinatorial structures known as
<em>Independence Systems</em> and
<em>Matroids</em> and provides basic concepts and theorems
related to them. These structures play an important role in
combinatorial optimisation, e.g. greedy algorithms such as
Kruskal's algorithm. The development is based on Oxley's
<a href="http://www.math.lsu.edu/~oxley/survey4.pdf">`What
is a Matroid?'</a>.</p>
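(Editor's aside, not part of the entry's metadata: the standard axioms behind the two structures named above, for a ground set $E$ and a family $\mathcal{I} \subseteq 2^E$ of independent sets; the entry's Isabelle definitions may be phrased differently.)

    \[ \text{Independence system:}\quad \emptyset \in \mathcal{I}, \qquad A \subseteq B \in \mathcal{I} \implies A \in \mathcal{I}. \]
    \[ \text{Matroid (additionally):}\quad A, B \in \mathcal{I},\ |A| < |B| \implies \exists x \in B \setminus A.\; A \cup \{x\} \in \mathcal{I}. \]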
[Graph_Saturation]
title = Graph Saturation
author = Sebastiaan J. C. Joosten<>
-topic = Logic/Rewriting, Mathematics/Graph Theory
+topic = Logic/Rewriting, Mathematics/Graph theory
date = 2018-11-23
notify = sjcjoosten@gmail.com
abstract =
This is an Isabelle/HOL formalisation of graph saturation, closely
following a <a href="https://doi.org/10.1016/j.jlamp.2018.06.005">paper by the author</a> on graph saturation.
Nine out of ten lemmas of the original paper are proven in this
formalisation. The formalisation additionally includes two theorems
that show the main premise of the paper: that consistency and
entailment are decided through graph saturation. This formalisation
does not give executable code and does not implement any of the
optimisations suggested in the paper.
[Functional_Ordered_Resolution_Prover]
title = A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover
author = Anders Schlichtkrull <https://people.compute.dtu.dk/andschl/>, Jasmin Christian Blanchette <mailto:j.c.blanchette@vu.nl>, Dmitriy Traytel <mailto:traytel@inf.ethz.ch>
topic = Logic/General logic/Mechanization of proofs
date = 2018-11-23
notify = andschl@dtu.dk,j.c.blanchette@vu.nl,traytel@inf.ethz.ch
abstract =
This Isabelle/HOL formalization refines the abstract ordered
resolution prover presented in Section 4.3 of Bachmair and
Ganzinger's "Resolution Theorem Proving" chapter in the
<i>Handbook of Automated Reasoning</i>. The result is a
functional implementation of a first-order prover.
[Auto2_HOL]
title = Auto2 Prover
author = Bohua Zhan <http://lcs.ios.ac.cn/~bzhan/>
topic = Tools
date = 2018-11-20
notify = bzhan@ios.ac.cn
abstract =
Auto2 is a saturation-based heuristic prover for higher-order logic,
implemented as a tactic in Isabelle. This entry contains the
instantiation of auto2 for Isabelle/HOL, along with two basic
examples: solutions to some of Pelletier's problems, and
elementary number theory of primes.
[Order_Lattice_Props]
title = Properties of Orderings and Lattices
author = Georg Struth <http://staffwww.dcs.shef.ac.uk/people/G.Struth/>
topic = Mathematics/Order
date = 2018-12-11
notify = g.struth@sheffield.ac.uk
abstract =
These components add further fundamental order and lattice-theoretic
concepts and properties to Isabelle's libraries. They follow by
and large the introductory sections of the Compendium of Continuous
Lattices, covering directed and filtered sets, down-closed and
up-closed sets, ideals and filters, Galois connections, closure and
co-closure operators. Some emphasis is on duality and morphisms
between structures, as in the Compendium. To this end, three ad-hoc
approaches to duality are compared.
[Quantales]
title = Quantales
author = Georg Struth <http://staffwww.dcs.shef.ac.uk/people/G.Struth/>
topic = Mathematics/Algebra
date = 2018-12-11
notify = g.struth@sheffield.ac.uk
abstract =
These mathematical components formalise basic properties of quantales,
together with some important models, constructions, and concepts,
including quantic nuclei and conuclei.
[Transformer_Semantics]
title = Transformer Semantics
author = Georg Struth <http://staffwww.dcs.shef.ac.uk/people/G.Struth/>
-topic = Mathematics/Algebra, Computer Science/Semantics
+topic = Mathematics/Algebra, Computer science/Semantics
date = 2018-12-11
notify = g.struth@sheffield.ac.uk
abstract =
These mathematical components formalise predicate transformer
semantics for programs, yet currently only for partial correctness and
in the absence of faults. A first part for isotone (or monotone),
Sup-preserving and Inf-preserving transformers follows Back and von
Wright's approach, with additional emphasis on the quantalic
structure of algebras of transformers. The second part develops
Sup-preserving and Inf-preserving predicate transformers from the
powerset monad, via its Kleisli category and Eilenberg-Moore algebras,
with emphasis on adjunctions and dualities, as well as isomorphisms
between relations, state transformers and predicate transformers.
[Concurrent_Revisions]
title = Formalization of Concurrent Revisions
author = Roy Overbeek <mailto:Roy.Overbeek@cwi.nl>
-topic = Computer Science/Concurrency
+topic = Computer science/Concurrency
date = 2018-12-25
notify = Roy.Overbeek@cwi.nl
abstract =
Concurrent revisions is a concurrency control model developed by
Microsoft Research. It has many interesting properties that
distinguish it from other well-known models such as transactional
memory. One of these properties is <em>determinacy</em>:
programs written within the model always produce the same outcome,
independent of scheduling activity. The concurrent revisions model has
an operational semantics, with an informal proof of determinacy. This
document contains an Isabelle/HOL formalization of this semantics and
the proof of determinacy.
[Core_DOM]
title = A Formal Model of the Document Object Model
author = Achim D. Brucker <https://www.brucker.ch/>, Michael Herzberg <http://www.dcs.shef.ac.uk/cgi-bin/makeperson?M.Herzberg>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2018-12-26
notify = adbrucker@0x5f.org
abstract =
In this AFP entry, we formalize the core of the Document Object Model
(DOM). At its core, the DOM defines a tree-like data structure for
representing documents in general and HTML documents in particular. It
is the heart of any modern web browser. Formalizing the key concepts
of the DOM is a prerequisite for the formal reasoning over client-side
JavaScript programs and for the analysis of security concepts in
modern web browsers. We present a formalization of the core DOM, with
focus on the node-tree and the operations defined on node-trees, in
Isabelle/HOL. We use the formalization to verify the functional
correctness of the most important functions defined in the DOM
standard. Moreover, our formalization is 1) extensible, i.e., it can be
extended without the need to re-prove already proven properties, and
2) executable, i.e., we can generate executable code from our
specification.
[Store_Buffer_Reduction]
title = A Reduction Theorem for Store Buffers
author = Ernie Cohen <mailto:ecohen@amazon.com>, Norbert Schirmer <mailto:norbert.schirmer@web.de>
-topic = Computer Science/Concurrency
+topic = Computer science/Concurrency
date = 2019-01-07
notify = norbert.schirmer@web.de
abstract =
When verifying a concurrent program, it is usual to assume that memory
is sequentially consistent. However, most modern multiprocessors
depend on store buffering for efficiency, and provide native
sequential consistency only at a substantial performance penalty. To
regain sequential consistency, a programmer has to follow an
appropriate programming discipline. However, na&iuml;ve disciplines,
such as protecting all shared accesses with locks, are not flexible
enough for building high-performance multiprocessor software. We
present a new discipline for concurrent programming under TSO (total
store order, with store buffer forwarding). It does not depend on
concurrency primitives, such as locks. Instead, threads use ghost
operations to acquire and release ownership of memory addresses. A
thread can write to an address only if no other thread owns it, and
can read from an address only if it owns it or it is shared and the
thread has flushed its store buffer since it last wrote to an address
it did not own. This discipline covers both coarse-grained concurrency
(where data is protected by locks) and fine-grained concurrency
(where atomic operations race to memory). We formalize this
discipline in Isabelle/HOL, and prove that if every execution of a
program in a system without store buffers follows the discipline, then
every execution of the program with store buffers is sequentially
consistent. Thus, we can show sequential consistency under TSO by
ordinary assertional reasoning about the program, without having to
consider store buffers at all.
[IMP2]
title = IMP2 – Simple Program Verification in Isabelle/HOL
author = Peter Lammich <http://www21.in.tum.de/~lammich>, Simon Wimmer <http://in.tum.de/~wimmers>
-topic = Computer Science/Programming Languages/Logics, Computer Science/Algorithms
+topic = Computer science/Programming languages/Logics, Computer science/Algorithms
date = 2019-01-15
notify = lammich@in.tum.de
abstract =
IMP2 is a simple imperative language together with Isabelle tooling to
create a program verification environment in Isabelle/HOL. The tools
include a C-like syntax, a verification condition generator, and
Isabelle commands for the specification of programs. The framework is
modular, i.e., it allows easy reuse of already proved programs within
larger programs. This entry comes with a quickstart guide and a large
collection of examples, spanning basic algorithms with simple proofs
to more advanced algorithms and proof techniques like data refinement.
Some highlights from the examples are: <ul> <li>Bisection
Square Root, </li> <li>Extended Euclid, </li>
<li>Exponentiation by Squaring, </li> <li>Binary
Search, </li> <li>Insertion Sort, </li>
<li>Quicksort, </li> <li>Depth First Search.
</li> </ul> The abstract syntax and semantics are very
simple and well-documented. They are suitable to be used in a course,
as an extension to the IMP language which comes with the Isabelle
distribution. While this entry is limited to a simple imperative
language, the ideas could be extended to more sophisticated languages.
[Farkas]
title = Farkas' Lemma and Motzkin's Transposition Theorem
author = Ralph Bottesch <http://cl-informatik.uibk.ac.at/users/bottesch/>, Max W. Haslbeck <http://cl-informatik.uibk.ac.at/users/mhaslbeck/>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann/>
topic = Mathematics/Algebra
date = 2019-01-17
notify = rene.thiemann@uibk.ac.at
abstract =
We formalize a proof of Motzkin's transposition theorem and
Farkas' lemma in Isabelle/HOL. Our proof is based on the
formalization of the simplex algorithm which, given a set of linear
constraints, either returns a satisfying assignment to the problem or
detects unsatisfiability. By reusing facts about the simplex algorithm
we show that a set of linear constraints is unsatisfiable if and only
if there is a linear combination of the constraints which evaluates to
a trivially unsatisfiable inequality.
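(Editor's aside, not part of the entry's metadata: one standard textbook form of Farkas' lemma, which may differ from the exact formulation derived in the entry via the simplex formalization. For a real matrix $A$ and vector $b$, exactly one of the following holds.)

    \[ \exists x \ge 0.\; A x = b
       \qquad\text{or}\qquad
       \exists y.\; y^{\mathsf T} A \ge 0 \;\wedge\; y^{\mathsf T} b < 0. \]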
[Auto2_Imperative_HOL]
title = Verifying Imperative Programs using Auto2
author = Bohua Zhan <http://lcs.ios.ac.cn/~bzhan/>
-topic = Computer Science/Algorithms, Computer Science/Data Structures
+topic = Computer science/Algorithms, Computer science/Data structures
date = 2018-12-21
notify = bzhan@ios.ac.cn
abstract =
This entry contains the application of auto2 to verifying functional
and imperative programs. Algorithms and data structures that are
verified include linked lists, binary search trees, red-black trees,
interval trees, priority queue, quicksort, union-find, Dijkstra's
algorithm, and a sweep-line algorithm for detecting rectangle
intersection. The imperative verification is based on Imperative HOL
and its separation logic framework. A major goal of this work is to
set up automation in order to reduce the length of proof that the user
needs to provide, both for verifying functional programs and for
working with separation logic.
[UTP]
title = Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming
author = Simon Foster <https://www-users.cs.york.ac.uk/~simonf/>, Frank Zeyda<>, Yakoub Nemouchi <mailto:yakoub.nemouchi@york.ac.uk>, Pedro Ribeiro<>, Burkhart Wolff<mailto:wolff@lri.fr>
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
date = 2019-02-01
notify = simon.foster@york.ac.uk
abstract =
Isabelle/UTP is a mechanised theory engineering toolkit based on Hoare
and He’s Unifying Theories of Programming (UTP). UTP enables the
creation of denotational, algebraic, and operational semantics for
different programming languages using an alphabetised relational
calculus. We provide a semantic embedding of the alphabetised
relational calculus in Isabelle/HOL, including new type definitions,
relational constructors, automated proof tactics, and accompanying
algebraic laws. Isabelle/UTP can be used to both capture laws of
programming for different languages, and put these fundamental
theorems to work in the creation of associated verification tools,
using calculi like Hoare logics. This document describes the
relational core of the UTP in Isabelle/HOL.
[HOL-CSP]
title = HOL-CSP Version 2.0
author = Safouan Taha <mailto:safouan.taha@lri.fr>, Lina Ye <mailto:lina.ye@lri.fr>, Burkhart Wolff<mailto:wolff@lri.fr>
-topic = Computer Science/Concurrency/Process Calculi, Computer Science/Semantics
+topic = Computer science/Concurrency/Process calculi, Computer science/Semantics
date = 2019-04-26
notify = wolff@lri.fr
abstract =
This is a complete formalization of the work of Hoare and Roscoe on
the denotational semantics of the Failure/Divergence Model of CSP. It
essentially follows the presentation of CSP in Roscoe's book "Theory
and Practice of Concurrency" [8] and the semantic details in a joint
paper by Roscoe and Brooks, "An improved failures model for
communicating processes". The present work is based on a prior
formalization attempt, called HOL-CSP 1.0, done in 1997 by H. Tej and
B. Wolff with the Isabelle proof technology available at that time.
This work revealed minor but omnipresent foundational errors in key
concepts like the process invariant. The present version of HOL-CSP
profits from substantially improved libraries (notably HOLCF),
improved automated proof techniques, and structured proof techniques
in Isar and is substantially shorter but more complete.
[Probabilistic_Prime_Tests]
title = Probabilistic Primality Testing
author = Daniel Stüwe<>, Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2019-02-11
notify = eberlm@in.tum.de
abstract =
<p>The most efficient known primality tests are
<em>probabilistic</em> in the sense that they use
randomness and may, with some probability, mistakenly classify a
composite number as prime &ndash; but never a prime number as
composite. Examples of this are the Miller&ndash;Rabin test, the
Solovay&ndash;Strassen test, and (in most cases) Fermat's
test.</p> <p>This entry defines these three tests and
proves their correctness. It also develops some of the
number-theoretic foundations, such as Carmichael numbers and the
Jacobi symbol with an efficient executable algorithm to compute
it.</p>
[Kruskal]
title = Kruskal's Algorithm for Minimum Spanning Forest
author = Maximilian P.L. Haslbeck <http://in.tum.de/~haslbema/>, Peter Lammich <http://www21.in.tum.de/~lammich>, Julian Biendarra<>
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
date = 2019-02-14
notify = haslbema@in.tum.de, lammich@in.tum.de
abstract =
This Isabelle/HOL formalization defines a greedy algorithm for finding
a minimum weight basis on a weighted matroid and proves its
correctness. This algorithm is an abstract version of Kruskal's
algorithm. We interpret the abstract algorithm for the cycle matroid
(i.e. forests in a graph) and refine it to imperative executable code
using an efficient union-find data structure. Our formalization can
be instantiated for different graph representations. We provide
instantiations for undirected graphs and symmetric directed graphs.
[List_Inversions]
title = The Inversions of a List
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2019-02-01
notify = eberlm@in.tum.de
abstract =
<p>This entry defines the set of <em>inversions</em>
of a list, i.e. the pairs of indices that violate sortedness. It also
proves the correctness of the well-known
<em>O</em>(<em>n log n</em>)
divide-and-conquer algorithm to compute the number of
inversions.</p>
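(Editor's aside, not part of the entry's metadata: a minimal executable sketch, in plain Scala rather than the entry's Isabelle/HOL, of the divide-and-conquer counting algorithm mentioned above; the object and method names are illustrative only.)

    // Counts the inversions of a sequence in O(n log n) by merge sort:
    // cross inversions are counted while merging the two sorted halves.
    object Inversions {
      def countInversions(xs: Vector[Int]): (Vector[Int], Long) =
        if (xs.length <= 1) (xs, 0L)
        else {
          val (l, r) = xs.splitAt(xs.length / 2)
          val (ls, lc) = countInversions(l)
          val (rs, rc) = countInversions(r)
          val buf = Vector.newBuilder[Int]
          var i = 0; var j = 0; var cross = 0L
          while (i < ls.length && j < rs.length) {
            if (ls(i) <= rs(j)) { buf += ls(i); i += 1 }
            else { buf += rs(j); j += 1; cross += ls.length - i } // every remaining ls element inverts with rs(j)
          }
          buf ++= ls.drop(i); buf ++= rs.drop(j)
          (buf.result(), lc + rc + cross)
        }

      def main(args: Array[String]): Unit =
        println(countInversions(Vector(3, 1, 2))._2) // prints 2
    }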
[Prime_Distribution_Elementary]
title = Elementary Facts About the Distribution of Primes
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2019-02-21
notify = eberlm@in.tum.de
abstract =
<p>This entry is a formalisation of Chapter 4 (and parts of
Chapter 3) of Apostol's <a
href="https://www.springer.com/de/book/9780387901633"><em>Introduction
to Analytic Number Theory</em></a>. The main topics that
are addressed are properties of the distribution of prime numbers that
can be shown in an elementary way (i.&thinsp;e. without the Prime
Number Theorem), the various equivalent forms of the PNT (which imply
each other in elementary ways), and consequences that follow from the
PNT in elementary ways. The latter include, most notably, asymptotic
bounds for the number of distinct prime factors of
<em>n</em>, the divisor function
<em>d(n)</em>, Euler's totient function
<em>&phi;(n)</em>, and
lcm(1,&hellip;,<em>n</em>).</p>
[Safe_OCL]
title = Safe OCL
author = Denis Nikiforov <>
-topic = Computer Science/Programming Languages/Language Definitions
+topic = Computer science/Programming languages/Language definitions
license = LGPL
date = 2019-03-09
notify = denis.nikif@gmail.com
abstract =
<p>The theory is a formalization of the
<a href="https://www.omg.org/spec/OCL/">OCL</a> type system, its abstract
syntax and expression typing rules. The theory does not define a concrete
syntax and a semantics. In contrast to
<a href="https://www.isa-afp.org/entries/Featherweight_OCL.html">Featherweight OCL</a>,
it is based on a deep embedding approach. The type system is defined from scratch,
it is not based on the Isabelle HOL type system.</p>
<p>Safe OCL distinguishes nullable and non-nullable types. The theory also gives a
formal definition of <a href="http://ceur-ws.org/Vol-1512/paper07.pdf">safe
navigation operations</a>. The Safe OCL typing rules are much stricter than the rules
given in the OCL specification, which allows one to catch more errors during the
type-checking phase.</p>
<p>The type theory presented is four-layered: classes, basic types, generic types,
errorable types. We introduce the following new types: non-nullable types (T[1]),
nullable types (T[?]), OclSuper. OclSuper is a supertype of all other types (basic
types, collections, tuples). This type allows us to define a total supremum function,
so types form an upper semilattice. It allows us to define rich expression typing
rules in an elegant manner.</p>
<p>The Preliminaries Chapter of the theory defines a number of helper lemmas for
transitive closures and tuples. It defines also a generic object model independent
from OCL. It allows one to use the theory as a reference for formalization of analogous languages.</p>
[QHLProver]
title = Quantum Hoare Logic
author = Junyi Liu<>, Bohua Zhan <http://lcs.ios.ac.cn/~bzhan/>, Shuling Wang<>, Shenggang Ying<>, Tao Liu<>, Yangjia Li<>, Mingsheng Ying<>, Naijun Zhan<>
-topic = Computer Science/Programming Languages/Logics, Computer Science/Semantics
+topic = Computer science/Programming languages/Logics, Computer science/Semantics
date = 2019-03-24
notify = bzhan@ios.ac.cn
abstract =
We formalize quantum Hoare logic as given in [1]. In particular, we
specify the syntax and denotational semantics of a simple model of
quantum programs. Then, we write down the rules of quantum Hoare logic
for partial correctness, and show the soundness and completeness of
the resulting proof system. As an application, we verify the
correctness of Grover’s algorithm.
[Transcendence_Series_Hancl_Rucki]
title = The Transcendence of Certain Infinite Series
author = Angeliki Koutsoukou-Argyraki <https://www.cl.cam.ac.uk/~ak2110/>, Wenda Li <https://www.cl.cam.ac.uk/~wl302/>
-topic = Mathematics/Analysis, Mathematics/Number Theory
+topic = Mathematics/Analysis, Mathematics/Number theory
date = 2019-03-27
notify = wl302@cam.ac.uk, ak2110@cam.ac.uk
abstract =
We formalize the proofs of two transcendence criteria by J. Hančl
and P. Rucki that assert the transcendence of the sums of certain
infinite series built up by sequences that fulfil certain properties.
Both proofs make use of Roth's celebrated theorem on Diophantine
approximations to algebraic numbers from 1955, which we implement as
an assumption without having formalised its proof.
[Binding_Syntax_Theory]
title = A General Theory of Syntax with Bindings
author = Lorenzo Gheri <mailto:lor.gheri@gmail.com>, Andrei Popescu <mailto:a.popescu@mdx.ac.uk>
-topic = Computer Science/Programming Languages/Lambda Calculi, Computer Science/Functional Programming, Logic/General logic/Mechanization of proofs
+topic = Computer science/Programming languages/Lambda calculi, Computer science/Functional programming, Logic/General logic/Mechanization of proofs
date = 2019-04-06
notify = a.popescu@mdx.ac.uk, lor.gheri@gmail.com
abstract =
We formalize a theory of syntax with bindings that has been developed
and refined over the last decade to support several large
formalization efforts. Terms are defined for an arbitrary number of
constructors of varying numbers of inputs, quotiented to
alpha-equivalence and sorted according to a binding signature. The
theory includes many properties of the standard operators on terms:
substitution, swapping and freshness. It also includes bindings-aware
induction and recursion principles and support for semantic
interpretation. This work has been presented in the ITP 2017 paper “A
Formalized General Theory of Syntax with Bindings”.
[LTL_Master_Theorem]
title = A Compositional and Unified Translation of LTL into ω-Automata
author = Benedikt Seidl <mailto:benedikt.seidl@tum.de>, Salomon Sickert <mailto:s.sickert@tum.de>
-topic = Computer Science/Automata and Formal Languages
+topic = Computer science/Automata and formal languages
date = 2019-04-16
notify = benedikt.seidl@tum.de, s.sickert@tum.de
abstract =
We present a formalisation of the unified translation approach of
linear temporal logic (LTL) into ω-automata from [1]. This approach
decomposes LTL formulas into ``simple'' languages and allows
a clear separation of concerns: first, we formalise the purely logical
result yielding this decomposition; second, we instantiate this
generic theory to obtain a construction for deterministic
(state-based) Rabin automata (DRA). We extract from this particular
instantiation an executable tool translating LTL to DRAs. To the best
of our knowledge this is the first verified translation from LTL to
DRAs that is proven to be double exponential in the worst case, which
asymptotically matches the known lower bound.
<p>
[1] Javier Esparza, Jan Kretínský, Salomon Sickert. One Theorem to Rule Them All:
A Unified Translation of LTL into ω-Automata. LICS 2018
[LambdaAuth]
title = Formalization of Generic Authenticated Data Structures
author = Matthias Brun<>, Dmitriy Traytel <http://people.inf.ethz.ch/trayteld/>
-topic = Computer Science/Security, Computer Science/Programming Languages/Lambda Calculi
+topic = Computer science/Security, Computer science/Programming languages/Lambda calculi
date = 2019-05-14
notify = traytel@inf.ethz.ch
abstract =
Authenticated data structures are a technique for outsourcing data
storage and maintenance to an untrusted server. The server is required
to produce an efficiently checkable and cryptographically secure proof
that it carried out precisely the requested computation. <a
href="https://doi.org/10.1145/2535838.2535851">Miller et
al.</a> introduced &lambda;&bull; (pronounced
<i>lambda auth</i>)&mdash;a functional programming
language with a built-in primitive authentication construct, which
supports a wide range of user-specified authenticated data structures
while guaranteeing certain correctness and security properties for all
well-typed programs. We formalize &lambda;&bull; and prove its
correctness and security properties. With Isabelle's help, we
uncover and repair several mistakes in the informal proofs and lemma
statements. Our findings are summarized in a <a
href="http://people.inf.ethz.ch/trayteld/papers/lambdaauth/lambdaauth.pdf">paper
draft</a>.
[IMP2_Binary_Heap]
title = Binary Heaps for IMP2
author = Simon Griebel<>
-topic = Computer Science/Data Structures, Computer Science/Algorithms
+topic = Computer science/Data structures, Computer science/Algorithms
date = 2019-06-13
notify = s.griebel@tum.de
abstract =
In this submission, array-based binary minimum heaps are formalized.
The correctness of the following heap operations is proved: insert,
get-min, delete-min and make-heap. These are then used to verify an
in-place heapsort. The formalization is based on IMP2, an imperative
program verification framework implemented in Isabelle/HOL. The
verified heap functions are iterative versions of the partly recursive
functions found in "Algorithms and Data Structures – The Basic
Toolbox" by K. Mehlhorn and P. Sanders and "Introduction to
Algorithms" by T. H. Cormen, C. E. Leiserson, R. L. Rivest and C.
Stein.
[Groebner_Macaulay]
title = Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds
author = Alexander Maletzky <https://risc.jku.at/m/alexander-maletzky/>
topic = Mathematics/Algebra
date = 2019-06-15
notify = alexander.maletzky@risc.jku.at
abstract =
This entry formalizes the connection between Gröbner bases and
Macaulay matrices (sometimes also referred to as `generalized
Sylvester matrices'). In particular, it contains a method for
computing Gröbner bases, which proceeds by first constructing some
Macaulay matrix of the initial set of polynomials, then row-reducing
this matrix, and finally converting the result back into a set of
polynomials. The output is shown to be a Gröbner basis if the Macaulay
matrix constructed in the first step is sufficiently large. In order
to obtain concrete upper bounds on the size of the matrix (and hence
turn the method into an effectively executable algorithm), Dubé's
degree bounds on Gröbner bases are utilized; consequently, they are
also part of the formalization.
[Linear_Inequalities]
title = Linear Inequalities
author = Ralph Bottesch <http://cl-informatik.uibk.ac.at/users/bottesch/>, Alban Reynaud <>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann/>
topic = Mathematics/Algebra
date = 2019-06-21
notify = rene.thiemann@uibk.ac.at
abstract =
We formalize results about linear inequalities, mainly from
Schrijver's book. The main results are the proof of the
fundamental theorem on linear inequalities, Farkas' lemma,
Carathéodory's theorem, the Farkas-Minkowski-Weyl theorem, the
decomposition theorem of polyhedra, and Meyer's result that the
integer hull of a polyhedron is a polyhedron itself. Several theorems
include bounds on the appearing numbers, and in particular we provide
an a-priori bound on mixed-integer solutions of linear inequalities.
[Linear_Programming]
title = Linear Programming
author = Julian Parsert <http://www.parsert.com/>, Cezary Kaliszyk <http://cl-informatik.uibk.ac.at/cek/>
topic = Mathematics/Algebra
date = 2019-08-06
notify = julian.parsert@gmail.com, cezary.kaliszyk@uibk.ac.at
abstract =
We use the previous formalization of the general simplex algorithm to
formulate an algorithm for solving linear programs. We encode the
linear programs using only linear constraints. Solving these
constraints also solves the original linear program. This algorithm is
proven to be sound by applying the weak duality theorem which is also
part of this formalization.
[Differential_Game_Logic]
title = Differential Game Logic
author = André Platzer <http://www.cs.cmu.edu/~aplatzer/>
-topic = Computer Science/Programming Languages/Logics
+topic = Computer science/Programming languages/Logics
date = 2019-06-03
notify = aplatzer@cs.cmu.edu
abstract =
This formalization provides differential game logic (dGL), a logic for
proving properties of hybrid games. In addition to the syntax and
semantics, it formalizes a uniform substitution calculus for dGL.
Church's uniform substitutions substitute a term or formula for a
function or predicate symbol everywhere. The uniform substitutions for
dGL also substitute hybrid games for a game symbol everywhere. We
prove soundness of one-pass uniform substitutions and the axioms of
differential game logic with respect to their denotational semantics.
One-pass uniform substitutions are faster because they postpone
soundness-critical admissibility checks during a single linear, homomorphic
application pass, and they regain soundness via a variable condition at the
replacements. The formalization is based on prior non-mechanized
soundness proofs for dGL.
[Complete_Non_Orders]
title = Complete Non-Orders and Fixed Points
author = Akihisa Yamada <http://group-mmm.org/~ayamada/>, Jérémy Dubut <http://group-mmm.org/~dubut/>
topic = Mathematics/Order
date = 2019-06-27
notify = akihisayamada@nii.ac.jp, dubut@nii.ac.jp
abstract =
We develop an Isabelle/HOL library of order-theoretic concepts, such
as various completeness conditions and fixed-point theorems. We keep
our formalization as general as possible: we reprove several
well-known results about complete orders, often without any properties
of the ordering, thus for complete non-orders. In particular, we generalize
the Knaster–Tarski theorem so that we ensure the existence of a
quasi-fixed point of monotone maps over complete non-orders, and show
that the set of quasi-fixed points is complete under a mild
condition—attractivity—which is implied by either antisymmetry or
transitivity. This result generalizes and strengthens a result by
Stauti and Maaden. Finally, we recover Kleene’s fixed-point theorem
for omega-complete non-orders, again using attractivity to prove that
Kleene’s fixed points are least quasi-fixed points.
[Priority_Search_Trees]
title = Priority Search Trees
author = Peter Lammich <http://www21.in.tum.de/~lammich>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2019-06-25
notify = lammich@in.tum.de
abstract =
We present a new, purely functional, simple and efficient data
structure combining a search tree and a priority queue, which we call
a <em>priority search tree</em>. The salient feature of priority search
trees is that they offer a decrease-key operation, something that is
missing from other simple, purely functional priority queue
implementations. Priority search trees can be implemented on top of
any search tree. This entry does the implementation for red-black
trees. This entry formalizes the first part of our ITP-2019 proof
pearl <em>Purely Functional, Simple and Efficient Priority
Search Trees and Applications to Prim and Dijkstra</em>.
[Prim_Dijkstra_Simple]
title = Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra
author = Peter Lammich <http://www21.in.tum.de/~lammich>, Tobias Nipkow <http://www21.in.tum.de/~nipkow>
-topic = Computer Science/Algorithms/Graph
+topic = Computer science/Algorithms/Graph
date = 2019-06-25
notify = lammich@in.tum.de
abstract =
We verify purely functional, simple and efficient implementations of
Prim's and Dijkstra's algorithms. This constitutes the first
verification of an executable and even efficient version of
Prim's algorithm. This entry formalizes the second part of our
ITP-2019 proof pearl <em>Purely Functional, Simple and Efficient
Priority Search Trees and Applications to Prim and Dijkstra</em>.
[MFOTL_Monitor]
title = Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic
author = Joshua Schneider <mailto:joshua.schneider@inf.ethz.ch>, Dmitriy Traytel <http://people.inf.ethz.ch/trayteld/>
-topic = Computer Science/Algorithms, Logic/General logic/Temporal logic, Computer Science/Automata and Formal Languages
+topic = Computer science/Algorithms, Logic/General logic/Temporal logic, Computer science/Automata and formal languages
date = 2019-07-04
notify = joshua.schneider@inf.ethz.ch, traytel@inf.ethz.ch
abstract =
A monitor is a runtime verification tool that solves the following
problem: Given a stream of time-stamped events and a policy formulated
in a specification language, decide whether the policy is satisfied at
every point in the stream. We verify the correctness of an executable
monitor for specifications given as formulas in metric first-order
temporal logic (MFOTL), an expressive extension of linear temporal
logic with real-time constraints and first-order quantification. The
verified monitor implements a simplified variant of the algorithm used
in the efficient MonPoly monitoring tool. The formalization is
presented in a forthcoming <a
href="http://people.inf.ethz.ch/trayteld/papers/rv19-verimon/verimon.pdf">RV
2019 paper</a>, which also compares the output of the verified
monitor to that of other monitoring tools on randomly generated
inputs. This case study revealed several errors in the optimized but
unverified tools.
[FOL_Seq_Calc1]
title = A Sequent Calculus for First-Order Logic
author = Asta Halkjær From <https://people.compute.dtu.dk/ahfrom/>
contributors = Alexander Birch Jensen <https://people.compute.dtu.dk/aleje/>,
Anders Schlichtkrull <https://people.compute.dtu.dk/andschl/>,
Jørgen Villadsen <https://people.compute.dtu.dk/jovi/>
topic = Logic/Proof theory
date = 2019-07-18
notify = ahfrom@dtu.dk
abstract =
This work formalizes soundness and completeness of a one-sided sequent
calculus for first-order logic. The completeness is shown via a
translation from a complete semantic tableau calculus, the proof of
which is based on the First-Order Logic According to Fitting theory.
The calculi and proof techniques are taken from Ben-Ari's
Mathematical Logic for Computer Science.
[Szpilrajn]
title = Szpilrajn Extension Theorem
author = Peter Zeller <mailto:p_zeller@cs.uni-kl.de>
topic = Mathematics/Order
date = 2019-07-27
notify = p_zeller@cs.uni-kl.de
abstract =
We formalize the Szpilrajn extension theorem, also known as the
order-extension principle: every strict partial order can be extended
to a strict linear order.
[TESL_Language]
title = A Formal Development of a Polychronous Polytimed Coordination Language
author = Hai Nguyen Van <mailto:hai.nguyenvan.phie@gmail.com>, Frédéric Boulanger <mailto:frederic.boulanger@centralesupelec.fr>, Burkhart Wolff <mailto:burkhart.wolff@lri.fr>
-topic = Computer Science/System Description Languages, Computer Science/Semantics, Computer Science/Concurrency
+topic = Computer science/System description languages, Computer science/Semantics, Computer science/Concurrency
date = 2019-07-30
notify = frederic.boulanger@centralesupelec.fr, burkhart.wolff@lri.fr
abstract =
The design of complex systems involves different formalisms for
modeling their different parts or aspects. The global model of a
system may therefore consist of a coordination of concurrent
sub-models that use different paradigms. We develop here a theory for
a language used to specify the timed coordination of such
heterogeneous subsystems by addressing the following issues:
<ul><li>the
behavior of the sub-systems is observed only at a series of discrete
instants,</li><li>events may occur in different sub-systems at unrelated
times, leading to polychronous systems, which do not necessarily have
a common base clock,</li><li>coordination between subsystems involves
causality, so the occurrence of an event may enforce the occurrence of
other events, possibly after a certain duration has elapsed or an
event has occurred a given number of times,</li><li>the domain of time
(discrete, rational, continuous...) may be different in the
subsystems, leading to polytimed systems,</li><li>the time frames of
different sub-systems may be related (for instance, time in a GPS
satellite and in a GPS receiver on Earth are related although they are
not the same).</li></ul>
Firstly, a denotational semantics of the language is
defined. Then, in order to be able to incrementally check the behavior
of systems, an operational semantics is given, with proofs of
progress, soundness and completeness with regard to the denotational
semantics. These proofs are made according to a setup that can scale
up when new operators are added to the language. In order for
specifications to be composed in a clean way, the language should be
invariant by stuttering (i.e., adding observation instants at which
nothing happens). The proof of this invariance is also given.
[Stellar_Quorums]
title = Stellar Quorum Systems
author = Giuliano Losa <mailto:giuliano@galois.com>
-topic = Computer Science/Algorithms/Distributed
+topic = Computer science/Algorithms/Distributed
date = 2019-08-01
notify = giuliano@galois.com
abstract =
We formalize the static properties of personal Byzantine quorum
systems (PBQSs) and Stellar quorum systems, as described in the paper
``Stellar Consensus by Reduction'' (to appear at DISC 2019).
[IMO2019]
title = Selected Problems from the International Mathematical Olympiad 2019
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
topic = Mathematics/Misc
date = 2019-08-05
notify = eberlm@in.tum.de
abstract =
<p>This entry contains formalisations of the answers to three of
the six problems of the International Mathematical Olympiad 2019,
namely Q1, Q4, and Q5.</p> <p>The reason why these
problems were chosen is that they are particularly amenable to
formalisation: they can be solved with minimal use of libraries. The
remaining three concern geometry and graph theory, which, in the
author's opinion, are more difficult to formalise or require a
more complex library, respectively.</p>
[Adaptive_State_Counting]
title = Formalisation of an Adaptive State Counting Algorithm
author = Robert Sachtleben <mailto:rob_sac@uni-bremen.de>
-topic = Computer Science/Automata and Formal Languages, Computer Science/Algorithms
+topic = Computer science/Automata and formal languages, Computer science/Algorithms
date = 2019-08-16
notify = rob_sac@uni-bremen.de
abstract =
This entry provides a formalisation of a refinement of an adaptive
state counting algorithm, used to test for reduction between finite
state machines. The algorithm has been originally presented by Hierons
in the paper <a
href="https://doi.org/10.1109/TC.2004.85">Testing from a
Non-Deterministic Finite State Machine Using Adaptive State
Counting</a>. Definitions for finite state machines and
adaptive test cases are given and many useful theorems are derived
from these. The algorithm is formalised using mutually recursive
functions, for which it is proven that the generated test suite is
sufficient to test for reduction against finite state machines of a
certain fault domain. Additionally, the algorithm is specified in a
simple WHILE-language and its correctness is shown using Hoare-logic.
[Jacobson_Basic_Algebra]
title = A Case Study in Basic Algebra
author = Clemens Ballarin <http://www21.in.tum.de/~ballarin/>
topic = Mathematics/Algebra
date = 2019-08-30
notify = ballarin@in.tum.de
abstract =
The focus of this case study is re-use in abstract algebra. It
contains locale-based formalisations of selected parts of set, group
and ring theory from Jacobson's <i>Basic Algebra</i>
leading to the respective fundamental homomorphism theorems. The
study is not intended as a library base for abstract algebra. It
rather explores an approach towards abstract algebra in Isabelle.
[Hybrid_Systems_VCs]
title = Verification Components for Hybrid Systems
author = Jonathan Julian Huerta y Munive <>
topic = Mathematics/Algebra, Mathematics/Analysis
date = 2019-09-10
notify = jjhuertaymunive1@sheffield.ac.uk, jonjulian23@gmail.com
abstract =
These components formalise a semantic framework for the deductive
verification of hybrid systems. They support reasoning about
continuous evolutions of hybrid programs in the style of differential
dynamics logic. Vector fields or flows model these evolutions, and
their verification is done with invariants for the former or orbits
for the latter. Laws of modal Kleene algebra or categorical predicate
transformers implement the verification condition generation. Examples
show the approach at work.
[Generic_Join]
title = Formalization of Multiway-Join Algorithms
author = Thibault Dardinier<>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2019-09-16
notify = tdardini@student.ethz.ch, traytel@inf.ethz.ch
abstract =
Worst-case optimal multiway-join algorithms are a recent seminal
achievement of the database community. These algorithms compute the
natural join of multiple relational databases and improve in the worst
case over traditional query plan optimizations of nested binary joins.
In 2014, <a
href="https://doi.org/10.1145/2590989.2590991">Ngo, Ré,
and Rudra</a> gave a unified presentation of different multi-way
join algorithms. We formalized and proved correct their "Generic
Join" algorithm and extended it to support negative joins.
[Aristotles_Assertoric_Syllogistic]
title = Aristotle's Assertoric Syllogistic
author = Angeliki Koutsoukou-Argyraki <https://www.cl.cam.ac.uk/~ak2110/>
topic = Logic/Philosophical aspects
date = 2019-10-08
notify = ak2110@cam.ac.uk
abstract =
We formalise with Isabelle/HOL some basic elements of Aristotle's
assertoric syllogistic following the <a
href="https://plato.stanford.edu/entries/aristotle-logic/">article from the Stanford Encyclopedia of Philosophy by Robin Smith.</a> To
this end, we use a set theoretic formulation (covering both individual
and general predication). In particular, we formalise the deductions
in the Figures and after that we present Aristotle's
metatheoretical observation that all deductions in the Figures can in
fact be reduced to either Barbara or Celarent. As the formal proofs
turn out to be straightforward, the interest of this entry lies in
illustrating the functionality of Isabelle and the high efficiency of
Sledgehammer for simple exercises in philosophy.
[VerifyThis2019]
title = VerifyThis 2019 -- Polished Isabelle Solutions
author = Peter Lammich<>, Simon Wimmer<http://home.in.tum.de/~wimmers/>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2019-10-16
notify = lammich@in.tum.de, wimmers@in.tum.de
abstract =
VerifyThis 2019 (http://www.pm.inf.ethz.ch/research/verifythis.html)
was a program verification competition associated with ETAPS 2019. It
was the 8th event in the VerifyThis competition series. In this entry,
we present polished and completed versions of our solutions that we
created during the competition.
[ZFC_in_HOL]
title = Zermelo Fraenkel Set Theory in Higher-Order Logic
author = Lawrence C. Paulson <https://www.cl.cam.ac.uk/~lp15/>
-topic = Mathematics/Set Theory
+topic = Logic/Set theory
date = 2019-10-24
notify = lp15@cam.ac.uk
abstract =
<p>This entry is a new formalisation of ZFC set theory in Isabelle/HOL. It is
logically equivalent to Obua's HOLZF; the point is to have the closest
possible integration with the rest of Isabelle/HOL, minimising the amount of
new notations and exploiting type classes.</p>
<p>There is a type <em>V</em> of sets and a function <em>elts :: V =&gt; V
set</em> mapping a set to its elements. Classes simply have type <em>V
set</em>, and a predicate identifies the small classes: those that correspond
to actual sets. Type classes connected with orders and lattices are used to
minimise the amount of new notation for concepts such as the subset relation,
union and intersection. Basic concepts — Cartesian products, disjoint sums,
natural numbers, functions, etc. — are formalised.</p>
<p>More advanced set-theoretic concepts, such as transfinite induction,
ordinals, cardinals and the transitive closure of a set, are also provided.
The definition of addition and multiplication for general sets (not just
ordinals) follows Kirby.</p>
<p>The theory provides two type classes with the aim of facilitating
developments that combine <em>V</em> with other Isabelle/HOL types:
<em>embeddable</em>, the class of types that can be injected into <em>V</em>
(including <em>V</em> itself as well as <em>V*V</em>, etc.), and
<em>small</em>, the class of types that correspond to some ZF set.</p>
extra-history =
Change history:
[2020-01-28]: Generalisation of the "small" predicate and order types to arbitrary sets;
ordinal exponentiation;
introduction of the coercion ord_of_nat :: "nat => V";
numerous new lemmas. (revision 6081d5be8d08)
[Interval_Arithmetic_Word32]
title = Interval Arithmetic on 32-bit Words
author = Brandon Bohrer <mailto:bbohrer@cs.cmu.edu>
-topic = Computer Science/Data Structures
+topic = Computer science/Data structures
date = 2019-11-27
notify = bjbohrer@gmail.com, bbohrer@cs.cmu.edu
abstract =
Interval_Arithmetic implements conservative interval arithmetic
computations, then uses this interval arithmetic to implement a simple
programming language where all terms have 32-bit signed word values,
with explicit infinities for terms outside the representable bounds.
Our target use case is interpreters for languages that must have a
well-understood low-level behavior. We include a formalization of
bounded-length strings, which are used as the identifiers of our
language. Bounded-length identifiers are useful in some applications,
for example in the <a href="https://www.isa-afp.org/entries/Differential_Dynamic_Logic.html">Differential_Dynamic_Logic</a> article,
where a Euclidean space indexed by identifiers requires that there be
only finitely many identifiers.
[Generalized_Counting_Sort]
title = An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges
author = Pasquale Noce <mailto:pasquale.noce.lavoro@gmail.com>
-topic = Computer Science/Algorithms, Computer Science/Functional Programming
+topic = Computer science/Algorithms, Computer science/Functional programming
date = 2019-12-04
notify = pasquale.noce.lavoro@gmail.com
abstract =
Counting sort is a well-known algorithm that sorts objects of any kind
mapped to integer keys, or else to keys in one-to-one correspondence
with some subset of the integers (e.g. alphabet letters). However, it
is suitable for direct use, i.e. not just as a subroutine of another
sorting algorithm (e.g. radix sort), only if the key range is not
significantly larger than the number of objects to be sorted.
This paper describes a tail-recursive generalization of counting sort
making use of a bounded number of counters, suitable for direct use in
case of a large, or even infinite key range of any kind, subject to
the only constraint of being a subset of an arbitrary linear order.
After performing a pen-and-paper analysis of how such an algorithm has to
be designed to maximize its efficiency, this paper formalizes the
resulting generalized counting sort (GCsort) algorithm and then
formally proves its correctness properties, namely that (a) the
number of counters is maximized while never exceeding the fixed upper
bound, (b) objects are conserved, (c) objects get sorted, and (d) the
algorithm is stable.
[Poincare_Bendixson]
title = The Poincaré-Bendixson Theorem
author = Fabian Immler <http://home.in.tum.de/~immler/>, Yong Kiam Tan <https://www.cs.cmu.edu/~yongkiat/>
topic = Mathematics/Analysis
date = 2019-12-18
notify = fimmler@cs.cmu.edu, yongkiat@cs.cmu.edu
abstract =
The Poincaré-Bendixson theorem is a classical result in the study of
(continuous) dynamical systems. Colloquially, it restricts the
possible behaviors of planar dynamical systems: such systems cannot be
chaotic. In practice, it is a useful tool for proving the existence of
(limiting) periodic behavior in planar systems. The theorem is an
interesting and challenging benchmark for formalized mathematics
because proofs in the literature rely on geometric sketches and only
hint at symmetric cases. It also requires a substantial background of
mathematical theories, e.g., the Jordan curve theorem, real analysis,
ordinary differential equations, and limiting (long-term) behavior of
dynamical systems.
[Isabelle_C]
title = Isabelle/C
author = Frédéric Tuong <https://www.lri.fr/~ftuong/>, Burkhart Wolff <https://www.lri.fr/~wolff/>
-topic = Computer Science/Programming Languages/Language Definitions, Computer Science/Semantics, Tools
+topic = Computer science/Programming languages/Language definitions, Computer science/Semantics, Tools
date = 2019-10-22
notify = tuong@users.gforge.inria.fr, wolff@lri.fr
abstract =
We present a framework for C code in C11 syntax deeply integrated into
the Isabelle/PIDE development environment. Our framework provides an
abstract interface for verification back-ends to be plugged-in
independently. Thus, various techniques such as deductive program
verification or white-box testing can be applied to the same source,
which is part of an integrated PIDE document model. Semantic back-ends
are free to choose the supported C fragment and its semantics. In
particular, they can differ on the chosen memory model or the
specification mechanism for framing conditions. Our framework supports
semantic annotations of C sources in the form of comments. Annotations
serve to locally control back-end settings, and can express the term
focus to which an annotation refers. Both the logical and the
syntactic context are available when semantic annotations are
evaluated. As a consequence, a formula in an annotation can refer to
both HOL and C variables. Our approach demonstrates the degree of
maturity and expressive power the Isabelle/PIDE sub-system has
achieved in recent years. Our integration technique employs Lex and
Yacc style grammars to ensure efficient deterministic parsing. This
is the core module of Isabelle/C; the AFP package for Clean and
Clean_wrapper as well as AutoCorres and AutoCorres_wrapper (available
via git) are applications of this front-end.
[Zeta_3_Irrational]
title = The Irrationality of ζ(3)
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2019-12-27
notify = manuel.eberl@tum.de
abstract =
<p>This article provides a formalisation of Beukers's
straightforward analytic proof that ζ(3) is irrational. This was first
proven by Apéry (which is why this result is also often called
‘Apéry's Theorem’) using a more algebraic approach. This
formalisation follows <a
href="http://people.math.sc.edu/filaseta/gradcourses/Math785/Math785Notes4.pdf">Filaseta's
presentation</a> of Beukers's proof.</p>
[Hybrid_Logic]
title = Formalizing a Seligman-Style Tableau System for Hybrid Logic
author = Asta Halkjær From <https://people.compute.dtu.dk/ahfrom/>
topic = Logic/General logic/Modal logic
date = 2019-12-20
notify = ahfrom@dtu.dk
abstract =
This work is a formalization of soundness and completeness proofs
for a Seligman-style tableau system for hybrid logic. The completeness
result is obtained via a synthetic approach using maximally
consistent sets of tableau blocks. The formalization differs from
the cited work in a few ways. First, to avoid the need to backtrack in
the construction of a tableau, the formalized system has no unnamed
initial segment, and therefore no Name rule. Second, I show that the
full Bridge rule is admissible in the system. Third, I start from rules
restricted to only extend the branch with new formulas, including only
witnessing diamonds that are not already witnessed, and show that
the unrestricted rules are admissible. Similarly, I start from simpler
versions of the @-rules and show the general ones admissible. Finally,
the GoTo rule is restricted using a notion of coins such that each
application consumes a coin and coins are earned through applications of
the remaining rules. I show that if a branch can be closed then it can
be closed starting from a single coin. These restrictions are imposed
to rule out some means of nontermination.
[Bicategory]
title = Bicategories
author = Eugene W. Stark <mailto:stark@cs.stonybrook.edu>
-topic = Mathematics/Category Theory
+topic = Mathematics/Category theory
date = 2020-01-06
notify = stark@cs.stonybrook.edu
abstract =
Taking as a starting point the author's previous work on
developing aspects of category theory in Isabelle/HOL, this article
gives a compatible formalization of the notion of
"bicategory" and develops a framework within which formal
proofs of facts about bicategories can be given. The framework
includes a number of basic results, including the Coherence Theorem,
the Strictness Theorem, pseudofunctors and biequivalence, and facts
about internal equivalences and adjunctions in a bicategory. As a
driving application and demonstration of the utility of the framework,
it is used to give a formal proof of a theorem, due to Carboni,
Kasangian, and Street, that characterizes up to biequivalence the
bicategories of spans in a category with pullbacks. The formalization
effort necessitated the filling-in of many details that were not
evident from the brief presentation in the original paper, as well as
identifying a few minor corrections along the way.
extra-history =
Change history:
[2020-02-15]:
Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically.
Make other minor improvements throughout.
(revision a51840d36867)<br>
[Subset_Boolean_Algebras]
title = A Hierarchy of Algebras for Boolean Subsets
author = Walter Guttmann <http://www.cosc.canterbury.ac.nz/walter.guttmann/>, Bernhard Möller <https://www.informatik.uni-augsburg.de/en/chairs/dbis/pmi/staff/moeller/>
topic = Mathematics/Algebra
date = 2020-01-31
notify = walter.guttmann@canterbury.ac.nz
abstract =
We present a collection of axiom systems for the construction of
Boolean subalgebras of larger overall algebras. The subalgebras are
defined as the range of a complement-like operation on a semilattice.
This technique has been used, for example, with the antidomain
operation, dynamic negation and Stone algebras. We present a common
ground for these constructions based on a new equational
axiomatisation of Boolean algebras.
[Goodstein_Lambda]
title = Implementing the Goodstein Function in &lambda;-Calculus
author = Bertram Felgenhauer <mailto:int-e@gmx.de>
topic = Logic/Rewriting
date = 2020-02-21
notify = int-e@gmx.de
abstract =
In this formalization, we develop an implementation of the Goodstein
function G in plain &lambda;-calculus, linked to a concise, self-contained
specification. The implementation works on a Church-encoded
representation of countable ordinals. The initial conversion to
hereditary base 2 is not covered, but the material is sufficient to
compute the particular value G(16), and easily extends to other fixed
arguments.
[VeriComp]
title = A Generic Framework for Verified Compilers
author = Martin Desharnais <https://martin.desharnais.me>
-topic = Computer Science/Programming Languages/Compiling
+topic = Computer science/Programming languages/Compiling
date = 2020-02-10
notify = martin.desharnais@unibw.de
abstract =
This is a generic framework for formalizing compiler transformations.
It leverages Isabelle/HOL’s locales to abstract over concrete
languages and transformations. It states common definitions for
language semantics, program behaviours, forward and backward
simulations, and compilers. We provide generic operations, such as
simulation and compiler composition, and prove general (partial)
correctness theorems, resulting in reusable proof components.
[Hello_World]
title = Hello World
author = Cornelius Diekmann <http://net.in.tum.de/~diekmann>, Lars Hupel <https://www21.in.tum.de/~hupel/>
-topic = Computer Science/Functional Programming
+topic = Computer science/Functional programming
date = 2020-03-07
notify = diekmann@net.in.tum.de
abstract =
In this article, we present a formalization of the well-known
"Hello, World!" code, including a formal framework for
reasoning about IO. Our model is inspired by the handling of IO in
Haskell. We start by formalizing the 🌍 and embrace the IO monad
afterwards. Then we present a sample main :: IO (), followed by its
proof of correctness.
[WOOT_Strong_Eventual_Consistency]
title = Strong Eventual Consistency of the Collaborative Editing Framework WOOT
author = Emin Karayel <https://orcid.org/0000-0003-3290-5034>, Edgar Gonzàlez <mailto:edgargip@google.com>
-topic = Computer Science/Algorithms/Distributed
+topic = Computer science/Algorithms/Distributed
date = 2020-03-25
notify = eminkarayel@google.com, edgargip@google.com, me@eminkarayel.de
abstract =
Commutative Replicated Data Types (CRDTs) are a promising new class of
data structures for large-scale shared mutable content in applications
that only require eventual consistency. The WithOut Operational
Transforms (WOOT) framework is a CRDT for collaborative text editing
introduced by Oster et al. (CSCW 2006) for which the eventual
consistency property was verified only for a bounded model to date. We
contribute a formal proof of WOOT's strong eventual consistency.
[Furstenberg_Topology]
title = Furstenberg's topology and his proof of the infinitude of primes
author = Manuel Eberl <https://www21.in.tum.de/~eberlm>
-topic = Mathematics/Number Theory
+topic = Mathematics/Number theory
date = 2020-03-22
notify = manuel.eberl@tum.de
abstract =
<p>This article gives a formal version of Furstenberg's
topological proof of the infinitude of primes. He defines a topology
on the integers based on arithmetic progressions (or, equivalently,
residue classes). Using some fairly obvious properties of this
topology, the infinitude of primes is then easily obtained.</p>
<p>Apart from this, this topology is also fairly ‘nice’ in
general: it is second countable, metrizable, and perfect. All of these
(well-known) facts are formally proven, including an explicit metric
for the topology given by Zulfeqarr.</p>
[Saturation_Framework]
title = A Comprehensive Framework for Saturation Theorem Proving
author = Sophie Tourret <https://www.mpi-inf.mpg.de/departments/automation-of-logic/people/sophie-tourret/>
topic = Logic/General logic/Mechanization of proofs
date = 2020-04-09
notify = stourret@mpi-inf.mpg.de
abstract =
This Isabelle/HOL formalization is the companion of the technical
report “A comprehensive framework for saturation theorem proving”,
itself the companion of the eponymous IJCAR 2020 paper, written by Uwe
Waldmann, Sophie Tourret, Simon Robillard and Jasmin Blanchette. It
verifies a framework for formal refutational completeness proofs of
abstract provers that implement saturation calculi, such as ordered
resolution or superposition, and allows one to model entire prover
architectures in such a way that the static refutational completeness
of a calculus immediately implies the dynamic refutational
completeness of a prover implementing the calculus using a variant of
the given clause loop. The technical report “A comprehensive
framework for saturation theorem proving” is available <a
href="http://matryoshka.gforge.inria.fr/pubs/satur_report.pdf">on
the Matryoshka website</a>. The names of the Isabelle lemmas and
theorems corresponding to the results in the report are indicated in
the margin of the report.
[MFODL_Monitor_Optimized]
title = Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations
author = Thibault Dardinier<>, Lukas Heimes<>, Martin Raszyk <mailto:martin.raszyk@inf.ethz.ch>, Joshua Schneider <mailto:joshua.schneider@inf.ethz.ch>, Dmitriy Traytel <http://people.inf.ethz.ch/trayteld/>
-topic = Computer Science/Algorithms, Logic/General logic/Modal logic, Computer Science/Automata and Formal Languages
+topic = Computer science/Algorithms, Logic/General logic/Modal logic, Computer science/Automata and formal languages
date = 2020-04-09
notify = martin.raszyk@inf.ethz.ch, joshua.schneider@inf.ethz.ch, traytel@inf.ethz.ch
abstract =
A monitor is a runtime verification tool that solves the following
problem: Given a stream of time-stamped events and a policy formulated
in a specification language, decide whether the policy is satisfied at
every point in the stream. We verify the correctness of an executable
monitor for specifications given as formulas in metric first-order
dynamic logic (MFODL), which combines the features of metric
first-order temporal logic (MFOTL) and metric dynamic logic. Thus,
MFODL supports real-time constraints, first-order parameters, and
regular expressions. Additionally, the monitor supports aggregation
operations such as count and sum. This formalization, which is
described in a <a
href="http://people.inf.ethz.ch/trayteld/papers/ijcar20-verimonplus/verimonplus.pdf">
forthcoming paper at IJCAR 2020</a>, significantly extends <a
href="https://www.isa-afp.org/entries/MFOTL_Monitor.html">previous
work on a verified monitor</a> for MFOTL. Apart from the
addition of regular expressions and aggregations, we implemented <a
href="https://www.isa-afp.org/entries/Generic_Join.html">multi-way
joins</a> and a specialized sliding window algorithm to further
optimize the monitor.
[Sliding_Window_Algorithm]
title = Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows
author = Lukas Heimes<>, Dmitriy Traytel <http://people.inf.ethz.ch/trayteld/>, Joshua Schneider<>
-topic = Computer Science/Algorithms
+topic = Computer science/Algorithms
date = 2020-04-10
notify = heimesl@student.ethz.ch, traytel@inf.ethz.ch, joshua.schneider@inf.ethz.ch
abstract =
Basin et al.'s <a
href="https://doi.org/10.1016/j.ipl.2014.09.009">sliding
window algorithm (SWA)</a> is an algorithm for combining the
elements of subsequences of a sequence with an associative operator.
It is greedy and minimizes the number of operator applications. We
formalize the algorithm and verify its functional correctness. We
extend the algorithm with additional operations and provide an
alternative interface to the slide operation that does not require the
entire input sequence.
-
+[Lucas_Theorem]
+title = Lucas's Theorem
+author = Chelsea Edmonds <mailto:cle47@cam.ac.uk>
+topic = Mathematics/Number theory
+date = 2020-04-07
+notify = cle47@cam.ac.uk
+abstract =
+ This work presents a formalisation of a generating function proof for
+ Lucas's theorem. We first outline extensions to the existing
+ Formal Power Series (FPS) library, including an equivalence relation
+ for coefficients modulo <em>n</em>, an alternate binomial theorem statement,
+ and a formalised proof of the Freshman's dream (mod <em>p</em>) lemma.
+ The second part of the work presents the formal proof of Lucas's
+ Theorem. Working backwards, the formalisation first proves a
+ well-known corollary of the theorem, which is easier to formalise, and then
+ applies induction to prove the original theorem statement. The proof
+ of the corollary aims to provide a good example of a formalised
+ generating function equivalence proof using the FPS library. The final
+ theorem statement is intended to be integrated into the formalised
+ proof of Hilbert's 10th Problem.
+
+[ADS_Functor]
+title = Authenticated Data Structures As Functors
+author = Andreas Lochbihler <http://www.andreas-lochbihler.de>, Ognjen Marić <mailto:ogi.afp@mynosefroze.com>
+topic = Computer science/Data structures
+date = 2020-04-16
+notify = andreas.lochbihler@digitalasset.com, mail@andreas-lochbihler.de
+abstract =
+ Authenticated data structures allow several systems to convince each
+ other that they are referring to the same data structure, even if each
+ of them knows only a part of the data structure. Using inclusion
+ proofs, knowledgeable systems can selectively share their knowledge
+ with other systems and the latter can verify the authenticity of what
+ is being shared. In this article, we show how to modularly define
+ authenticated data structures, their inclusion proofs, and operations
+ thereon as datatypes in Isabelle/HOL, using a shallow embedding.
+ Modularity allows us to construct complicated trees from reusable
+ building blocks, which we call Merkle functors. Merkle functors
+ include sums, products, and function spaces and are closed under
+ composition and least fixpoints. As a practical application, we model
+ the hierarchical transactions of <a
+ href="https://www.canton.io">Canton</a>, a
+ practical interoperability protocol for distributed ledgers, as
+ authenticated data structures. This is a first step towards
+ formalizing the Canton protocol and verifying its integrity and
+ security guarantees.
+
diff --git a/metadata/topics b/metadata/topics
--- a/metadata/topics
+++ b/metadata/topics
@@ -1,63 +1,62 @@
-Computer Science
- Automata and Formal Languages
+Computer science
+ Automata and formal languages
Algorithms
Graph
Distributed
Concurrent
Online
Geometry
Approximation
Mathematical
Optimization
Concurrency
- Process Calculi
- Data Structures
- Functional Programming
+ Process calculi
+ Data structures
+ Functional programming
Hardware
- Machine Learning
+ Machine learning
Networks
- Programming Languages
- Language Definitions
- Lambda Calculi
- Type Systems
+ Programming languages
+ Language definitions
+ Lambda calculi
+ Type systems
Logics
Compiling
- Static Analysis
+ Static analysis
Transformations
Misc
Security
Cryptography
Semantics
- System Description Languages
+ System description languages
Logic
Philosophical aspects
General logic
Classical propositional logic
Classical first-order logic
Decidability of theories
Mechanization of proofs
Lambda calculus
Logics of knowledge and belief
Temporal logic
Modal logic
Paraconsistent logics
Computability
Set theory
Proof theory
Rewriting
Mathematics
Order
Algebra
Analysis
- Probability Theory
- Number Theory
- Games and Economics
+ Probability theory
+ Number theory
+ Games and economics
Geometry
Topology
- Graph Theory
+ Graph theory
Combinatorics
- Category Theory
+ Category theory
Physics
- Set Theory
Misc
Tools
diff --git a/thys/ADS_Functor/ADS_Construction.thy b/thys/ADS_Functor/ADS_Construction.thy
new file mode 100644
--- /dev/null
+++ b/thys/ADS_Functor/ADS_Construction.thy
@@ -0,0 +1,1281 @@
+(* Author: Andreas Lochbihler, Digital Asset
+ Author: Ognjen Maric, Digital Asset *)
+
+theory ADS_Construction imports
+ Merkle_Interface
+ "HOL-Library.Simps_Case_Conv"
+begin
+
+(************************************************************)
+section \<open> Building blocks for authenticated data structures on datatypes \<close>
+(************************************************************)
+
+(************************************************************)
+subsection \<open> Building Block: Identity Functor \<close>
+(************************************************************)
+
+text \<open>If nothing is blindable in a type, then the type itself is the hash and the ADS of itself.\<close>
+
+abbreviation (input) hash_discrete :: "('a, 'a) hash" where "hash_discrete \<equiv> id"
+
+abbreviation (input) blinding_of_discrete :: "'a blinding_of" where
+ "blinding_of_discrete \<equiv> (=)"
+
+definition merge_discrete :: "'a merge" where
+ "merge_discrete x y = (if x = y then Some y else None)"
+
+lemma blinding_of_discrete_hash:
+ "blinding_of_discrete \<le> vimage2p hash_discrete hash_discrete (=)"
+ by(auto simp add: vimage2p_def)
+
+lemma blinding_of_on_discrete [locale_witness]:
+ "blinding_of_on UNIV hash_discrete blinding_of_discrete"
+ by(unfold_locales)(simp_all add: OO_eq eq_onp_def blinding_of_discrete_hash)
+
+lemma merge_on_discrete [locale_witness]:
+ "merge_on UNIV hash_discrete blinding_of_discrete merge_discrete"
+ by unfold_locales(auto simp add: merge_discrete_def)
+
+lemma merkle_discrete [locale_witness]:
+ "merkle_interface hash_discrete blinding_of_discrete merge_discrete"
+ ..
+
+parametric_constant merge_discrete_parametric [transfer_rule]: merge_discrete_def
+
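+text \<open>Illustrative example (not required by the development): by the definition above,
+  merging two equal discrete values succeeds and merging two distinct ones fails.\<close>
+
+lemma "merge_discrete (1 :: nat) 1 = Some 1" and "merge_discrete (1 :: nat) 2 = None"
+  by (simp_all add: merge_discrete_def)
+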
+(************************************************************)
+subsubsection \<open>Example: instantiation for @{typ unit}\<close>
+(************************************************************)
+
+abbreviation (input) hash_unit :: "(unit, unit) hash" where "hash_unit \<equiv> hash_discrete"
+
+abbreviation blinding_of_unit :: "unit blinding_of" where
+ "blinding_of_unit \<equiv> blinding_of_discrete"
+
+abbreviation merge_unit :: "unit merge" where "merge_unit \<equiv> merge_discrete"
+
+lemma blinding_of_unit_hash:
+ "blinding_of_unit \<le> vimage2p hash_unit hash_unit (=)"
+ by(fact blinding_of_discrete_hash)
+
+lemma blinding_of_on_unit:
+ "blinding_of_on UNIV hash_unit blinding_of_unit"
+ by(fact blinding_of_on_discrete)
+
+lemma merge_on_unit:
+ "merge_on UNIV hash_unit blinding_of_unit merge_unit"
+ by(fact merge_on_discrete)
+
+lemma merkle_interface_unit:
+ "merkle_interface hash_unit blinding_of_unit merge_unit"
+ by(intro merkle_interfaceI merge_on_unit)
+
+(************************************************************)
+subsection \<open> Building Block: Blindable Position \<close>
+(************************************************************)
+
+type_synonym 'a blindable = 'a
+
+text \<open> The following type represents the hashes of a datatype. We model hashes as being injective,
+ but not surjective; some hashes do not correspond to any values of the original datatypes. We
+ model such values as "garbage" coming from a countable set (here, naturals). \<close>
+
+type_synonym garbage = nat
+
+datatype 'a\<^sub>h blindable\<^sub>h = Content 'a\<^sub>h | Garbage garbage
+
+datatype ('a\<^sub>m, 'a\<^sub>h) blindable\<^sub>m = Unblinded 'a\<^sub>m | Blinded "'a\<^sub>h blindable\<^sub>h"
+
+(************************************************************)
+subsubsection \<open> Hashes \<close>
+(************************************************************)
+
+primrec hash_blindable' :: "(('a\<^sub>h, 'a\<^sub>h) blindable\<^sub>m, 'a\<^sub>h blindable\<^sub>h) hash" where
+ "hash_blindable' (Unblinded x) = Content x"
+| "hash_blindable' (Blinded x) = x"
+
+definition hash_blindable :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> (('a\<^sub>m, 'a\<^sub>h) blindable\<^sub>m, 'a\<^sub>h blindable\<^sub>h) hash" where
+ "hash_blindable h = hash_blindable' \<circ> map_blindable\<^sub>m h id"
+
+lemma hash_blindable_simps [simp]:
+ "hash_blindable h (Unblinded x) = Content (h x)"
+ "hash_blindable h (Blinded y) = y"
+ by(simp_all add: hash_blindable_def blindable\<^sub>h.map_id)
+
+lemma hash_map_blindable_simp:
+ "hash_blindable f (map_blindable\<^sub>m f' id x) = hash_blindable (f o f') x"
+ by(cases x) (simp_all add: hash_blindable_def blindable\<^sub>h.map_comp)
+
+parametric_constant hash_blindable'_parametric [transfer_rule]: hash_blindable'_def
+
+parametric_constant hash_blindable_parametric [transfer_rule]: hash_blindable_def
+
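+text \<open>Illustrative example (not required by the development): with the identity hash on the
+  underlying type, unblinded values hash to their contents, while blinded values keep the
+  stored hash.\<close>
+
+lemma "hash_blindable id (Unblinded True) = Content True"
+  and "hash_blindable id (Blinded (Garbage 0)) = Garbage 0"
+  by simp_all
+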
+(************************************************************)
+subsubsection \<open> Blinding \<close>
+(************************************************************)
+
+context
+ fixes h :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and bo :: "'a\<^sub>m blinding_of"
+begin
+
+inductive blinding_of_blindable :: "('a\<^sub>m, 'a\<^sub>h) blindable\<^sub>m blinding_of" where
+ "blinding_of_blindable (Unblinded x) (Unblinded y)" if "bo x y"
+| "blinding_of_blindable (Blinded x) t" if "hash_blindable h t = x"
+
+inductive_simps blinding_of_blindable_simps [simp]:
+ "blinding_of_blindable (Unblinded x) y"
+ "blinding_of_blindable (Blinded x) y"
+ "blinding_of_blindable z (Unblinded x)"
+ "blinding_of_blindable z (Blinded x)"
+
+inductive_simps blinding_of_blindable_simps2:
+ "blinding_of_blindable (Unblinded x) (Unblinded y)"
+ "blinding_of_blindable (Unblinded x) (Blinded y')"
+ "blinding_of_blindable (Blinded x') (Unblinded y)"
+ "blinding_of_blindable (Blinded x') (Blinded y')"
+
+end
+
+lemma blinding_of_blindable_mono:
+ assumes "bo \<le> bo'"
+ shows "blinding_of_blindable h bo \<le> blinding_of_blindable h bo'"
+ apply(rule predicate2I)
+ apply(erule blinding_of_blindable.cases; hypsubst)
+ subgoal by(rule blinding_of_blindable.intros)(rule assms[THEN predicate2D])
+ subgoal by(rule blinding_of_blindable.intros) simp
+ done
+
+lemma blinding_of_blindable_hash:
+ assumes "bo \<le> vimage2p h h (=)"
+ shows "blinding_of_blindable h bo \<le> vimage2p (hash_blindable h) (hash_blindable h) (=)"
+ apply(rule predicate2I vimage2pI)+
+ apply(erule blinding_of_blindable.cases; hypsubst)
+ subgoal using assms[THEN predicate2D] by(simp add: vimage2p_def)
+ subgoal by simp
+ done
+
+lemma blinding_of_on_blindable [locale_witness]:
+ assumes "blinding_of_on A h bo"
+ shows "blinding_of_on {x. set1_blindable\<^sub>m x \<subseteq> A} (hash_blindable h) (blinding_of_blindable h bo)"
+ (is "blinding_of_on ?A ?h ?bo")
+proof -
+ interpret blinding_of_on A h bo by fact
+ show ?thesis
+ proof
+ show "?bo \<le> vimage2p ?h ?h (=)"
+ by(rule blinding_of_blindable_hash)(rule hash)
+ show "?bo x x" if "x \<in> ?A" for x using that by(cases x)(auto simp add: refl)
+ show "?bo x z" if "?bo x y" "?bo y z" "x \<in> ?A" for x y z using that
+ by(auto elim!: blinding_of_blindable.cases dest: trans blinding_hash_eq)
+ show "x = y" if "?bo x y" "?bo y x" "x \<in> ?A" for x y using that
+ by(auto elim!: blinding_of_blindable.cases dest: antisym)
+ qed
+qed
+
+lemmas blinding_of_blindable [locale_witness] = blinding_of_on_blindable[of UNIV, simplified]
+
+case_of_simps blinding_of_blindable_alt_def: blinding_of_blindable_simps2
+parametric_constant blinding_of_blindable_parametric [transfer_rule]: blinding_of_blindable_alt_def
+
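+text \<open>Illustrative example (not required by the development): a blinded hash is below any
+  value with that hash in the blinding order, but an unblinded value is never below a
+  blinded one.\<close>
+
+lemma "blinding_of_blindable id (=) (Blinded (Content True)) (Unblinded True)"
+  and "\<not> blinding_of_blindable id (=) (Unblinded True) (Blinded (Content True))"
+  by simp_all
+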
+(************************************************************)
+subsubsection \<open> Merging \<close>
+(************************************************************)
+
+context
+ fixes h :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ fixes m :: "'a\<^sub>m merge"
+begin
+
+fun merge_blindable :: "('a\<^sub>m, 'a\<^sub>h) blindable\<^sub>m merge" where
+ "merge_blindable (Unblinded x) (Unblinded y) = map_option Unblinded (m x y)"
+| "merge_blindable (Blinded x) (Unblinded y) = (if x = Content (h y) then Some (Unblinded y) else None)"
+| "merge_blindable (Unblinded y) (Blinded x) = (if x = Content (h y) then Some (Unblinded y) else None)"
+| "merge_blindable (Blinded t) (Blinded u) = (if t = u then Some (Blinded u) else None)"
+
+lemma merge_on_blindable [locale_witness]:
+ assumes "merge_on A h bo m"
+ shows "merge_on {x. set1_blindable\<^sub>m x \<subseteq> A} (hash_blindable h) (blinding_of_blindable h bo) merge_blindable"
+ (is "merge_on ?A ?h ?bo ?m")
+proof -
+ interpret merge_on A h bo m by fact
+ show ?thesis
+ proof
+ show "\<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)" if "?h a = ?h b" "a \<in> ?A" for a b
+ using that by(cases "(a, b)" rule: merge_blindable.cases)(auto simp add: refl dest!: join)
+ show "?m a b = None" if "?h a \<noteq> ?h b" "a \<in> ?A" for a b
+ using that by(cases "(a, b)" rule: merge_blindable.cases)(auto simp add: dest!: undefined)
+ qed
+qed
+
+lemmas merge_blindable [locale_witness] =
+ merge_on_blindable[of UNIV, simplified]
+
+end
+
+lemma merge_blindable_alt_def:
+ "merge_blindable h m x y = (case (x, y) of
+ (Unblinded x, Unblinded y) \<Rightarrow> map_option Unblinded (m x y)
+ | (Blinded x, Unblinded y) \<Rightarrow> (if Content (h y) = x then Some (Unblinded y) else None)
+ | (Unblinded y, Blinded x) \<Rightarrow> (if Content (h y) = x then Some (Unblinded y) else None)
+ | (Blinded t, Blinded u) \<Rightarrow> (if t = u then Some (Blinded u) else None))"
+ by(simp split: blindable\<^sub>m.split blindable\<^sub>h.split)
+
+parametric_constant merge_blindable_parametric [transfer_rule]: merge_blindable_alt_def
+
+lemma merge_blindable_cong [fundef_cong]:
+ assumes "\<And>a b. \<lbrakk> a \<in> set1_blindable\<^sub>m x; b \<in> set1_blindable\<^sub>m y \<rbrakk> \<Longrightarrow> m a b = m' a b"
+ shows "merge_blindable h m x y = merge_blindable h m' x y"
+ by(auto simp add: merge_blindable_alt_def split: blindable\<^sub>m.split intro: assms intro!: arg_cong[where f="map_option _"])
+
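+text \<open>Illustrative example (not required by the development): a blinded value merges with an
+  unblinded one exactly when the stored hash matches, in which case the unblinded value is
+  kept.\<close>
+
+lemma "merge_blindable id merge_discrete (Blinded (Content True)) (Unblinded True) = Some (Unblinded True)"
+  and "merge_blindable id merge_discrete (Blinded (Garbage 0)) (Unblinded True) = None"
+  by simp_all
+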
+(************************************************************)
+subsubsection \<open> Merkle interface \<close>
+(************************************************************)
+
+lemma merkle_blindable [locale_witness]:
+ assumes "merkle_interface h bo m"
+ shows "merkle_interface (hash_blindable h) (blinding_of_blindable h bo) (merge_blindable h m)"
+proof -
+ interpret merge_on UNIV h bo m using assms by(simp add: merkle_interface_aux)
+ show ?thesis unfolding merkle_interface_aux ..
+qed
+
+
+(************************************************************)
+subsubsection \<open> Non-recursive blindable positions \<close>
+(************************************************************)
+
+text \<open> For a non-recursive data type @{typ 'a}, the type of hashes in @{type blindable\<^sub>m} is fixed
+to be simply @{typ "'a blindable\<^sub>h"}. We obtain this by instantiating the type variable with the
+identity building block. \<close>
+
+type_synonym 'a nr_blindable = "('a, 'a) blindable\<^sub>m"
+
+abbreviation hash_nr_blindable :: "('a nr_blindable, 'a blindable\<^sub>h) hash" where
+ "hash_nr_blindable \<equiv> hash_blindable hash_discrete"
+
+abbreviation blinding_of_nr_blindable :: "'a nr_blindable blinding_of" where
+ "blinding_of_nr_blindable \<equiv> blinding_of_blindable hash_discrete blinding_of_discrete"
+
+abbreviation merge_nr_blindable :: "'a nr_blindable merge" where
+ "merge_nr_blindable \<equiv> merge_blindable hash_discrete merge_discrete"
+
+lemma merge_on_nr_blindable:
+ "merge_on UNIV hash_nr_blindable blinding_of_nr_blindable merge_nr_blindable"
+ ..
+
+lemma merkle_nr_blindable:
+ "merkle_interface hash_nr_blindable blinding_of_nr_blindable merge_nr_blindable"
+ ..
+
+(************************************************************)
+subsection \<open> Building block: Sums \<close>
+(************************************************************)
+
+text \<open> We prove that we can lift the ADS construction through sums.\<close>
+
+type_synonym ('a\<^sub>h, 'b\<^sub>h) sum\<^sub>h = "'a\<^sub>h + 'b\<^sub>h"
+type_notation sum\<^sub>h (infixr "+\<^sub>h" 10)
+
+type_synonym ('a\<^sub>m, 'b\<^sub>m) sum\<^sub>m = "'a\<^sub>m + 'b\<^sub>m"
+ \<comment> \<open>If a functor does not introduce blindable positions, then we don't need the type variable copies.\<close>
+type_notation sum\<^sub>m (infixr "+\<^sub>m" 10)
+
+(************************************************************)
+subsubsection \<open> Hashes \<close>
+(************************************************************)
+
+abbreviation (input) hash_sum' :: "('a\<^sub>h +\<^sub>h 'b\<^sub>h, 'a\<^sub>h +\<^sub>h 'b\<^sub>h) hash" where
+ "hash_sum' \<equiv> id"
+
+abbreviation (input) hash_sum :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> ('a\<^sub>m +\<^sub>m 'b\<^sub>m, 'a\<^sub>h +\<^sub>h 'b\<^sub>h) hash"
+ where "hash_sum \<equiv> map_sum"
+
+(************************************************************)
+subsubsection \<open> Blinding \<close>
+(************************************************************)
+
+abbreviation (input) blinding_of_sum :: "'a\<^sub>m blinding_of \<Rightarrow> 'b\<^sub>m blinding_of \<Rightarrow> ('a\<^sub>m +\<^sub>m 'b\<^sub>m) blinding_of" where
+ "blinding_of_sum \<equiv> rel_sum"
+
+lemmas blinding_of_sum_mono = sum.rel_mono
+
+lemma blinding_of_sum_hash:
+ assumes "boa \<le> vimage2p rha rha (=)" "bob \<le> vimage2p rhb rhb (=)"
+ shows "blinding_of_sum boa bob \<le> vimage2p (hash_sum rha rhb) (hash_sum rha rhb) (=)"
+ using assms by(auto simp add: vimage2p_def elim!: rel_sum.cases)
+
+lemma blinding_of_on_sum [locale_witness]:
+ assumes "blinding_of_on A rha boa" "blinding_of_on B rhb bob"
+ shows "blinding_of_on {x. setl x \<subseteq> A \<and> setr x \<subseteq> B} (hash_sum rha rhb) (blinding_of_sum boa bob)"
+ (is "blinding_of_on ?A ?h ?bo")
+proof -
+ interpret a: blinding_of_on A rha boa by fact
+ interpret b: blinding_of_on B rhb bob by fact
+ show ?thesis
+ proof
+ show "?bo x x" if "x \<in> ?A" for x using that by(intro sum.rel_refl_strong)(auto intro: a.refl b.refl)
+ show "?bo x z" if "?bo x y" "?bo y z" "x \<in> ?A" for x y z
+ using that by(auto elim!: rel_sum.cases dest: a.trans b.trans)
+ show "x = y" if "?bo x y" "?bo y x" "x \<in> ?A" for x y
+ using that by(auto elim!: rel_sum.cases dest: a.antisym b.antisym)
+ qed(rule blinding_of_sum_hash a.hash b.hash)+
+qed
+
+lemmas blinding_of_sum [locale_witness] = blinding_of_on_sum[of UNIV _ _ UNIV, simplified]
+
+(************************************************************)
+subsubsection \<open> Merging \<close>
+(************************************************************)
+
+context
+ fixes ma :: "'a\<^sub>m merge"
+ fixes mb :: "'b\<^sub>m merge"
+begin
+
+fun merge_sum :: "('a\<^sub>m +\<^sub>m 'b\<^sub>m) merge" where
+ "merge_sum (Inl x) (Inl y) = map_option Inl (ma x y)"
+| "merge_sum (Inr x) (Inr y) = map_option Inr (mb x y)"
+| "merge_sum _ _ = None"
+
+lemma merge_on_sum [locale_witness]:
+ assumes "merge_on A rha boa ma" "merge_on B rhb bob mb"
+ shows "merge_on {x. setl x \<subseteq> A \<and> setr x \<subseteq> B} (hash_sum rha rhb) (blinding_of_sum boa bob) merge_sum"
+ (is "merge_on ?A ?h ?bo ?m")
+proof -
+ interpret a: merge_on A rha boa ma by fact
+ interpret b: merge_on B rhb bob mb by fact
+ show ?thesis
+ proof
+ show "\<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)"
+ if "?h a = ?h b" "a \<in> ?A" for a b using that
+ by(cases "(a, b)" rule: merge_sum.cases)(auto dest!: a.join b.join elim!: rel_sum.cases)
+ show "?m a b = None" if "?h a \<noteq> ?h b" "a \<in> ?A" for a b using that
+ by(cases "(a, b)" rule: merge_sum.cases)(auto dest!: a.undefined b.undefined)
+ qed
+qed
+
+lemmas merge_sum [locale_witness] = merge_on_sum[where A=UNIV and B=UNIV, simplified]
+
+lemma merge_sum_alt_def:
+ "merge_sum x y = (case (x, y) of
+ (Inl x, Inl y) \<Rightarrow> map_option Inl (ma x y)
+ | (Inr x, Inr y) \<Rightarrow> map_option Inr (mb x y)
+ | _ \<Rightarrow> None)"
+ by(simp add: split: sum.split)
+
+end
+
+lemma merge_sum_cong[fundef_cong]:
+ "\<lbrakk> x = x'; y = y';
+ \<And>xl yl. \<lbrakk> x = Inl xl; y = Inl yl \<rbrakk> \<Longrightarrow> ma xl yl = ma' xl yl;
+ \<And>xr yr. \<lbrakk> x = Inr xr; y = Inr yr \<rbrakk> \<Longrightarrow> mb xr yr = mb' xr yr \<rbrakk> \<Longrightarrow>
+ merge_sum ma mb x y = merge_sum ma' mb' x' y'"
+ by(cases x; simp_all; cases y; auto)
+
+parametric_constant merge_sum_parametric [transfer_rule]: merge_sum_alt_def
+
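+text \<open>Illustrative example (not required by the development): sums merge component-wise
+  within the same injection and fail to merge across different injections.\<close>
+
+lemma "merge_sum merge_discrete merge_discrete (Inl (0 :: nat)) (Inl 0) = Some (Inl 0)"
+  and "merge_sum merge_discrete merge_discrete (Inl (0 :: nat)) (Inr (0 :: nat)) = None"
+  by (simp_all add: merge_discrete_def)
+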
+subsubsection \<open> Merkle interface \<close>
+
+lemma merkle_sum [locale_witness]:
+ assumes "merkle_interface rha boa ma" "merkle_interface rhb bob mb"
+ shows "merkle_interface (hash_sum rha rhb) (blinding_of_sum boa bob) (merge_sum ma mb)"
+proof -
+ interpret a: merge_on UNIV rha boa ma unfolding merkle_interface_aux[symmetric] by fact
+ interpret b: merge_on UNIV rhb bob mb unfolding merkle_interface_aux[symmetric] by fact
+ show ?thesis unfolding merkle_interface_aux[symmetric] ..
+qed
+
+(************************************************************)
+subsection \<open> Building Block: Products\<close>
+(************************************************************)
+
+text \<open> We prove that we can lift the ADS construction through products.\<close>
+
+type_synonym ('a\<^sub>h, 'b\<^sub>h) prod\<^sub>h = "'a\<^sub>h \<times> 'b\<^sub>h"
+type_notation prod\<^sub>h ("(_ \<times>\<^sub>h/ _)" [21, 20] 20)
+
+type_synonym ('a\<^sub>m, 'b\<^sub>m) prod\<^sub>m = "'a\<^sub>m \<times> 'b\<^sub>m"
+ \<comment> \<open>If a functor does not introduce blindable positions, then we don't need the type variable copies.\<close>
+type_notation prod\<^sub>m ("(_ \<times>\<^sub>m/ _)" [21, 20] 20)
+
+(************************************************************)
+subsubsection \<open> Hashes \<close>
+(************************************************************)
+
+abbreviation (input) hash_prod' :: "('a\<^sub>h \<times>\<^sub>h 'b\<^sub>h, 'a\<^sub>h \<times>\<^sub>h 'b\<^sub>h) hash" where
+ "hash_prod' \<equiv> id"
+
+abbreviation (input) hash_prod :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> ('a\<^sub>m \<times>\<^sub>m 'b\<^sub>m, 'a\<^sub>h \<times>\<^sub>h 'b\<^sub>h) hash"
+ where "hash_prod \<equiv> map_prod"
+
+(************************************************************)
+subsubsection \<open> Blinding \<close>
+(************************************************************)
+
+abbreviation (input) blinding_of_prod :: "'a\<^sub>m blinding_of \<Rightarrow> 'b\<^sub>m blinding_of \<Rightarrow> ('a\<^sub>m \<times>\<^sub>m 'b\<^sub>m) blinding_of" where
+ "blinding_of_prod \<equiv> rel_prod"
+
+lemmas blinding_of_prod_mono = prod.rel_mono
+
+lemma blinding_of_prod_hash:
+ assumes "boa \<le> vimage2p rha rha (=)" "bob \<le> vimage2p rhb rhb (=)"
+ shows "blinding_of_prod boa bob \<le> vimage2p (hash_prod rha rhb) (hash_prod rha rhb) (=)"
+ using assms by(auto simp add: vimage2p_def)
+
+lemma blinding_of_on_prod [locale_witness]:
+ assumes "blinding_of_on A rha boa" "blinding_of_on B rhb bob"
+ shows "blinding_of_on {x. fsts x \<subseteq> A \<and> snds x \<subseteq> B} (hash_prod rha rhb) (blinding_of_prod boa bob)"
+ (is "blinding_of_on ?A ?h ?bo")
+proof -
+ interpret a: blinding_of_on A rha boa by fact
+ interpret b: blinding_of_on B rhb bob by fact
+ show ?thesis
+ proof
+ show "?bo x x" if "x \<in> ?A" for x using that by(cases x)(auto intro: a.refl b.refl)
+ show "?bo x z" if "?bo x y" "?bo y z" "x \<in> ?A" for x y z using that
+ by(auto elim!: rel_prod.cases dest: a.trans b.trans)
+ show "x = y" if "?bo x y" "?bo y x" "x \<in> ?A" for x y using that
+ by(auto elim!: rel_prod.cases dest: a.antisym b.antisym)
+ qed(rule blinding_of_prod_hash a.hash b.hash)+
+qed
+
+lemmas blinding_of_prod [locale_witness] = blinding_of_on_prod[where A=UNIV and B=UNIV, simplified]
+
+(************************************************************)
+subsubsection \<open> Merging \<close>
+(************************************************************)
+
+context
+ fixes ma :: "'a\<^sub>m merge"
+ fixes mb :: "'b\<^sub>m merge"
+begin
+
+fun merge_prod :: "('a\<^sub>m \<times>\<^sub>m 'b\<^sub>m) merge" where
+ "merge_prod (x, y) (x', y') = Option.bind (ma x x') (\<lambda>x''. map_option (Pair x'') (mb y y'))"
+
+lemma merge_on_prod [locale_witness]:
+ assumes "merge_on A rha boa ma" "merge_on B rhb bob mb"
+ shows "merge_on {x. fsts x \<subseteq> A \<and> snds x \<subseteq> B} (hash_prod rha rhb) (blinding_of_prod boa bob) merge_prod"
+ (is "merge_on ?A ?h ?bo ?m")
+proof -
+ interpret a: merge_on A rha boa ma by fact
+ interpret b: merge_on B rhb bob mb by fact
+ show ?thesis
+ proof
+ show "\<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)"
+ if "?h a = ?h b" "a \<in> ?A" for a b using that
+ by(cases "(a, b)" rule: merge_prod.cases)(auto dest!: a.join b.join)
+ show "?m a b = None" if "?h a \<noteq> ?h b" "a \<in> ?A" for a b using that
+ by(cases "(a, b)" rule: merge_prod.cases)(auto dest!: a.undefined b.undefined)
+ qed
+qed
+
+lemmas merge_prod [locale_witness] = merge_on_prod[where A=UNIV and B=UNIV, simplified]
+
+lemma merge_prod_alt_def:
+ "merge_prod = (\<lambda>(x, y) (x', y'). Option.bind (ma x x') (\<lambda>x''. map_option (Pair x'') (mb y y')))"
+ by(simp add: fun_eq_iff)
+
+end
+
+lemma merge_prod_cong[fundef_cong]:
+ assumes "\<And>a b. \<lbrakk> a \<in> fsts p1; b \<in> fsts p2 \<rbrakk> \<Longrightarrow> ma a b = ma' a b"
+ and "\<And>a b. \<lbrakk> a \<in> snds p1; b \<in> snds p2 \<rbrakk> \<Longrightarrow> mb a b = mb' a b"
+ shows "merge_prod ma mb p1 p2 = merge_prod ma' mb' p1 p2"
+ using assms by(cases p1; cases p2) auto
+
+parametric_constant merge_prod_parametric [transfer_rule]: merge_prod_alt_def
+
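+text \<open>Illustrative example (not required by the development): products merge component-wise
+  and fail as soon as one component fails to merge.\<close>
+
+lemma "merge_prod merge_discrete merge_discrete (0 :: nat, 1 :: nat) (0, 1) = Some (0, 1)"
+  and "merge_prod merge_discrete merge_discrete (0 :: nat, 1 :: nat) (0, 2) = None"
+  by (simp_all add: merge_discrete_def)
+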
+(************************************************************)
+subsubsection \<open> Merkle Interface \<close>
+(************************************************************)
+
+lemma merkle_product [locale_witness]:
+ assumes "merkle_interface rha boa ma" "merkle_interface rhb bob mb"
+ shows "merkle_interface (hash_prod rha rhb) (blinding_of_prod boa bob) (merge_prod ma mb)"
+proof -
+ interpret a: merge_on UNIV rha boa ma unfolding merkle_interface_aux[symmetric] by fact
+ interpret b: merge_on UNIV rhb bob mb unfolding merkle_interface_aux[symmetric] by fact
+ show ?thesis unfolding merkle_interface_aux[symmetric] ..
+qed
+
+
+(************************************************************)
+subsection \<open>Building Block: Lists\<close>
+(************************************************************)
+
+text \<open>The ADS construction on lists is done the easiest through a separate isomorphic datatype
+ that has only a single constructor. We hide this construction in a locale. \<close>
+
+locale list_R1 begin
+
+type_synonym ('a, 'b) list_F = "unit + 'a \<times> 'b"
+
+abbreviation (input) "set_base_F\<^sub>m \<equiv> \<lambda>x. setr x \<bind> fsts"
+abbreviation (input) "set_rec_F\<^sub>m \<equiv> \<lambda>A. setr A \<bind> snds"
+abbreviation (input) "map_F \<equiv> \<lambda>fb fr. map_sum id (map_prod fb fr)"
+
+datatype 'a list_R1 = list_R1 (unR: "('a, 'a list_R1) list_F")
+
+lemma list_R1_const_into_dest: "list_R1 F = l \<longleftrightarrow> F = unR l"
+ by auto
+
+declare list_R1.split[split]
+
+lemma list_R1_induct[case_names list_R1]:
+ assumes "\<And>F. \<lbrakk> \<And>l'. l' \<in> set_rec_F\<^sub>m F \<Longrightarrow> P l' \<rbrakk> \<Longrightarrow> P (list_R1 F)"
+ shows "P l"
+ apply(rule list_R1.induct)
+ apply(auto intro!: assms)
+ done
+
+lemma set_list_R1_eq:
+ "{x. set_base_F\<^sub>m x \<subseteq> A \<and> set_rec_F\<^sub>m x \<subseteq> B} =
+ {x. setl x \<subseteq> UNIV \<and> setr x \<subseteq> {x. fsts x \<subseteq> A \<and> snds x \<subseteq> B}}"
+ by(auto simp add: bind_UNION)
+
+(************************************************************)
+subsubsection \<open> The Isomorphism \<close>
+(************************************************************)
+
+primrec (transfer) list_R1_to_list :: "'a list_R1 \<Rightarrow> 'a list" where
+ "list_R1_to_list (list_R1 l) = (case map_sum id (map_prod id list_R1_to_list) l of Inl () \<Rightarrow> [] | Inr (x, xs) \<Rightarrow> x # xs)"
+
+lemma list_R1_to_list_simps [simp]:
+ "list_R1_to_list (list_R1 (Inl ())) = []"
+ "list_R1_to_list (list_R1 (Inr (x, xs))) = x # list_R1_to_list xs"
+ by(simp_all split: unit.split)
+
+declare list_R1_to_list.simps [simp del]
+
+primrec (transfer) list_to_list_R1 :: "'a list \<Rightarrow> 'a list_R1" where
+ "list_to_list_R1 [] = list_R1 (Inl ())"
+| "list_to_list_R1 (x#xs) = list_R1 (Inr (x, list_to_list_R1 xs))"
+
+lemma R1_of_list: "list_R1_to_list (list_to_list_R1 x) = x"
+ by(induct x) (auto)
+
+lemma list_of_R1: "list_to_list_R1 (list_R1_to_list x) = x"
+ apply(induct x)
+ subgoal for x
+ by(cases x) (auto)
+ done
+
+lemma list_R1_def: "type_definition list_to_list_R1 list_R1_to_list UNIV"
+ by(unfold_locales)(auto intro: R1_of_list list_of_R1)
+
+setup_lifting list_R1_def
+
+lemma map_list_R1_list_to_list_R1: "map_list_R1 f (list_to_list_R1 xs) = list_to_list_R1 (map f xs)"
+ by(induction xs) auto
+
+lemma list_R1_map_trans [transfer_rule]: includes lifting_syntax shows
+ "(((=) ===> (=)) ===> pcr_list (=) ===> pcr_list (=)) map_list_R1 map"
+ by(auto 4 3 simp add: list.pcr_cr_eq rel_fun_eq cr_list_def map_list_R1_list_to_list_R1)
+
+lemma set_list_R1_list_to_list_R1: "set_list_R1 (list_to_list_R1 xs) = set xs"
+ by(induction xs) auto
+
+lemma list_R1_set_trans [transfer_rule]: includes lifting_syntax shows
+ "(pcr_list (=) ===> (=)) set_list_R1 set"
+ by(auto simp add: list.pcr_cr_eq cr_list_def set_list_R1_list_to_list_R1)
+
+lemma rel_list_R1_list_to_list_R1:
+ "rel_list_R1 R (list_to_list_R1 xs) (list_to_list_R1 ys) \<longleftrightarrow> list_all2 R xs ys"
+ (is "?lhs \<longleftrightarrow> ?rhs")
+proof
+ define xs' and ys' where "xs' = list_to_list_R1 xs" and "ys' = list_to_list_R1 ys"
+ assume "rel_list_R1 R xs' ys'"
+ then have "list_all2 R (list_R1_to_list xs') (list_R1_to_list ys')"
+ by induction(auto elim!: rel_sum.cases)
+ thus ?rhs by(simp add: xs'_def ys'_def R1_of_list)
+next
+ show ?lhs if ?rhs using that by induction auto
+qed
+
+lemma list_R1_rel_trans[transfer_rule]: includes lifting_syntax shows
+ "(((=) ===> (=) ===> (=)) ===> pcr_list (=) ===> pcr_list (=) ===> (=)) rel_list_R1 list_all2"
+ by(auto 4 4 simp add: list.pcr_cr_eq rel_fun_eq cr_list_def rel_list_R1_list_to_list_R1)
+
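+text \<open>Illustrative example (not required by the development): converting a concrete list to
+  the single-constructor representation and back is the identity, as an instance of lemma
+  R1_of_list above.\<close>
+
+lemma "list_R1_to_list (list_to_list_R1 [0 :: nat, 1]) = [0, 1]"
+  by (rule R1_of_list)
+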
+(************************************************************)
+subsubsection \<open> Hashes \<close>
+(************************************************************)
+
+type_synonym ('a\<^sub>h, 'b\<^sub>h) list_F\<^sub>h = "unit +\<^sub>h 'a\<^sub>h \<times>\<^sub>h 'b\<^sub>h"
+
+type_synonym ('a\<^sub>m, 'b\<^sub>m) list_F\<^sub>m = "unit +\<^sub>m 'a\<^sub>m \<times>\<^sub>m 'b\<^sub>m"
+
+type_synonym 'a\<^sub>h list_R1\<^sub>h = "'a\<^sub>h list_R1"
+ \<comment> \<open>In theory, we should define a separate datatype here of the functor @{typ "('a\<^sub>h, _) list_F\<^sub>h"}.
+ We take a shortcut because they're isomorphic.\<close>
+
+type_synonym 'a\<^sub>m list_R1\<^sub>m = "'a\<^sub>m list_R1"
+ \<comment> \<open>In theory, we should define a separate datatype here of the functor @{typ "('a\<^sub>m, _) list_F\<^sub>m"}.
+ We take a shortcut because they're isomorphic.\<close>
+
+definition hash_F :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> (('a\<^sub>m, 'b\<^sub>m) list_F\<^sub>m, ('a\<^sub>h, 'b\<^sub>h) list_F\<^sub>h) hash" where
+ "hash_F h rhL = hash_sum hash_unit (hash_prod h rhL)"
+
+abbreviation (input) hash_R1 :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> ('a\<^sub>m list_R1\<^sub>m, 'a\<^sub>h list_R1\<^sub>h) hash" where
+ "hash_R1 \<equiv> map_list_R1"
+
+parametric_constant hash_F_parametric[transfer_rule]: hash_F_def
+
+(************************************************************)
+subsubsection \<open> Blinding \<close>
+(************************************************************)
+
+definition blinding_of_F :: "'a\<^sub>m blinding_of \<Rightarrow> 'b\<^sub>m blinding_of \<Rightarrow> ('a\<^sub>m, 'b\<^sub>m) list_F\<^sub>m blinding_of" where
+ "blinding_of_F bo bL = blinding_of_sum blinding_of_unit (blinding_of_prod bo bL)"
+
+abbreviation (input) blinding_of_R1 :: "'a blinding_of \<Rightarrow> 'a list_R1 blinding_of" where
+ "blinding_of_R1 \<equiv> rel_list_R1"
+
+lemma blinding_of_hash_R1:
+ assumes "bo \<le> vimage2p h h (=)"
+ shows "blinding_of_R1 bo \<le> vimage2p (hash_R1 h) (hash_R1 h) (=)"
+ apply(rule predicate2I vimage2pI)+
+ apply(auto simp add: predicate2D_vimage2p[OF assms] elim!: list_R1.rel_induct rel_sum.cases rel_prod.cases)
+ done
+
+lemma blinding_of_on_R1 [locale_witness]:
+ assumes "blinding_of_on A h bo"
+ shows "blinding_of_on {x. set_list_R1 x \<subseteq> A} (hash_R1 h) (blinding_of_R1 bo)"
+ (is "blinding_of_on ?A ?h ?bo")
+proof -
+ interpret a: blinding_of_on A h bo by fact
+ show ?thesis
+ proof
+ show hash: "?bo \<le> vimage2p ?h ?h (=)" using a.hash by(rule blinding_of_hash_R1)
+
+ have "?bo x x \<and> (?bo x y \<longrightarrow> ?bo y z \<longrightarrow> ?bo x z) \<and> (?bo x y \<longrightarrow> ?bo y x \<longrightarrow> x = y)" if "x \<in> ?A" for x y z using that
+ proof(induction x arbitrary: y z)
+ case (list_R1 x y' z')
+ from list_R1.prems have s1: "set_base_F\<^sub>m x \<subseteq> A" by(fastforce)
+ from list_R1.prems have s3: "set_rec_F\<^sub>m x \<bind> set_list_R1 \<subseteq> A" by(fastforce intro: rev_bexI)
+
+ interpret F: blinding_of_on "{y. set_base_F\<^sub>m y \<subseteq> A \<and> set_rec_F\<^sub>m y \<subseteq> set_rec_F\<^sub>m x}"
+ "hash_F h (hash_R1 h)" "blinding_of_F bo (blinding_of_R1 bo)"
+ unfolding hash_F_def blinding_of_F_def set_list_R1_eq
+ proof
+ let ?A' = "setr x \<bind> snds" and ?bo' = "rel_list_R1 bo"
+ show "?bo' x x" if "x \<in> ?A'" for x using that list_R1 by(force simp add: eq_onp_def)
+ show "?bo' x z" if "?bo' x y" "?bo' y z" "x \<in> ?A'" for x y z
+ using that list_R1.IH[of _ x y z] list_R1.prems
+ by(force simp add: bind_UNION prod_set_defs)
+ show "x = y" if "?bo' x y" "?bo' y x" "x \<in> ?A'" for x y
+ using that list_R1.IH[of _ x y] list_R1.prems
+ by(force simp add: prod_set_defs)
+ qed(rule hash)
+ show ?case using list_R1.prems
+ apply(intro conjI)
+ subgoal using F.refl[of x] s1 unfolding blinding_of_F_def by(auto intro: list_R1.rel_intros)
+ subgoal using s1 by(auto elim!: list_R1.rel_cases F.trans[unfolded blinding_of_F_def] intro: list_R1.rel_intros)
+ subgoal using s1 by(auto elim!: list_R1.rel_cases dest: F.antisym[unfolded blinding_of_F_def])
+ done
+ qed
+ then show "x \<in> ?A \<Longrightarrow> ?bo x x"
+ and "\<lbrakk> ?bo x y; ?bo y z; x \<in> ?A \<rbrakk> \<Longrightarrow> ?bo x z"
+ and "\<lbrakk> ?bo x y; ?bo y x; x \<in> ?A \<rbrakk> \<Longrightarrow> x = y"
+ for x y z by blast+
+ qed
+qed
+
+lemmas blinding_of_R1 [locale_witness] = blinding_of_on_R1[where A=UNIV, simplified]
+
+parametric_constant blinding_of_F_parametric[transfer_rule]: blinding_of_F_def
+
+(************************************************************)
+subsubsection \<open> Merging \<close>
+(************************************************************)
+
+definition merge_F :: "'a\<^sub>m merge \<Rightarrow> 'b\<^sub>m merge \<Rightarrow> ('a\<^sub>m, 'b\<^sub>m) list_F\<^sub>m merge" where
+ "merge_F m mL = merge_sum merge_unit (merge_prod m mL)"
+
+lemma merge_F_cong[fundef_cong]:
+ assumes "\<And>a b. \<lbrakk> a \<in> set_base_F\<^sub>m x; b \<in> set_base_F\<^sub>m y \<rbrakk> \<Longrightarrow> m a b = m' a b"
+ and "\<And>a b. \<lbrakk> a \<in> set_rec_F\<^sub>m x; b \<in> set_rec_F\<^sub>m y \<rbrakk> \<Longrightarrow> mL a b = mL' a b"
+ shows "merge_F m mL x y = merge_F m' mL' x y"
+ using assms
+ apply(cases x; cases y)
+ apply(simp_all add: merge_F_def)
+ apply(rule arg_cong[where f="map_option _"])
+ apply(blast intro: merge_prod_cong)
+ done
+
+context
+ fixes m :: "'a\<^sub>m merge"
+ notes setr.simps[simp]
+begin
+fun merge_R1 :: "'a\<^sub>m list_R1\<^sub>m merge" where
+ "merge_R1 (list_R1 l1) (list_R1 l2) = map_option list_R1 (merge_F m merge_R1 l1 l2)"
+end
+
+case_of_simps merge_cases [simp]: merge_R1.simps
+
+lemma merge_on_R1:
+ assumes "merge_on A h bo m"
+ shows "merge_on {x. set_list_R1 x \<subseteq> A } (hash_R1 h) (blinding_of_R1 bo) (merge_R1 m)"
+ (is "merge_on ?A ?h ?bo ?m")
+proof -
+ interpret a: merge_on A h bo m by fact
+ show ?thesis
+ proof
+ have "(?h a = ?h b \<longrightarrow> (\<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u))) \<and>
+ (?h a \<noteq> ?h b \<longrightarrow> ?m a b = None)"
+ if "a \<in> ?A" for a b using that unfolding mem_Collect_eq
+ proof(induction a arbitrary: b rule: list_R1_induct)
+ case wfInd: (list_R1 l)
+ interpret merge_on "{y. set_base_F\<^sub>m y \<subseteq> A \<and> set_rec_F\<^sub>m y \<subseteq> set_rec_F\<^sub>m l}"
+ "hash_F h ?h" "blinding_of_F bo ?bo" "merge_F m ?m"
+ unfolding set_list_R1_eq hash_F_def merge_F_def blinding_of_F_def
+ proof
+ fix a
+ assume a: "a \<in> set_rec_F\<^sub>m l"
+ with wfInd.prems have a': "set_list_R1 a \<subseteq> A"
+ by fastforce
+
+ show "hash_R1 h a = hash_R1 h b
+ \<Longrightarrow> \<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and>
+ (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)"
+ and "?h a \<noteq> ?h b \<Longrightarrow> ?m a b = None" for b
+ using wfInd.IH[OF a a', rule_format, of b]
+ by(auto dest: sym)
+ qed
+ show ?case using wfInd.prems
+ apply(intro conjI strip)
+ subgoal
+ by(auto 4 4 dest!: join[unfolded hash_F_def]
+ simp add: blinding_of_F_def UN_subset_iff list_R1.rel_sel)
+ subgoal by(auto 4 3 intro!: undefined[simplified hash_F_def])
+ done
+ qed
+ then show
+ "?h a = ?h b \<Longrightarrow> \<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)"
+ "?h a \<noteq> ?h b \<Longrightarrow> ?m a b = None"
+ if "a \<in> ?A" for a b using that by blast+
+ qed
+qed
+
+lemmas merge_R1 [locale_witness] = merge_on_R1[where A=UNIV, simplified]
+
+lemma merkle_list_R1 [locale_witness]:
+ assumes "merkle_interface h bo m"
+ shows "merkle_interface (hash_R1 h) (blinding_of_R1 bo) (merge_R1 m)"
+proof -
+ interpret merge_on UNIV h bo m using assms by(unfold merkle_interface_aux)
+ show ?thesis unfolding merkle_interface_aux[symmetric] ..
+qed
+
+lemma merge_R1_cong [fundef_cong]:
+ assumes "\<And>a b. \<lbrakk> a \<in> set_list_R1 x; b \<in> set_list_R1 y \<rbrakk> \<Longrightarrow> m a b = m' a b"
+ shows "merge_R1 m x y = merge_R1 m' x y"
+ using assms
+ apply(induction x y rule: merge_R1.induct)
+ apply(simp del: merge_cases)
+ apply(rule arg_cong[where f="map_option _"])
+ apply(blast intro: merge_F_cong[unfolded bind_UNION])
+ done
+
+parametric_constant merge_F_parametric[transfer_rule]: merge_F_def
+
+lemma merge_R1_parametric [transfer_rule]:
+ includes lifting_syntax
+ notes [simp del] = merge_cases
+ assumes [transfer_rule]: "bi_unique A"
+ shows "((A ===> A ===> rel_option A) ===> rel_list_R1 A ===> rel_list_R1 A ===> rel_option (rel_list_R1 A))
+ merge_R1 merge_R1"
+ apply(intro rel_funI)
+ subgoal premises prems [transfer_rule] for m1 m2 xs1 xs2 ys1 ys2 using prems(2, 3)
+ apply(induction xs1 ys1 arbitrary: xs2 ys2 rule: merge_R1.induct)
+ apply(elim list_R1.rel_cases rel_sum.cases; clarsimp simp add: option.rel_map merge_F_def merge_discrete_def)
+ apply(elim meta_allE; (erule meta_impE, simp)+)
+ subgoal premises [transfer_rule] by transfer_prover
+ done
+ done
+
+end
+
+subsubsection \<open> Transferring the Constructions to Lists \<close>
+type_synonym 'a\<^sub>h list\<^sub>h = "'a\<^sub>h list"
+type_synonym 'a\<^sub>m list\<^sub>m = "'a\<^sub>m list"
+
+context begin
+interpretation list_R1 .
+
+abbreviation (input) hash_list :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> ('a\<^sub>m list\<^sub>m, 'a\<^sub>h list\<^sub>h) hash"
+ where "hash_list \<equiv> map"
+abbreviation (input) blinding_of_list :: "'a\<^sub>m blinding_of \<Rightarrow> 'a\<^sub>m list\<^sub>m blinding_of"
+ where "blinding_of_list \<equiv> list_all2"
+lift_definition merge_list :: "'a\<^sub>m merge \<Rightarrow> 'a\<^sub>m list\<^sub>m merge" is merge_R1 .
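+ \<comment> \<open>Hashing a list hashes the elements but keeps the list structure visible. One list is a
+ blinding of another iff they have the same length and the elements are related pointwise;
+ merging is also pointwise and is defined only if the lengths agree and every pointwise merge succeeds.\<close>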
+
+lemma blinding_of_list_mono:
+ "\<lbrakk> \<And>x y. bo x y \<longrightarrow> bo' x y \<rbrakk> \<Longrightarrow>
+ blinding_of_list bo x y \<longrightarrow> blinding_of_list bo' x y"
+ by (transfer) (blast intro: list_R1.rel_mono_strong)
+
+lemmas blinding_of_list_hash = blinding_of_hash_R1[Transfer.transferred]
+ and blinding_of_on_list [locale_witness] = blinding_of_on_R1[Transfer.transferred]
+ and blinding_of_list [locale_witness] = blinding_of_R1[Transfer.transferred]
+ and merge_on_list [locale_witness] = merge_on_R1[Transfer.transferred]
+ and merge_list [locale_witness] = merge_R1[Transfer.transferred]
+ and merge_list_cong = merge_R1_cong[Transfer.transferred]
+
+lemma blinding_of_list_mono_pred:
+ "R \<le> R' \<Longrightarrow> blinding_of_list R \<le> blinding_of_list R'"
+ by(transfer) (rule list_R1.rel_mono)
+
+lemma blinding_of_list_simp: "blinding_of_list = list_all2"
+ by(transfer) (rule refl)
+
+lemma merkle_list [locale_witness]:
+ assumes [locale_witness]: "merkle_interface h bo m"
+ shows "merkle_interface (hash_list h) (blinding_of_list bo) (merge_list m)"
+ by(transfer fixing: h bo m) unfold_locales
+
+parametric_constant merge_list_parametric [transfer_rule]: merge_list_def
+
+lifting_update list.lifting
+lifting_forget list.lifting
+
+end
+
+
+(************************************************************)
+subsection \<open>Building block: function space\<close>
+(************************************************************)
+
+text \<open> We prove that we can lift the ADS construction through functions.\<close>
+
+type_synonym ('a, 'b\<^sub>h) fun\<^sub>h = "'a \<Rightarrow> 'b\<^sub>h"
+type_notation fun\<^sub>h (infixr "\<Rightarrow>\<^sub>h" 0)
+
+type_synonym ('a, 'b\<^sub>m) fun\<^sub>m = "'a \<Rightarrow> 'b\<^sub>m"
+type_notation fun\<^sub>m (infixr "\<Rightarrow>\<^sub>m" 0)
+
+(************************************************************)
+subsubsection \<open> Hashes \<close>
+(************************************************************)
+
+text \<open> Only the range is live; the domain is dead, as for BNFs. \<close>
+
+abbreviation (input) hash_fun' :: "('a \<Rightarrow>\<^sub>m 'b\<^sub>h, 'a \<Rightarrow>\<^sub>h 'b\<^sub>h) hash" where
+ "hash_fun' \<equiv> id"
+
+abbreviation (input) hash_fun :: "('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> ('a \<Rightarrow>\<^sub>m 'b\<^sub>m, 'a \<Rightarrow>\<^sub>h 'b\<^sub>h) hash"
+ where "hash_fun \<equiv> comp"
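+ \<comment> \<open>Hashing a function hashes its results, i.e., it post-composes the codomain hash with the
+ function; the domain is left untouched.\<close>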
+
+(************************************************************)
+subsubsection \<open> Blinding \<close>
+(************************************************************)
+
+abbreviation (input) blinding_of_fun :: "'b\<^sub>m blinding_of \<Rightarrow> ('a \<Rightarrow>\<^sub>m 'b\<^sub>m) blinding_of" where
+ "blinding_of_fun \<equiv> rel_fun (=)"
+
+lemmas blinding_of_fun_mono = fun.rel_mono
+
+lemma blinding_of_fun_hash:
+ assumes "bo \<le> vimage2p rh rh (=)"
+ shows "blinding_of_fun bo \<le> vimage2p (hash_fun rh) (hash_fun rh) (=)"
+ using assms by(auto simp add: vimage2p_def rel_fun_def le_fun_def)
+
+lemma blinding_of_on_fun [locale_witness]:
+ assumes "blinding_of_on A rh bo"
+ shows "blinding_of_on {x. range x \<subseteq> A} (hash_fun rh) (blinding_of_fun bo)"
+ (is "blinding_of_on ?A ?h ?bo")
+proof -
+ interpret a: blinding_of_on A rh bo by fact
+ show ?thesis
+ proof
+ show "?bo x x" if "x \<in> ?A" for x using that by(auto simp add: rel_fun_def intro: a.refl)
+ show "?bo x z" if "?bo x y" "?bo y z" "x \<in> ?A" for x y z using that
+ by(auto 4 3 simp add: rel_fun_def intro: a.trans)
+ show "x = y" if "?bo x y" "?bo y x" "x \<in> ?A" for x y using that
+ by(fastforce simp add: fun_eq_iff rel_fun_def intro: a.antisym)
+ qed(rule blinding_of_fun_hash a.hash)+
+qed
+
+lemmas blinding_of_fun [locale_witness] = blinding_of_on_fun[where A=UNIV, simplified]
+
+(************************************************************)
+subsubsection \<open> Merging \<close>
+(************************************************************)
+
+context
+ fixes m :: "'b\<^sub>m merge"
+begin
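+ \<comment> \<open>Functions are merged pointwise: the merge is defined only if the underlying merge \<open>m\<close>
+ succeeds at every argument.\<close>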
+
+definition merge_fun :: "('a \<Rightarrow>\<^sub>m 'b\<^sub>m) merge" where
+ "merge_fun f g = (if \<forall>x. m (f x) (g x) \<noteq> None then Some (\<lambda>x. the (m (f x) (g x))) else None)"
+
+lemma merge_on_fun [locale_witness]:
+ assumes "merge_on A rh bo m"
+ shows "merge_on {x. range x \<subseteq> A} (hash_fun rh) (blinding_of_fun bo) merge_fun"
+ (is "merge_on ?A ?h ?bo ?m")
+proof -
+ interpret a: merge_on A rh bo m by fact
+ show ?thesis
+ proof
+ show "\<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)"
+ if "?h a = ?h b" "a \<in> ?A" for a b
+ using that(1)[THEN fun_cong, unfolded o_apply, THEN a.join, OF that(2)[unfolded mem_Collect_eq, THEN subsetD, OF rangeI]]
+ by atomize(subst (asm) choice_iff; auto simp add: merge_fun_def rel_fun_def)
+ show "?m a b = None" if "?h a \<noteq> ?h b" "a \<in> ?A" for a b using that
+ by(auto simp add: merge_fun_def fun_eq_iff dest: a.undefined)
+ qed
+qed
+
+lemmas merge_fun [locale_witness] = merge_on_fun[where A=UNIV, simplified]
+
+end
+
+lemma merge_fun_cong[fundef_cong]:
+ assumes "\<And>a b. \<lbrakk> a \<in> range f; b \<in> range g \<rbrakk> \<Longrightarrow> m a b = m' a b"
+ shows "merge_fun m f g = merge_fun m' f g"
+ using assms[OF rangeI rangeI] by(clarsimp simp add: merge_fun_def)
+
+lemma is_none_alt_def: "Option.is_none x \<longleftrightarrow> (case x of None \<Rightarrow> True | Some _ \<Rightarrow> False)"
+ by(auto simp add: Option.is_none_def split: option.splits)
+
+parametric_constant is_none_parametric [transfer_rule]: is_none_alt_def
+
+lemma merge_fun_parametric [transfer_rule]: includes lifting_syntax shows
+ "((A ===> B ===> rel_option C) ===> ((=) ===> A) ===> ((=) ===> B) ===> rel_option ((=) ===> C))
+ merge_fun merge_fun"
+proof(intro rel_funI)
+ fix m :: "'a merge" and m' :: "'b merge" and f :: "'c \<Rightarrow> 'a" and f' :: "'c \<Rightarrow> 'b" and g :: "'c \<Rightarrow> 'a" and g' :: "'c \<Rightarrow> 'b"
+ assume m: "(A ===> B ===> rel_option C) m m'"
+ and f: "((=) ===> A) f f'" and g: "((=) ===> B) g g'"
+ note [transfer_rule] = this
+ have cond [unfolded Option.is_none_def]: "(\<forall>x. \<not> Option.is_none (m (f x) (g x))) \<longleftrightarrow> (\<forall>x. \<not> Option.is_none (m' (f' x) (g' x)))"
+ by transfer_prover
+ moreover
+ have "((=) ===> C) (\<lambda>x. the (m (f x) (g x))) (\<lambda>x. the (m' (f' x) (g' x)))" if *: "\<forall>x. \<not> m (f x) (g x) = None"
+ proof -
+ obtain fg fg' where m: "m (f x) (g x) = Some (fg x)" and m': "m' (f' x) (g' x) = Some (fg' x)" for x
+ using * *[simplified cond]
+ by(simp)(subst (asm) (1 2) choice_iff; clarsimp)
+ have "rel_option C (Some (fg x)) (Some (fg' x))" for x unfolding m[symmetric] m'[symmetric] by transfer_prover
+ then show ?thesis by(simp add: rel_fun_def m m')
+ qed
+ ultimately show "rel_option ((=) ===> C) (merge_fun m f g) (merge_fun m' f' g')"
+ unfolding merge_fun_def by(simp)
+qed
+
+(************************************************************)
+subsubsection \<open> Merkle Interface \<close>
+(************************************************************)
+
+lemma merkle_fun [locale_witness]:
+ assumes "merkle_interface rh bo m"
+ shows "merkle_interface (hash_fun rh) (blinding_of_fun bo) (merge_fun m)"
+proof -
+ interpret a: merge_on UNIV rh bo m unfolding merkle_interface_aux[symmetric] by fact
+ show ?thesis unfolding merkle_interface_aux[symmetric] ..
+qed
+
+(************************************************************)
+subsection \<open>Rose trees\<close>
+(************************************************************)
+
+text \<open>
+We now define an ADS over rose trees, which is like an arbitrarily branching Merkle tree in which
+every node, including the root, can be blinded. The number of children and the position of a
+child among its siblings cannot be hidden. The construction allows plugging in further blindable
+positions in the labels of the nodes.
+\<close>
+
+type_synonym ('a, 'b) rose_tree_F = "'a \<times> 'b list"
+
+abbreviation (input) map_rose_tree_F where
+ "map_rose_tree_F f1 f2 \<equiv> map_prod f1 (map f2)"
+definition map_rose_tree_F_const where
+ "map_rose_tree_F_const f1 f2 \<equiv> map_rose_tree_F f1 f2"
+
+datatype 'a rose_tree = Tree "('a, 'a rose_tree) rose_tree_F"
+
+type_synonym ('a\<^sub>h, 'b\<^sub>h) rose_tree_F\<^sub>h = "('a\<^sub>h \<times>\<^sub>h 'b\<^sub>h list\<^sub>h) blindable\<^sub>h"
+
+datatype 'a\<^sub>h rose_tree\<^sub>h = Tree\<^sub>h "('a\<^sub>h, 'a\<^sub>h rose_tree\<^sub>h) rose_tree_F\<^sub>h"
+
+type_synonym ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) rose_tree_F\<^sub>m = "('a\<^sub>m \<times>\<^sub>m 'b\<^sub>m list\<^sub>m, 'a\<^sub>h \<times>\<^sub>h 'b\<^sub>h list\<^sub>h) blindable\<^sub>m"
+
+datatype ('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m = Tree\<^sub>m "('a\<^sub>m, 'a\<^sub>h, ('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m, 'a\<^sub>h rose_tree\<^sub>h) rose_tree_F\<^sub>m"
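+text \<open>For illustration only (this small example is not used in the development): a source rose tree
+ over @{typ nat} and a partially blinded inclusion proof for it, in which the root label and the
+ first subtree are unblinded while the second subtree is replaced by (the representation of) the
+ hash of its contents.\<close>
+term "Tree ((0 :: nat), [Tree (1, []), Tree (2, [])])"
+term "Tree\<^sub>m (Unblinded ((0 :: nat), [Tree\<^sub>m (Unblinded (1, [])), Tree\<^sub>m (Blinded (Content ((2 :: nat), [])))]))"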
+
+abbreviation (input) map_rose_tree_F\<^sub>m
+ :: "('ma \<Rightarrow> 'a) \<Rightarrow> ('mr \<Rightarrow> 'r) \<Rightarrow> ('ma, 'ha, 'mr, 'hr) rose_tree_F\<^sub>m \<Rightarrow> ('a, 'ha, 'r, 'hr) rose_tree_F\<^sub>m"
+ where
+ "map_rose_tree_F\<^sub>m f g \<equiv> map_blindable\<^sub>m (map_prod f (map g)) id"
+
+(************************************************************)
+subsubsection \<open> Hashes \<close>
+(************************************************************)
+
+abbreviation (input) hash_rt_F'
+ :: "(('a\<^sub>h, 'a\<^sub>h, 'b\<^sub>h, 'b\<^sub>h) rose_tree_F\<^sub>m, ('a\<^sub>h, 'b\<^sub>h) rose_tree_F\<^sub>h) hash"
+ where
+ "hash_rt_F' \<equiv> hash_blindable id"
+
+definition hash_rt_F\<^sub>m
+ :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow>
+ (('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) rose_tree_F\<^sub>m, ('a\<^sub>h, 'b\<^sub>h) rose_tree_F\<^sub>h) hash" where
+ "hash_rt_F\<^sub>m h rhm \<equiv> hash_rt_F' o map_rose_tree_F\<^sub>m h rhm"
+
+lemma hash_rt_F\<^sub>m_alt_def: "hash_rt_F\<^sub>m h rhm = hash_blindable (map_prod h (map rhm))"
+ by(simp add: hash_rt_F\<^sub>m_def fun_eq_iff hash_map_blindable_simp)
+
+primrec (transfer) hash_rt_tree'
+ :: "(('a\<^sub>h, 'a\<^sub>h) rose_tree\<^sub>m, 'a\<^sub>h rose_tree\<^sub>h) hash" where
+ "hash_rt_tree' (Tree\<^sub>m x) = Tree\<^sub>h (hash_rt_F' (map_rose_tree_F\<^sub>m id hash_rt_tree' x))"
+
+definition hash_tree
+ :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> (('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m, 'a\<^sub>h rose_tree\<^sub>h) hash" where
+ "hash_tree h = hash_rt_tree' o map_rose_tree\<^sub>m h id"
+
+lemma blindable\<^sub>m_map_compositionality:
+ "map_blindable\<^sub>m f g o map_blindable\<^sub>m f' g' = map_blindable\<^sub>m (f o f') (g o g')"
+ by(rule ext) (simp add: blindable\<^sub>m.map_comp)
+
+lemma hash_tree_simps [simp]:
+ "hash_tree h (Tree\<^sub>m x) = Tree\<^sub>h (hash_rt_F\<^sub>m h (hash_tree h) x)"
+ by(simp add: hash_tree_def hash_rt_F\<^sub>m_def
+ map_prod.comp map_sum.comp rose_tree\<^sub>h.map_comp blindable\<^sub>m.map_comp
+ prod.map_id0 rose_tree\<^sub>h.map_id0)
+
+parametric_constant hash_rt_F\<^sub>m_parametric [transfer_rule]: hash_rt_F\<^sub>m_alt_def
+
+parametric_constant hash_tree_parametric [transfer_rule]: hash_tree_def
+
+(************************************************************)
+subsubsection \<open> Blinding \<close>
+(************************************************************)
+
+abbreviation (input) blinding_of_rt_F\<^sub>m
+ :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> 'a\<^sub>m blinding_of \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> 'b\<^sub>m blinding_of
+ \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) rose_tree_F\<^sub>m blinding_of" where
+ "blinding_of_rt_F\<^sub>m ha boa hb bob \<equiv> blinding_of_blindable (hash_prod ha (map hb))
+ (blinding_of_prod boa (blinding_of_list bob))"
+
+lemma blinding_of_rt_F\<^sub>m_mono:
+ "\<lbrakk> boa \<le> boa'; bob \<le> bob' \<rbrakk> \<Longrightarrow> blinding_of_rt_F\<^sub>m ha boa hb bob \<le> blinding_of_rt_F\<^sub>m ha boa' hb bob'"
+ by(intro blinding_of_blindable_mono prod.rel_mono list.rel_mono)
+
+lemma blinding_of_rt_F\<^sub>m_mono_inductive:
+ assumes "\<And>x y. boa x y \<longrightarrow> boa' x y" "\<And>x y. bob x y \<longrightarrow> bob' x y"
+ shows "blinding_of_rt_F\<^sub>m ha boa hb bob x y \<longrightarrow> blinding_of_rt_F\<^sub>m ha boa' hb bob' x y"
+ apply(rule impI)
+ apply(erule blinding_of_rt_F\<^sub>m_mono[THEN predicate2D, rotated -1])
+ using assms by blast+
+
+context
+ fixes h :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and bo :: "'a\<^sub>m blinding_of"
+begin
+
+inductive blinding_of_tree :: "('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m blinding_of" where
+ "blinding_of_tree (Tree\<^sub>m t1) (Tree\<^sub>m t2)"
+ if "blinding_of_rt_F\<^sub>m h bo (hash_tree h) blinding_of_tree t1 t2"
+monos blinding_of_rt_F\<^sub>m_mono_inductive
+
+end
+
+inductive_simps blinding_of_tree_simps [simp]:
+ "blinding_of_tree h bo (Tree\<^sub>m t1) (Tree\<^sub>m t2)"
+
+lemma blinding_of_rt_F\<^sub>m_hash:
+ assumes "boa \<le> vimage2p ha ha (=)" "bob \<le> vimage2p hb hb (=)"
+ shows "blinding_of_rt_F\<^sub>m ha boa hb bob \<le> vimage2p (hash_rt_F\<^sub>m ha hb) (hash_rt_F\<^sub>m ha hb) (=)"
+ apply(rule order_trans)
+ apply(rule blinding_of_blindable_hash)
+ apply(fold relator_eq)
+ apply(unfold vimage2p_map_rel_prod vimage2p_map_list_all2)
+ apply(rule prod.rel_mono assms list.rel_mono)+
+ apply(simp only: hash_rt_F\<^sub>m_def vimage2p_comp o_apply hash_blindable_def blindable\<^sub>m.map_id0 id_def[symmetric] vimage2p_id id_apply)
+ done
+
+lemma blinding_of_tree_hash:
+ assumes "bo \<le> vimage2p h h (=)"
+ shows "blinding_of_tree h bo \<le> vimage2p (hash_tree h) (hash_tree h) (=)"
+ apply(rule predicate2I vimage2pI)+
+ apply(erule blinding_of_tree.induct)
+ apply(simp)
+ apply(erule blinding_of_rt_F\<^sub>m_hash[OF assms, THEN predicate2D_vimage2p, rotated 1])
+ apply(blast intro: vimage2pI)
+ done
+
+abbreviation (input) set1_rt_F\<^sub>m :: "('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>h, 'b\<^sub>m) rose_tree_F\<^sub>m \<Rightarrow> 'a\<^sub>m set" where
+ "set1_rt_F\<^sub>m x \<equiv> set1_blindable\<^sub>m x \<bind> fsts"
+
+abbreviation (input) set3_rt_F\<^sub>m :: "('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) rose_tree_F\<^sub>m \<Rightarrow> 'b\<^sub>m set" where
+ "set3_rt_F\<^sub>m x \<equiv> (set1_blindable\<^sub>m x \<bind> snds) \<bind> set"
+
+lemma set_rt_F\<^sub>m_eq:
+ "{x. set1_rt_F\<^sub>m x \<subseteq> A \<and> set3_rt_F\<^sub>m x \<subseteq> B} =
+ {x. set1_blindable\<^sub>m x \<subseteq> {x. fsts x \<subseteq> A \<and> snds x \<subseteq> {x. set x \<subseteq> B}}}"
+ by force
+
+lemma hash_blindable_map: "hash_blindable f \<circ> map_blindable\<^sub>m g id = hash_blindable (f \<circ> g)"
+ by(rule ext) (simp add: hash_blindable_def blindable\<^sub>m.map_comp)
+
+lemma blinding_of_on_tree [locale_witness]:
+ assumes "blinding_of_on A h bo"
+ shows "blinding_of_on {x. set1_rose_tree\<^sub>m x \<subseteq> A} (hash_tree h) (blinding_of_tree h bo)"
+ (is "blinding_of_on ?A ?h ?bo")
+proof -
+ interpret a: blinding_of_on A h bo by fact
+ show ?thesis
+ proof
+ show "?bo \<le> vimage2p ?h ?h (=)" using a.hash by(rule blinding_of_tree_hash)
+ have "?bo x x \<and> (?bo x y \<longrightarrow> ?bo y z \<longrightarrow> ?bo x z) \<and> (?bo x y \<longrightarrow> ?bo y x \<longrightarrow> x = y)" if "x \<in> ?A" for x y z using that
+ proof(induction x arbitrary: y z)
+ case (Tree\<^sub>m x)
+ have [locale_witness]: "blinding_of_on (set3_rt_F\<^sub>m x) (hash_tree h) (blinding_of_tree h bo)"
+ apply unfold_locales
+ subgoal by(rule blinding_of_tree_hash)(rule a.hash)
+ subgoal using Tree\<^sub>m.IH Tree\<^sub>m.prems by(fastforce simp add: eq_onp_def)
+ subgoal for x y z using Tree\<^sub>m.IH[of _ _ x y z] Tree\<^sub>m.prems by fastforce
+ subgoal for x y using Tree\<^sub>m.IH[of _ _ x y] Tree\<^sub>m.prems by fastforce
+ done
+ interpret blinding_of_on
+ "{a. set1_rt_F\<^sub>m a \<subseteq> A \<and> set3_rt_F\<^sub>m a \<subseteq> set3_rt_F\<^sub>m x}"
+ "hash_rt_F\<^sub>m h ?h" "blinding_of_rt_F\<^sub>m h bo ?h ?bo"
+ unfolding set_rt_F\<^sub>m_eq hash_rt_F\<^sub>m_alt_def ..
+ from Tree\<^sub>m.prems show ?case
+ apply(intro conjI)
+ subgoal by(fastforce intro!: blinding_of_tree.intros refl[unfolded hash_rt_F\<^sub>m_alt_def])
+ subgoal by(fastforce elim!: blinding_of_tree.cases trans[unfolded hash_rt_F\<^sub>m_alt_def]
+ intro!: blinding_of_tree.intros)
+ subgoal by(fastforce elim!: blinding_of_tree.cases antisym[unfolded hash_rt_F\<^sub>m_alt_def])
+ done
+ qed
+ then show "x \<in> ?A \<Longrightarrow> ?bo x x"
+ and "\<lbrakk> ?bo x y; ?bo y z; x \<in> ?A \<rbrakk> \<Longrightarrow> ?bo x z"
+ and "\<lbrakk> ?bo x y; ?bo y x; x \<in> ?A \<rbrakk> \<Longrightarrow> x = y"
+ for x y z by blast+
+ qed
+qed
+
+lemmas blinding_of_tree [locale_witness] = blinding_of_on_tree[where A=UNIV, simplified]
+
+lemma blinding_of_tree_mono:
+ "bo \<le> bo' \<Longrightarrow> blinding_of_tree h bo \<le> blinding_of_tree h bo'"
+ apply(rule predicate2I)
+ apply(erule blinding_of_tree.induct)
+ apply(rule blinding_of_tree.intros)
+ apply(erule blinding_of_rt_F\<^sub>m_mono[THEN predicate2D, rotated -1])
+ apply(blast)+
+ done
+
+(************************************************************)
+subsubsection \<open> Merging \<close>
+(************************************************************)
+
+definition merge_rt_F\<^sub>m
+ :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> 'a\<^sub>m merge \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> 'b\<^sub>m merge \<Rightarrow>
+ ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) rose_tree_F\<^sub>m merge"
+ where
+ "merge_rt_F\<^sub>m ha ma hr mr \<equiv> merge_blindable (hash_prod ha (hash_list hr)) (merge_prod ma (merge_list mr))"
+
+lemma merge_rt_F\<^sub>m_cong [fundef_cong]:
+ assumes "\<And>a b. \<lbrakk> a \<in> set1_rt_F\<^sub>m x; b \<in> set1_rt_F\<^sub>m y \<rbrakk> \<Longrightarrow> ma a b = ma' a b"
+ and "\<And>a b. \<lbrakk> a \<in> set3_rt_F\<^sub>m x; b \<in> set3_rt_F\<^sub>m y \<rbrakk> \<Longrightarrow> mm a b = mm' a b"
+ shows "merge_rt_F\<^sub>m ha ma hm mm x y = merge_rt_F\<^sub>m ha ma' hm mm' x y"
+ using assms
+ apply(cases x; cases y; simp add: merge_rt_F\<^sub>m_def bind_UNION)
+ apply(rule arg_cong[where f="map_option _"])
+ apply(blast intro: merge_prod_cong merge_list_cong)
+ done
+
+lemma in_set1_blindable\<^sub>m_iff: "x \<in> set1_blindable\<^sub>m y \<longleftrightarrow> y = Unblinded x"
+ by(cases y) auto
+
+context
+ fixes h :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and ma :: "'a\<^sub>m merge"
+ notes in_set1_blindable\<^sub>m_iff[simp]
+begin
+fun merge_tree :: "('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m merge" where
+ "merge_tree (Tree\<^sub>m x) (Tree\<^sub>m y) = map_option Tree\<^sub>m (
+ merge_rt_F\<^sub>m h ma (hash_tree h) merge_tree x y)"
+end
+
+lemma merge_on_tree [locale_witness]:
+ assumes "merge_on A h bo m"
+ shows "merge_on {x. set1_rose_tree\<^sub>m x \<subseteq> A} (hash_tree h) (blinding_of_tree h bo) (merge_tree h m)"
+ (is "merge_on ?A ?h ?bo ?m")
+proof -
+ interpret a: merge_on A h bo m by fact
+ show ?thesis
+ proof
+ have "(?h a = ?h b \<longrightarrow> (\<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u))) \<and>
+ (?h a \<noteq> ?h b \<longrightarrow> ?m a b = None)"
+ if "a \<in> ?A" for a b using that unfolding mem_Collect_eq
+ proof(induction a arbitrary: b rule: rose_tree\<^sub>m.induct)
+ case (Tree\<^sub>m x y)
+ interpret merge_on
+ "{y. set1_rt_F\<^sub>m y \<subseteq> A \<and> set3_rt_F\<^sub>m y \<subseteq> set3_rt_F\<^sub>m x}"
+ "hash_rt_F\<^sub>m h ?h"
+ "blinding_of_rt_F\<^sub>m h bo ?h ?bo"
+ "merge_rt_F\<^sub>m h m ?h ?m"
+ unfolding set_rt_F\<^sub>m_eq hash_rt_F\<^sub>m_alt_def merge_rt_F\<^sub>m_def
+ proof
+ fix a
+ assume a: "a \<in> set3_rt_F\<^sub>m x"
+ with Tree\<^sub>m.prems have a': "set1_rose_tree\<^sub>m a \<subseteq> A"
+ by(force simp add: bind_UNION)
+
+ from a obtain l and ab where a'': "ab \<in> set1_blindable\<^sub>m x" "l \<in> snds ab" "a \<in> set l"
+ by(clarsimp simp add: bind_UNION)
+
+ fix b
+ from Tree\<^sub>m.IH[OF a'' a', rule_format, of b]
+ show "hash_tree h a = hash_tree h b
+ \<Longrightarrow> \<exists>ab. merge_tree h m a b = Some ab \<and> blinding_of_tree h bo a ab \<and> blinding_of_tree h bo b ab \<and>
+ (\<forall>u. blinding_of_tree h bo a u \<longrightarrow> blinding_of_tree h bo b u \<longrightarrow> blinding_of_tree h bo ab u)"
+ and "hash_tree h a \<noteq> hash_tree h b \<Longrightarrow> merge_tree h m a b = None"
+ by(auto dest: sym)
+ qed
+ show ?case using Tree\<^sub>m.prems
+ apply(intro conjI strip)
+ subgoal by(cases y)(fastforce dest!: join simp add: blinding_of_tree.simps)
+ subgoal by (cases y) (fastforce dest!: undefined)
+ done
+ qed
+ then show
+ "?h a = ?h b \<Longrightarrow> \<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)"
+ "?h a \<noteq> ?h b \<Longrightarrow> ?m a b = None"
+ if "a \<in> ?A" for a b using that by blast+
+ qed
+qed
+
+lemmas merge_tree [locale_witness] = merge_on_tree[where A=UNIV, simplified]
+
+lemma option_bind_comm:
+ "((x :: 'a option) \<bind> (\<lambda>y. c \<bind> (\<lambda>z. f y z))) = (c \<bind> (\<lambda>y. x \<bind> (\<lambda>z. f z y)))"
+ by(cases x; cases c; auto)
+
+parametric_constant merge_rt_F\<^sub>m_parametric [transfer_rule]: merge_rt_F\<^sub>m_def
+
+(************************************************************)
+subsubsection \<open>Merkle interface\<close>
+(************************************************************)
+
+lemma merkle_tree [locale_witness]:
+ assumes "merkle_interface h bo m"
+ shows "merkle_interface (hash_tree h) (blinding_of_tree h bo) (merge_tree h m)"
+proof -
+ interpret merge_on UNIV h bo m using assms unfolding merkle_interface_aux .
+ show ?thesis unfolding merkle_interface_aux[symmetric] ..
+qed
+
+lemma merge_tree_cong [fundef_cong]:
+ assumes "\<And>a b. \<lbrakk> a \<in> set1_rose_tree\<^sub>m x; b \<in> set1_rose_tree\<^sub>m y \<rbrakk> \<Longrightarrow> m a b = m' a b"
+ shows "merge_tree h m x y = merge_tree h m' x y"
+ using assms
+ apply(induction x y rule: merge_tree.induct)
+ apply(simp add: bind_UNION)
+ apply(rule arg_cong[where f="map_option _"])
+ apply(rule merge_rt_F\<^sub>m_cong; simp add: bind_UNION; blast)
+ done
+
+end
diff --git a/thys/ADS_Functor/Canton_Transaction_Tree.thy b/thys/ADS_Functor/Canton_Transaction_Tree.thy
new file mode 100644
--- /dev/null
+++ b/thys/ADS_Functor/Canton_Transaction_Tree.thy
@@ -0,0 +1,518 @@
+theory Canton_Transaction_Tree imports
+ Inclusion_Proof_Construction
+begin
+
+section \<open>Canton's hierarchical transaction trees\<close>
+
+typedecl view_data
+typedecl view_metadata
+typedecl common_metadata
+typedecl participant_metadata
+
+datatype view = View view_metadata view_data (subviews: "view list")
+
+datatype transaction = Transaction common_metadata participant_metadata (views: "view list")
+
+subsection \<open>Views as authenticated data structures\<close>
+
+type_synonym view_metadata\<^sub>h = "view_metadata blindable\<^sub>h"
+type_synonym view_data\<^sub>h = "view_data blindable\<^sub>h"
+
+datatype view\<^sub>h = View\<^sub>h "((view_metadata\<^sub>h \<times>\<^sub>h view_data\<^sub>h) \<times>\<^sub>h view\<^sub>h list\<^sub>h) blindable\<^sub>h"
+
+type_synonym view_metadata\<^sub>m = "(view_metadata, view_metadata) blindable\<^sub>m"
+type_synonym view_data\<^sub>m = "(view_data, view_data) blindable\<^sub>m"
+
+datatype view\<^sub>m = View\<^sub>m
+ "((view_metadata\<^sub>m \<times>\<^sub>m view_data\<^sub>m) \<times>\<^sub>m view\<^sub>m list\<^sub>m,
+ (view_metadata\<^sub>h \<times>\<^sub>h view_data\<^sub>h) \<times>\<^sub>h view\<^sub>h list\<^sub>h) blindable\<^sub>m"
+
+abbreviation (input) hash_view_data :: "(view_data\<^sub>m, view_data\<^sub>h) hash" where
+ "hash_view_data \<equiv> hash_blindable id"
+abbreviation (input) blinding_of_view_data :: "view_data\<^sub>m blinding_of" where
+ "blinding_of_view_data \<equiv> blinding_of_blindable id (=)"
+abbreviation (input) merge_view_data :: "view_data\<^sub>m merge" where
+ "merge_view_data \<equiv> merge_blindable id merge_discrete"
+
+lemma merkle_view_data:
+ "merkle_interface hash_view_data blinding_of_view_data merge_view_data"
+ by unfold_locales
+
+abbreviation (input) hash_view_metadata :: "(view_metadata\<^sub>m, view_metadata\<^sub>h) hash" where
+ "hash_view_metadata \<equiv> hash_blindable id"
+abbreviation (input) blinding_of_view_metadata :: "view_metadata\<^sub>m blinding_of" where
+ "blinding_of_view_metadata \<equiv> blinding_of_blindable id (=)"
+abbreviation (input) merge_view_metadata :: "view_metadata\<^sub>m merge" where
+ "merge_view_metadata \<equiv> merge_blindable id merge_discrete"
+
+lemma merkle_view_metadata:
+ "merkle_interface hash_view_metadata blinding_of_view_metadata merge_view_metadata"
+ by unfold_locales
+
+type_synonym view_content = "view_metadata \<times> view_data"
+type_synonym view_content\<^sub>h = "view_metadata\<^sub>h \<times>\<^sub>h view_data\<^sub>h"
+type_synonym view_content\<^sub>m = "view_metadata\<^sub>m \<times>\<^sub>m view_data\<^sub>m"
+
+locale view_merkle begin
+
+type_synonym view\<^sub>h' = "view_content\<^sub>h rose_tree\<^sub>h"
+
+primrec from_view\<^sub>h :: "view\<^sub>h \<Rightarrow> view\<^sub>h'" where
+ "from_view\<^sub>h (View\<^sub>h x) = Tree\<^sub>h (map_blindable\<^sub>h (map_prod id (map from_view\<^sub>h)) x)"
+
+primrec to_view\<^sub>h :: "view\<^sub>h' \<Rightarrow> view\<^sub>h" where
+ "to_view\<^sub>h (Tree\<^sub>h x) = View\<^sub>h (map_blindable\<^sub>h (map_prod id (map to_view\<^sub>h)) x)"
+
+lemma from_to_view\<^sub>h [simp]: "from_view\<^sub>h (to_view\<^sub>h x) = x"
+ apply(induction x)
+ apply(simp add: blindable\<^sub>h.map_comp o_def prod.map_comp)
+ apply(simp cong: blindable\<^sub>h.map_cong prod.map_cong list.map_cong add: blindable\<^sub>h.map_id[unfolded id_def])
+ done
+
+lemma to_from_view\<^sub>h [simp]: "to_view\<^sub>h (from_view\<^sub>h x) = x"
+ apply(induction x)
+ apply(simp add: blindable\<^sub>h.map_comp o_def prod.map_comp)
+ apply(simp cong: blindable\<^sub>h.map_cong prod.map_cong list.map_cong add: blindable\<^sub>h.map_id[unfolded id_def])
+ done
+
+lemma iso_view\<^sub>h: "type_definition from_view\<^sub>h to_view\<^sub>h UNIV"
+ by unfold_locales simp_all
+
+setup_lifting iso_view\<^sub>h
+
+lemma cr_view\<^sub>h_Grp: "cr_view\<^sub>h = Grp UNIV to_view\<^sub>h"
+ by(simp add: cr_view\<^sub>h_def Grp_def fun_eq_iff)(transfer, auto)
+
+lemma View\<^sub>h_transfer [transfer_rule]: includes lifting_syntax shows
+ "(rel_blindable\<^sub>h (rel_prod (=) (list_all2 pcr_view\<^sub>h)) ===> pcr_view\<^sub>h) Tree\<^sub>h View\<^sub>h"
+ by(simp add: rel_fun_def view\<^sub>h.pcr_cr_eq cr_view\<^sub>h_Grp list.rel_Grp eq_alt prod.rel_Grp blindable\<^sub>h.rel_Grp)
+ (simp add: Grp_def)
+
+type_synonym view\<^sub>m' = "(view_content\<^sub>m, view_content\<^sub>h) rose_tree\<^sub>m"
+
+primrec from_view\<^sub>m :: "view\<^sub>m \<Rightarrow> view\<^sub>m'" where
+ "from_view\<^sub>m (View\<^sub>m x) = Tree\<^sub>m (map_blindable\<^sub>m (map_prod id (map from_view\<^sub>m)) (map_prod id (map from_view\<^sub>h)) x)"
+
+primrec to_view\<^sub>m :: "view\<^sub>m' \<Rightarrow> view\<^sub>m" where
+ "to_view\<^sub>m (Tree\<^sub>m x) = View\<^sub>m (map_blindable\<^sub>m (map_prod id (map to_view\<^sub>m)) (map_prod id (map to_view\<^sub>h)) x)"
+
+lemma from_to_view\<^sub>m [simp]: "from_view\<^sub>m (to_view\<^sub>m x) = x"
+ apply(induction x)
+ apply(simp add: blindable\<^sub>m.map_comp o_def prod.map_comp)
+ apply(simp cong: blindable\<^sub>m.map_cong prod.map_cong list.map_cong add: blindable\<^sub>m.map_id[unfolded id_def])
+ done
+
+lemma to_from_view\<^sub>m [simp]: "to_view\<^sub>m (from_view\<^sub>m x) = x"
+ apply(induction x)
+ apply(simp add: blindable\<^sub>m.map_comp o_def prod.map_comp)
+ apply(simp cong: blindable\<^sub>m.map_cong prod.map_cong list.map_cong add: blindable\<^sub>m.map_id[unfolded id_def])
+ done
+
+lemma iso_view\<^sub>m: "type_definition from_view\<^sub>m to_view\<^sub>m UNIV"
+ by unfold_locales simp_all
+
+setup_lifting iso_view\<^sub>m
+
+lemma cr_view\<^sub>m_Grp: "cr_view\<^sub>m = Grp UNIV to_view\<^sub>m"
+ by(simp add: cr_view\<^sub>m_def Grp_def fun_eq_iff)(transfer, auto)
+
+lemma View\<^sub>m_transfer [transfer_rule]: includes lifting_syntax shows
+ "(rel_blindable\<^sub>m (rel_prod (=) (list_all2 pcr_view\<^sub>m)) (rel_prod (=) (list_all2 pcr_view\<^sub>h)) ===> pcr_view\<^sub>m) Tree\<^sub>m View\<^sub>m"
+ by(simp add: rel_fun_def view\<^sub>h.pcr_cr_eq view\<^sub>m.pcr_cr_eq cr_view\<^sub>h_Grp cr_view\<^sub>m_Grp list.rel_Grp eq_alt prod.rel_Grp blindable\<^sub>m.rel_Grp)
+ (simp add: Grp_def)
+
+end
+
+code_datatype View\<^sub>h
+code_datatype View\<^sub>m
+
+context begin
+interpretation view_merkle .
+
+abbreviation (input) hash_view_content :: "(view_content\<^sub>m, view_content\<^sub>h) hash" where
+ "hash_view_content \<equiv> hash_prod hash_view_metadata hash_view_data"
+
+abbreviation (input) blinding_of_view_content :: "view_content\<^sub>m blinding_of" where
+ "blinding_of_view_content \<equiv> blinding_of_prod blinding_of_view_metadata blinding_of_view_data"
+
+abbreviation (input) merge_view_content :: "view_content\<^sub>m merge" where
+ "merge_view_content \<equiv> merge_prod merge_view_metadata merge_view_data"
+
+lift_definition hash_view :: "(view\<^sub>m, view\<^sub>h) hash" is
+ "hash_tree hash_view_content" .
+
+lift_definition blinding_of_view :: "view\<^sub>m blinding_of" is
+ "blinding_of_tree hash_view_content blinding_of_view_content" .
+
+lift_definition merge_view :: "view\<^sub>m merge" is
+ "merge_tree hash_view_content merge_view_content" .
+
+lemma merkle_view [locale_witness]: "merkle_interface hash_view blinding_of_view merge_view"
+ by transfer unfold_locales
+
+lemma hash_view_simps [simp]:
+ "hash_view (View\<^sub>m x) =
+ View\<^sub>h (hash_blindable (hash_prod hash_view_content (hash_list hash_view)) x)"
+ by transfer(simp add: hash_rt_F\<^sub>m_def prod.map_comp hash_blindable_def blindable\<^sub>m.map_id)
+
+lemma blinding_of_view_iff [simp]:
+ "blinding_of_view (View\<^sub>m x) (View\<^sub>m y) \<longleftrightarrow>
+ blinding_of_blindable (hash_prod hash_view_content (hash_list hash_view))
+ (blinding_of_prod blinding_of_view_content (blinding_of_list blinding_of_view)) x y"
+ by transfer simp
+
+lemma blinding_of_view_induct [consumes 1, induct pred: blinding_of_view]:
+ assumes "blinding_of_view x y"
+ and "\<And>x y. blinding_of_blindable (hash_prod hash_view_content (hash_list hash_view))
+ (blinding_of_prod blinding_of_view_content (blinding_of_list (\<lambda>x y. blinding_of_view x y \<and> P x y))) x y
+ \<Longrightarrow> P (View\<^sub>m x) (View\<^sub>m y)"
+ shows "P x y"
+ using assms by transfer(rule blinding_of_tree.induct)
+
+lemma merge_view_simps [simp]:
+ "merge_view (View\<^sub>m x) (View\<^sub>m y) =
+ map_option View\<^sub>m (merge_rt_F\<^sub>m hash_view_content merge_view_content hash_view merge_view x y)"
+ by transfer simp
+
+end
+
+subsection \<open>Transaction trees as authenticated data structures\<close>
+
+type_synonym common_metadata\<^sub>h = "common_metadata blindable\<^sub>h"
+type_synonym common_metadata\<^sub>m = "(common_metadata, common_metadata) blindable\<^sub>m"
+
+type_synonym participant_metadata\<^sub>h = "participant_metadata blindable\<^sub>h"
+type_synonym participant_metadata\<^sub>m = "(participant_metadata, participant_metadata) blindable\<^sub>m"
+
+datatype transaction\<^sub>h = Transaction\<^sub>h
+ (the_Transaction\<^sub>h: "((common_metadata\<^sub>h \<times>\<^sub>h participant_metadata\<^sub>h) \<times>\<^sub>h view\<^sub>h list\<^sub>h) blindable\<^sub>h")
+
+datatype transaction\<^sub>m = Transaction\<^sub>m
+ (the_Transaction\<^sub>m: "((common_metadata\<^sub>m \<times>\<^sub>m participant_metadata\<^sub>m) \<times>\<^sub>m view\<^sub>m list\<^sub>m,
+ (common_metadata\<^sub>h \<times>\<^sub>h participant_metadata\<^sub>h) \<times>\<^sub>h view\<^sub>h list\<^sub>h) blindable\<^sub>m")
+
+abbreviation (input) hash_common_metadata :: "(common_metadata\<^sub>m, common_metadata\<^sub>h) hash" where
+ "hash_common_metadata \<equiv> hash_blindable id"
+abbreviation (input) blinding_of_common_metadata :: "common_metadata\<^sub>m blinding_of" where
+ "blinding_of_common_metadata \<equiv> blinding_of_blindable id (=)"
+abbreviation (input) merge_common_metadata :: "common_metadata\<^sub>m merge" where
+ "merge_common_metadata \<equiv> merge_blindable id merge_discrete"
+
+abbreviation (input) hash_participant_metadata :: "(participant_metadata\<^sub>m, participant_metadata\<^sub>h) hash" where
+ "hash_participant_metadata \<equiv> hash_blindable id"
+abbreviation (input) blinding_of_participant_metadata :: "participant_metadata\<^sub>m blinding_of" where
+ "blinding_of_participant_metadata \<equiv> blinding_of_blindable id (=)"
+abbreviation (input) merge_participant_metadata :: "participant_metadata\<^sub>m merge" where
+ "merge_participant_metadata \<equiv> merge_blindable id merge_discrete"
+
+locale transaction_merkle begin
+
+lemma iso_transaction\<^sub>h: "type_definition the_Transaction\<^sub>h Transaction\<^sub>h UNIV"
+ by unfold_locales simp_all
+
+setup_lifting iso_transaction\<^sub>h
+
+lemma Transaction\<^sub>h_transfer [transfer_rule]: includes lifting_syntax shows
+ "((=) ===> pcr_transaction\<^sub>h) id Transaction\<^sub>h"
+ by(simp add: transaction\<^sub>h.pcr_cr_eq cr_transaction\<^sub>h_def rel_fun_def)
+
+lemma iso_transaction\<^sub>m: "type_definition the_Transaction\<^sub>m Transaction\<^sub>m UNIV"
+ by unfold_locales simp_all
+
+setup_lifting iso_transaction\<^sub>m
+
+lemma Transaction\<^sub>m_transfer [transfer_rule]: includes lifting_syntax shows
+ "((=) ===> pcr_transaction\<^sub>m) id Transaction\<^sub>m"
+ by(simp add: transaction\<^sub>m.pcr_cr_eq cr_transaction\<^sub>m_def rel_fun_def)
+
+end
+
+code_datatype Transaction\<^sub>h
+code_datatype Transaction\<^sub>m
+
+context begin
+interpretation transaction_merkle .
+
+lift_definition hash_transaction :: "(transaction\<^sub>m, transaction\<^sub>h) hash" is
+ "hash_blindable (hash_prod (hash_prod hash_common_metadata hash_participant_metadata) (hash_list hash_view))" .
+
+lift_definition blinding_of_transaction :: "transaction\<^sub>m blinding_of" is
+ "blinding_of_blindable
+ (hash_prod (hash_prod hash_common_metadata hash_participant_metadata) (hash_list hash_view))
+ (blinding_of_prod (blinding_of_prod blinding_of_common_metadata blinding_of_participant_metadata) (blinding_of_list blinding_of_view))" .
+
+lift_definition merge_transaction :: "transaction\<^sub>m merge" is
+ "merge_blindable
+ (hash_prod (hash_prod hash_common_metadata hash_participant_metadata) (hash_list hash_view))
+ (merge_prod (merge_prod merge_common_metadata merge_participant_metadata) (merge_list merge_view))" .
+
+lemma merkle_transaction [locale_witness]:
+ "merkle_interface hash_transaction blinding_of_transaction merge_transaction"
+ by transfer unfold_locales
+
+lemmas hash_transaction_simps [simp] = hash_transaction.abs_eq
+lemmas blinding_of_transaction_iff [simp] = blinding_of_transaction.abs_eq
+lemmas merge_transaction_simps [simp] = merge_transaction.abs_eq
+
+end
+
+interpretation transaction:
+ merkle_interface hash_transaction blinding_of_transaction merge_transaction
+ by(rule merkle_transaction)
+
+subsection \<open>
+Constructing authenticated data structures for views
+\<close>
+
+context view_merkle begin
+
+type_synonym view' = "(view_metadata \<times> view_data) rose_tree"
+
+primrec from_view :: "view \<Rightarrow> view'" where
+ "from_view (View vm vd vs) = Tree ((vm, vd), map from_view vs)"
+
+primrec to_view :: "view' \<Rightarrow> view" where
+ "to_view (Tree x) = View (fst (fst x)) (snd (fst x)) (snd (map_prod id (map to_view) x))"
+
+lemma from_to_view [simp]: "from_view (to_view x) = x"
+ by(induction x)(clarsimp cong: map_cong)
+
+lemma to_from_view [simp]: "to_view (from_view x) = x"
+ by(induction x)(clarsimp cong: map_cong)
+
+lemma iso_view: "type_definition from_view to_view UNIV"
+ by unfold_locales simp_all
+
+setup_lifting iso_view
+
+definition View' :: "(view_metadata \<times> view_data) \<times> view list \<Rightarrow> view" where
+ "View' = (\<lambda>((vm, vd), vs). View vm vd vs)"
+
+lemma View_View': "View = (\<lambda>vm vd vs. View' ((vm, vd), vs))"
+ by(simp add: View'_def)
+
+lemma cr_view_Grp: "cr_view = Grp UNIV to_view"
+ by(simp add: cr_view_def Grp_def fun_eq_iff)(transfer, auto)
+
+lemma View'_transfer [transfer_rule]: includes lifting_syntax shows
+ "(rel_prod (=) (list_all2 pcr_view) ===> pcr_view) Tree View'"
+ by(simp add: view.pcr_cr_eq cr_view_Grp eq_alt prod.rel_Grp rose_tree.rel_Grp list.rel_Grp)
+ (auto simp add: Grp_def View'_def)
+
+end
+
+code_datatype View
+
+context begin
+interpretation view_merkle .
+
+abbreviation embed_view_content :: "view_metadata \<times> view_data \<Rightarrow> view_metadata\<^sub>m \<times> view_data\<^sub>m" where
+ "embed_view_content \<equiv> map_prod Unblinded Unblinded"
+
+lift_definition embed_view :: "view \<Rightarrow> view\<^sub>m" is "embed_source_tree embed_view_content" .
+
+lemma embed_view_simps [simp]:
+ "embed_view (View vm vd vs) = View\<^sub>m (Unblinded ((Unblinded vm, Unblinded vd), map embed_view vs))"
+ unfolding View_View' by transfer simp
+
+end
+
+context transaction_merkle begin
+
+primrec the_Transaction :: "transaction \<Rightarrow> (common_metadata \<times> participant_metadata) \<times> view list" where
+ "the_Transaction (Transaction cm pm views) = ((cm, pm), views)" for views
+
+definition Transaction' :: "(common_metadata \<times> participant_metadata) \<times> view list \<Rightarrow> transaction" where
+ "Transaction' = (\<lambda>((cm, pm), views). Transaction cm pm views)"
+
+lemma Transaction_Transaction': "Transaction = (\<lambda>cm pm views. Transaction' ((cm, pm), views))"
+ by(simp add: Transaction'_def)
+
+lemma the_Transaction_inverse [simp]: "Transaction' (the_Transaction x) = x"
+ by(cases x)(simp add: Transaction'_def)
+
+lemma Transaction'_inverse [simp]: "the_Transaction (Transaction' x) = x"
+ by(simp add: Transaction'_def split_def)
+
+lemma iso_transaction: "type_definition the_Transaction Transaction' UNIV"
+ by unfold_locales simp_all
+
+setup_lifting iso_transaction
+
+lemma Transaction'_transfer [transfer_rule]: includes lifting_syntax shows
+ "((=) ===> pcr_transaction) id Transaction'"
+ by(simp add: transaction.pcr_cr_eq cr_transaction_def rel_fun_def)
+
+end
+
+code_datatype Transaction
+
+context begin
+interpretation transaction_merkle .
+
+lift_definition embed_transaction :: "transaction \<Rightarrow> transaction\<^sub>m" is
+ "Unblinded \<circ> map_prod (map_prod Unblinded Unblinded) (map embed_view)" .
+
+lemma embed_transaction_simps [simp]:
+ "embed_transaction (Transaction cm pm views) =
+ Transaction\<^sub>m (Unblinded ((Unblinded cm, Unblinded pm), map embed_view views))"
+ for views unfolding Transaction_Transaction' by transfer simp
+
+end
+
+subsubsection \<open>Inclusion proof for the mediator\<close>
+
+primrec mediator_view :: "view \<Rightarrow> view\<^sub>m" where
+ "mediator_view (View vm vd vs) =
+ View\<^sub>m (Unblinded ((Unblinded vm, Blinded (Content vd)), map mediator_view vs))"
+
+primrec mediator_transaction_tree :: "transaction \<Rightarrow> transaction\<^sub>m" where
+ "mediator_transaction_tree (Transaction cm pm views) =
+ Transaction\<^sub>m (Unblinded ((Unblinded cm, Blinded (Content pm)), map mediator_view views))"
+ for views
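+ \<comment> \<open>The mediator's copy of a transaction tree keeps the common metadata and all view metadata
+ unblinded, but blinds every view's data and the participant metadata.\<close>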
+
+lemma blinding_of_mediator_view [simp]: "blinding_of_view (mediator_view view) (embed_view view)"
+ by(induction view)(auto simp add: list.rel_map intro!: list.rel_refl_strong)
+
+lemma blinding_of_mediator_transaction_tree:
+ "blinding_of_transaction (mediator_transaction_tree tt) (embed_transaction tt)"
+ by(cases tt)(auto simp add: list.rel_map intro: list.rel_refl_strong)
+
+subsubsection \<open>Inclusion proofs for participants\<close>
+
+text \<open>Next, we define functions for producing all transaction views of a given transaction,
+ and prove their properties.\<close>
+
+type_synonym view_path_elem = "(view_metadata \<times> view_data) blindable \<times> view list \<times> view list"
+type_synonym view_path = "view_path_elem list"
+type_synonym view_zipper = "view_path \<times> view"
+
+type_synonym view_path_elem\<^sub>m = "(view_metadata\<^sub>m \<times>\<^sub>m view_data\<^sub>m) \<times> view\<^sub>m list\<^sub>m \<times> view\<^sub>m list\<^sub>m"
+type_synonym view_path\<^sub>m = "view_path_elem\<^sub>m list"
+type_synonym view_zipper\<^sub>m = "view_path\<^sub>m \<times> view\<^sub>m"
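+ \<comment> \<open>A zipper focuses on one subview: each path element records the (blindable) content of an
+ ancestor node together with the lists of sibling subtrees to the left and to the right of the
+ child through which the focus descends.\<close>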
+
+context begin
+interpretation view_merkle .
+
+lift_definition zipper_of_view :: "view \<Rightarrow> view_zipper" is zipper_of_tree .
+lift_definition view_of_zipper :: "view_zipper \<Rightarrow> view" is tree_of_zipper .
+
+lift_definition zipper_of_view\<^sub>m :: "view\<^sub>m \<Rightarrow> view_zipper\<^sub>m" is zipper_of_tree\<^sub>m .
+lift_definition view_of_zipper\<^sub>m :: "view_zipper\<^sub>m \<Rightarrow> view\<^sub>m" is tree_of_zipper\<^sub>m .
+
+lemma view_of_zipper\<^sub>m_Nil [simp]: "view_of_zipper\<^sub>m ([], t) = t"
+ by transfer simp
+
+lift_definition blind_view_path_elem :: "view_path_elem \<Rightarrow> view_path_elem\<^sub>m" is
+ "blind_path_elem embed_view_content hash_view_content" .
+
+lift_definition blind_view_path :: "view_path \<Rightarrow> view_path\<^sub>m" is
+ "blind_path embed_view_content hash_view_content" .
+
+lift_definition embed_view_path_elem :: "view_path_elem \<Rightarrow> view_path_elem\<^sub>m" is
+ "embed_path_elem embed_view_content" .
+
+lift_definition embed_view_path :: "view_path \<Rightarrow> view_path\<^sub>m" is
+ "embed_path embed_view_content" .
+
+lift_definition hash_view_path_elem :: "view_path_elem\<^sub>m \<Rightarrow> (view_content\<^sub>h \<times> view\<^sub>h list \<times> view\<^sub>h list)" is
+ "hash_path_elem hash_view_content" .
+
+lift_definition zippers_view :: "view_zipper \<Rightarrow> view_zipper\<^sub>m list" is
+ "zippers_rose_tree embed_view_content hash_view_content" .
+
+lemma embed_view_path_Nil [simp]: "embed_view_path [] = []"
+ by transfer(simp add: embed_path_def)
+
+lemma zippers_view_same_hash:
+ assumes "z \<in> set (zippers_view (p, t))"
+ shows "hash_view (view_of_zipper\<^sub>m z) = hash_view (view_of_zipper\<^sub>m (embed_view_path p, embed_view t))"
+ using assms by transfer(rule zippers_rose_tree_same_hash')
+
+lemma zippers_view_blinding_of:
+ assumes "z \<in> set (zippers_view (p, t))"
+ shows "blinding_of_view (view_of_zipper\<^sub>m z) (view_of_zipper\<^sub>m (blind_view_path p, embed_view t))"
+ using assms by transfer(rule zippers_rose_tree_blinding_of, unfold_locales)
+
+end
+
+primrec blind_view :: "view \<Rightarrow> view\<^sub>m" where
+ "blind_view (View vm vd subviews) =
+ View\<^sub>m (Blinded (Content ((Content vm, Content vd), map (hash_view \<circ> embed_view) subviews)))"
+ for subviews
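+ \<comment> \<open>Blinding a view replaces the whole view by its root hash; the next lemma shows that this
+ hash agrees with the hash of the embedded view.\<close>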
+
+lemma hash_blind_view: "hash_view (blind_view view) = hash_view (embed_view view)"
+ by(cases view) simp
+
+primrec blind_transaction :: "transaction \<Rightarrow> transaction\<^sub>m" where
+ "blind_transaction (Transaction cm pm views) =
+ Transaction\<^sub>m (Blinded (Content ((Content cm, Content pm), map (hash_view \<circ> blind_view) views)))"
+ for views
+
+lemma hash_blind_transaction:
+ "hash_transaction (blind_transaction transaction) = hash_transaction (embed_transaction transaction)"
+ by(cases transaction)(simp add: hash_blind_view)
+
+
+typedecl participant
+consts recipients :: "view_metadata \<Rightarrow> participant list"
+
+fun view_recipients :: "view\<^sub>m \<Rightarrow> participant set" where
+ "view_recipients (View\<^sub>m (Unblinded ((Unblinded vm, vd), subviews))) = set (recipients vm)" for subviews
+| "view_recipients _ = {}" \<comment> \<open>Sane default case\<close>
+
+context fixes participant :: participant begin
+
+definition view_trees_for :: "view \<Rightarrow> view\<^sub>m list" where
+ "view_trees_for view =
+ map view_of_zipper\<^sub>m
+ (filter (\<lambda>(_, t). participant \<in> view_recipients t)
+ (zippers_view ([], view)))"
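+ \<comment> \<open>All inclusion proofs obtained by focusing the zipper on a subview whose recipients include
+ the given participant.\<close>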
+
+primrec transaction_views_for :: "transaction \<Rightarrow> transaction\<^sub>m list" where
+ "transaction_views_for (Transaction cm pm views) =
+ map (\<lambda>view\<^sub>m. Transaction\<^sub>m (Unblinded ((Unblinded cm, Unblinded pm), view\<^sub>m)))
+ (concat (map (\<lambda>(l, v, r). map (\<lambda>v\<^sub>m. map blind_view l @ [v\<^sub>m] @ map blind_view r) (view_trees_for v)) (splits views)))"
+ for views
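+ \<comment> \<open>For every position in the list of top-level views, plug one of the focused view's inclusion
+ proofs into the transaction, blinding the sibling views; the transaction metadata stays unblinded.\<close>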
+
+lemma view_trees_for_same_hash:
+ "vt \<in> set (view_trees_for view) \<Longrightarrow> hash_view vt = hash_view (embed_view view)"
+ by(auto simp add: view_trees_for_def dest: zippers_view_same_hash)
+
+lemma transaction_views_for_same_hash:
+ "t\<^sub>m \<in> set (transaction_views_for t) \<Longrightarrow> hash_transaction t\<^sub>m = hash_transaction (embed_transaction t)"
+ by(cases t)(clarsimp simp add: splits_iff hash_blind_view view_trees_for_same_hash)
+
+definition transaction_projection_for :: "transaction \<Rightarrow> transaction\<^sub>m" where
+ "transaction_projection_for t =
+ (let tvs = transaction_views_for t
+ in if tvs = [] then blind_transaction t else the (transaction.Merge (set tvs)))"
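+ \<comment> \<open>The participant's projection of a transaction merges all the transaction views addressed to
+ that participant; if there are none, the whole transaction is blinded.\<close>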
+
+lemma transaction_projection_for_same_hash:
+ "hash_transaction (transaction_projection_for t) = hash_transaction (embed_transaction t)"
+proof(cases "transaction_views_for t = []")
+ case True thus ?thesis by(simp add: transaction_projection_for_def Let_def hash_blind_transaction)
+next
+ case False
+ then have "transaction.Merge (set (transaction_views_for t)) \<noteq> None"
+ by(intro transaction.Merge_defined)(auto simp add: transaction_views_for_same_hash)
+ with False show ?thesis
+ apply(clarsimp simp add: transaction_projection_for_def neq_Nil_conv simp del: transaction.Merge_insert)
+ apply(drule transaction.Merge_hash[symmetric], blast)
+ apply(auto intro: transaction_views_for_same_hash)
+ done
+qed
+
+lemma transaction_projection_for_upper:
+ assumes "t\<^sub>m \<in> set (transaction_views_for t)"
+ shows "blinding_of_transaction t\<^sub>m (transaction_projection_for t)"
+proof -
+ from assms have "transaction.Merge (set (transaction_views_for t)) \<noteq> None"
+ by(intro transaction.Merge_defined)(auto simp add: transaction_views_for_same_hash)
+ with assms show ?thesis
+ by(auto simp add: transaction_projection_for_def Let_def dest: transaction.Merge_upper)
+qed
+
+end
+
+end
\ No newline at end of file
diff --git a/thys/ADS_Functor/Generic_ADS_Construction.thy b/thys/ADS_Functor/Generic_ADS_Construction.thy
new file mode 100644
--- /dev/null
+++ b/thys/ADS_Functor/Generic_ADS_Construction.thy
@@ -0,0 +1,466 @@
+(* Author: Andreas Lochbihler, Digital Asset
+ Author: Ognjen Maric, Digital Asset *)
+
+theory Generic_ADS_Construction imports
+ "Merkle_Interface"
+ "HOL-Library.BNF_Axiomatization"
+begin
+
+section \<open>Generic construction of authenticated data structures\<close>
+
+subsection \<open>Functors\<close>
+
+subsubsection \<open>Source functor\<close>
+
+text \<open>
+
+We want to allow ADSs of arbitrary ADTs, which we call "source trees". The ADTs we are interested in
+can in general be represented as least fixpoints of bounded natural (bi-)functors (BNFs) \<open>('a, 'b) F\<close>,
+where @{typ 'a} is the type of "source" data and @{typ 'b} is a recursion "handle".
+However, Isabelle's type system does not support the higher kinds that would be necessary to
+parameterize our definitions over functors.
+Instead, we first develop a general theory of ADSs over an arbitrary but fixed functor
+and its least fixpoint. We show that the theory is compositional, in that the functor's least fixpoint
+can then be reused as the "source" data of another functor.
+
+We start by defining the arbitrary but fixed functor and its least fixpoint, and show how composition
+works. A higher-level explanation is found in the paper.
+\<close>
+
+
+bnf_axiomatization ('a, 'b) F [wits: "'a \<Rightarrow> ('a, 'b) F"]
+
+context notes [[typedef_overloaded]] begin
+datatype 'a T = T "('a, 'a T) F"
+end
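+text \<open>For intuition: choosing \<open>('a, 'b) F\<close> as \<open>unit + 'a \<times> 'b\<close> would make @{typ "'a T"} isomorphic
+ to @{typ "'a list"}. Since \<open>F\<close> is only axiomatized here, this is merely an intuition, not a formal
+ instantiation.\<close>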
+
+subsubsection \<open>Base Merkle functor\<close>
+
+text \<open>
+This type captures the ADS hashes.
+\<close>
+
+bnf_axiomatization ('a, 'b) F\<^sub>h [wits: "'a \<Rightarrow> ('a, 'b) F\<^sub>h"]
+
+text \<open>
+Intuitively, it contains a mix of garbage and source values.
+The functor's recursive handle @{typ 'b} may itself contain partial garbage.
+\<close>
+
+
+text \<open>
+This type captures the ADS inclusion proofs.
+The functor \<open>('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) F\<^sub>m\<close> has all type variables of \<open>('a, 'b) F\<close> doubled.
+It represents all values, together with the information which parts are blinded.
+The type variable @{typ "'a\<^sub>m"} represents the source data, which for compositionality can contain blindable positions.
+The type @{typ "'b\<^sub>m"} is a recursive handle to inclusion sub-proofs (which can be partially blinded).
+The type @{typ "'a\<^sub>h"} represents "hashes" of the source data in @{typ "'a\<^sub>m"}, i.e., a mix of source values and garbage.
+The type @{typ "'b\<^sub>h"} is a recursive handle to ADS hashes of subtrees.
+
+The corresponding type of recursive authenticated trees is then a fixpoint of this functor.
+\<close>
+
+bnf_axiomatization ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) F\<^sub>m [wits: "'a\<^sub>m \<Rightarrow> 'a\<^sub>h \<Rightarrow> 'b\<^sub>h \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) F\<^sub>m"]
+
+subsubsection \<open>Least fixpoint\<close>
+
+context notes [[typedef_overloaded]] begin
+datatype 'a\<^sub>h T\<^sub>h = T\<^sub>h "('a\<^sub>h, 'a\<^sub>h T\<^sub>h) F\<^sub>h"
+end
+
+context notes [[typedef_overloaded]] begin
+datatype ('a\<^sub>m, 'a\<^sub>h) T\<^sub>m = T\<^sub>m (the_T\<^sub>m: "('a\<^sub>m, 'a\<^sub>h, ('a\<^sub>m, 'a\<^sub>h) T\<^sub>m, 'a\<^sub>h T\<^sub>h) F\<^sub>m")
+end
+
+
+subsubsection \<open> Composition \<close>
+
+text \<open>
+Finally, we show how to compose two Merkle functors.
+For simplicity, we reuse @{typ \<open>('a, 'b) F\<close>} and @{typ \<open>'a T\<close>}.
+\<close>
+
+context notes [[typedef_overloaded]] begin
+
+datatype ('a, 'b) G = G "('a T, 'b) F"
+
+datatype ('a\<^sub>h, 'b\<^sub>h) G\<^sub>h = G\<^sub>h (the_G\<^sub>h: "('a\<^sub>h T\<^sub>h, 'b\<^sub>h) F\<^sub>h")
+
+datatype ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) G\<^sub>m = G\<^sub>m (the_G\<^sub>m: "(('a\<^sub>m, 'a\<^sub>h) T\<^sub>m, 'a\<^sub>h T\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) F\<^sub>m")
+
+end
+
+
+subsection \<open>Root hash\<close>
+
+subsubsection \<open>Base functor\<close>
+
+text \<open>
+The root hash of an authenticated value is modelled as a blindable value of type @{typ "('a', 'b') F\<^sub>h"}.
+(Actually, we want to use an abstract datatype for root hashes, but we omit this distinction here for simplicity.)
+\<close>
+
+consts root_hash_F' :: "(('a\<^sub>h, 'a\<^sub>h, 'b\<^sub>h, 'b\<^sub>h) F\<^sub>m, ('a\<^sub>h, 'b\<^sub>h) F\<^sub>h) hash"
+ \<comment> \<open>Root hash operation where we assume that all atoms have already been replaced by root hashes.
+ This assumption is reflected in the equality of the type parameters of @{type F\<^sub>m} \<close>
+
+type_synonym ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) hash_F =
+ "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> (('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) F\<^sub>m, ('a\<^sub>h, 'b\<^sub>h) F\<^sub>h) hash"
+definition root_hash_F :: "('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) hash_F" where
+ "root_hash_F rha rhb = root_hash_F' \<circ> map_F\<^sub>m rha id rhb id"
+
+subsubsection \<open>Least fixpoint\<close>
+
+primrec root_hash_T' :: "(('a\<^sub>h, 'a\<^sub>h) T\<^sub>m, 'a\<^sub>h T\<^sub>h) hash" where
+ "root_hash_T' (T\<^sub>m x) = T\<^sub>h (root_hash_F' (map_F\<^sub>m id id root_hash_T' id x))"
+
+definition root_hash_T :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> (('a\<^sub>m, 'a\<^sub>h) T\<^sub>m, 'a\<^sub>h T\<^sub>h) hash" where
+ "root_hash_T rha = root_hash_T' \<circ> map_T\<^sub>m rha id"
+
+lemma root_hash_T_simps [simp]:
+ "root_hash_T rha (T\<^sub>m x) = T\<^sub>h (root_hash_F rha (root_hash_T rha) x)"
+ by(simp add: root_hash_T_def F\<^sub>m.map_comp root_hash_F_def T\<^sub>h.map_id0)
+
+subsubsection \<open>Composition\<close>
+
+primrec root_hash_G' :: "(('a\<^sub>h, 'a\<^sub>h, 'b\<^sub>h, 'b\<^sub>h) G\<^sub>m, ('a\<^sub>h, 'b\<^sub>h) G\<^sub>h) hash" where
+ "root_hash_G' (G\<^sub>m x) = G\<^sub>h (root_hash_F' (map_F\<^sub>m root_hash_T' id id id x))"
+
+definition root_hash_G :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> (('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) G\<^sub>m, ('a\<^sub>h, 'b\<^sub>h) G\<^sub>h) hash" where
+ "root_hash_G rha rhb = root_hash_G' \<circ> map_G\<^sub>m rha id rhb id"
+
+lemma root_hash_G_unfold:
+ "root_hash_G rha rhb = G\<^sub>h \<circ> root_hash_F (root_hash_T rha) rhb \<circ> the_G\<^sub>m"
+ apply(rule ext)
+ subgoal for x
+ by(cases x)(simp add: root_hash_G_def fun_eq_iff root_hash_F_def root_hash_T_def F\<^sub>m.map_comp T\<^sub>m.map_comp o_def T\<^sub>h.map_id id_def[symmetric])
+ done
+
+lemma root_hash_G_simps [simp]:
+ "root_hash_G rha rhb (G\<^sub>m x) = G\<^sub>h (root_hash_F (root_hash_T rha) rhb x)"
+ by(simp add: root_hash_G_def root_hash_T_def F\<^sub>m.map_comp root_hash_F_def T\<^sub>h.map_id0)
+
+
+subsection \<open>Blinding relation\<close>
+
+text \<open>
+The blinding relation determines whether one ADS value is a blinding of another.
+\<close>
+
+subsubsection \<open> Blinding on the base functor (@{type F\<^sub>m}) \<close>
+
+type_synonym ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) blinding_of_F =
+ "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> 'a\<^sub>m blinding_of \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> 'b\<^sub>m blinding_of \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) F\<^sub>m blinding_of"
+
+\<comment> \<open> Computes whether a partially blinded ADS is a blinding of another one \<close>
+axiomatization blinding_of_F :: "('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) blinding_of_F" where
+ blinding_of_F_mono: "\<lbrakk> boa \<le> boa'; bob \<le> bob' \<rbrakk>
+ \<Longrightarrow> blinding_of_F rha boa rhb bob \<le> blinding_of_F rha boa' rhb bob'"
+ \<comment> \<open> Monotonicity must be unconditional (without the assumption @{text "blinding_of_on"})
+ so that we can justify the recursive definition for the least fixpoint. \<close>
+ and blinding_respects_hashes_F [locale_witness]:
+ "\<lbrakk> blinding_respects_hashes rha boa; blinding_respects_hashes rhb bob \<rbrakk>
+ \<Longrightarrow> blinding_respects_hashes (root_hash_F rha rhb) (blinding_of_F rha boa rhb bob)"
+ and blinding_of_on_F [locale_witness]:
+ "\<lbrakk> blinding_of_on A rha boa; blinding_of_on B rhb bob \<rbrakk>
+ \<Longrightarrow> blinding_of_on {x. set1_F\<^sub>m x \<subseteq> A \<and> set3_F\<^sub>m x \<subseteq> B} (root_hash_F rha rhb) (blinding_of_F rha boa rhb bob)"
+
+lemma blinding_of_F_mono_inductive:
+ assumes a: "\<And>x y. boa x y \<longrightarrow> boa' x y"
+ and b: "\<And>x y. bob x y \<longrightarrow> bob' x y"
+ shows "blinding_of_F rha boa rhb bob x y \<longrightarrow> blinding_of_F rha boa' rhb bob' x y"
+ using assms by(blast intro: blinding_of_F_mono[THEN predicate2D, OF predicate2I predicate2I])
+
+subsubsection \<open> Blinding on least fixpoints \<close>
+
+context
+ fixes rh :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and bo :: "'a\<^sub>m blinding_of"
+begin
+
+inductive blinding_of_T :: "('a\<^sub>m, 'a\<^sub>h) T\<^sub>m blinding_of" where
+ "blinding_of_T (T\<^sub>m x) (T\<^sub>m y)" if
+ "blinding_of_F rh bo (root_hash_T rh) blinding_of_T x y"
+monos blinding_of_F_mono_inductive
+
+end
+
+lemma blinding_of_T_mono:
+ assumes "bo \<le> bo'"
+ shows "blinding_of_T rh bo \<le> blinding_of_T rh bo'"
+ by(rule predicate2I; erule blinding_of_T.induct)
+ (blast intro: blinding_of_T.intros blinding_of_F_mono[THEN predicate2D, OF assms, rotated -1])
+
+lemma blinding_of_T_root_hash:
+ assumes "bo \<le> vimage2p rh rh (=)"
+ shows "blinding_of_T rh bo \<le> vimage2p (root_hash_T rh) (root_hash_T rh) (=)"
+ apply(rule predicate2I vimage2pI)+
+ apply(erule blinding_of_T.induct)
+ apply simp
+ apply(drule blinding_respects_hashes_F[unfolded blinding_respects_hashes_def, THEN predicate2D, rotated -1])
+ apply(rule assms)
+ apply(blast intro: vimage2pI)
+ apply(simp add: vimage2p_def)
+ done
+
+lemma blinding_respects_hashes_T [locale_witness]:
+ "blinding_respects_hashes rh bo \<Longrightarrow> blinding_respects_hashes (root_hash_T rh) (blinding_of_T rh bo)"
+ unfolding blinding_respects_hashes_def by(rule blinding_of_T_root_hash)
+
+lemma blinding_of_on_T [locale_witness]:
+ assumes "blinding_of_on A rh bo"
+ shows "blinding_of_on {x. set1_T\<^sub>m x \<subseteq> A} (root_hash_T rh) (blinding_of_T rh bo)"
+ (is "blinding_of_on ?A ?h ?bo")
+proof -
+ interpret a: blinding_of_on A rh bo by fact
+ show ?thesis
+ proof
+ have "?bo x x \<and> (?bo x y \<longrightarrow> ?bo y z \<longrightarrow> ?bo x z) \<and> (?bo x y \<longrightarrow> ?bo y x \<longrightarrow> x = y)"
+ if "x \<in> ?A" for x y z using that
+ proof(induction x arbitrary: y z)
+ case (T\<^sub>m x)
+ interpret blinding_of_on
+ "{a. set1_F\<^sub>m a \<subseteq> A \<and> set3_F\<^sub>m a \<subseteq> set3_F\<^sub>m x}"
+ "root_hash_F rh ?h"
+ "blinding_of_F rh bo ?h ?bo"
+ apply(rule blinding_of_on_F[OF assms])
+ apply unfold_locales
+ subgoal using T\<^sub>m.IH T\<^sub>m.prems by(force simp add: eq_onp_def)
+ subgoal for a b c using T\<^sub>m.IH[of a b c] T\<^sub>m.prems by auto
+ subgoal for a b using T\<^sub>m.IH[of a b] T\<^sub>m.prems by auto
+ done
+ show ?case using T\<^sub>m.prems
+ apply(intro conjI)
+ subgoal by(auto intro: blinding_of_T.intros refl)
+ subgoal by(auto elim!: blinding_of_T.cases trans intro!: blinding_of_T.intros)
+ subgoal by(auto elim!: blinding_of_T.cases dest: antisym)
+ done
+ qed
+ then show "x \<in> ?A \<Longrightarrow> ?bo x x"
+ and "\<lbrakk> ?bo x y; ?bo y z; x \<in> ?A \<rbrakk> \<Longrightarrow> ?bo x z"
+ and "\<lbrakk> ?bo x y; ?bo y x; x \<in> ?A \<rbrakk> \<Longrightarrow> x = y"
+ for x y z by blast+
+ qed
+qed
+
+lemmas blinding_of_T [locale_witness] = blinding_of_on_T[where A=UNIV, simplified]
+
+subsubsection \<open> Blinding on composition \<close>
+
+context
+ fixes rha :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and boa :: "'a\<^sub>m blinding_of"
+ and rhb :: "('b\<^sub>m, 'b\<^sub>h) hash"
+ and bob :: "'b\<^sub>m blinding_of"
+begin
+
+inductive blinding_of_G :: "('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) G\<^sub>m blinding_of" where
+ "blinding_of_G (G\<^sub>m x) (G\<^sub>m y)" if
+ "blinding_of_F (root_hash_T rha) (blinding_of_T rha boa) rhb bob x y"
+
+lemma blinding_of_G_unfold:
+ "blinding_of_G = vimage2p the_G\<^sub>m the_G\<^sub>m (blinding_of_F (root_hash_T rha) (blinding_of_T rha boa) rhb bob)"
+ apply(rule ext)+
+ subgoal for x y by(cases x; cases y)(simp_all add: blinding_of_G.simps fun_eq_iff vimage2p_def)
+ done
+
+end
+
+lemma blinding_of_G_mono:
+ assumes "boa \<le> boa'" "bob \<le> bob'"
+ shows "blinding_of_G rha boa rhb bob \<le> blinding_of_G rha boa' rhb bob'"
+ unfolding blinding_of_G_unfold
+ by(rule vimage2p_mono' blinding_of_F_mono blinding_of_T_mono assms)+
+
+lemma blinding_of_G_root_hash:
+ assumes "boa \<le> vimage2p rha rha (=)" and "bob \<le> vimage2p rhb rhb (=)"
+ shows "blinding_of_G rha boa rhb bob \<le> vimage2p (root_hash_G rha rhb) (root_hash_G rha rhb) (=)"
+ unfolding blinding_of_G_unfold root_hash_G_unfold vimage2p_comp o_apply
+ apply(rule vimage2p_mono')
+ apply(rule order_trans)
+ apply(rule blinding_respects_hashes_F[unfolded blinding_respects_hashes_def])
+ apply(rule blinding_of_T_root_hash)
+ apply(rule assms)+
+ apply(rule vimage2p_mono')
+ apply(simp add: vimage2p_def)
+ done
+
+lemma blinding_of_on_G [locale_witness]:
+ assumes "blinding_of_on A rha boa" "blinding_of_on B rhb bob"
+ shows "blinding_of_on {x. set1_G\<^sub>m x \<subseteq> A \<and> set3_G\<^sub>m x \<subseteq> B} (root_hash_G rha rhb) (blinding_of_G rha boa rhb bob)"
+ (is "blinding_of_on ?A ?h ?bo")
+proof -
+ interpret a: blinding_of_on A rha boa by fact
+ interpret b: blinding_of_on B rhb bob by fact
+ interpret FT: blinding_of_on
+ "{x. set1_F\<^sub>m x \<subseteq> {x. set1_T\<^sub>m x \<subseteq> A} \<and> set3_F\<^sub>m x \<subseteq> B}"
+ "root_hash_F (root_hash_T rha) rhb"
+ "blinding_of_F (root_hash_T rha) (blinding_of_T rha boa) rhb bob"
+ ..
+ show ?thesis
+ proof
+ show "?bo \<le> vimage2p ?h ?h (=)"
+ using a.hash b.hash
+ by(rule blinding_of_G_root_hash)
+ show "?bo x x" if "x \<in> ?A" for x using that
+ by(cases x; hypsubst)(rule blinding_of_G.intros; rule FT.refl; auto)
+ show "?bo x z" if "?bo x y" "?bo y z" "x \<in> ?A" for x y z using that
+ by(fastforce elim!: blinding_of_G.cases intro!: blinding_of_G.intros elim!: FT.trans)
+ show "x = y" if "?bo x y" "?bo y x" "x \<in> ?A" for x y using that
+ by(clarsimp elim!: blinding_of_G.cases)(erule (1) FT.antisym; auto)
+ qed
+qed
+
+lemmas blinding_of_G [locale_witness] = blinding_of_on_G[where A=UNIV and B=UNIV, simplified]
+
+subsection \<open>Merging\<close>
+
+text \<open>Two Merkle values with the same root hash can be merged into a less blinded Merkle value.
+The operation is unspecified for trees with different root hashes.
+\<close>
+
+subsubsection \<open> Merging on the base functor \<close>
+
+axiomatization merge_F :: "('a\<^sub>m, 'a\<^sub>h) hash \<Rightarrow> 'a\<^sub>m merge \<Rightarrow> ('b\<^sub>m, 'b\<^sub>h) hash \<Rightarrow> 'b\<^sub>m merge
+ \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) F\<^sub>m merge" where
+ merge_F_cong [fundef_cong]:
+ "\<lbrakk> \<And>a b. a \<in> set1_F\<^sub>m x \<Longrightarrow> ma a b = ma' a b; \<And>a b. a \<in> set3_F\<^sub>m x \<Longrightarrow> mb a b = mb' a b \<rbrakk>
+ \<Longrightarrow> merge_F rha ma rhb mb x y = merge_F rha ma' rhb mb' x y"
+ and
+ merge_on_F [locale_witness]:
+ "\<lbrakk> merge_on A rha boa ma; merge_on B rhb bob mb \<rbrakk>
+ \<Longrightarrow> merge_on {x. set1_F\<^sub>m x \<subseteq> A \<and> set3_F\<^sub>m x \<subseteq> B} (root_hash_F rha rhb) (blinding_of_F rha boa rhb bob) (merge_F rha ma rhb mb)"
+
+lemmas merge_F [locale_witness] = merge_on_F[where A=UNIV and B=UNIV, simplified]
+
+subsubsection \<open> Merging on the least fixpoint \<close>
+
+lemma wfP_subterm_T: "wfP (\<lambda>x y. x \<in> set3_F\<^sub>m (the_T\<^sub>m y))"
+ apply(rule wfPUNIVI)
+ subgoal premises IH[rule_format] for P x
+ by(induct x)(auto intro: IH)
+ done
+
+context
+ fixes rh :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ fixes m :: "'a\<^sub>m merge"
+begin
+
+function merge_T :: "('a\<^sub>m, 'a\<^sub>h) T\<^sub>m merge" where
+ "merge_T (T\<^sub>m x) (T\<^sub>m y) = map_option T\<^sub>m (merge_F rh m (root_hash_T rh) merge_T x y)"
+ by pat_completeness auto
+termination
+ apply(relation "{(x, y). x \<in> set3_F\<^sub>m (the_T\<^sub>m y)} <*lex*> {(x, y). x \<in> set3_F\<^sub>m (the_T\<^sub>m y)}")
+ apply(auto simp add: wfP_def[symmetric] wfP_subterm_T)
+ done
+
+lemma merge_on_T [locale_witness]:
+ assumes "merge_on A rh bo m"
+ shows "merge_on {x. set1_T\<^sub>m x \<subseteq> A} (root_hash_T rh) (blinding_of_T rh bo) merge_T"
+ (is "merge_on ?A ?h ?bo ?m")
+proof -
+ interpret a: merge_on A rh bo m by fact
+ show ?thesis
+ proof
+ have "(?h a = ?h b \<longrightarrow> (\<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u))) \<and>
+ (?h a \<noteq> ?h b \<longrightarrow> ?m a b = None)"
+ if "a \<in> ?A" for a b using that unfolding mem_Collect_eq
+ proof(induction a arbitrary: b)
+ case (T\<^sub>m x y)
+ interpret merge_on "{y. set1_F\<^sub>m y \<subseteq> A \<and> set3_F\<^sub>m y \<subseteq> set3_F\<^sub>m x}"
+ "root_hash_F rh ?h" "blinding_of_F rh bo ?h ?bo" "merge_F rh m ?h ?m"
+ proof
+ fix a
+ assume a: "a \<in> set3_F\<^sub>m x"
+ with T\<^sub>m.prems have a': "set1_T\<^sub>m a \<subseteq> A" by auto
+
+ fix b
+ from T\<^sub>m.IH[OF a a', rule_format, of b]
+ show "root_hash_T rh a = root_hash_T rh b
+ \<Longrightarrow> \<exists>ab. merge_T a b = Some ab \<and> blinding_of_T rh bo a ab \<and> blinding_of_T rh bo b ab \<and>
+ (\<forall>u. blinding_of_T rh bo a u \<longrightarrow> blinding_of_T rh bo b u \<longrightarrow> blinding_of_T rh bo ab u)"
+ and "root_hash_T rh a \<noteq> root_hash_T rh b \<Longrightarrow> merge_T a b = None"
+ by(auto dest: sym)
+ qed
+ show ?case using T\<^sub>m.prems
+ apply(intro conjI strip)
+ subgoal by(cases y)(auto dest!: join simp add: blinding_of_T.simps)
+ subgoal by(cases y)(auto dest!: undefined)
+ done
+ qed
+ then show
+ "?h a = ?h b \<Longrightarrow> \<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)"
+ "?h a \<noteq> ?h b \<Longrightarrow> ?m a b = None"
+ if "a \<in> ?A" for a b using that by blast+
+ qed
+qed
+
+lemmas merge_T [locale_witness] = merge_on_T[where A=UNIV, simplified]
+
+end
+
+lemma merge_T_cong [fundef_cong]:
+ assumes "\<And>a b. a \<in> set1_T\<^sub>m x \<Longrightarrow> m a b = m' a b"
+ shows "merge_T rh m x y = merge_T rh m' x y"
+ using assms
+ apply(induction x y rule: merge_T.induct)
+ apply simp
+ apply(rule arg_cong[where f="map_option _"])
+ apply(blast intro: merge_F_cong)
+ done
+
+subsubsection \<open> Merging and composition \<close>
+
+context
+ fixes rha :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ fixes ma :: "'a\<^sub>m merge"
+ fixes rhb :: "('b\<^sub>m, 'b\<^sub>h) hash"
+ fixes mb :: "'b\<^sub>m merge"
+begin
+
+primrec merge_G :: "('a\<^sub>m, 'a\<^sub>h, 'b\<^sub>m, 'b\<^sub>h) G\<^sub>m merge" where
+ "merge_G (G\<^sub>m x) y' = (case y' of G\<^sub>m y \<Rightarrow>
+ map_option G\<^sub>m (merge_F (root_hash_T rha) (merge_T rha ma) rhb mb x y))"
+
+lemma merge_G_simps [simp]:
+ "merge_G (G\<^sub>m x) (G\<^sub>m y) = map_option G\<^sub>m (merge_F (root_hash_T rha) (merge_T rha ma) rhb mb x y)"
+ by(simp)
+
+declare merge_G.simps [simp del]
+
+lemma merge_on_G:
+ assumes a: "merge_on A rha boa ma" and b: "merge_on B rhb bob mb"
+ shows "merge_on {x. set1_G\<^sub>m x \<subseteq> A \<and> set3_G\<^sub>m x \<subseteq> B} (root_hash_G rha rhb) (blinding_of_G rha boa rhb bob) merge_G"
+ (is "merge_on ?A ?h ?bo ?m")
+proof -
+ interpret a: merge_on A rha boa ma by fact
+ interpret b: merge_on B rhb bob mb by fact
+ interpret F: merge_on
+ "{x. set1_F\<^sub>m x \<subseteq> {x. set1_T\<^sub>m x \<subseteq> A} \<and> set3_F\<^sub>m x \<subseteq> B}"
+ "root_hash_F (root_hash_T rha) rhb"
+ "blinding_of_F (root_hash_T rha) (blinding_of_T rha boa) rhb bob"
+ "merge_F (root_hash_T rha) (merge_T rha ma) rhb mb"
+ ..
+ show ?thesis
+ proof
+ show "\<exists>ab. ?m a b = Some ab \<and> ?bo a ab \<and> ?bo b ab \<and> (\<forall>u. ?bo a u \<longrightarrow> ?bo b u \<longrightarrow> ?bo ab u)"
+ if "?h a = ?h b" "a \<in> ?A" for a b using that
+ by(cases a; cases b)(auto dest!: F.join simp add: blinding_of_G.simps)
+ show "?m a b = None" if "?h a \<noteq> ?h b" "a \<in> ?A" for a b using that
+ by(cases a; cases b)(auto dest!: F.undefined)
+ qed
+qed
+
+lemmas merge_G [locale_witness] = merge_on_G[where A=UNIV and B=UNIV, simplified]
+
+end
+
+lemma merge_G_cong [fundef_cong]:
+ "\<lbrakk> \<And>a b. a \<in> set1_G\<^sub>m x \<Longrightarrow> ma a b = ma' a b; \<And>a b. a \<in> set3_G\<^sub>m x \<Longrightarrow> mb a b = mb' a b \<rbrakk>
+ \<Longrightarrow> merge_G rha ma rhb mb x y = merge_G rha ma' rhb mb' x y"
+ apply(cases x; cases y; simp)
+ apply(rule arg_cong[where f="map_option _"])
+ apply(blast intro: merge_F_cong merge_T_cong)
+ done
+
+end
diff --git a/thys/ADS_Functor/Inclusion_Proof_Construction.thy b/thys/ADS_Functor/Inclusion_Proof_Construction.thy
new file mode 100644
--- /dev/null
+++ b/thys/ADS_Functor/Inclusion_Proof_Construction.thy
@@ -0,0 +1,430 @@
+(* Author: Andreas Lochbihler, Digital Asset
+ Author: Ognjen Maric, Digital Asset *)
+
+theory Inclusion_Proof_Construction imports
+ ADS_Construction
+begin
+
+primrec blind_blindable :: "('a\<^sub>m \<Rightarrow> 'a\<^sub>h) \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) blindable\<^sub>m \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) blindable\<^sub>m" where
+ "blind_blindable h (Blinded x) = Blinded x"
+| "blind_blindable h (Unblinded x) = Blinded (Content (h x))"
+
+lemma hash_blind_blindable [simp]: "hash_blindable h (blind_blindable h x) = hash_blindable h x"
+ by(cases x) simp_all
+
+subsection \<open>Inclusion proof construction for rose trees\<close>
+
+(************************************************************)
+subsubsection \<open> Hashing, embedding and blinding source trees \<close>
+(************************************************************)
+
+context fixes h :: "'a \<Rightarrow> 'a\<^sub>h" begin
+fun hash_source_tree :: "'a rose_tree \<Rightarrow> 'a\<^sub>h rose_tree\<^sub>h" where
+ "hash_source_tree (Tree (data, subtrees)) = Tree\<^sub>h (Content (h data, map hash_source_tree subtrees))"
+end
+
+context fixes e :: "'a \<Rightarrow> 'a\<^sub>m" begin
+fun embed_source_tree :: "'a rose_tree \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m" where
+ "embed_source_tree (Tree (data, subtrees)) =
+ Tree\<^sub>m (Unblinded (e data, map embed_source_tree subtrees))"
+end
+
+context fixes h :: "'a \<Rightarrow> 'a\<^sub>h" begin
+fun blind_source_tree :: "'a rose_tree \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m" where
+ "blind_source_tree (Tree (data, subtrees)) = Tree\<^sub>m (Blinded (Content (h data, map (hash_source_tree h) subtrees)))"
+end
+
+case_of_simps blind_source_tree_cases: blind_source_tree.simps
+
+fun is_blinded :: "('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m \<Rightarrow> bool" where
+ "is_blinded (Tree\<^sub>m (Blinded _)) = True"
+| "is_blinded _ = False"
+
+lemma hash_blinded_simp: "hash_tree h' (blind_source_tree h st) = hash_source_tree h st"
+ by(cases st rule: blind_source_tree.cases)(simp_all add: hash_rt_F\<^sub>m_def)
+
+lemma hash_embedded_simp:
+ "hash_tree h (embed_source_tree e st) = hash_source_tree (h \<circ> e) st"
+ by(induction st rule: embed_source_tree.induct)(simp add: hash_rt_F\<^sub>m_def)
+
+lemma blinded_embedded_same_hash:
+ "hash_tree h'' (blind_source_tree (h o e) st) = hash_tree h (embed_source_tree e st)"
+ by(simp add: hash_blinded_simp hash_embedded_simp)
+
+lemma blinding_blinds [simp]:
+ "is_blinded (blind_source_tree h t)"
+ by(simp add: blind_source_tree_cases split: rose_tree.split)
+
+lemma blinded_blinds_embedded:
+ "blinding_of_tree h bo (blind_source_tree (h o e) st) (embed_source_tree e st)"
+ by(cases st rule: blind_source_tree.cases)(simp_all add: hash_embedded_simp)
+
+fun embed_hash_tree :: "'ha rose_tree\<^sub>h \<Rightarrow> ('a, 'ha) rose_tree\<^sub>m" where
+ "embed_hash_tree (Tree\<^sub>h h) = Tree\<^sub>m (Blinded h)"
+
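+text \<open>
+Illustration only, with hypothetical example values: hashing and embedding a two-node source tree,
+using the identity both as the atom hash and as the embedding. The two commands below should
+evaluate to \<open>Tree\<^sub>h (Content (1, [Tree\<^sub>h (Content (2, []))]))\<close> and
+\<open>Tree\<^sub>m (Unblinded (1, [Tree\<^sub>m (Unblinded (2, []))]))\<close>, respectively.
+\<close>
+
+value "hash_source_tree id (Tree (1 :: nat, [Tree (2, [])]))"
+value "embed_source_tree id (Tree (1 :: nat, [Tree (2, [])])) :: (nat, nat) rose_tree\<^sub>m"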
+
+(************************************************************)
+subsubsection \<open>Auxiliary definitions: selectors and list splits\<close>
+(************************************************************)
+
+fun children :: "'a rose_tree \<Rightarrow> 'a rose_tree list" where
+ "children (Tree (data, subtrees)) = subtrees"
+
+fun children\<^sub>m :: "('a, 'a\<^sub>h) rose_tree\<^sub>m \<Rightarrow> ('a, 'a\<^sub>h) rose_tree\<^sub>m list" where
+ "children\<^sub>m (Tree\<^sub>m (Unblinded (data, subtrees))) = subtrees"
+| "children\<^sub>m _ = undefined"
+
+fun splits :: "'a list \<Rightarrow> ('a list \<times> 'a \<times> 'a list) list" where
+ "splits [] = []"
+| "splits (x#xs) = ([], x, xs) # map (\<lambda>(l, y, r). (x # l, y, r)) (splits xs)"
+
+lemma splits_iff: "(l, a, r) \<in> set (splits ll) = (ll = l @ a # r)"
+ by(induction ll arbitrary: l a r)(auto simp add: Cons_eq_append_conv)
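+
+text \<open>
+Illustration only (hypothetical example values): \<open>splits\<close> enumerates every way of singling out one
+list element together with its left and right context. The command below should evaluate to
+\<open>[([], 1, [2, 3]), ([1], 2, [3]), ([1, 2], 3, [])]\<close>.
+\<close>
+
+value "splits [1 :: nat, 2, 3]"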
+
+(************************************************************)
+subsubsection \<open> Zippers \<close>
+(************************************************************)
+
+text \<open> Zippers provide a neat representation of tree-like ADSs when they have only a single
+ unblinded subtree. The zipper path provides the "inclusion proof" that the unblinded subtree is
+ included in a larger structure. \<close>
+
+type_synonym 'a path_elem = "'a \<times> 'a rose_tree list \<times> 'a rose_tree list"
+type_synonym 'a path = "'a path_elem list"
+type_synonym 'a zipper = "'a path \<times> 'a rose_tree"
+
+definition zipper_of_tree :: "'a rose_tree \<Rightarrow> 'a zipper" where
+ "zipper_of_tree t \<equiv> ([], t)"
+
+fun tree_of_zipper :: "'a zipper \<Rightarrow> 'a rose_tree" where
+ "tree_of_zipper ([], t) = t"
+| "tree_of_zipper ((a, l, r) # z, t) = tree_of_zipper (z, (Tree (a, (l @ t # r))))"
+
+case_of_simps tree_of_zipper_cases: tree_of_zipper.simps
+
+lemma tree_of_zipper_id[iff]: "tree_of_zipper (zipper_of_tree t) = t"
+ by(simp add: zipper_of_tree_def)
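+
+text \<open>
+Illustration only (hypothetical example values): descending into the middle child of a node
+labelled \<open>0\<close> yields a zipper whose path records the node label together with the left and right
+siblings; reassembling the zipper recovers the whole tree. The command below should evaluate to
+\<open>Tree (0, [Tree (1, []), Tree (3, []), Tree (2, [])])\<close>.
+\<close>
+
+value "tree_of_zipper ([(0 :: nat, [Tree (1, [])], [Tree (2, [])])], Tree (3, []))"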
+
+fun zipper_children :: "'a zipper \<Rightarrow> 'a zipper list" where
+ "zipper_children (p, Tree (a, ts)) = map (\<lambda>(l, t, r). ((a, l, r) # p, t)) (splits ts)"
+
+lemma zipper_children_same_tree:
+ assumes "z' \<in> set (zipper_children z)"
+ shows "tree_of_zipper z' = tree_of_zipper z"
+proof-
+ obtain p a ts where z: "z = (p, Tree (a, ts))"
+ using assms
+ by(cases z rule: zipper_children.cases) (simp_all)
+
+ then obtain l t r where ltr: "z' = ((a, l, r) # p, t)" and "(l, t, r) \<in> set (splits ts)"
+ using assms
+ by(auto)
+
+ with z show ?thesis
+ by(simp add: splits_iff)
+qed
+
+type_synonym ('a\<^sub>m, 'a\<^sub>h) path_elem\<^sub>m = "'a\<^sub>m \<times> ('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m list \<times> ('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m list"
+type_synonym ('a\<^sub>m, 'a\<^sub>h) path\<^sub>m = "('a\<^sub>m, 'a\<^sub>h) path_elem\<^sub>m list"
+type_synonym ('a\<^sub>m, 'a\<^sub>h) zipper\<^sub>m = "('a\<^sub>m, 'a\<^sub>h) path\<^sub>m \<times> ('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m"
+
+definition zipper_of_tree\<^sub>m :: "('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) zipper\<^sub>m" where
+ "zipper_of_tree\<^sub>m t \<equiv> ([], t)"
+
+fun tree_of_zipper\<^sub>m :: "('a\<^sub>m, 'a\<^sub>h) zipper\<^sub>m \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) rose_tree\<^sub>m" where
+ "tree_of_zipper\<^sub>m ([], t) = t"
+| "tree_of_zipper\<^sub>m ((m, l, r) # z, t) = tree_of_zipper\<^sub>m (z, Tree\<^sub>m (Unblinded (m, l @ t # r)))"
+
+lemma tree_of_zipper\<^sub>m_append:
+ "tree_of_zipper\<^sub>m (p @ p', t) = tree_of_zipper\<^sub>m (p', tree_of_zipper\<^sub>m (p, t))"
+ by(induction p arbitrary: p' t) auto
+
+fun zipper_children\<^sub>m :: "('a\<^sub>m, 'a\<^sub>h) zipper\<^sub>m \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) zipper\<^sub>m list" where
+ "zipper_children\<^sub>m (p, Tree\<^sub>m (Unblinded (a, ts))) = map (\<lambda>(l, t, r). ((a, l, r) # p, t)) (splits ts) "
+| "zipper_children\<^sub>m _ = []"
+
+lemma zipper_children_same_tree\<^sub>m:
+ assumes "z' \<in> set (zipper_children\<^sub>m z)"
+ shows "tree_of_zipper\<^sub>m z' = tree_of_zipper\<^sub>m z"
+proof-
+ obtain p a ts where z: "z = (p, Tree\<^sub>m (Unblinded (a, ts)))"
+ using assms
+ by(cases z rule: zipper_children\<^sub>m.cases) (simp_all)
+
+ then obtain l t r where ltr: "z' = ((a, l, r) # p, t)" and "(l, t, r) \<in> set (splits ts)"
+ using assms
+ by(auto)
+
+ with z show ?thesis
+ by(simp add: splits_iff)
+qed
+
+fun blind_path_elem :: "('a \<Rightarrow> 'a\<^sub>m) \<Rightarrow> ('a\<^sub>m \<Rightarrow> 'a\<^sub>h) \<Rightarrow> 'a path_elem \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) path_elem\<^sub>m" where
+ "blind_path_elem e h (x, l, r) = (e x, map (blind_source_tree (h \<circ> e)) l, map (blind_source_tree (h \<circ> e)) r)"
+
+case_of_simps blind_path_elem_cases: blind_path_elem.simps
+
+definition blind_path :: "('a \<Rightarrow> 'a\<^sub>m) \<Rightarrow> ('a\<^sub>m \<Rightarrow> 'a\<^sub>h) \<Rightarrow> 'a path \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) path\<^sub>m" where
+ "blind_path e h \<equiv> map (blind_path_elem e h)"
+
+fun embed_path_elem :: "('a \<Rightarrow> 'a\<^sub>m) \<Rightarrow> 'a path_elem \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) path_elem\<^sub>m" where
+ "embed_path_elem e (d, l, r) = (e d, map (embed_source_tree e) l, map (embed_source_tree e) r)"
+
+definition embed_path :: "('a \<Rightarrow> 'a\<^sub>m) \<Rightarrow> 'a path \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) path\<^sub>m" where
+ "embed_path embed_elem \<equiv> map (embed_path_elem embed_elem)"
+
+lemma hash_tree_of_zipper_same_path:
+ "hash_tree h (tree_of_zipper\<^sub>m (p, v)) = hash_tree h (tree_of_zipper\<^sub>m (p, v'))
+ \<longleftrightarrow> hash_tree h v = hash_tree h v'"
+ by(induction p arbitrary: v v')(auto simp add: hash_rt_F\<^sub>m_def)
+
+fun hash_path_elem :: "('a\<^sub>m \<Rightarrow> 'a\<^sub>h) \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) path_elem\<^sub>m \<Rightarrow> ('a\<^sub>h \<times> 'a\<^sub>h rose_tree\<^sub>h list \<times> 'a\<^sub>h rose_tree\<^sub>h list)" where
+ "hash_path_elem h (e, l, r) = (h e, map (hash_tree h) l, map (hash_tree h) r)"
+
+lemma hash_view_zipper_eqI:
+ "\<lbrakk> hash_list (hash_path_elem h) p = hash_list (hash_path_elem h') p';
+ hash_tree h v = hash_tree h' v' \<rbrakk> \<Longrightarrow>
+ hash_tree h (tree_of_zipper\<^sub>m (p, v)) = hash_tree h' (tree_of_zipper\<^sub>m (p', v'))"
+ by(induction p arbitrary: p' v v')(auto simp add: hash_rt_F\<^sub>m_def)
+
+lemma blind_embed_path_same_hash:
+ "hash_tree h (tree_of_zipper\<^sub>m (blind_path e h p, t)) = hash_tree h (tree_of_zipper\<^sub>m (embed_path e p, t))"
+proof -
+ have "hash_path_elem h \<circ> blind_path_elem e h = hash_path_elem h \<circ> embed_path_elem e"
+ by(clarsimp simp add: hash_blinded_simp hash_embedded_simp fun_eq_iff intro!: arg_cong2[where f=hash_source_tree, OF _ refl])
+ then show ?thesis
+ by(intro hash_view_zipper_eqI)(simp_all add: embed_path_def blind_path_def list.map_comp)
+qed
+
+lemma tree_of_embed_commute:
+ "tree_of_zipper\<^sub>m (embed_path e p, embed_source_tree e t) = embed_source_tree e (tree_of_zipper (p, t))"
+ by(induction "(p, t)" arbitrary: p t rule: tree_of_zipper.induct)(simp_all add: embed_path_def)
+
+lemma childz_same_tree:
+ "(l, t, r) \<in> set (splits ts) \<Longrightarrow>
+ tree_of_zipper\<^sub>m (embed_path e p, embed_source_tree e (Tree (d, ts)))
+ = tree_of_zipper\<^sub>m (embed_path e ((d, l, r) # p), embed_source_tree e t)"
+ by(simp add: tree_of_embed_commute splits_iff del: embed_source_tree.simps)
+
+lemma blinding_of_same_path:
+ assumes bo: "blinding_of_on UNIV h bo"
+ shows
+ "blinding_of_tree h bo (tree_of_zipper\<^sub>m (p, t)) (tree_of_zipper\<^sub>m (p, t'))
+ \<longleftrightarrow> blinding_of_tree h bo t t'"
+proof -
+ interpret a: blinding_of_on UNIV h bo by fact
+ interpret tree: blinding_of_on UNIV "hash_tree h" "blinding_of_tree h bo" ..
+ show ?thesis
+ by(induction p arbitrary: t t')(auto simp add: list_all2_append list.rel_refl a.refl tree.refl)
+qed
+
+lemma zipper_children_size_change [termination_simp]: "(a, b) \<in> set (zipper_children (p, v)) \<Longrightarrow> size b < size v"
+ by(cases v)(clarsimp simp add: splits_iff Set.image_iff)
+
+
+subsection \<open>All zippers of a rose tree\<close>
+
+context fixes e :: "'a \<Rightarrow> 'a\<^sub>m" and h :: "'a\<^sub>m \<Rightarrow> 'a\<^sub>h" begin
+
+fun zippers_rose_tree :: "'a zipper \<Rightarrow> ('a\<^sub>m, 'a\<^sub>h) zipper\<^sub>m list" where
+ "zippers_rose_tree (p, t) = (blind_path e h p, embed_source_tree e t) #
+ concat (map zippers_rose_tree (zipper_children (p, t)))"
+
+end
+
+lemmas [simp del] = zippers_rose_tree.simps zipper_children.simps
+
+lemma zippers_rose_tree_same_hash':
+ assumes "z \<in> set (zippers_rose_tree e h (p, t))"
+ shows "hash_tree h (tree_of_zipper\<^sub>m z) =
+ hash_tree h (tree_of_zipper\<^sub>m (embed_path e p, embed_source_tree e t))"
+ using assms(1)
+proof(induction "(p, t)" arbitrary: p t rule: zippers_rose_tree.induct)
+ case (1 p t)
+ from "1.prems"[unfolded zippers_rose_tree.simps]
+ consider (find) "z = (blind_path e h p, embed_source_tree e t)"
+ | (rec) x ts l t' r where "t = Tree (x, ts)" "(l, t', r) \<in> set (splits ts)" "z \<in> set (zippers_rose_tree e h ((x, l, r) # p, t'))"
+ by(cases t)(auto simp add: zipper_children.simps)
+ then show ?case
+ proof cases
+ case rec
+ then show ?thesis
+ apply(subst "1.hyps"[of "(x, l, r) # p" "t'"])
+ apply(simp_all add: rev_image_eqI zipper_children.simps)
+ by (metis (no_types) childz_same_tree comp_apply embed_source_tree.simps rec(2))
+ qed(simp add: blind_embed_path_same_hash)
+qed
+
+lemma zippers_rose_tree_blinding_of:
+ assumes "blinding_of_on UNIV h bo"
+ and z: "z \<in> set (zippers_rose_tree e h (p, t))"
+ shows "blinding_of_tree h bo (tree_of_zipper\<^sub>m z) (tree_of_zipper\<^sub>m (blind_path e h p, embed_source_tree e t))"
+ using z
+proof(induction "(p, t)" arbitrary: p t rule: zippers_rose_tree.induct)
+ case (1 p t)
+
+ interpret a: blinding_of_on UNIV h bo by fact
+ interpret rt: blinding_of_on UNIV "hash_tree h" "blinding_of_tree h bo" ..
+
+ from "1.prems"[unfolded zippers_rose_tree.simps]
+ consider (find) "z = (blind_path e h p, embed_source_tree e t)"
+ | (rec) x ts l t' r where "t = Tree (x, ts)" "(l, t', r) \<in> set (splits ts)" "z \<in> set (zippers_rose_tree e h ((x, l, r) # p, t'))"
+ by(cases t)(auto simp add: zipper_children.simps)
+ then show ?case
+ proof cases
+ case find
+ then show ?thesis by(simp add: rt.refl)
+ next
+ case rec
+ then have "blinding_of_tree h bo
+ (tree_of_zipper\<^sub>m z)
+ (tree_of_zipper\<^sub>m (blind_path e h ((x, l, r) # p), embed_source_tree e t'))"
+ by(intro 1)(simp add: rev_image_eqI zipper_children.simps)
+ also have "blinding_of_tree h bo
+ (tree_of_zipper\<^sub>m (blind_path e h ((x, l, r) # p), embed_source_tree e t'))
+ (tree_of_zipper\<^sub>m (blind_path e h p, embed_source_tree e (Tree (x, ts))))"
+ using rec
+ by(simp add: blind_path_def splits_iff blinding_of_same_path[OF assms(1)] a.refl list_all2_append list_all2_same list.rel_map blinded_blinds_embedded rt.refl)
+ finally (rt.trans) show ?thesis using rec by simp
+ qed
+qed
+
+lemma zippers_rose_tree_neq_Nil: "zippers_rose_tree e h (p, t) \<noteq> []"
+ by(simp add: zippers_rose_tree.simps)
+
+lemma (in comp_fun_idem) fold_set_union:
+ assumes "finite A" "finite B"
+ shows "Finite_Set.fold f z (A \<union> B) = Finite_Set.fold f (Finite_Set.fold f z A) B"
+ using assms(2,1) by induct simp_all
+
+context merkle_interface begin
+
+lemma comp_fun_idem_merge: "comp_fun_idem (\<lambda>x yo. yo \<bind> m x)"
+ apply(unfold_locales; clarsimp simp add: fun_eq_iff split: bind_split)
+ subgoal by (metis assoc bind.bind_lunit bind.bind_lzero idem option.distinct(1))
+ subgoal by (simp add: join)
+ done
+
+interpretation merge: comp_fun_idem "\<lambda>x yo. yo \<bind> m x" by(rule comp_fun_idem_merge)
+
+definition Merge :: "'a\<^sub>m set \<Rightarrow> 'a\<^sub>m option" where
+ "Merge A = (if A = {} \<or> infinite A then None else Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some (SOME x. x \<in> A)) A)"
+
+lemma Merge_empty [simp]: "Merge {} = None"
+ by(simp add: Merge_def)
+
+lemma Merge_infinite [simp]: "infinite A \<Longrightarrow> Merge A = None"
+ by(simp add: Merge_def)
+
+lemma Merge_cong_start:
+ "Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some x) A = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some y) A" (is "?lhs = ?rhs")
+ if "x \<in> A" "y \<in> A" "finite A"
+proof -
+ have "?lhs = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some x) (insert y A)" using that by(simp add: insert_absorb)
+ also have "\<dots> = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (m x y) A" using that
+ by(simp only: merge.fold_insert_idem2)(simp add: commute)
+ also have "\<dots> = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some y) (insert x A)" using that
+ by(simp only: merge.fold_insert_idem2)(simp)
+ also have "\<dots> = ?rhs" using that by(simp add: insert_absorb)
+ finally show ?thesis .
+qed
+
+lemma Merge_insert [simp]: "Merge (insert x A) = (if A = {} then Some x else Merge A \<bind> m x)" (is "?lhs = ?rhs")
+proof(cases "finite A \<and> A \<noteq> {}")
+ case True
+ then have "?lhs = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some (SOME x. x \<in> A)) (insert x A)"
+ unfolding Merge_def by(subst Merge_cong_start[where y="SOME x. x \<in> A", OF someI])(auto intro: someI)
+ also have "\<dots> = ?rhs" using True by(simp add: Merge_def)
+ finally show ?thesis .
+qed(auto simp add: Merge_def idem)
+
+lemma Merge_insert_alt:
+ "Merge (insert x A) = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some x) A" (is "?lhs = ?rhs") if "finite A"
+proof -
+ have "?lhs = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some x) (insert x A)" using that
+ unfolding Merge_def by(subst Merge_cong_start[where y=x, OF someI]) auto
+ also have "\<dots> = ?rhs" using that by(simp only: merge.fold_insert_idem2)(simp add: idem)
+ finally show ?thesis .
+qed
+
+lemma Merge_None [simp]: "Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) None A = None"
+proof(cases "finite A")
+ case True
+ then show ?thesis by(induction) auto
+qed simp
+
+lemma Merge_union:
+ "Merge (A \<union> B) = (if A = {} then Merge B else if B = {} then Merge A else (Merge A \<bind> (\<lambda>a. Merge B \<bind> m a)))"
+ (is "?lhs = ?rhs")
+proof(cases "finite (A \<union> B) \<and> A \<noteq> {} \<and> B \<noteq> {}")
+ case True
+ then have "?lhs = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some (SOME x. x \<in> B)) (B \<union> A)"
+ unfolding Merge_def by(subst Merge_cong_start[where y="SOME x. x \<in> B", OF someI])(auto intro: someI simp add: Un_commute)
+ also have "\<dots> = Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Merge B) A" using True
+ by(simp add: Merge_def merge.fold_set_union)
+ also have "\<dots> = Merge A \<bind> (\<lambda>a. Merge B \<bind> m a)"
+ proof(cases "Merge B")
+ case (Some b)
+ thus ?thesis using True
+ by simp(subst Merge_insert_alt[symmetric]; simp add: commute; metis commute)
+ qed simp
+ finally show ?thesis using True by simp
+qed auto
+
+lemma Merge_upper:
+ assumes m: "Merge A = Some x" and y: "y \<in> A"
+ shows "bo y x"
+proof -
+ have "Merge A = Merge (insert y A)" using y by(simp add: insert_absorb)
+ also have "\<dots> = Merge A \<bind> m y" using y by auto
+ finally have "m y x = Some x" using m by simp
+ thus ?thesis by(simp add: bo_def)
+qed
+
+lemma Merge_least:
+ assumes m: "Merge A = Some x" and u[rule_format]: "\<forall>a\<in>A. bo a u"
+ shows "bo x u"
+proof -
+ define a where "a \<equiv> SOME x. x \<in> A"
+ from m have A: "finite A" "A \<noteq> {}"
+ and *: "Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some a) A = Some x"
+ by(auto simp add: Merge_def a_def split: if_splits)
+ from A have "bo a u" by(auto intro: someI u simp add: a_def)
+ with A * u show ?thesis
+ proof(induction A arbitrary: a)
+ case (insert x A)
+ then show ?case
+ by(cases "m x a"; cases "A = {}"; simp only: merge.fold_insert_idem2; simp)(auto simp add: join)
+ qed simp
+qed
+
+lemma Merge_defined:
+ assumes "finite A" "A \<noteq> {}" "\<forall>a\<in>A. \<forall>b \<in> A. h a = h b"
+ shows "Merge A \<noteq> None"
+proof
+ define a where "a \<equiv> SOME a. a \<in> A"
+ have a: "a \<in> A" unfolding a_def using assms by(auto intro: someI)
+ hence ha: "\<forall>b \<in> A. h b = h a" using assms by blast
+
+ assume m: "Merge A = None"
+ hence "Finite_Set.fold (\<lambda>x yo. yo \<bind> m x) (Some a) A = None"
+ using assms by(simp add: Merge_def a_def)
+ with assms(1) show False using ha
+ proof(induction arbitrary: a)
+ case (insert x A)
+ thus ?case
+ apply(cases "m x a"; use nothing in \<open>simp only: merge.fold_insert_idem2\<close>)
+ apply(simp add: merge_respects_hashes)
+ apply(fastforce simp add: join vimage2p_def dest: hash[THEN predicate2D])
+ done
+ qed simp
+qed
+
+lemma Merge_hash:
+ assumes "Merge A = Some x" "a \<in> A"
+ shows "h a = h x"
+ using Merge_upper[OF assms] hash by(auto simp add: vimage2p_def)
+
+end
+
+end
\ No newline at end of file
diff --git a/thys/ADS_Functor/Merkle_Interface.thy b/thys/ADS_Functor/Merkle_Interface.thy
new file mode 100644
--- /dev/null
+++ b/thys/ADS_Functor/Merkle_Interface.thy
@@ -0,0 +1,299 @@
+(* Author: Andreas Lochbihler, Digital Asset
+ Author: Ognjen Maric, Digital Asset *)
+
+theory Merkle_Interface
+imports
+ Main
+ "HOL-Library.Conditional_Parametricity"
+ "HOL-Library.Monad_Syntax"
+begin
+
+alias vimage2p = BNF_Def.vimage2p
+alias Grp = BNF_Def.Grp
+alias setl = Basic_BNFs.setl
+alias setr = Basic_BNFs.setr
+alias fsts = Basic_BNFs.fsts
+alias snds = Basic_BNFs.snds
+
+attribute_setup locale_witness = \<open>Scan.succeed Locale.witness_add\<close>
+
+lemma vimage2p_mono': "R \<le> S \<Longrightarrow> vimage2p f g R \<le> vimage2p f g S"
+ by(auto simp add: vimage2p_def le_fun_def)
+
+lemma vimage2p_map_rel_prod:
+ "vimage2p (map_prod f g) (map_prod f' g') (rel_prod A B) = rel_prod (vimage2p f f' A) (vimage2p g g' B)"
+ by(simp add: vimage2p_def prod.rel_map)
+
+lemma vimage2p_map_list_all2:
+ "vimage2p (map f) (map g) (list_all2 A) = list_all2 (vimage2p f g A)"
+ by(simp add: vimage2p_def list.rel_map)
+
+lemma equivclp_least:
+ assumes le: "r \<le> s" and s: "equivp s"
+ shows "equivclp r \<le> s"
+ apply(rule predicate2I)
+ subgoal by(induction rule: equivclp_induct)(auto 4 3 intro: equivp_reflp[OF s] equivp_transp[OF s] equivp_symp[OF s] le[THEN predicate2D])
+ done
+
+lemma reflp_eq_onp: "reflp R \<longleftrightarrow> eq_onp (\<lambda>x. True) \<le> R"
+ by(auto simp add: reflp_def eq_onp_def)
+
+lemma eq_onpE:
+ assumes "eq_onp P x y"
+ obtains "x = y" "P y"
+ using assms by(auto simp add: eq_onp_def)
+
+lemma case_unit_parametric [transfer_rule]: "rel_fun A (rel_fun (=) A) case_unit case_unit"
+ by(simp add: rel_fun_def split: unit.split)
+
+
+(************************************************************)
+section \<open>Authenticated Data Structures\<close>
+(************************************************************)
+
+(************************************************************)
+subsection \<open>Interface\<close>
+(************************************************************)
+
+(************************************************************)
+subsubsection \<open> Types \<close>
+(************************************************************)
+
+type_synonym ('a\<^sub>m, 'a\<^sub>h) hash = "'a\<^sub>m \<Rightarrow> 'a\<^sub>h" \<comment> \<open>Type of hash operation\<close>
+type_synonym 'a\<^sub>m blinding_of = "'a\<^sub>m \<Rightarrow> 'a\<^sub>m \<Rightarrow> bool"
+type_synonym 'a\<^sub>m merge = "'a\<^sub>m \<Rightarrow> 'a\<^sub>m \<Rightarrow> 'a\<^sub>m option" \<comment> \<open> merging that can fail for values with different hashes\<close>
+
+(************************************************************)
+subsubsection \<open> Properties \<close>
+(************************************************************)
+
+locale merkle_interface =
+ fixes h :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and bo :: "'a\<^sub>m blinding_of"
+ and m :: "'a\<^sub>m merge"
+ assumes merge_respects_hashes: "h a = h b \<longleftrightarrow> (\<exists>ab. m a b = Some ab)"
+ and idem: "m a a = Some a"
+ and commute: "m a b = m b a"
+ and assoc: "m a b \<bind> m c = m b c \<bind> m a"
+ and bo_def: "bo a b \<longleftrightarrow> m a b = Some b"
+begin
+
+lemma reflp: "reflp bo"
+ unfolding bo_def by(rule reflpI)(simp add: idem)
+
+lemma antisymp: "antisymp bo"
+ unfolding bo_def by(rule antisympI)(simp add: commute)
+
+lemma transp: "transp bo"
+ apply(rule transpI)
+ subgoal for x y z using assoc[of x y z] by(simp add: commute bo_def)
+ done
+
+lemma hash: "bo \<le> vimage2p h h (=)"
+ unfolding bo_def by(auto simp add: vimage2p_def merge_respects_hashes)
+
+lemma join: "m a b = Some ab \<longleftrightarrow> bo a ab \<and> bo b ab \<and> (\<forall>u. bo a u \<longrightarrow> bo b u \<longrightarrow> bo ab u)"
+ unfolding bo_def
+ by (smt Option.bind_cong bind.bind_lunit commute idem merkle_interface.assoc merkle_interface_axioms)
+
+text \<open>The equivalence closure of the blinding relation is the kernel of the hash function, i.e., it relates exactly those values that have the same hash.\<close>
+
+lemma equivclp_blinding_of: "equivclp bo = vimage2p h h (=)" (is "?lhs = ?rhs")
+proof(rule antisym)
+ show "?lhs \<le> ?rhs" by(rule equivclp_least[OF hash])(rule equivp_vimage2p[OF identity_equivp])
+ show "?rhs \<le> ?lhs" unfolding vimage2p_def
+ proof(rule predicate2I)
+ fix x y
+ assume "h x = h y"
+ then obtain xy where "m x y = Some xy" unfolding merge_respects_hashes ..
+ hence "bo x xy" "bo y xy" unfolding join by blast+
+ hence "equivclp bo x xy" "equivclp bo xy y" by(blast)+
+ thus "equivclp bo x y" by(rule equivclp_trans)
+ qed
+qed
+
+end
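+
+text \<open>
+As a sanity check of this interface (illustration only, not used in the development): the degenerate
+instance where the hash is the identity, blinding is equality, and merging succeeds exactly on equal
+values satisfies all of the above assumptions; each proof obligation amounts to a case analysis on
+whether the two merged values are equal. A sketch of such an instance:
+\<close>
+
+(*
+interpretation discrete: merkle_interface id "(=)" "\<lambda>a b. if a = b then Some a else None"
+ (* each assumption follows by case analysis on equality of the arguments *)
+*)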
+
+(************************************************************)
+subsection \<open> Auxiliary definitions \<close>
+(************************************************************)
+
+text \<open> Directly proving that an interface satisfies the specification of a Merkle interface as given
+above is difficult. Instead, we provide several layers of auxiliary definitions whose properties can
+easily be established layer by layer.
+
+In particular, proving that an interface on recursive datatypes is a Merkle interface requires
+induction. As the induction hypothesis only applies to a subset of the values of a type, we add
+auxiliary definitions equipped with an explicit set @{term A} of values to which the definition
+applies. Once the induction proof is complete, we can typically instantiate @{term A} with
+@{term UNIV}. Moreover, in the induction proof for a layer, we can assume that the properties of the
+earlier layers hold for \<^emph>\<open>all\<close> values, not just those in the induction hypothesis.
+\<close>
+
+(************************************************************)
+subsubsection \<open> Blinding \<close>
+(************************************************************)
+locale blinding_respects_hashes =
+ fixes h :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and bo :: "'a\<^sub>m blinding_of"
+ assumes hash: "bo \<le> vimage2p h h (=)"
+begin
+
+lemma blinding_hash_eq: "bo x y \<Longrightarrow> h x = h y"
+ by(drule hash[THEN predicate2D])(simp add: vimage2p_def)
+
+end
+
+locale blinding_of_on =
+ blinding_respects_hashes h bo
+ for A :: "'a\<^sub>m set"
+ and h :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and bo :: "'a\<^sub>m blinding_of"
+ + assumes refl: "x \<in> A \<Longrightarrow> bo x x"
+ and trans: "\<lbrakk> bo x y; bo y z; x \<in> A \<rbrakk> \<Longrightarrow> bo x z"
+ and antisym: "\<lbrakk> bo x y; bo y x; x \<in> A \<rbrakk> \<Longrightarrow> x = y"
+begin
+
+lemma refl_pointfree: "eq_onp (\<lambda>x. x \<in> A) \<le> bo"
+ by(auto elim!: eq_onpE intro: refl)
+
+lemma blinding_respects_hashes: "blinding_respects_hashes h bo" ..
+lemmas hash = hash
+
+lemma trans_pointfree: "eq_onp (\<lambda>x. x \<in> A) OO bo OO bo \<le> bo"
+ by(auto elim!: eq_onpE intro: trans)
+
+lemma antisym_pointfree: "inf (eq_onp (\<lambda>x. x \<in> A) OO bo) bo\<inverse>\<inverse> \<le> (=)"
+ by(auto elim!: eq_onpE dest: antisym)
+
+end
+
+(************************************************************)
+subsubsection \<open> Merging \<close>
+(************************************************************)
+
+text \<open> In general, we prove the properties of blinding before the properties of merging. Thus,
+ in the following definitions we assume that the blinding properties already hold on @{term UNIV}.
+ The restriction to the set @{term A} applies to the first argument of the merge operation, which is
+ the argument on which induction will be done. \<close>
+
+locale merge_on =
+ blinding_of_on UNIV h bo
+ for A :: "'a\<^sub>m set"
+ and h :: "('a\<^sub>m, 'a\<^sub>h) hash"
+ and bo :: "'a\<^sub>m blinding_of"
+ and m :: "'a\<^sub>m merge" +
+ assumes join: "\<lbrakk> h a = h b; a \<in> A \<rbrakk>
+ \<Longrightarrow> \<exists>ab. m a b = Some ab \<and> bo a ab \<and> bo b ab \<and> (\<forall>u. bo a u \<longrightarrow> bo b u \<longrightarrow> bo ab u)"
+ and undefined: "\<lbrakk> h a \<noteq> h b; a \<in> A \<rbrakk> \<Longrightarrow> m a b = None"
+begin
+
+lemma same: "a \<in> A \<Longrightarrow> m a a = Some a"
+ using join[of a a] refl[of a] by(auto 4 3 intro: antisym)
+
+lemma blinding_of_antisym_on: "blinding_of_on UNIV h bo" ..
+
+lemma transp: "transp bo"
+ by(auto intro: transpI trans)
+
+lemmas hash = hash
+ and refl = refl
+ and antisym = antisym[OF _ _ UNIV_I]
+
+lemma respects_hashes:
+ "a \<in> A \<Longrightarrow> h a = h b \<longleftrightarrow> (\<exists>ab. m a b = Some ab)"
+ using join undefined
+ by fastforce
+
+lemma join':
+ "a \<in> A \<Longrightarrow> \<forall>ab. m a b = Some ab \<longleftrightarrow> bo a ab \<and> bo b ab \<and> (\<forall>u. bo a u \<longrightarrow> bo b u \<longrightarrow> bo ab u)"
+ using join undefined
+ by (metis (full_types) hash local.antisym option.distinct(1) option.sel predicate2D vimage2p_def)
+
+lemma merge_on_subset:
+ "B \<subseteq> A \<Longrightarrow> merge_on B h bo m"
+ by unfold_locales (auto dest: same join undefined)
+
+end
+
+subsection \<open> Interface equality \<close>
+
+text \<open> Here, we prove that the auxiliary definitions specify the same interface as the original ones.\<close>
+
+lemma merkle_interface_aux: "merkle_interface h bo m = merge_on UNIV h bo m"
+ (is "?lhs = ?rhs")
+proof
+ show ?rhs if ?lhs
+ proof
+ interpret merkle_interface h bo m by(fact that)
+ show "bo \<le> vimage2p h h (=)" by(fact hash)
+ show "bo x x" for x using reflp by(simp add: reflp_def)
+ show "bo x z" if "bo x y" "bo y z" for x y z using transp that by(rule transpD)
+ show "x = y" if "bo x y" "bo y x" for x y using antisymp that by(rule antisympD)
+ show "\<exists>ab. m a b = Some ab \<and> bo a ab \<and> bo b ab \<and> (\<forall>u. bo a u \<longrightarrow> bo b u \<longrightarrow> bo ab u)" if "h a = h b" for a b
+ using that by(simp add: merge_respects_hashes join)
+ show "m a b = None" if "h a \<noteq> h b" for a b using that by(simp add: merge_respects_hashes)
+ qed
+
+ show ?lhs if ?rhs
+ proof
+ interpret merge_on UNIV h bo m by(fact that)
+ show eq: "h a = h b \<longleftrightarrow> (\<exists>ab. m a b = Some ab)" for a b by(simp add: respects_hashes)
+ show idem: "m a a = Some a" for a by(simp add: same)
+ show commute: "m a b = m b a" for a b
+ using join[of a b] join[of b a] undefined antisym by(cases "m a b") force+
+ have undefined_partitioned: "m a c = None" if "m a b = None" "m b c = Some bc" for a b c bc
+ using that eq by (metis option.distinct(1) option.exhaust)
+ have merge_twice: "m a b = Some c \<Longrightarrow> m a c = Some c" for a b c by (simp add: join')
+ show "m a b \<bind> m c = m b c \<bind> m a" for a b c
+ proof(simp split: Option.bind_split; safe)
+ show "None = m a d" if "m a b = None" "m b c = Some d" for d using that
+ by(metis undefined_partitioned merge_twice)
+ show "m c d = None" if "m a b = Some d" "m b c = None" for d using that
+ by(metis commute merge_twice undefined_partitioned)
+ next
+ fix ab bc
+ assume assms: "m a b = Some ab" "m b c = Some bc"
+ then obtain cab and abc where cab: "m c ab = Some cab" and abc: "m a bc = Some abc"
+ using eq[THEN iffD2, OF exI] eq[THEN iffD1] by (metis merge_twice)
+ thus "m c ab = m a bc" using assms
+ by(clarsimp simp add: join')(metis UNIV_I abc cab local.antisym local.trans)
+ qed
+ show "bo a b \<longleftrightarrow> m a b = Some b" for a b using idem join' by auto
+ qed
+qed
+
+lemma merkle_interfaceI [locale_witness]:
+ assumes "merge_on UNIV h bo m"
+ shows "merkle_interface h bo m"
+ using assms unfolding merkle_interface_aux by auto
+
+lemma (in merkle_interface) merkle_interfaceD: "merge_on UNIV h bo m"
+ using merkle_interface_aux[of h bo m, symmetric]
+ by simp unfold_locales
+
+subsection \<open> Parametricity rules \<close>
+
+context includes lifting_syntax begin
+parametric_constant le_fun_parametric[transfer_rule]: le_fun_def
+parametric_constant vimage2p_parametric[transfer_rule]: vimage2p_def
+parametric_constant blinding_respects_hashes_parametric_aux: blinding_respects_hashes_def
+
+lemma blinding_respects_hashes_parametric [transfer_rule]:
+ "((A1 ===> A2) ===> (A1 ===> A1 ===> (\<longleftrightarrow>)) ===> (\<longleftrightarrow>))
+ blinding_respects_hashes blinding_respects_hashes"
+ if [transfer_rule]: "bi_unique A2" "bi_total A1"
+ by(rule blinding_respects_hashes_parametric_aux that le_fun_parametric | simp add: rel_fun_eq)+
+
+parametric_constant blinding_of_on_axioms_parametric [transfer_rule]:
+ blinding_of_on_axioms_def[folded Ball_def, unfolded le_fun_def le_bool_def eq_onp_def relcompp.simps, simplified]
+parametric_constant blinding_of_on_parametric [transfer_rule]: blinding_of_on_def
+parametric_constant antisymp_parametric[transfer_rule]: antisymp_def
+parametric_constant transp_parametric[transfer_rule]: transp_def
+
+parametric_constant merge_on_axioms_parametric [transfer_rule]: merge_on_axioms_def
+parametric_constant merge_on_parametric[transfer_rule]: merge_on_def
+
+parametric_constant merkle_interface_parametric[transfer_rule]: merkle_interface_def
+end
+
+end
\ No newline at end of file
diff --git a/thys/ADS_Functor/ROOT b/thys/ADS_Functor/ROOT
new file mode 100644
--- /dev/null
+++ b/thys/ADS_Functor/ROOT
@@ -0,0 +1,11 @@
+chapter AFP
+
+session ADS_Functor (AFP) = "HOL-Library" +
+ options [timeout = 600]
+ theories
+ Merkle_Interface
+ ADS_Construction
+ Generic_ADS_Construction
+ Canton_Transaction_Tree
+ document_files
+ "root.tex"
diff --git a/thys/ADS_Functor/document/root.tex b/thys/ADS_Functor/document/root.tex
new file mode 100644
--- /dev/null
+++ b/thys/ADS_Functor/document/root.tex
@@ -0,0 +1,77 @@
+\documentclass[11pt,a4paper]{article}
+\usepackage{isabelle,isabellesym}
+
+% further packages required for unusual symbols (see also
+% isabellesym.sty), use only when needed
+
+%\usepackage{amssymb}
+ %for \<leadsto>, \<box>, \<diamond>, \<sqsupset>, \<mho>, \<Join>,
+ %\<lhd>, \<lesssim>, \<greatersim>, \<lessapprox>, \<greaterapprox>,
+ %\<triangleq>, \<yen>, \<lozenge>
+
+%\usepackage{eurosym}
+ %for \<euro>
+
+%\usepackage[only,bigsqcap]{stmaryrd}
+ %for \<Sqinter>
+
+%\usepackage{eufrak}
+ %for \<AA> ... \<ZZ>, \<aa> ... \<zz> (also included in amssymb)
+
+%\usepackage{textcomp}
+ %for \<onequarter>, \<onehalf>, \<threequarters>, \<degree>, \<cent>,
+ %\<currency>
+
+% this should be the last package used
+\usepackage{pdfsetup}
+
+% urls in roman style, theory text in math-similar italics
+\urlstyle{rm}
+\isabellestyle{it}
+
+% for uniform font size
+%\renewcommand{\isastyle}{\isastyleminor}
+
+
+\begin{document}
+
+\title{Authenticated Data Structures as Functors}
+\author{Andreas Lochbihler \qquad Ognjen Maric \\[1em] Digital Asset}
+
+\maketitle
+
+\begin{abstract}
+ Authenticated data structures allow several systems to convince each other that they are referring to the same data structure,
+ even if each of them knows only a part of the data structure.
+ Using inclusion proofs, knowledgeable systems can selectively share their knowledge with other systems
+ and the latter can verify the authenticity of what is being shared.
+
+ In this paper, we show how to modularly define authenticated data structures, their inclusion proofs, and operations thereon
+ as datatypes in Isabelle/HOL, using a shallow embedding.
+ Modularity allows us to construct complicated trees from reusable building blocks, which we call Merkle functors.
+ Merkle functors include sums, products, and function spaces and are closed under composition and least fixpoints.
+
+ As a practical application, we model the hierarchical transactions of Canton,
+ a practical interoperability protocol for distributed ledgers, as authenticated data structures.
+ This is a first step towards formalizing the Canton protocol and verifying its integrity and security guarantees.
+\end{abstract}
+
+
+\tableofcontents
+
+% sane default for proof documents
+\parindent 0pt\parskip 0.5ex
+
+% generated text of all theories
+\input{session}
+
+% optional bibliography
+%\bibliographystyle{abbrv}
+%\bibliography{root}
+
+\end{document}
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: t
+%%% End:
diff --git a/thys/Attack_Trees/AT.thy b/thys/Attack_Trees/AT.thy
new file mode 100644
--- /dev/null
+++ b/thys/Attack_Trees/AT.thy
@@ -0,0 +1,1051 @@
+section "Attack Trees"
+theory AT
+imports MC
+begin
+
+text \<open>Attack Trees are an intuitive and practical formal method to analyse and quantify
+attacks on security and privacy. They are very useful for identifying the steps an attacker
+takes through a system when approaching the attack goal. Here, we provide
+a proof calculus to analyse concrete attacks using a notion of attack validity.
+We define a state-based semantics with Kripke models and the temporal logic
+CTL in the proof assistant Isabelle \cite{npw:02} using its Higher-Order Logic
+(HOL). We prove the correctness and completeness (adequacy) of Attack Trees in Isabelle
+with respect to the model.\<close>
+
+subsection "Attack Tree datatype"
+text \<open>The following datatype definition @{text \<open>attree\<close>} defines attack trees.
+The simplest case of an attack tree is a base attack.
+The principal idea is that base attacks are defined by a pair of
+state sets representing the initial states and the {\it attack property}
+-- a set of states characterized by the fact that this property holds
+in them.
+Attacks can also be combined as the conjunction or disjunction of other attacks.
+The operator @{text \<open>\<oplus>\<^sub>\<or>\<close>} creates or-trees and @{text \<open>\<oplus>\<^sub>\<and>\<close>} creates and-trees.
+And-attack trees @{text \<open>l \<oplus>\<^sub>\<and> s\<close>} and or-attack trees @{text \<open>l \<oplus>\<^sub>\<or> s\<close>}
+combine lists of attack trees $l$ either conjunctively or disjunctively and
+consist of a list of sub-attacks -- again attack trees.\<close>
+datatype ('s :: state) attree = BaseAttack "('s set) * ('s set)" ("\<N>\<^bsub>(_)\<^esub>") |
+ AndAttack "('s attree) list" "('s set) * ('s set)" ("_ \<oplus>\<^sub>\<and>\<^bsup>(_)\<^esup>" 60) |
+ OrAttack "('s attree) list" "('s set) * ('s set)" ("_ \<oplus>\<^sub>\<or>\<^bsup>(_)\<^esup>" 61)
+
+primrec attack :: "('s :: state) attree \<Rightarrow> ('s set) * ('s set)"
+ where
+"attack (BaseAttack b) = b"|
+"attack (AndAttack as s) = s" |
+"attack (OrAttack as s) = s"
+
+subsection \<open>Attack Tree refinement\<close>
+text \<open>When we develop an attack tree, we proceed from an abstract attack, given
+by an attack goal, by breaking it down into a series of sub-attacks. This
+stepwise development corresponds to a process of {\it refinement}. Therefore, as part of
+the attack tree calculus, we provide a notion of attack tree refinement.
+
+The relation @{text \<open>refines_to\<close>} "constructs" the attack tree. Here, the attack vectors
+defined above are used to define how nodes in an attack tree
+can be expanded into more detailed (refined) attack sequences. This
+process of refinement @{text "\<sqsubseteq>"} allows us to eventually reach a fully detailed
+attack that can then be proved using @{text "\<turnstile>"}.\<close>
+inductive refines_to :: "[('s :: state) attree, 's attree] \<Rightarrow> bool" ("_ \<sqsubseteq> _" [40] 40)
+where
+refI: "\<lbrakk> A = ((l @ [ \<N>\<^bsub>(si',si'')\<^esub>] @ l'')\<oplus>\<^sub>\<and>\<^bsup>(si,si''')\<^esup> );
+ A' = (l' \<oplus>\<^sub>\<and>\<^bsup>(si',si'')\<^esup>);
+ A'' = (l @ l' @ l'' \<oplus>\<^sub>\<and>\<^bsup>(si,si''')\<^esup>)
+ \<rbrakk> \<Longrightarrow> A \<sqsubseteq> A''"|
+ref_or: "\<lbrakk> as \<noteq> []; \<forall> A' \<in> set(as). (A \<sqsubseteq> A') \<and> attack A = s \<rbrakk> \<Longrightarrow> A \<sqsubseteq> (as \<oplus>\<^sub>\<or>\<^bsup>s\<^esup>)" |
+ref_trans: "\<lbrakk> A \<sqsubseteq> A'; A' \<sqsubseteq> A'' \<rbrakk> \<Longrightarrow> A \<sqsubseteq> A''"|
+ref_refl : "A \<sqsubseteq> A"
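+
+(* Added illustration, hypothetical and not from the original development: by rule refI,
+   instantiating l and l'' with [] and l' with [\<N>\<^bsub>(s0,s1)\<^esub>, \<N>\<^bsub>(s1,s2)\<^esub>], the one-node
+   attack ([\<N>\<^bsub>(s0,s2)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(s0,s2)\<^esup>) refines to ([\<N>\<^bsub>(s0,s1)\<^esub>, \<N>\<^bsub>(s1,s2)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(s0,s2)\<^esup>),
+   i.e. the base step from s0 to s2 is expanded into two steps via s1. *)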
+
+
+subsection \<open>Validity of Attack Trees\<close>
+text \<open>A valid attack, intuitively, is one which is fully refined into fine-grained
+attacks that are feasible in a model. The general model we provide is
+a Kripke structure, i.e., a set of states and a generic state transition.
+Thus, feasible steps in the model are single steps of the state transition.
+We call them valid base attacks.
+Composing sequences of valid base attacks into and-attacks again yields
+valid attacks if the base attacks line up with respect to the states
+in the state transition. If there are different valid attacks for the same
+attack goal starting from the same initial state set, these can be
+summarized in an or-attack.
+More precisely, the different cases of the validity predicate are distinguished
+by pattern matching over the attack tree structure.
+\begin{itemize}
+\item A base attack @{text \<open>\<N>(s0,s1)\<close>} is valid if from every
+state in the pre-state set @{text \<open>s0\<close>} we can reach, with a single step of the
+state transition relation, some state in the post-state set \<open>s1\<close>. Note
+that it is sufficient for a post-state to exist for each pre-state. After all,
+we are aiming to validate attacks, that is, possible attack paths to some
+state that fulfills the attack property.
+\item An and-attack @{text \<open>As \<oplus>\<^sub>\<and> (s0,s1)\<close>} is a valid attack
+ if either of the following cases holds:
+ \begin{itemize}
+ \item empty attack sequence @{text \<open>As\<close>}: in this case
+ all pre-states in @{text \<open>s0\<close>} must already be attack states
+ in @{text \<open>s1\<close>}, i.e., @{text \<open>s0 \<subseteq> s1\<close>};
+    \item attack sequence @{text \<open>As\<close>} is a singleton: in this case, the
+          single element attack @{text \<open>a\<close>} in @{text \<open>[a]\<close>}
+          must be a valid attack and it must be an attack with pre-state
+          @{text \<open>s0\<close>} and post-state @{text \<open>s1\<close>};
+    \item otherwise, @{text \<open>As\<close>} must be a list matching @{text \<open>a # l\<close>} for
+          some attack @{text \<open>a\<close>} and tail of the attack list @{text \<open>l\<close>} such that
+          @{text \<open>a\<close>} is a valid attack with pre-state identical to the overall
+          pre-state @{text \<open>s0\<close>} and the goal of the tail @{text \<open>l\<close>} is
+          @{text \<open>s1\<close>}, the goal of the overall attack. The pre-state of the
+          attack represented by @{text \<open>l\<close>} is @{text \<open>snd(attack a)\<close>} since this is
+          the post-state set of the first step @{text \<open>a\<close>}.
+\end{itemize}
+ \item An or-attack @{text \<open>As \<oplus>\<^sub>\<or>(s0,s1)\<close>} is a valid attack
+ if either of the following cases holds:
+ \begin{itemize}
+ \item the empty attack case is identical to the and-attack above:
+ @{text \<open>s0 \<subseteq> s1\<close>};
+    \item attack sequence @{text \<open>As\<close>} is a singleton: in this case, the
+          single element attack @{text \<open>a\<close>}
+          must be a valid attack and
+          its pre-state must include the overall attack pre-state set @{text \<open>s0\<close>}
+          (since @{text \<open>a\<close>} is the only alternative in the or) while the post-state of
+          @{text \<open>a\<close>} needs to be included in the global attack goal @{text \<open>s1\<close>};
+    \item otherwise, @{text \<open>As\<close>} must be a list @{text \<open>a # l\<close>} for
+          an attack @{text \<open>a\<close>} and a list @{text \<open>l\<close>} of alternative attacks.
+          Writing @{text \<open>s\<close>} for the pair @{text \<open>(s0,s1)\<close>},
+          the pre-state of @{text \<open>a\<close>} can be just a subset of @{text \<open>s0\<close>} (since there are
+          other attacks in @{text \<open>l\<close>} that can cover the rest) and the goal
+          states @{text \<open>snd(attack a)\<close>} must all lie in the overall goal
+          state set @{text \<open>s1\<close>}. The other or-attacks in @{text \<open>l\<close>} need
+          to cover only the pre-states @{text \<open>fst s - fst(attack a)\<close>}
+          (where @{text \<open>-\<close>} is set difference) and have the same goal @{text \<open>snd s\<close>}.
+ \end{itemize}
+\end{itemize}
+The proof calculus is thus completely described by one recursive function. \<close>
+fun is_attack_tree :: "[('s :: state) attree] \<Rightarrow> bool" ("\<turnstile>_" [40] 40)
+where
+att_base: "(\<turnstile> \<N>\<^bsub>s\<^esub>) = ( (\<forall> x \<in> (fst s). (\<exists> y \<in> (snd s). x \<rightarrow>\<^sub>i y )))" |
+att_and: "(\<turnstile>(As \<oplus>\<^sub>\<and>\<^bsup>s\<^esup>)) =
+ (case As of
+ [] \<Rightarrow> (fst s \<subseteq> snd s)
+ | [a] \<Rightarrow> ( \<turnstile> a \<and> attack a = s)
+ | (a # l) \<Rightarrow> (( \<turnstile> a) \<and> (fst(attack a) = fst s) \<and>
+ (\<turnstile>(l \<oplus>\<^sub>\<and>\<^bsup>(snd(attack a),snd(s))\<^esup>))))" |
+att_or: "(\<turnstile>(As \<oplus>\<^sub>\<or>\<^bsup>s\<^esup>)) =
+ (case As of
+ [] \<Rightarrow> (fst s \<subseteq> snd s)
+ | [a] \<Rightarrow> ( \<turnstile> a \<and> (fst(attack a) \<supseteq> fst s) \<and> (snd(attack a) \<subseteq> snd s))
+ | (a # l) \<Rightarrow> (( \<turnstile> a) \<and> fst(attack a) \<subseteq> fst s \<and>
+ snd(attack a) \<subseteq> snd s \<and>
+ ( \<turnstile>(l \<oplus>\<^sub>\<or>\<^bsup>(fst s - fst(attack a), snd s)\<^esup>))))"
+text \<open>Since the definition is constructive, code can be generated directly from it, here
+into the programming language Scala.\<close>
+export_code is_attack_tree in Scala
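+
+(* Added illustration, not part of the original theory: for the hypothetical two-step
+   and-attack shown after the datatype above, validity unfolds via att_and and att_base
+   into the two base-step conditions on the state transition relation. *)
+lemma "(\<turnstile>([\<N>\<^bsub>(s0,s1)\<^esub>, \<N>\<^bsub>(s1,s2)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(s0,s2)\<^esup>)) =
+       ((\<forall> x \<in> s0. \<exists> y \<in> s1. x \<rightarrow>\<^sub>i y) \<and> (\<forall> x \<in> s1. \<exists> y \<in> s2. x \<rightarrow>\<^sub>i y))"
+  by (simp add: att_and att_base)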
+
+subsection "Lemmas for Attack Tree validity"
+lemma att_and_one: assumes "\<turnstile> a" and "attack a = s"
+ shows "\<turnstile>([a] \<oplus>\<^sub>\<and>\<^bsup>s\<^esup>)"
+proof -
+ show " \<turnstile>([a] \<oplus>\<^sub>\<and>\<^bsup>s\<^esup>)" using assms
+ by (subst att_and, simp del: att_and att_or)
+qed
+
+declare is_attack_tree.simps[simp del]
+
+lemma att_and_empty[rule_format] : " \<turnstile>([] \<oplus>\<^sub>\<and>\<^bsup>(s', s'')\<^esup>) \<longrightarrow> s' \<subseteq> s''"
+ by (simp add: is_attack_tree.simps(2))
+
+lemma att_and_empty2: " \<turnstile>([] \<oplus>\<^sub>\<and>\<^bsup>(s, s)\<^esup>)"
+ by (simp add: is_attack_tree.simps(2))
+
+lemma att_or_empty[rule_format] : " \<turnstile>([] \<oplus>\<^sub>\<or>\<^bsup>(s', s'')\<^esup>) \<longrightarrow> s' \<subseteq> s''"
+ by (simp add: is_attack_tree.simps(3))
+
+lemma att_or_empty_back[rule_format]: " s' \<subseteq> s'' \<longrightarrow> \<turnstile>([] \<oplus>\<^sub>\<or>\<^bsup>(s', s'')\<^esup>)"
+ by (simp add: is_attack_tree.simps(3))
+
+lemma att_or_empty_rev: assumes "\<turnstile>(l \<oplus>\<^sub>\<or>\<^bsup>(s, s')\<^esup>)" and "\<not>(s \<subseteq> s')" shows "l \<noteq> []"
+ using assms att_or_empty by blast
+
+lemma att_or_empty2: "\<turnstile>([] \<oplus>\<^sub>\<or>\<^bsup>(s, s)\<^esup>)"
+ by (simp add: att_or_empty_back)
+
+lemma att_andD1: " \<turnstile>(x1 # x2 \<oplus>\<^sub>\<and>\<^bsup>s\<^esup>) \<Longrightarrow> \<turnstile> x1"
+ by (metis (no_types, lifting) is_attack_tree.simps(2) list.exhaust list.simps(4) list.simps(5))
+
+lemma att_and_nonemptyD2[rule_format]:
+ "(x2 \<noteq> [] \<longrightarrow> \<turnstile>(x1 # x2 \<oplus>\<^sub>\<and>\<^bsup>s\<^esup>) \<longrightarrow> \<turnstile> (x2 \<oplus>\<^sub>\<and>\<^bsup>(snd(attack x1),snd s)\<^esup>))"
+ by (metis (no_types, lifting) is_attack_tree.simps(2) list.exhaust list.simps(5))
+
+lemma att_andD2 : " \<turnstile>(x1 # x2 \<oplus>\<^sub>\<and>\<^bsup>s\<^esup>) \<Longrightarrow> \<turnstile> (x2 \<oplus>\<^sub>\<and>\<^bsup>(snd(attack x1),snd s)\<^esup>)"
+ by (metis (mono_tags, lifting) att_and_empty2 att_and_nonemptyD2 is_attack_tree.simps(2) list.simps(4) list.simps(5))
+
+lemma att_and_fst_lem[rule_format]:
+ "\<turnstile>(x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>) \<longrightarrow> xa \<in> fst (attack (x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>))
+ \<longrightarrow> xa \<in> fst (attack x1)"
+ by (induction x2a, (subst att_and, simp)+)
+
+lemma att_orD1: " \<turnstile>(x1 # x2 \<oplus>\<^sub>\<or>\<^bsup>x\<^esup>) \<Longrightarrow> \<turnstile> x1"
+ by (case_tac x2, (subst (asm) att_or, simp)+)
+
+lemma att_or_snd_hd: " \<turnstile>(a # list \<oplus>\<^sub>\<or>\<^bsup>(aa, b)\<^esup>) \<Longrightarrow> snd(attack a) \<subseteq> b"
+ by (case_tac list, (subst (asm) att_or, simp)+)
+
+lemma att_or_singleton[rule_format]:
+ " \<turnstile>([x1] \<oplus>\<^sub>\<or>\<^bsup>x\<^esup>) \<longrightarrow> \<turnstile>([] \<oplus>\<^sub>\<or>\<^bsup>(fst x - fst (attack x1), snd x)\<^esup>)"
+ by (subst att_or, simp, rule impI, rule att_or_empty_back, blast)
+
+lemma att_orD2[rule_format]:
+ " \<turnstile>(x1 # x2 \<oplus>\<^sub>\<or>\<^bsup>x\<^esup>) \<longrightarrow> \<turnstile> (x2 \<oplus>\<^sub>\<or>\<^bsup>(fst x - fst(attack x1), snd x)\<^esup>)"
+ by (case_tac x2, simp add: att_or_singleton, simp, subst att_or, simp)
+
+lemma att_or_snd_att[rule_format]: "\<forall> x. \<turnstile> (x2 \<oplus>\<^sub>\<or>\<^bsup>x\<^esup>) \<longrightarrow> (\<forall> a \<in> (set x2). snd(attack a) \<subseteq> snd x )"
+proof (induction x2)
+ case Nil
+ then show ?case by (simp add: att_or)
+next
+ case (Cons a x2)
+ then show ?case using att_orD2 att_or_snd_hd by fastforce
+qed
+
+lemma singleton_or_lem: " \<turnstile>([x1] \<oplus>\<^sub>\<or>\<^bsup>x\<^esup>) \<Longrightarrow> fst x \<subseteq> fst(attack x1)"
+ by (subst (asm) att_or, simp)+
+
+lemma or_att_fst_sup0[rule_format]: "x2 \<noteq> [] \<longrightarrow> (\<forall> x. (\<turnstile> ((x2 \<oplus>\<^sub>\<or>\<^bsup>x\<^esup>):: ('s :: state) attree)) \<longrightarrow>
+ ((\<Union> y::'s attree\<in> set x2. fst (attack y)) \<supseteq> fst(x))) "
+proof (induction x2)
+ case Nil
+ then show ?case by simp
+next
+ case (Cons a x2)
+ then show ?case using att_orD2 singleton_or_lem by fastforce
+qed
+
+lemma or_att_fst_sup:
+ assumes "(\<turnstile> ((x1 # x2 \<oplus>\<^sub>\<or>\<^bsup>x\<^esup>):: ('s :: state) attree))"
+ shows "((\<Union> y::'s attree\<in> set (x1 # x2). fst (attack y)) \<supseteq> fst(x))"
+ by (rule or_att_fst_sup0, simp, rule assms)
+
+text \<open>The lemma @{text \<open>att_elem_seq\<close>} is the main lemma for Correctness.
+  It shows that for a given attack tree @{text \<open>x1\<close>}, from each element of the start
+  state set @{text \<open>fst(attack x1)\<close>} we can reach, in zero or more steps, a state in which
+  the attack is successful, i.e., a state in the final attack state set @{text \<open>snd(attack x1)\<close>}.
+  This proof is an alternative to an earlier version of the proof with
+  @{text \<open>first_step\<close>} etc. that first mapped the attack onto a sequence of sets of states.\<close>
+lemma att_elem_seq[rule_format]: "\<turnstile> x1 \<longrightarrow> (\<forall> x \<in> fst(attack x1).
+ (\<exists> y. y \<in> snd(attack x1) \<and> x \<rightarrow>\<^sub>i* y))"
+ text \<open>First attack tree induction\<close>
+proof (induction x1)
+ case (BaseAttack x)
+ then show ?case
+ by (metis AT.att_base EF_step EF_step_star_rev attack.simps(1))
+next
+ case (AndAttack x1a x2)
+ then show ?case
+ apply (rule_tac x = x2 in spec)
+ apply (subgoal_tac "(\<forall> x1aa::'a attree.
+ x1aa \<in> set x1a \<longrightarrow>
+ \<turnstile>x1aa \<longrightarrow>
+ (\<forall>x::'a\<in>fst (attack x1aa). \<exists>y::'a. y \<in> snd (attack x1aa) \<and> x \<rightarrow>\<^sub>i* y))")
+ apply (rule mp)
+ prefer 2
+ apply (rotate_tac -1)
+ apply assumption
+    text \<open>Induction for @{text \<open>\<and>\<close>}: the manual instantiation seems tedious but is
+          necessary in the @{text \<open>\<and>\<close>} case to get the right induction hypothesis.\<close>
+ proof (rule_tac list = "x1a" in list.induct)
+ text \<open>The @{text \<open>\<and>\<close>} induction empty case\<close>
+ show "(\<forall>x1aa::'a attree.
+ x1aa \<in> set [] \<longrightarrow>
+ \<turnstile>x1aa \<longrightarrow> (\<forall>x::'a\<in>fst (attack x1aa). \<exists>y::'a. y \<in> snd (attack x1aa) \<and> x \<rightarrow>\<^sub>i* y)) \<longrightarrow>
+ (\<forall>x::'a set \<times> 'a set.
+ \<turnstile>([] \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>) \<longrightarrow>
+ (\<forall>xa::'a\<in>fst (attack ([] \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)). \<exists>y::'a. y \<in> snd (attack ([] \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) \<and> xa \<rightarrow>\<^sub>i* y))"
+ using att_and_empty state_transition_refl_def by fastforce
+ text \<open>The @{text \<open>\<and>\<close>} induction case nonempty\<close>
+ next show "\<And>(x1a::'a attree list) (x2::'a set \<times> 'a set) (x1::'a attree) (x2a::'a attree list).
+ (\<And>x1aa::'a attree.
+ (x1aa \<in> set x1a) \<Longrightarrow>
+ ((\<turnstile>x1aa) \<longrightarrow> (\<forall>x::'a\<in>fst (attack x1aa). \<exists>y::'a. y \<in> snd (attack x1aa) \<and> x \<rightarrow>\<^sub>i* y))) \<Longrightarrow>
+ \<forall>x1aa::'a attree.
+ (x1aa \<in> set x1a) \<longrightarrow>
+ (\<turnstile>x1aa) \<longrightarrow> ((\<forall>x::'a\<in>fst (attack x1aa). \<exists>y::'a. y \<in> snd (attack x1aa) \<and> x \<rightarrow>\<^sub>i* y)) \<Longrightarrow>
+ (\<forall>x1aa::'a attree.
+ (x1aa \<in> set x2a) \<longrightarrow>
+ (\<turnstile>x1aa) \<longrightarrow> (\<forall>x::'a\<in>fst (attack x1aa). \<exists>y::'a. y \<in> snd (attack x1aa) \<and> x \<rightarrow>\<^sub>i* y)) \<longrightarrow>
+ (\<forall>x::'a set \<times> 'a set.
+ (\<turnstile>(x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) \<longrightarrow>
+ ((\<forall>xa::'a\<in>fst (attack (x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)). \<exists>y::'a. y \<in> snd (attack (x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) \<and> xa \<rightarrow>\<^sub>i* y))) \<Longrightarrow>
+ ((\<forall>x1aa::'a attree.
+ (x1aa \<in> set (x1 # x2a)) \<longrightarrow>
+ (\<turnstile>x1aa) \<longrightarrow> ((\<forall>x::'a\<in>fst (attack x1aa). \<exists>y::'a. y \<in> snd (attack x1aa) \<and> x \<rightarrow>\<^sub>i* y))) \<longrightarrow>
+ (\<forall>x::'a set \<times> 'a set.
+ ( \<turnstile>(x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) \<longrightarrow>
+ (\<forall>xa::'a\<in>fst (attack (x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)).
+ (\<exists>y::'a. y \<in> snd (attack (x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) \<and> (xa \<rightarrow>\<^sub>i* y)))))"
+ apply (rule impI, rule allI, rule impI)
+       text \<open>Make the induction hypothesis explicitly available: this is necessary to provide
+             the grounds for specific instantiations in the step.\<close>
+ apply (subgoal_tac "(\<forall>x::'a set \<times> 'a set.
+ \<turnstile>(x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>) \<longrightarrow>
+ (\<forall>xa::'a\<in>fst (attack (x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)).
+ \<exists>y::'a. y \<in> snd (attack (x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) \<and> xa \<rightarrow>\<^sub>i* y))")
+ prefer 2
+ apply simp
+       text \<open>The following induction step for @{text \<open>\<and>\<close>} needs a number of manual instantiations
+             because the proof is not found automatically. In the subsequent case for @{text \<open>\<or>\<close>},
+             sledgehammer finds the proof.\<close>
+ proof -
+ show "\<And>(x1a::'a attree list) (x2::'a set \<times> 'a set) (x1::'a attree) (x2a::'a attree list) x::'a set \<times> 'a set.
+ \<forall>x1aa::'a attree.
+ x1aa \<in> set (x1 # x2a) \<longrightarrow>
+ \<turnstile>x1aa \<longrightarrow> (\<forall>x::'a\<in>fst (attack x1aa). \<exists>y::'a. y \<in> snd (attack x1aa) \<and> x \<rightarrow>\<^sub>i* y) \<Longrightarrow>
+ \<turnstile>(x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>) \<Longrightarrow>
+ \<forall>x::'a set \<times> 'a set.
+ \<turnstile>(x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>) \<longrightarrow>
+ (\<forall>xa::'a\<in>fst (attack (x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)). \<exists>y::'a. y \<in> snd (attack (x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) \<and> xa \<rightarrow>\<^sub>i* y) \<Longrightarrow>
+ \<forall>xa::'a\<in>fst (attack (x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)). \<exists>y::'a. y \<in> snd (attack (x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) \<and> xa \<rightarrow>\<^sub>i* y"
+ apply (rule ballI)
+ apply (rename_tac xa)
+ text \<open>Prepare the steps\<close>
+ apply (drule_tac x = "(snd(attack x1), snd x)" in spec)
+ apply (rotate_tac -1)
+ apply (erule impE)
+ apply (erule att_andD2)
+ text \<open>Premise for x1\<close>
+ apply (drule_tac x = x1 in spec)
+ apply (drule mp)
+ apply simp
+ apply (drule mp)
+ apply (erule att_andD1)
+ text \<open>Instantiate first step for xa\<close>
+ apply (rotate_tac -1)
+ apply (drule_tac x = xa in bspec)
+ apply (erule att_and_fst_lem, assumption)
+ apply (erule exE)
+ apply (erule conjE)
+      text \<open>Take this y and feed it in as the first state of the second part\<close>
+ apply (drule_tac x = y in bspec)
+ apply simp
+ apply (erule exE)
+ apply (erule conjE)
+      text \<open>Bind the first @{text \<open>xa \<rightarrow>\<^sub>i* y\<close>} and the second @{text \<open>y \<rightarrow>\<^sub>i* ya\<close>} together for the solution\<close>
+ apply (rule_tac x = ya in exI)
+ apply (rule conjI)
+ apply simp
+ by (simp add: state_transition_refl_def)
+ qed
+ qed auto
+next
+ case (OrAttack x1a x2)
+ then show ?case
+ proof (induction x1a arbitrary: x2)
+ case Nil
+ then show ?case
+ by (metis EF_lem2a EF_step_star_rev att_or_empty attack.simps(3) subsetD surjective_pairing)
+ next
+ case (Cons a x1a)
+ then show ?case
+ by (smt DiffI att_orD1 att_orD2 att_or_snd_att attack.simps(3) insert_iff list.set(2) prod.sel(1) snd_conv subset_iff)
+ qed
+qed
+
+
+lemma att_elem_seq0: "\<turnstile> x1 \<Longrightarrow> (\<forall> x \<in> fst(attack x1).
+ (\<exists> y. y \<in> snd(attack x1) \<and> x \<rightarrow>\<^sub>i* y))"
+ by (simp add: att_elem_seq)
+
+subsection \<open>Valid refinements\<close>
+definition valid_ref :: "[('s :: state) attree, 's attree] \<Rightarrow> bool" ("_ \<sqsubseteq>\<^sub>V _" 50)
+ where
+"A \<sqsubseteq>\<^sub>V A' \<equiv> ( (A \<sqsubseteq> A') \<and> \<turnstile> A')"
+
+definition ref_validity :: "[('s :: state) attree] \<Rightarrow> bool" ("\<turnstile>\<^sub>V _" 50)
+ where
+"\<turnstile>\<^sub>V A \<equiv> (\<exists> A'. (A \<sqsubseteq>\<^sub>V A'))"
+
+lemma ref_valI: " A \<sqsubseteq> A'\<Longrightarrow> \<turnstile> A' \<Longrightarrow> \<turnstile>\<^sub>V A"
+ using ref_validity_def valid_ref_def by blast
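+
+(* Added remark, hypothetical: combining the refinement sketched after refines_to with a
+   validity proof of the refined tree, ref_valI would yield
+   \<turnstile>\<^sub>V ([\<N>\<^bsub>(s0,s2)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(s0,s2)\<^esup>) from \<turnstile>([\<N>\<^bsub>(s0,s1)\<^esub>, \<N>\<^bsub>(s1,s2)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(s0,s2)\<^esup>). *)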
+
+section "Correctness and Completeness"
+text \<open>This section presents the main theorems of Correctness and Completeness
+ between AT and Kripke, essentially:
+
+@{text \<open>\<turnstile> (init K, p) \<equiv> K \<turnstile> EF p\<close>}.
+
+First, we prove a number of lemmas needed for both directions before we
+show the Correctness theorem followed by the Completeness theorem.
+\<close>
+subsection \<open>Lemma for Correctness and Completeness\<close>
+lemma nth_app_eq[rule_format]:
+ "\<forall> sl x. sl \<noteq> [] \<longrightarrow> sl ! (length sl - Suc (0)) = x
+ \<longrightarrow> (l @ sl) ! (length l + length sl - Suc (0)) = x"
+ by (induction l) auto
+
+lemma nth_app_eq1[rule_format]: "i < length sla \<Longrightarrow> (sla @ sl) ! i = sla ! i"
+ by (simp add: nth_append)
+
+lemma nth_app_eq1_rev: "i < length sla \<Longrightarrow> sla ! i = (sla @ sl) ! i"
+ by (simp add: nth_append)
+
+lemma nth_app_eq2[rule_format]: "\<forall> sl i. length sla \<le> i \<and> i < length (sla @ sl)
+ \<longrightarrow> (sla @ sl) ! i = sl ! (i - (length sla))"
+ by (simp add: nth_append)
+
+
+lemma tl_ne_ex[rule_format]: "l \<noteq> [] \<longrightarrow> (? x . l = x # (tl l))"
+ by (induction l, auto)
+
+
+lemma tl_nempty_lngth[rule_format]: "tl sl \<noteq> [] \<longrightarrow> 2 \<le> length(sl)"
+ using le_less by fastforce
+
+lemma list_app_one_length: "length l = n \<Longrightarrow> (l @ [s]) ! n = s"
+ by (erule subst, simp)
+
+lemma tl_lem1[rule_format]: "l \<noteq> [] \<longrightarrow> tl l = [] \<longrightarrow> length l = 1"
+ by (induction l, simp+)
+
+lemma nth_tl_length[rule_format]: "tl sl \<noteq> [] \<longrightarrow>
+ tl sl ! (length (tl sl) - Suc (0)) = sl ! (length sl - Suc (0))"
+ by (induction sl, simp+)
+
+lemma nth_tl_length1[rule_format]: "tl sl \<noteq> [] \<longrightarrow>
+ tl sl ! n = sl ! (n + 1)"
+ by (induction sl, simp+)
+
+lemma ineq1: "i < length sla - n \<Longrightarrow>
+ (0) \<le> n \<Longrightarrow> i < length sla"
+by simp
+
+lemma ineq2[rule_format]: "length sla \<le> i \<longrightarrow> i + (1) - length sla = i - length sla + 1"
+by arith
+
+lemma ineq3: "tl sl \<noteq> [] \<Longrightarrow> length sla \<le> i \<Longrightarrow> i < length (sla @ tl sl) - (1)
+ \<Longrightarrow> i - length sla + (1) < length sl - (1)"
+by simp
+
+lemma tl_eq1[rule_format]: "sl \<noteq> [] \<longrightarrow> tl sl ! (0) = sl ! Suc (0)"
+ by (induction sl, simp+)
+
+lemma tl_eq2[rule_format]: "tl sl = [] \<longrightarrow> sl ! (0) = sl ! (length sl - (1))"
+ by (induction sl, simp+)
+
+lemma tl_eq3[rule_format]: "tl sl \<noteq> [] \<longrightarrow>
+ tl sl ! (length sl - Suc (Suc (0))) = sl ! (length sl - Suc (0))"
+ by (induction sl, simp+)
+
+lemma nth_app_eq3: assumes "tl sl \<noteq> []"
+ shows "(sla @ tl sl) ! (length (sla @ tl sl) - (1)) = sl ! (length sl - (1))"
+ using assms nth_app_eq nth_tl_length by fastforce
+
+lemma not_empty_ex: "A \<noteq> {} \<Longrightarrow> ? x. x \<in> A"
+by force
+
+lemma fst_att_eq: "(fst x # sl) ! (0) = fst (attack (al \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>))"
+by simp
+
+lemma list_eq1[rule_format]: "sl \<noteq> [] \<longrightarrow>
+ (fst x # sl) ! (length (fst x # sl) - (1)) = sl ! (length sl - (1))"
+ by (induction sl, auto)
+
+lemma attack_eq1: "snd (attack (x1 # x2a \<oplus>\<^sub>\<and>\<^bsup>x\<^esup>)) = snd (attack (x2a \<oplus>\<^sub>\<and>\<^bsup>(snd (attack x1), snd x)\<^esup>))"
+by simp
+
+lemma fst_lem1[rule_format]: "\<forall> (a:: 's set) b (c :: 's set) d. (a, c) = (b, d) \<longrightarrow> a = b"
+by auto
+
+lemma fst_eq1: "(sla ! (0), y) = attack x1 \<Longrightarrow>
+ sla ! (0) = fst (attack x1)"
+ by (rule_tac c = y and d = "snd(attack x1)" in fst_lem1, simp)
+
+lemma base_att_lem1: " y0 \<subseteq> y1 \<Longrightarrow> \<turnstile> \<N>\<^bsub>(y1, y)\<^esub> \<Longrightarrow>\<turnstile> \<N>\<^bsub>(y0, y)\<^esub>"
+ by (simp add: att_base, blast)
+
+lemma ref_pres_att: "A \<sqsubseteq> A' \<Longrightarrow> attack A = attack A'"
+proof (erule refines_to.induct)
+ show "\<And>(A::'a attree) (l::'a attree list) (si'::'a set) (si''::'a set) (l''::'a attree list) (si::'a set)
+ (si'''::'a set) (A'::'a attree) (l'::'a attree list) A''::'a attree.
+ A = (l @ [\<N>\<^bsub>(si', si'')\<^esub>] @ l'' \<oplus>\<^sub>\<and>\<^bsup>(si, si''')\<^esup>) \<Longrightarrow>
+ A' = (l' \<oplus>\<^sub>\<and>\<^bsup>(si', si'')\<^esup>) \<Longrightarrow> A'' = (l @ l' @ l'' \<oplus>\<^sub>\<and>\<^bsup>(si, si''')\<^esup>) \<Longrightarrow> attack A = attack A''"
+ by simp
+next show "\<And>(as::'a attree list) (A::'a attree) (s::'a set \<times> 'a set).
+ as \<noteq> [] \<Longrightarrow>
+ (\<forall>A'::'a attree\<in> (set as). ((A \<sqsubseteq> A') \<and> (attack A = attack A')) \<and> attack A = s) \<Longrightarrow>
+ attack A = attack (as \<oplus>\<^sub>\<or>\<^bsup>s\<^esup>)"
+ using last_in_set by auto
+next show "\<And>(A::'a attree) (A'::'a attree) A''::'a attree.
+ A \<sqsubseteq> A' \<Longrightarrow> attack A = attack A' \<Longrightarrow> A' \<sqsubseteq> A'' \<Longrightarrow> attack A' = attack A'' \<Longrightarrow> attack A = attack A''"
+ by simp
+next show "\<And>A::'a attree. attack A = attack A" by (rule refl)
+qed
+
+lemma base_subset:
+ assumes "xa \<subseteq> xc"
+ shows "\<turnstile>\<N>\<^bsub>(x, xa)\<^esub> \<Longrightarrow> \<turnstile>\<N>\<^bsub>(x, xc)\<^esub>"
+proof (simp add: att_base)
+ show " \<forall>x::'a\<in>x. \<exists>xa::'a\<in>xa. x \<rightarrow>\<^sub>i xa \<Longrightarrow> \<forall>x::'a\<in>x. \<exists>xa::'a\<in>xc. x \<rightarrow>\<^sub>i xa"
+ by (meson assms in_mono)
+qed
+
+subsection "Correctness Theorem"
+text \<open>Proof with induction over the definition of EF using the main
+lemma @{text \<open>att_elem_seq0\<close>}.
+
+There is also a second version of Correctness for valid refinements.\<close>
+
+theorem AT_EF: assumes " \<turnstile> (A :: ('s :: state) attree)"
+ and "attack A = (I,s)"
+ shows "Kripke {s :: ('s :: state). \<exists> i \<in> I. (i \<rightarrow>\<^sub>i* s)} (I :: ('s :: state)set) \<turnstile> EF s"
+proof (simp add:check_def)
+ show "I \<subseteq> {sa::('s :: state). (\<exists>i::'s\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> EF s}"
+ proof (rule subsetI, rule CollectI, rule conjI, simp add: state_transition_refl_def)
+ show "\<And>x::'s. x \<in> I \<Longrightarrow> \<exists>i::'s\<in>I. (i, x) \<in> {(x::'s, y::'s). x \<rightarrow>\<^sub>i y}\<^sup>*"
+ by (rule_tac x = x in bexI, simp)
+next show "\<And>x::'s. x \<in> I \<Longrightarrow> x \<in> EF s" using assms
+ proof -
+ have a: "\<forall> x \<in> I. \<exists> y \<in> s. x \<rightarrow>\<^sub>i* y" using assms
+ proof -
+ have "\<forall>x::'s\<in>fst (attack A). \<exists>y::'s. y \<in> snd (attack A) \<and> x \<rightarrow>\<^sub>i* y"
+ by (rule att_elem_seq0, rule assms)
+ thus " \<forall>x::'s\<in>I. \<exists>y::'s\<in>s. x \<rightarrow>\<^sub>i* y" using assms
+ by force
+ qed
+ thus "\<And>x::'s. x \<in> I \<Longrightarrow> x \<in> EF s"
+ proof -
+ fix x
+ assume b: "x \<in> I"
+ have "\<exists>y::'s\<in>s::('s :: state) set. x \<rightarrow>\<^sub>i* y"
+ by (rule_tac x = x and A = I in bspec, rule a, rule b)
+ from this obtain y where "y \<in> s" and "x \<rightarrow>\<^sub>i* y" by (erule bexE)
+ thus "x \<in> EF s"
+ by (erule_tac f = s in EF_step_star)
+ qed
+ qed
+ qed
+qed
+
+theorem ATV_EF: "\<lbrakk> \<turnstile>\<^sub>V A; (I,s) = attack A \<rbrakk> \<Longrightarrow>
+ (Kripke {s. \<exists> i \<in> I. (i \<rightarrow>\<^sub>i* s) } I \<turnstile> EF s)"
+ by (metis (full_types) AT_EF ref_pres_att ref_validity_def valid_ref_def)
+
+subsection "Completeness Theorem"
+text \<open>This section contains the completeness direction, informally:
+
+@{text \<open>\<turnstile> EF s \<Longrightarrow> \<exists> A. \<turnstile> A \<and> attack A = (I,s)\<close>}.
+
+The main theorem is presented last since its
+proof just summarises a number of main lemmas @{text \<open>Compl_step1, Compl_step2,
+Compl_step3, Compl_step4\<close>} which are presented first together with other
+auxiliary lemmas.\<close>
+
+subsubsection "Lemma @{text \<open>Compl_step1\<close>}"
+lemma Compl_step1:
+"Kripke {s :: ('s :: state). \<exists> i \<in> I. (i \<rightarrow>\<^sub>i* s)} I \<turnstile> EF s
+\<Longrightarrow> \<forall> x \<in> I. \<exists> y \<in> s. x \<rightarrow>\<^sub>i* y"
+ by (simp add: EF_step_star_rev valEF_E)
+
+subsubsection "Lemma @{text \<open>Compl_step2\<close>}"
+text \<open>First, we prove some auxiliary lemmas.\<close>
+lemma rtrancl_imp_singleton_seq2: "x \<rightarrow>\<^sub>i* y \<Longrightarrow>
+ x = y \<or> (\<exists> s. s \<noteq> [] \<and> (tl s \<noteq> []) \<and> s ! 0 = x \<and> s ! (length s - 1) = y \<and>
+ (\<forall> i < (length s - 1). (s ! i) \<rightarrow>\<^sub>i (s ! (Suc i))))"
+
+ unfolding state_transition_refl_def
+proof (induction rule: rtrancl_induct)
+ case base
+ then show ?case
+ by simp
+next
+ case (step y z)
+ show ?case
+ using step.IH
+ proof (elim disjE exE conjE)
+ assume "x=y"
+ with step.hyps show ?case
+ by (force intro!: exI [where x="[y,z]"])
+ next
+ show "\<And>s. \<lbrakk>s \<noteq> []; tl s \<noteq> []; s ! 0 = x;
+ s ! (length s - 1) = y;
+ \<forall>i<length s - 1.
+ s ! i \<rightarrow>\<^sub>i s ! Suc i\<rbrakk>
+ \<Longrightarrow> x = z \<or>
+ (\<exists>s. s \<noteq> [] \<and>
+ tl s \<noteq> [] \<and> s ! 0 = x \<and>
+ s ! (length s - 1) = z \<and>
+ (\<forall>i<length s - 1. s ! i \<rightarrow>\<^sub>i s ! Suc i))"
+ apply (rule disjI2)
+ apply (rule_tac x="s @ [z]" in exI)
+ apply (auto simp: nth_append)
+ by (metis One_nat_def Suc_lessI diff_Suc_1 mem_Collect_eq old.prod.case step.hyps(2))
+ qed
+qed
+
+lemma tl_nempty_length[rule_format]: "s \<noteq> [] \<longrightarrow> tl s \<noteq> [] \<longrightarrow> 0 < length s - 1"
+ by (induction s, simp+)
+
+lemma tl_nempty_length2[rule_format]: "s \<noteq> [] \<longrightarrow> tl s \<noteq> [] \<longrightarrow> Suc 0 < length s"
+ by (induction s, simp+)
+
+lemma length_last[rule_format]: "(l @ [x]) ! (length (l @ [x]) - 1) = x"
+ by (induction l, simp+)
+
+lemma Compl_step2: "\<forall> x \<in> I. \<exists> y \<in> s. x \<rightarrow>\<^sub>i* y \<Longrightarrow>
+ ( \<forall> x \<in> I. x \<in> s \<or> (\<exists> (sl :: ((('s :: state) set)list)).
+ (sl \<noteq> []) \<and> (tl sl \<noteq> []) \<and>
+ (sl ! 0, sl ! (length sl - 1)) = ({x},s) \<and>
+ (\<forall> i < (length sl - 1). \<turnstile> \<N>\<^bsub>(sl ! i,sl ! (i+1) )\<^esub>
+ )))"
+proof (rule ballI, drule_tac x = x in bspec, assumption, erule bexE)
+ fix x y
+ assume a: "x \<in> I" and b: "y \<in> s" and c: "x \<rightarrow>\<^sub>i* y"
+ show "x \<in> s \<or>
+ (\<exists>sl::'s set list.
+ sl \<noteq> [] \<and>
+ tl sl \<noteq> [] \<and>
+ (sl ! (0), sl ! (length sl - (1))) = ({x}, s) \<and>
+ (\<forall>i<length sl - (1). \<turnstile>\<N>\<^bsub>(sl ! i, sl ! (i + (1)))\<^esub>))"
+ proof -
+ have d : "x = y \<or>
+ (\<exists>s'. s' \<noteq> [] \<and>
+ tl s' \<noteq> [] \<and>
+ s' ! (0) = x \<and>
+ s' ! (length s' - (1)) = y \<and> (\<forall>i<length s' - (1). s' ! i \<rightarrow>\<^sub>i s' ! Suc i))"
+ using c rtrancl_imp_singleton_seq2 by blast
+ thus "x \<in> s \<or>
+ (\<exists>sl::'s set list.
+ sl \<noteq> [] \<and>
+ tl sl \<noteq> [] \<and>
+ (sl ! (0), sl ! (length sl - (1))) = ({x}, s) \<and>
+ (\<forall>i<length sl - (1). \<turnstile>\<N>\<^bsub>(sl ! i, sl ! (i + (1)))\<^esub>))"
+ apply (rule disjE)
+ using b apply blast
+ apply (rule disjI2, elim conjE exE)
+ apply (rule_tac x = "[{s' ! j}. j \<leftarrow> [0..<(length s' - 1)]] @ [s]" in exI)
+ apply (auto simp: nth_append)
+ apply (metis AT.att_base Suc_lessD fst_conv prod.sel(2) singletonD singletonI)
+ apply (metis AT.att_base Suc_lessI b fst_conv prod.sel(2) singletonD)
+ using tl_nempty_length2 by blast
+ qed
+qed
+
+subsubsection "Lemma @{text \<open>Compl_step3\<close>}"
+text \<open>First, we need a few lemmas.\<close>
+lemma map_hd_lem[rule_format] : "n > 0 \<longrightarrow> (f 0 # map (\<lambda>i. f i) [1..<n]) = map (\<lambda>i. f i) [0..<n]"
+ by (simp add : hd_map upt_rec)
+
+lemma map_Suc_lem[rule_format] : "n > 0 \<longrightarrow> map (\<lambda> i:: nat. f i)[1..<n] =
+ map (\<lambda> i:: nat. f(Suc i))[0..<(n - 1)]"
+proof -
+ have "(f 0 # map (\<lambda>n. f (Suc n)) [0..<n - 1] = f 0 # map f [1..<n]) = (map (\<lambda>n. f (Suc n)) [0..<n - 1] = map f [1..<n])"
+ by blast
+ then show ?thesis
+ by (metis Suc_pred' map_hd_lem map_upt_Suc)
+qed
+
+lemma forall_ex_fun: "finite S \<Longrightarrow> (\<forall> x \<in> S. (\<exists> y. P y x)) \<longrightarrow> (\<exists> f. \<forall> x \<in> S. P (f x) x)"
+proof (induction rule: finite.induct)
+ case emptyI
+ then show ?case
+ by simp
+next
+ case (insertI F x)
+ then show ?case
+ proof (clarify)
+ assume d: "(\<forall>x::'a\<in>insert x F. \<exists>y::'b. P y x)"
+ have "(\<forall>x::'a\<in>F. \<exists>y::'b. P y x)"
+ using d by blast
+ then obtain f where f: "\<forall>x::'a\<in>F. P (f x) x"
+ using insertI.IH by blast
+ from d obtain y where "P y x" by blast
+ thus "(\<exists>f::'a \<Rightarrow> 'b. \<forall>x::'a\<in>insert x F. P (f x) x)" using f
+ by (rule_tac x = "\<lambda> z. if z = x then y else f z" in exI, simp)
+ qed
+qed
+
+primrec nodup :: "['a, 'a list] \<Rightarrow> bool"
+ where
+ nodup_nil: "nodup a [] = True" |
+ nodup_step: "nodup a (x # ls) = (if x = a then (a \<notin> (set ls)) else nodup a ls)"
+
+definition nodup_all:: "'a list \<Rightarrow> bool"
+ where
+ "nodup_all l \<equiv> \<forall> x \<in> set l. nodup x l"
+
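+(* Added sanity check, not part of the original theory: a list of three distinct
+   naturals satisfies nodup_all. *)
+lemma "nodup_all [0::nat, 1, 2]"
+  by (simp add: nodup_all_def)
+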
+lemma nodup_all_lem[rule_format]:
+ "nodup_all (x1 # a # l) \<longrightarrow> (insert x1 (insert a (set l)) - {x1}) = insert a (set l)"
+ by (induction l, (simp add: nodup_all_def)+)
+
+lemma nodup_all_tl[rule_format]: "nodup_all (x # l) \<longrightarrow> nodup_all l"
+ by (induction l, (simp add: nodup_all_def)+)
+
+lemma finite_nodup: "finite I \<Longrightarrow> \<exists> l. set l = I \<and> nodup_all l"
+proof (induction rule: finite.induct)
+ case emptyI
+ then show ?case
+ by (simp add: nodup_all_def)
+next
+ case (insertI A a)
+ then show ?case
+ by (metis insertE insert_absorb list.simps(15) nodup_all_def nodup_step)
+qed
+
+lemma Compl_step3: "I \<noteq> {} \<Longrightarrow> finite I \<Longrightarrow>
+ ( \<forall> x \<in> I. x \<in> s \<or> (\<exists> (sl :: ((('s :: state) set)list)).
+ (sl \<noteq> []) \<and> (tl sl \<noteq> []) \<and>
+ (sl ! 0, sl ! (length sl - 1)) = ({x},s) \<and>
+ (\<forall> i < (length sl - 1). \<turnstile> \<N>\<^bsub>(sl ! i,sl ! (i+1) )\<^esub>
+ )) \<Longrightarrow>
+ (\<exists> lI. set lI = {x :: 's :: state. x \<in> I \<and> x \<notin> s} \<and> (\<exists> Sj :: ((('s :: state) set)list) list.
+ length Sj = length lI \<and> nodup_all lI \<and>
+ (\<forall> j < length Sj. (((Sj ! j) \<noteq> []) \<and> (tl (Sj ! j) \<noteq> []) \<and>
+ ((Sj ! j) ! 0, (Sj ! j) ! (length (Sj ! j) - 1)) = ({lI ! j},s) \<and>
+ (\<forall> i < (length (Sj ! j) - 1). \<turnstile> \<N>\<^bsub>((Sj ! j) ! i, (Sj ! j) ! (i+1) )\<^esub>
+ ))))))"
+proof -
+ assume i: "I \<noteq> {}" and f: "finite I" and
+ fa: "\<forall>x::'s\<in>I.
+ x \<in> s \<or>
+ (\<exists>sl::'s set list.
+ sl \<noteq> [] \<and>
+ tl sl \<noteq> [] \<and>
+ (sl ! (0), sl ! (length sl - (1))) = ({x}, s) \<and>
+ (\<forall>i<length sl - (1). \<turnstile>\<N>\<^bsub>(sl ! i, sl ! (i + (1)))\<^esub>))"
+ have a: "\<exists> lI. set lI = {x::'s \<in> I. x \<notin> s} \<and> nodup_all lI"
+ by (simp add: f finite_nodup)
+ from this obtain lI where b: "set lI = {x::'s \<in> I. x \<notin> s} \<and> nodup_all lI"
+ by (erule exE)
+ thus "\<exists>lI::'s list.
+ set lI = {x::'s \<in> I. x \<notin> s} \<and>
+ (\<exists>Sj::'s set list list.
+ length Sj = length lI \<and>
+ nodup_all lI \<and>
+ (\<forall>j<length Sj.
+ Sj ! j \<noteq> [] \<and>
+ tl (Sj ! j) \<noteq> [] \<and>
+ (Sj ! j ! (0), Sj ! j ! (length (Sj ! j) - (1))) = ({lI ! j}, s) \<and>
+ (\<forall>i<length (Sj ! j) - (1). \<turnstile>\<N>\<^bsub>(Sj ! j ! i, Sj ! j ! (i + (1)))\<^esub>)))"
+ apply (rule_tac x = lI in exI)
+ apply (rule conjI)
+ apply (erule conjE, assumption)
+ proof -
+ have c: "\<forall> x \<in> set(lI). (\<exists> sl::'s set list.
+ sl \<noteq> [] \<and>
+ tl sl \<noteq> [] \<and>
+ (sl ! (0), sl ! (length sl - (1))) = ({x}, s) \<and>
+ (\<forall>i<length sl - (1). \<turnstile>\<N>\<^bsub>(sl ! i, sl ! (i + (1)))\<^esub>))"
+ using b fa by fastforce
+ thus "\<exists>Sj::'s set list list.
+ length Sj = length lI \<and>
+ nodup_all lI \<and>
+ (\<forall>j<length Sj.
+ Sj ! j \<noteq> [] \<and>
+ tl (Sj ! j) \<noteq> [] \<and>
+ (Sj ! j ! (0), Sj ! j ! (length (Sj ! j) - (1))) = ({lI ! j}, s) \<and>
+ (\<forall>i<length (Sj ! j) - (1). \<turnstile>\<N>\<^bsub>(Sj ! j ! i, Sj ! j ! (i + (1)))\<^esub>))"
+ apply (subgoal_tac "finite (set lI)")
+ apply (rotate_tac -1)
+ apply (drule forall_ex_fun)
+ apply (drule mp)
+ apply assumption
+ apply (erule exE)
+ apply (rule_tac x = "[f (lI ! j). j \<leftarrow> [0..<(length lI)]]" in exI)
+ apply simp
+ apply (insert b)
+ apply (erule conjE, assumption)
+ apply (rule_tac A = "set lI" and B = I in finite_subset)
+ apply blast
+ by (rule f)
+ qed
+qed
+
+subsubsection \<open>Lemma @{text \<open>Compl_step4\<close>}\<close>
+text \<open>Again, we need some additional lemmas first.\<close>
+lemma list_one_tl_empty[rule_format]: "length l = Suc (0 :: nat) \<longrightarrow> tl l = []"
+ by (induction l, simp+)
+
+lemma list_two_tl_not_empty[rule_format]: "\<forall> list. length l = Suc (Suc (length list)) \<longrightarrow> tl l \<noteq> []"
+ by (induction l, simp+, force)
+
+lemma or_empty: "\<turnstile>([] \<oplus>\<^sub>\<or>\<^bsup>({}, s)\<^esup>)" by (simp add: att_or)
+
+text \<open>Note that this does not hold for arbitrary @{text \<open>l\<close>}, i.e., @{text \<open>\<turnstile> l \<oplus>\<^sub>\<or>\<^bsup>({}, s)\<^esup>\<close>} is not a theorem.\<close>
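+(* Added counterexample sketch, hypothetical: for l = [\<N>\<^bsub>(UNIV, {})\<^esub>] the singleton case of
+   att_or requires \<turnstile>\<N>\<^bsub>(UNIV, {})\<^esub>, i.e. a transition into the empty set from every state,
+   which is impossible; hence \<turnstile>(l \<oplus>\<^sub>\<or>\<^bsup>({}, s)\<^esup>) fails for this l. *)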
+lemma list_or_upt[rule_format]:
+ "\<forall> l . lI \<noteq> [] \<longrightarrow> length l = length lI \<longrightarrow> nodup_all lI \<longrightarrow>
+ (\<forall> i < length lI. (\<turnstile> (l ! i)) \<and> (attack (l ! i) = ({lI ! i}, s)))
+ \<longrightarrow> ( \<turnstile> (l \<oplus>\<^sub>\<or>\<^bsup>(set lI, s)\<^esup>))"
+proof (induction lI, simp, clarify)
+ fix x1 x2 l
+ show "\<forall>l::'a attree list.
+ x2 \<noteq> [] \<longrightarrow>
+ length l = length x2 \<longrightarrow>
+ nodup_all x2 \<longrightarrow>
+ (\<forall>i<length x2. \<turnstile>(l ! i) \<and> attack (l ! i) = ({x2 ! i}, s)) \<longrightarrow> \<turnstile>(l \<oplus>\<^sub>\<or>\<^bsup>(set x2, s)\<^esup>) \<Longrightarrow>
+ x1 # x2 \<noteq> [] \<Longrightarrow>
+ length l = length (x1 # x2) \<Longrightarrow>
+ nodup_all (x1 # x2) \<Longrightarrow>
+ \<forall>i<length (x1 # x2). \<turnstile>(l ! i) \<and> attack (l ! i) = ({(x1 # x2) ! i}, s) \<Longrightarrow> \<turnstile>(l \<oplus>\<^sub>\<or>\<^bsup>(set (x1 # x2), s)\<^esup>)"
+ apply (case_tac x2, simp, subst att_or, case_tac l, simp+)
+ text \<open>Case @{text \<open>\<forall>i<Suc (Suc (length list)). \<turnstile>l ! i \<and> attack (l ! i) = ({(x1 # a # list) ! i}, s) \<Longrightarrow>
+ x2 = a # list \<Longrightarrow> \<turnstile>l \<oplus>\<^sub>\<or>\<^bsup>(insert x1 (insert a (set list)), s)\<^esup>\<close>}\<close>
+ apply (subst att_or, case_tac l, simp, clarify, simp, rename_tac lista, case_tac lista, simp+)
+ text \<open>Remaining conjunct of three conditions: @{text \<open> \<turnstile>aa \<and>
+ fst (attack aa) \<subseteq> insert x1 (insert a (set list)) \<and>
+ snd (attack aa) \<subseteq> s \<and> \<turnstile>ab # listb \<oplus>\<^sub>\<or>\<^bsup>(insert x1 (insert a (set list)) - fst (attack aa), s)\<^esup>\<close>}\<close>
+ apply (rule conjI)
+ text \<open>First condition @{text \<open> \<turnstile>aa\<close>}\<close>
+ apply (drule_tac x = 0 in spec, drule mp, simp, (erule conjE)+, simp, rule conjI)
+ text \<open>Second condition @{text \<open>fst (attack aa) \<subseteq> insert x1 (insert a (set list))\<close>}\<close>
+ apply (drule_tac x = 0 in spec, drule mp, simp, erule conjE, simp)
+ text \<open>The remaining conditions
+
+ @{text \<open>snd (attack aa) \<subseteq> s \<and> \<turnstile>ab # listb \<oplus>\<^sub>\<or>\<^bsup>(insert x1 (insert a (set list)) - fst (attack aa), s)\<^esup>\<close>}
+
+ are solved automatically!\<close>
+ by (metis Suc_mono add.right_neutral add_Suc_right list.size(4) nodup_all_lem nodup_all_tl nth_Cons_0 nth_Cons_Suc order_refl prod.sel(1) prod.sel(2) zero_less_Suc)
+qed
+
+lemma app_tl_empty_hd[rule_format]: "tl (l @ [a]) = [] \<longrightarrow> hd (l @ [a]) = a"
+ by (induction l) auto
+
+lemma tl_hd_empty[rule_format]: "tl (l @ [a]) = [] \<longrightarrow> l = []"
+ by (induction l) auto
+
+lemma tl_hd_not_empty[rule_format]: "tl (l @ [a]) \<noteq> [] \<longrightarrow> l \<noteq> []"
+ by (induction l) auto
+
+lemma app_tl_empty_length[rule_format]: "tl (map f [0..<length l] @ [a]) = []
+ \<Longrightarrow> l = []"
+ by (drule tl_hd_empty, simp)
+
+lemma not_empty_hd_fst[rule_format]: "l \<noteq> [] \<longrightarrow> hd(l @ [a]) = l ! 0"
+ by (induction l) auto
+
+lemma app_tl_hd_list[rule_format]: "tl (map f [0..<length l] @ [a]) \<noteq> []
+ \<Longrightarrow> hd(map f [0..<length l] @ [a]) = (map f [0..<length l]) ! 0"
+ by (drule tl_hd_not_empty, erule not_empty_hd_fst)
+
+lemma tl_app_in[rule_format]: "l \<noteq> [] \<longrightarrow>
+ map f [0..<(length l - (Suc 0:: nat))] @ [f(length l - (Suc 0 :: nat))] = map f [0..<length l]"
+ by (induction l) auto
+
+lemma map_fst[rule_format]: "n > 0 \<longrightarrow> map f [0..<n] = f 0 # (map f [1..<n])"
+ by (induction n) auto
+
+lemma step_lem[rule_format]: "l \<noteq> [] \<Longrightarrow>
+ tl (map (\<lambda> i. f((x1 # a # l) ! i)((a # l) ! i)) [0..<length l]) =
+ map (\<lambda>i. f((a # l) ! i)(l ! i)) [0..<length l - (1)]"
+proof (simp)
+ assume l: "l \<noteq> []"
+ have a: "map (\<lambda>i. f ((x1 # a # l) ! i) ((a # l) ! i)) [0..<length l] =
+ (f(x1)(a) # (map (\<lambda>i. f ((a # l) ! i) (l ! i)) [0..<(length l - 1)]))"
+ proof -
+ have b : "map (\<lambda>i. f ((x1 # a # l) ! i) ((a # l) ! i)) [0..<length l] =
+ f ((x1 # a # l) ! 0) ((a # l) ! 0) #
+ (map (\<lambda>i. f ((x1 # a # l) ! i) ((a # l) ! i)) [1..<length l])"
+ by (rule map_fst, simp, rule l)
+ have c: "map (\<lambda>i. f ((x1 # a # l) ! i) ((a # l) ! i)) [Suc (0)..<length l] =
+ map (\<lambda>i. f ((x1 # a # l) ! Suc i) ((a # l) ! Suc i)) [(0)..<(length l - 1)]"
+ by (subgoal_tac "[Suc (0)..<length l] = map Suc [0..<(length l - 1)]",
+ simp, simp add: map_Suc_upt l)
+ thus "map (\<lambda>i. f ((x1 # a # l) ! i) ((a # l) ! i)) [0..<length l] =
+ f x1 a # map (\<lambda>i. f ((a # l) ! i) (l ! i)) [0..<length l - (1)]"
+ by (simp add: b c)
+ qed
+ thus "l \<noteq> [] \<Longrightarrow>
+ tl (map (\<lambda>i. f ((x1 # a # l) ! i) ((a # l) ! i)) [0..<length l]) =
+ map (\<lambda>i. f ((a # l) ! i) (l ! i)) [0..<length l - Suc (0)]"
+ by (subst a, simp)
+ qed
+
+lemma step_lem2a[rule_format]: "0 < length list \<Longrightarrow> map (\<lambda>i. \<N>\<^bsub>((x1 # a # list) ! i, (a # list) ! i)\<^esub>)
+ [0..<length list] @
+ [\<N>\<^bsub>((x1 # a # list) ! length list, (a # list) ! length list)\<^esub>] =
+ aa # listb \<longrightarrow> \<N>\<^bsub>((x1, a))\<^esub> = aa"
+ by (subst map_fst, assumption, simp)
+
+lemma step_lem2b[rule_format]: "0 = length list \<Longrightarrow> map (\<lambda>i. \<N>\<^bsub>((x1 # a # list) ! i, (a # list) ! i)\<^esub>)
+ [0..<length list] @
+ [\<N>\<^bsub>((x1 # a # list) ! length list, (a # list) ! length list)\<^esub>] =
+ aa # listb \<longrightarrow> \<N>\<^bsub>((x1, a))\<^esub> = aa"
+by simp
+
+lemma step_lem2: "map (\<lambda>i. \<N>\<^bsub>((x1 # a # list) ! i, (a # list) ! i)\<^esub>)
+ [0..<length list] @
+ [\<N>\<^bsub>((x1 # a # list) ! length list, (a # list) ! length list)\<^esub>] =
+ aa # listb \<Longrightarrow> \<N>\<^bsub>((x1, a))\<^esub> = aa"
+proof (case_tac "length list", rule step_lem2b, erule sym, assumption)
+ show "\<And>nat.
+ map (\<lambda>i. \<N>\<^bsub>((x1 # a # list) ! i, (a # list) ! i)\<^esub>) [0..<length list] @
+ [\<N>\<^bsub>((x1 # a # list) ! length list, (a # list) ! length list)\<^esub>] =
+ aa # listb \<Longrightarrow>
+ length list = Suc nat \<Longrightarrow> \<N>\<^bsub>(x1, a)\<^esub> = aa"
+ by (rule_tac list = list in step_lem2a, simp)
+qed
+
+lemma base_list_and[rule_format]: "Sji \<noteq> [] \<longrightarrow> tl Sji \<noteq> [] \<longrightarrow>
+ (\<forall> li. Sji ! (0) = li \<longrightarrow>
+ Sji! (length (Sji) - 1) = s \<longrightarrow>
+ (\<forall>i<length (Sji) - 1. \<turnstile>\<N>\<^bsub>(Sji ! i, Sji ! Suc i)\<^esub>) \<longrightarrow>
+ \<turnstile> (map (\<lambda>i. \<N>\<^bsub>(Sji ! i, Sji ! Suc i)\<^esub>)
+ [0..<length (Sji) - Suc (0)] \<oplus>\<^sub>\<and>\<^bsup>(li, s)\<^esup>))"
+proof (induction Sji)
+ case Nil
+ then show ?case by simp
+next
+ case (Cons a Sji)
+ then show ?case
+ apply (subst att_and, case_tac Sji, simp, simp)
+ apply (rule impI)+
+ proof -
+ fix aa list
+ show "list \<noteq> [] \<longrightarrow>
+ list ! (length list - Suc 0) = s \<longrightarrow>
+ (\<forall>i<length list. \<turnstile>\<N>\<^bsub>((aa # list) ! i, list ! i)\<^esub>) \<longrightarrow>
+ \<turnstile>(map (\<lambda>i. \<N>\<^bsub>((aa # list) ! i, list ! i)\<^esub>) [0..<length list] \<oplus>\<^sub>\<and>\<^bsup>(aa, s)\<^esup>) \<Longrightarrow>
+ Sji = aa # list \<Longrightarrow>
+ (aa # list) ! length list = s \<Longrightarrow>
+ \<forall>i<Suc (length list). \<turnstile>\<N>\<^bsub>((a # aa # list) ! i, (aa # list) ! i)\<^esub> \<Longrightarrow>
+ case map (\<lambda>i. \<N>\<^bsub>((a # aa # list) ! i, (aa # list) ! i)\<^esub>) [0..<length list] @
+ [\<N>\<^bsub>((a # aa # list) ! length list, s)\<^esub>] of
+ [] \<Rightarrow> fst (a, s) \<subseteq> snd (a, s) | [aa] \<Rightarrow> \<turnstile>aa \<and> attack aa = (a, s)
+ | aa # ab # list \<Rightarrow>
+ \<turnstile>aa \<and> fst (attack aa) = fst (a, s) \<and> \<turnstile>(ab # list \<oplus>\<^sub>\<and>\<^bsup>(snd (attack aa), snd (a, s))\<^esup>)"
+ proof (case_tac "map (\<lambda>i. \<N>\<^bsub>((a # aa # list) ! i, (aa # list) ! i)\<^esub>) [0..<length list] @
+ [\<N>\<^bsub>((a # aa # list) ! length list, s)\<^esub>]", simp, clarify, simp)
+ fix ab lista
+ have *: "tl (map (\<lambda>i. \<N>\<^bsub>((a # aa # list) ! i, (aa # list) ! i)\<^esub>) [0..<length list])
+ = (map (\<lambda>i. \<N>\<^bsub>((aa # list) ! i, (list) ! i)\<^esub>) [0..<(length list - 1)])"
+ if "list \<noteq> []"
+ apply (subgoal_tac "tl (map (\<lambda>i. \<N>\<^bsub>((a # aa # list) ! i, (aa # list) ! i)\<^esub>) [0..<length list])
+ = (map (\<lambda>i. \<N>\<^bsub>((aa # list) ! i, (list) ! i)\<^esub>) [0..<(length list - 1)])")
+ apply blast
+ apply (subst step_lem [OF that])
+ apply simp
+ done
+ show "list \<noteq> [] \<longrightarrow>
+ (\<forall>i<length list. \<turnstile>\<N>\<^bsub>((aa # list) ! i, list ! i)\<^esub>) \<longrightarrow>
+ \<turnstile>(map (\<lambda>i. \<N>\<^bsub>((aa # list) ! i, list ! i)\<^esub>)
+ [0..<length list] \<oplus>\<^sub>\<and>\<^bsup>(aa, list ! (length list - Suc 0))\<^esup>) \<Longrightarrow>
+ Sji = aa # list \<Longrightarrow>
+ \<forall>i<Suc (length list). \<turnstile>\<N>\<^bsub>((a # aa # list) ! i, (aa # list) ! i)\<^esub> \<Longrightarrow>
+ map (\<lambda>i. \<N>\<^bsub>((a # aa # list) ! i, (aa # list) ! i)\<^esub>) [0..<length list] @
+ [\<N>\<^bsub>((a # aa # list) ! length list, (aa # list) ! length list)\<^esub>] =
+ ab # lista \<Longrightarrow>
+ s = (aa # list) ! length list \<Longrightarrow>
+ case lista of [] \<Rightarrow> \<turnstile>ab \<and> attack ab = (a, (aa # list) ! length list)
+ | aba # lista \<Rightarrow>
+ \<turnstile>ab \<and> fst (attack ab) = a \<and> \<turnstile>(aba # lista \<oplus>\<^sub>\<and>\<^bsup>(snd (attack ab), (aa # list) ! length list)\<^esup>)"
+ apply (auto simp: split: list.split)
+ apply (metis (no_types, lifting) app_tl_hd_list length_greater_0_conv list.sel(1) list.sel(3) list.simps(3) list.simps(8) list.size(3) map_fst nth_Cons_0 self_append_conv2 upt_0 zero_less_Suc)
+ apply (metis (no_types, lifting) app_tl_hd_list attack.simps(1) fst_conv length_greater_0_conv list.sel(1) list.sel(3) list.simps(3) list.simps(8) list.size(3) map_fst nth_Cons_0 self_append_conv2 upt_0)
+ apply (metis (mono_tags, lifting) app_tl_hd_list attack.simps(1) fst_conv length_greater_0_conv list.sel(1) list.sel(3) list.simps(3) list.simps(8) list.size(3) map_fst nth_Cons_0 self_append_conv2 upt_0)
+ by (smt * One_nat_def app_tl_hd_list attack.simps(1) length_greater_0_conv list.sel(1) list.sel(3) list.simps(3) list.simps(8) list.size(3) map_fst nth_Cons_0 nth_Cons_pos self_append_conv2 snd_conv tl_app_in tl_append2 upt_0)
+ qed
+ qed
+qed
+
+lemma Compl_step4: "I \<noteq> {} \<Longrightarrow> finite I \<Longrightarrow> \<not> I \<subseteq> s \<Longrightarrow>
+(\<exists> lI. set lI = {x. x \<in> I \<and> x \<notin> s} \<and> (\<exists> Sj :: ((('s :: state) set)list) list.
+ length Sj = length lI \<and> nodup_all lI \<and>
+ (\<forall> j < length Sj. (((Sj ! j) \<noteq> []) \<and> (tl (Sj ! j) \<noteq> []) \<and>
+ ((Sj ! j) ! 0, (Sj ! j) ! (length (Sj ! j) - 1)) = ({lI ! j},s) \<and>
+ (\<forall> i < (length (Sj ! j) - 1). \<turnstile> \<N>\<^bsub>((Sj ! j) ! i, (Sj ! j) ! (i+1) )\<^esub>
+ )))))
+ \<Longrightarrow> \<exists> (A :: ('s :: state) attree). \<turnstile> A \<and> attack A = (I,s)"
+proof (erule exE, erule conjE, erule exE, erule conjE)
+ fix lI Sj
+ assume a: "I \<noteq> {}" and b: "finite I" and c: "\<not> I \<subseteq> s"
+ and d: "set lI = {x::'s \<in> I. x \<notin> s}" and e: "length Sj = length lI"
+ and f: "nodup_all lI \<and>
+ (\<forall>j<length Sj. Sj ! j \<noteq> [] \<and>
+ tl (Sj ! j) \<noteq> [] \<and>
+ (Sj ! j ! (0), Sj ! j ! (length (Sj ! j) - (1))) = ({lI ! j}, s) \<and>
+ (\<forall>i<length (Sj ! j) - (1). \<turnstile>\<N>\<^bsub>(Sj ! j ! i, Sj ! j ! (i + (1)))\<^esub>))"
+ show "\<exists>A::'s attree. \<turnstile>A \<and> attack A = (I, s)"
+ apply (rule_tac x =
+ "[([] \<oplus>\<^sub>\<or>\<^bsup>({x. x \<in> I \<and> x \<in> s}, s)\<^esup>),
+ ([[ \<N>\<^bsub>((Sj ! j) ! i, (Sj ! j) ! (i + (1)))\<^esub>.
+ i \<leftarrow> [0..<(length (Sj ! j)-(1))]] \<oplus>\<^sub>\<and>\<^bsup>(({lI ! j},s))\<^esup>. j \<leftarrow> [0..<(length Sj)]]
+ \<oplus>\<^sub>\<or>\<^bsup>({x. x \<in> I \<and> x \<notin> s},s)\<^esup>)] \<oplus>\<^sub>\<or>\<^bsup>(I, s)\<^esup>" in exI)
+ proof
+ show "\<turnstile>([[] \<oplus>\<^sub>\<or>\<^bsup>({x::'s \<in> I. x \<in> s}, s)\<^esup>,
+ map (\<lambda>j.
+ ((map (\<lambda>i. \<N>\<^bsub>(Sj ! j ! i, Sj ! j ! (i + (1)))\<^esub>)
+ [0..<length (Sj ! j) - (1)]) \<oplus>\<^sub>\<and>\<^bsup>({lI ! j}, s)\<^esup>))
+ [0..<length Sj] \<oplus>\<^sub>\<or>\<^bsup>({x::'s \<in> I. x \<notin> s}, s)\<^esup>] \<oplus>\<^sub>\<or>\<^bsup>(I, s)\<^esup>)"
+ proof -
+ have g: "I - {x::'s \<in> I. x \<in> s} = {x::'s \<in> I. x \<notin> s}" by blast
+ thus "\<turnstile>([[] \<oplus>\<^sub>\<or>\<^bsup>({x::'s \<in> I. x \<in> s}, s)\<^esup>,
+ (map (\<lambda>j.
+ ((map (\<lambda>i. \<N>\<^bsub>(Sj ! j ! i, Sj ! j ! (i + (1)))\<^esub>)
+ [0..<length (Sj ! j) - (1)]) \<oplus>\<^sub>\<and>\<^bsup>({lI ! j}, s)\<^esup>))
+ [0..<length Sj]) \<oplus>\<^sub>\<or>\<^bsup>({x::'s \<in> I. x \<notin> s}, s)\<^esup>] \<oplus>\<^sub>\<or>\<^bsup>(I, s)\<^esup>)"
+ apply (subst att_or, simp)
+ proof
+ show "I - {x \<in> I. x \<in> s} = {x \<in> I. x \<notin> s} \<Longrightarrow> \<turnstile>([] \<oplus>\<^sub>\<or>\<^bsup>({x \<in> I. x \<in> s}, s)\<^esup>)"
+ by (metis (no_types, lifting) CollectD att_or_empty_back subsetI)
+ next show "I - {x \<in> I. x \<in> s} = {x \<in> I. x \<notin> s} \<Longrightarrow>
+ \<turnstile>([map (\<lambda>j. ((map (\<lambda>i. \<N>\<^bsub>(Sj ! j ! i, Sj ! j ! Suc i)\<^esub>) [0..<length (Sj ! j) - Suc 0]) \<oplus>\<^sub>\<and>\<^bsup>({lI ! j}, s)\<^esup>))
+ [0..<length Sj] \<oplus>\<^sub>\<or>\<^bsup>({x \<in> I. x \<notin> s}, s)\<^esup>] \<oplus>\<^sub>\<or>\<^bsup>({x \<in> I. x \<notin> s}, s)\<^esup>)"
+ text \<open>Use lemma @{text \<open>list_or_upt\<close>} to distribute attack validity over list lI\<close>
+ proof (erule ssubst, subst att_or, simp, rule subst, rule d, rule_tac lI = lI in list_or_upt)
+ show "lI \<noteq> []"
+ using c d by auto
+ next show "\<And>i.
+ i < length lI \<Longrightarrow>
+ \<turnstile>(map (\<lambda>j.
+ ((map (\<lambda>i. \<N>\<^bsub>(Sj ! j ! i, Sj ! j ! Suc i)\<^esub>)
+ [0..<length (Sj ! j) - Suc (0)]) \<oplus>\<^sub>\<and>\<^bsup>({lI ! j}, s)\<^esup>))
+ [0..<length Sj] !
+ i) \<and>
+ (attack
+ (map (\<lambda>j.
+ ((map (\<lambda>i. \<N>\<^bsub>(Sj ! j ! i, Sj ! j ! Suc i)\<^esub>)
+ [0..<length (Sj ! j) - Suc (0)]) \<oplus>\<^sub>\<and>\<^bsup>({lI ! j}, s)\<^esup>))
+ [0..<length Sj] !
+ i) =
+ ({lI ! i}, s))"
+ proof (simp add: a b c d e f)
+ show "\<And>i.
+ i < length lI \<Longrightarrow>
+ \<turnstile>(map (\<lambda>ia. \<N>\<^bsub>(Sj ! i ! ia, Sj ! i ! Suc ia)\<^esub>)
+ [0..<length (Sj ! i) - Suc (0)] \<oplus>\<^sub>\<and>\<^bsup>({lI ! i}, s)\<^esup>)"
+ proof -
+ fix i :: nat
+ assume a1: "i < length lI"
+ have "\<forall>n. \<turnstile>map (\<lambda>na. \<N>\<^bsub>(Sj ! n ! na, Sj ! n ! Suc na)\<^esub>) [0..< length (Sj ! n) - 1] \<oplus>\<^sub>\<and>\<^bsup>(Sj ! n ! 0, Sj ! n ! (length (Sj ! n) - 1))\<^esup> \<or> \<not> n < length Sj"
+ by (metis (no_types) One_nat_def add.right_neutral add_Suc_right base_list_and f)
+ then show "\<turnstile>map (\<lambda>n. \<N>\<^bsub>(Sj ! i ! n, Sj ! i ! Suc n)\<^esub>) [0..< length (Sj ! i) - Suc 0] \<oplus>\<^sub>\<and>\<^bsup>({lI ! i}, s)\<^esup>"
+ using a1 by (metis (no_types) One_nat_def e f)
+ qed
+ qed
+ qed (auto simp add: e f)
+ qed
+ qed
+ qed auto
+qed
+
+subsubsection \<open>Main Theorem Completeness\<close>
+theorem Completeness: "I \<noteq> {} \<Longrightarrow> finite I \<Longrightarrow>
+Kripke {s :: ('s :: state). \<exists> i \<in> I. (i \<rightarrow>\<^sub>i* s)} (I :: ('s :: state)set) \<turnstile> EF s
+\<Longrightarrow> \<exists> (A :: ('s :: state) attree). \<turnstile> A \<and> attack A = (I,s)"
+proof (case_tac "I \<subseteq> s")
+ show "I \<noteq> {} \<Longrightarrow> finite I \<Longrightarrow>
+ Kripke {s::'s. \<exists>i::'s\<in>I. i \<rightarrow>\<^sub>i* s} I \<turnstile> EF s \<Longrightarrow> I \<subseteq> s \<Longrightarrow> \<exists>A::'s attree. \<turnstile>A \<and> attack A = (I, s)"
+ using att_or_empty_back attack.simps(3) by blast
+next
+ show "I \<noteq> {} \<Longrightarrow> finite I \<Longrightarrow>
+ Kripke {s::'s. \<exists>i::'s\<in>I. i \<rightarrow>\<^sub>i* s} I \<turnstile> EF s \<Longrightarrow> \<not> I \<subseteq> s
+ \<Longrightarrow> \<exists>A::'s attree. \<turnstile>A \<and> attack A = (I, s)"
+ by (iprover intro: Compl_step1 Compl_step2 Compl_step3 Compl_step4 elim: )
+qed
+
+subsubsection \<open>Contrapositions of Correctness and Completeness\<close>
+lemma contrapos_compl:
+ "I \<noteq> {} \<Longrightarrow> finite I \<Longrightarrow>
+ (\<not> (\<exists> (A :: ('s :: state) attree). \<turnstile> A \<and> attack A = (I, - s))) \<Longrightarrow>
+\<not> (Kripke {s. \<exists>i\<in>I. i \<rightarrow>\<^sub>i* s} I \<turnstile> EF (- s))"
+ using Completeness by auto
+
+lemma contrapos_corr:
+"(\<not>(Kripke {s :: ('s :: state). \<exists> i \<in> I. (i \<rightarrow>\<^sub>i* s)} I \<turnstile> EF s))
+\<Longrightarrow> attack A = (I,s)
+\<Longrightarrow> \<not> (\<turnstile> A)"
+ using AT_EF by blast
+
+end
\ No newline at end of file
diff --git a/thys/Attack_Trees/GDPRhealthcare.thy b/thys/Attack_Trees/GDPRhealthcare.thy
new file mode 100644
--- /dev/null
+++ b/thys/Attack_Trees/GDPRhealthcare.thy
@@ -0,0 +1,387 @@
+section \<open>Application example from IoT healthcare\<close>
+text \<open>The example of an IoT healthcare system is taken from the context of the CHIST-ERA project
+SUCCESS \cite{suc:16}. In this system architecture, data is collected by sensors
+in the home or via a smart phone, helping to monitor bio markers of the patient. The data
+is collected in a cloud-based server to enable hospitals (or scientific institutions)
+to access the data; this access is controlled via the smart phone.
+The identities Patient and Doctor represent patients
+and their doctors; double quotes ''s'' indicate strings
+in Isabelle/HOL.
+The global policy is `only the patient and the doctor can access the data in the cloud'.\<close>
+theory GDPRhealthcare
+imports Infrastructure
+begin
+text \<open>Local policies are represented as a function over an @{text \<open>igraph G\<close>}
+ that assigns to each location of a scenario its local policy,
+ given as a pair: a requirement on an actor (first element of the pair) that must
+ hold in order to grant the actor the actions in that location (second element of the pair).
+ The predicate @{text \<open>@G\<close>} checks whether an actor is at a given location
+ in the @{text \<open>igraph G\<close>}.\<close>
+locale scenarioGDPR =
+fixes gdpr_actors :: "identity set"
+defines gdpr_actors_def: "gdpr_actors \<equiv> {''Patient'', ''Doctor''}"
+fixes gdpr_locations :: "location set"
+defines gdpr_locations_def: "gdpr_locations \<equiv>
+ {Location 0, Location 1, Location 2, Location 3}"
+fixes sphone :: "location"
+defines sphone_def: "sphone \<equiv> Location 0"
+fixes home :: "location"
+defines home_def: "home \<equiv> Location 1"
+fixes hospital :: "location"
+defines hospital_def: "hospital \<equiv> Location 2"
+fixes cloud :: "location"
+defines cloud_def: "cloud \<equiv> Location 3"
+fixes global_policy :: "[infrastructure, identity] \<Rightarrow> bool"
+defines global_policy_def: "global_policy I a \<equiv> a \<noteq> ''Doctor''
+ \<longrightarrow> \<not>(enables I hospital (Actor a) eval)"
+fixes global_policy' :: "[infrastructure, identity] \<Rightarrow> bool"
+defines global_policy'_def: "global_policy' I a \<equiv> a \<notin> gdpr_actors
+ \<longrightarrow> \<not>(enables I cloud (Actor a) get)"
+fixes ex_creds :: "actor \<Rightarrow> (string set * string set)"
+defines ex_creds_def: "ex_creds \<equiv> (\<lambda> x. if x = Actor ''Patient'' then
+ ({''PIN'',''skey''}, {}) else
+ (if x = Actor ''Doctor'' then
+ ({''PIN''},{}) else ({},{})))"
+fixes ex_locs :: "location \<Rightarrow> string * (dlm * data) set"
+defines "ex_locs \<equiv> (\<lambda> x. if x = cloud then
+ (''free'',{((Actor ''Patient'',{Actor ''Doctor''}),42)})
+ else ('''',{}))"
+fixes ex_loc_ass :: "location \<Rightarrow> identity set"
+defines ex_loc_ass_def: "ex_loc_ass \<equiv>
+ (\<lambda> x. if x = home then {''Patient''}
+ else (if x = hospital then {''Doctor'', ''Eve''}
+ else {}))"
+(* The nicer representation with case suffers from
+ not so nice presentation in the cases (need to unfold the syntax)
+fixes ex_loc_ass_alt :: "location \<Rightarrow> identity set"
+defines ex_loc_ass_alt_def: "ex_loc_ass_alt \<equiv>
+ (\<lambda> x. (case x of
+ Location (Suc 0) \<Rightarrow> {''Patient''}
+ | Location (Suc (Suc 0)) \<Rightarrow> {''Doctor'', ''Eve''}
+ | _ \<Rightarrow> {}))"
+*)
+fixes ex_graph :: "igraph"
+defines ex_graph_def: "ex_graph \<equiv> Lgraph
+ {(home, cloud), (sphone, cloud), (cloud,hospital)}
+ ex_loc_ass
+ ex_creds ex_locs"
+fixes ex_graph' :: "igraph"
+defines ex_graph'_def: "ex_graph' \<equiv> Lgraph
+ {(home, cloud), (sphone, cloud), (cloud,hospital)}
+ (\<lambda> x. if x = cloud then {''Patient''} else
+ (if x = hospital then {''Doctor'',''Eve''} else {}))
+ ex_creds ex_locs"
+fixes ex_graph'' :: "igraph"
+defines ex_graph''_def: "ex_graph'' \<equiv> Lgraph
+ {(home, cloud), (sphone, cloud), (cloud,hospital)}
+ (\<lambda> x. if x = cloud then {''Patient'', ''Eve''} else
+ (if x = hospital then {''Doctor''} else {}))
+ ex_creds ex_locs"
+(* Same as above: the nicer representation with case suffers from
+ not so nice presentation in the cases (need to unfold the syntax)
+fixes local_policies_alt :: "[igraph, location] \<Rightarrow> policy set"
+defines local_policies_alt_def: "local_policies_alt G \<equiv>
+ (\<lambda> x. case x of
+ Location (Suc 0) \<Rightarrow> {(\<lambda> y. True, {put,get,move,eval})}
+ | Location 0 \<Rightarrow> {((\<lambda> y. has G (y, ''PIN'')), {put,get,move,eval})}
+ | Location (Suc (Suc (Suc 0))) \<Rightarrow> {(\<lambda> y. True, {put,get,move,eval})}
+ | Location (Suc (Suc 0)) \<Rightarrow>
+ {((\<lambda> y. (\<exists> n. (n @\<^bsub>G\<^esub> hospital) \<and> Actor n = y \<and>
+ has G (y, ''skey''))), {put,get,move,eval})}
+ | _ \<Rightarrow> {})"
+*)
+fixes local_policies :: "[igraph, location] \<Rightarrow> policy set"
+defines local_policies_def: "local_policies G \<equiv>
+ (\<lambda> x. if x = home then
+ {(\<lambda> y. True, {put,get,move,eval})}
+ else (if x = sphone then
+ {((\<lambda> y. has G (y, ''PIN'')), {put,get,move,eval})}
+ else (if x = cloud then
+ {(\<lambda> y. True, {put,get,move,eval})}
+ else (if x = hospital then
+ {((\<lambda> y. (\<exists> n. (n @\<^bsub>G\<^esub> hospital) \<and> Actor n = y \<and>
+ has G (y, ''skey''))), {put,get,move,eval})} else {}))))"
+(* problems with case in locales?
+defines local_policies_def: "local_policies G x \<equiv>
+ (case x of
+ home \<Rightarrow> {(\<lambda> y. True, {put,get,move,eval})}
+ | sphone \<Rightarrow> {((\<lambda> y. has G (y, ''PIN'')), {put,get,move,eval})}
+ | cloud \<Rightarrow> {(\<lambda> y. True, {put,get,move,eval})}
+ | hospital \<Rightarrow> {((\<lambda> y. (\<exists> n. (n @\<^bsub>G\<^esub> hospital) \<and> Actor n = y \<and>
+ has G (y, ''skey''))), {put,get,move,eval})}
+ | _ \<Rightarrow> {})"
+*)
+fixes gdpr_scenario :: "infrastructure"
+defines gdpr_scenario_def:
+"gdpr_scenario \<equiv> Infrastructure ex_graph local_policies"
+fixes Igdpr :: "infrastructure set"
+defines Igdpr_def:
+ "Igdpr \<equiv> {gdpr_scenario}"
+(* other states of scenario *)
+(* First step: Patient goes onto cloud *)
+fixes gdpr_scenario' :: "infrastructure"
+defines gdpr_scenario'_def:
+"gdpr_scenario' \<equiv> Infrastructure ex_graph' local_policies"
+fixes GDPR' :: "infrastructure set"
+defines GDPR'_def:
+ "GDPR' \<equiv> {gdpr_scenario'}"
+(* Second step: Eve goes onto cloud from where she'll be able to get the data *)
+fixes gdpr_scenario'' :: "infrastructure"
+defines gdpr_scenario''_def:
+"gdpr_scenario'' \<equiv> Infrastructure ex_graph'' local_policies"
+fixes GDPR'' :: "infrastructure set"
+defines GDPR''_def:
+ "GDPR'' \<equiv> {gdpr_scenario''}"
+fixes gdpr_states
+defines gdpr_states_def: "gdpr_states \<equiv> { I. gdpr_scenario \<rightarrow>\<^sub>i* I }"
+fixes gdpr_Kripke
+defines "gdpr_Kripke \<equiv> Kripke gdpr_states {gdpr_scenario}"
+fixes sgdpr
+defines "sgdpr \<equiv> {x. \<not> (global_policy' x ''Eve'')}"
+begin
+subsection \<open>Using Attack Tree Calculus\<close>
+text \<open>Since we consider a predicate transformer semantics, we use sets of states
+  to represent properties. For example, the attack property is given by the
+  set @{text \<open>sgdpr\<close>} defined above.
+
+The attack we are interested in is to see whether for the scenario
+
+@{text \<open>gdpr_scenario \<equiv> Infrastructure ex_graph local_policies\<close>}
+
+from the initial state
+
+@{text \<open>Igdpr \<equiv> {gdpr_scenario}\<close>},
+
+a critical state in
+@{text \<open>sgdpr\<close>} can be reached, i.e., whether there is a valid attack @{text \<open>(Igdpr,sgdpr)\<close>}.
+
+We first present a number of lemmas showing single and multi-step state transitions
+for relevant states reachable from our @{text \<open>gdpr_scenario\<close>}.\<close>
+lemma step1: "gdpr_scenario \<rightarrow>\<^sub>n gdpr_scenario'"
+proof (rule_tac l = home and a = "''Patient''" and l' = cloud in move)
+ show "graphI gdpr_scenario = graphI gdpr_scenario" by (rule refl)
+next show "''Patient'' @\<^bsub>graphI gdpr_scenario\<^esub> home"
+ by (simp add: gdpr_scenario_def ex_graph_def ex_loc_ass_def atI_def nodes_def)
+next show "home \<in> nodes (graphI gdpr_scenario)"
+ by (simp add: gdpr_scenario_def ex_graph_def ex_loc_ass_def atI_def nodes_def, blast)
+next show "cloud \<in> nodes (graphI gdpr_scenario)"
+ by (simp add: gdpr_scenario_def nodes_def ex_graph_def, blast)
+next show "''Patient'' \<in> actors_graph (graphI gdpr_scenario)"
+ by (simp add: actors_graph_def gdpr_scenario_def ex_graph_def ex_loc_ass_def nodes_def, blast)
+next show "enables gdpr_scenario cloud (Actor ''Patient'') move"
+ by (simp add: enables_def gdpr_scenario_def ex_graph_def local_policies_def
+ ex_creds_def ex_locs_def has_def credentials_def)
+next show "gdpr_scenario' =
+ Infrastructure (move_graph_a ''Patient'' home cloud (graphI gdpr_scenario)) (delta gdpr_scenario)"
+ apply (simp add: gdpr_scenario'_def ex_graph'_def move_graph_a_def
+ gdpr_scenario_def ex_graph_def home_def cloud_def hospital_def
+ ex_loc_ass_def ex_creds_def)
+ apply (rule ext)
+ by (simp add: hospital_def)
+qed
+
+lemma step1r: "gdpr_scenario \<rightarrow>\<^sub>n* gdpr_scenario'"
+proof (simp add: state_transition_in_refl_def)
+ show " (gdpr_scenario, gdpr_scenario') \<in> {(x::infrastructure, y::infrastructure). x \<rightarrow>\<^sub>n y}\<^sup>*"
+ by (insert step1, auto)
+qed
+
+lemma step2: "gdpr_scenario' \<rightarrow>\<^sub>n gdpr_scenario''"
+proof (rule_tac l = hospital and a = "''Eve''" and l' = cloud in move, rule refl)
+ show "''Eve'' @\<^bsub>graphI gdpr_scenario'\<^esub> hospital"
+ by (simp add: gdpr_scenario'_def ex_graph'_def hospital_def cloud_def atI_def nodes_def)
+next show "hospital \<in> nodes (graphI gdpr_scenario')"
+ by (simp add: gdpr_scenario'_def ex_graph'_def hospital_def cloud_def atI_def nodes_def, blast)
+next show "cloud \<in> nodes (graphI gdpr_scenario')"
+ by (simp add: gdpr_scenario'_def nodes_def ex_graph'_def, blast)
+next show "''Eve'' \<in> actors_graph (graphI gdpr_scenario')"
+ by (simp add: actors_graph_def gdpr_scenario'_def ex_graph'_def nodes_def
+ hospital_def cloud_def, blast)
+next show "enables gdpr_scenario' cloud (Actor ''Eve'') move"
+ by (simp add: enables_def gdpr_scenario'_def ex_graph_def local_policies_def
+ ex_creds_def ex_locs_def has_def credentials_def cloud_def sphone_def)
+next show "gdpr_scenario'' =
+ Infrastructure (move_graph_a ''Eve'' hospital cloud (graphI gdpr_scenario')) (delta gdpr_scenario')"
+ apply (simp add: gdpr_scenario'_def ex_graph''_def move_graph_a_def gdpr_scenario''_def
+ ex_graph'_def home_def cloud_def hospital_def ex_creds_def)
+ apply (rule ext)
+ apply (simp add: hospital_def)
+ by blast
+qed
+
+lemma step2r: "gdpr_scenario' \<rightarrow>\<^sub>n* gdpr_scenario''"
+proof (simp add: state_transition_in_refl_def)
+ show "(gdpr_scenario', gdpr_scenario'') \<in> {(x::infrastructure, y::infrastructure). x \<rightarrow>\<^sub>n y}\<^sup>*"
+ by (insert step2, auto)
+qed
+
+(* Attack example: Eve can get onto cloud and get Patient's data
+   because the policy allows Eve to get on cloud.
+   This attack can easily be prevented by not allowing Eve to "get"
+   in the policy (just change the "True" for cloud to a predicate
+   that excludes Eve).
+ However, it would not prevent Insider attacks (where Eve is
+ impersonating the Doctor, for example). Insider attacks can
+ be checked using the UasI predicate.
+*)
+text \<open>For the Kripke structure
+
+@{text \<open>gdpr_Kripke \<equiv> Kripke { I. gdpr_scenario \<rightarrow>\<^sub>i* I } {gdpr_scenario}\<close>}
+
+we first derive a valid and-attack using the attack tree proof calculus.
+
+@{text \<open>\<turnstile>[\<N>\<^bsub>(Igdpr,GDPR')\<^esub>, \<N>\<^bsub>(GDPR',sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr,sgdpr)\<^esup>\<close>}
+
+The set @{text \<open>GDPR'\<close>} (see above) contains the intermediate state, reached after the Patient
+has moved to the cloud, from which Eve can subsequently access the cloud.\<close>
+
+lemma gdpr_ref: "[\<N>\<^bsub>(Igdpr,sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr,sgdpr)\<^esup> \<sqsubseteq>
+ ([\<N>\<^bsub>(Igdpr,GDPR')\<^esub>, \<N>\<^bsub>(GDPR',sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr,sgdpr)\<^esup>)"
+proof (rule_tac l = "[]" and l' = "[\<N>\<^bsub>(Igdpr,GDPR')\<^esub>, \<N>\<^bsub>(GDPR',sgdpr)\<^esub>]" and
+ l'' = "[]" and si = Igdpr and si' = Igdpr and
+ si'' = sgdpr and si''' = sgdpr in refI, simp, rule refl)
+ show "([\<N>\<^bsub>(Igdpr, GDPR')\<^esub>, \<N>\<^bsub>(GDPR', sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr, sgdpr)\<^esup>) =
+ ([] @ [\<N>\<^bsub>(Igdpr, GDPR')\<^esub>, \<N>\<^bsub>(GDPR', sgdpr)\<^esub>] @ [] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr, sgdpr)\<^esup>)"
+ by simp
+qed
+
+lemma att_gdpr: "\<turnstile>([\<N>\<^bsub>(Igdpr,GDPR')\<^esub>, \<N>\<^bsub>(GDPR',sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr,sgdpr)\<^esup>)"
+proof (subst att_and, simp, rule conjI)
+ show " \<turnstile>\<N>\<^bsub>(Igdpr, GDPR')\<^esub>"
+ apply (simp add: Igdpr_def GDPR'_def att_base)
+ using state_transition_infra_def step1 by blast
+next
+ have "\<not> global_policy' gdpr_scenario'' ''Eve''" "gdpr_scenario' \<rightarrow>\<^sub>n gdpr_scenario''"
+ using step2
+ by (auto simp: global_policy'_def gdpr_scenario''_def gdpr_actors_def
+ enables_def local_policies_def cloud_def sphone_def intro!: step2)
+ then show "\<turnstile>([\<N>\<^bsub>(GDPR', sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(GDPR', sgdpr)\<^esup>)"
+ apply (subst att_and)
+ apply (simp add: GDPR'_def sgdpr_def att_base)
+ using state_transition_infra_def by blast
+qed
+
+lemma gdpr_abs_att: "\<turnstile>\<^sub>V([\<N>\<^bsub>(Igdpr,sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr,sgdpr)\<^esup>)"
+ by (rule ref_valI, rule gdpr_ref, rule att_gdpr)
+
+text \<open>We can then simply apply the Correctness theorem @{text \<open>AT_EF\<close>} to immediately
+ prove the following CTL statement.
+
+ @{text \<open>gdpr_Kripke \<turnstile> EF sgdpr\<close>}
+
+This application of the meta-theorem of Correctness of attack trees saves us
+proving the CTL formula tediously by exploring the state space.\<close>
+lemma gdpr_att: "gdpr_Kripke \<turnstile> EF {x. \<not>(global_policy' x ''Eve'')}"
+proof -
+ have a: " \<turnstile>([\<N>\<^bsub>(Igdpr, GDPR')\<^esub>, \<N>\<^bsub>(GDPR', sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr, sgdpr)\<^esup>)"
+ by (rule att_gdpr)
+ hence "(Igdpr,sgdpr) = attack ([\<N>\<^bsub>(Igdpr, GDPR')\<^esub>, \<N>\<^bsub>(GDPR', sgdpr)\<^esub>] \<oplus>\<^sub>\<and>\<^bsup>(Igdpr, sgdpr)\<^esup>)"
+ by simp
+ hence "Kripke {s::infrastructure. \<exists>i::infrastructure\<in>Igdpr. i \<rightarrow>\<^sub>i* s} Igdpr \<turnstile> EF sgdpr"
+ using ATV_EF gdpr_abs_att by fastforce
+ thus "gdpr_Kripke \<turnstile> EF {x::infrastructure. \<not> global_policy' x ''Eve''}"
+ by (simp add: gdpr_Kripke_def gdpr_states_def Igdpr_def sgdpr_def)
+qed
+
+theorem gdpr_EF: "gdpr_Kripke \<turnstile> EF sgdpr"
+ using gdpr_att sgdpr_def by blast
+
+text \<open>In the other direction, the CTL statement proved in @{text \<open>gdpr_EF\<close>}
+ can now be directly translated into Attack Trees using the Completeness
+ Theorem\footnote{This theorem could easily
+ be proved as a direct instance of @{text \<open>att_gdpr\<close>} above but we want to illustrate
+ an alternative proof method using Completeness here.}.\<close>
+theorem gdpr_AT: "\<exists> A. \<turnstile> A \<and> attack A = (Igdpr,sgdpr)"
+proof -
+ have a: "gdpr_Kripke \<turnstile> EF sgdpr " by (rule gdpr_EF)
+ have b: "Igdpr \<noteq> {}" by (simp add: Igdpr_def)
+ thus "\<exists>A::infrastructure attree. \<turnstile>A \<and> attack A = (Igdpr, sgdpr)"
+ proof (rule Completeness)
+ show "Kripke {s. \<exists>i\<in>Igdpr. i \<rightarrow>\<^sub>i* s} Igdpr \<turnstile> EF sgdpr"
+ using a by (simp add: gdpr_Kripke_def Igdpr_def gdpr_states_def)
+ qed (auto simp: Igdpr_def)
+qed
+
+text \<open>Conversely, since we have an attack given by rule @{text \<open>gdpr_AT\<close>}, we can immediately
+ infer @{text \<open>EF s\<close>} using Correctness @{text \<open>AT_EF\<close>}\footnote{Clearly, this theorem is identical
+ to @{text \<open>gdpr_EF\<close>} and could thus be inferred from that one but we want to show here an
+ alternative way of proving it using the Correctness theorem @{text \<open>AT_EF\<close>}.}.\<close>
+theorem gdpr_EF': "gdpr_Kripke \<turnstile> EF sgdpr"
+ using gdpr_AT by (auto simp: gdpr_Kripke_def gdpr_states_def Igdpr_def dest: AT_EF)
+
+
+(* However, when integrating DLM into the model and hence labeling
+ information becomes part of the conditions of the get_data rule this isn't
+ possible any more: gdpr_EF is not true any more *)
+(** GDPR properties for the illustration of the DLM labeling **)
+section \<open>Data Protection by Design for GDPR compliance\<close>
+subsection \<open>General Data Protection Regulation (GDPR)\<close>
+text \<open>Since 25th May 2018, the GDPR has been mandatory within the European Union and hence
+also for any supplier of IT products. Breaches of the regulation can be fined with penalties
+of up to 20 Million EUR. Despite the relatively large size of the document of 209 pages, the technically
+relevant portion for us is only about 30 pages (Pages 81–111, Chapter I to Chapter III, Section 3).
+In summary, Chapter III specifies that the controller must give the data subject read access (1)
+to any information, communications, and “meta-data” of the data, e.g., retention time and purpose.
+In addition, the system must enable deletion of data (2) and restriction of processing.
+An invariant condition for data processing resulting from these Articles is that the system functions
+must preserve any of the access rights of personal data (3).
+
+Using labeled data, we can now express the essence of Article 4 Paragraph (1):
+’personal data’ means any information relating to an identified or identifiable natural
+person (’data subject’).
+
+The labels of data must not be changed by processing: we have identified this as
+an invariant (3) resulting from the GDPR above. This invariant is formalized in
+our Isabelle model by the type definition of functions on labeled data @{text \<open>label_fun\<close>}
+(see Section 4.2) that preserve the data labels.\<close>
+
+subsection \<open>Policy enforcement and privacy preservation\<close>
+text \<open>We can now use the labeled data to encode the privacy constraints of the
+  GDPR in the rules. For example, the @{text \<open>get_data\<close>} rule (see Section 4.3) has labelled data
+  @{text \<open>((Actor a', as), n)\<close>} and uses the labeling in the precondition to guarantee
+ that only entitled users can get data.
+
+We can prove that processing preserves ownership as defined in the initial state for all paths
+globally (AG) within the Kripke structure and in all locations of the graph.\<close>
+(* GDPR three: Processing preserves ownership in any location *)
+lemma gdpr_three: "h \<in> gdpr_actors \<Longrightarrow> l \<in> gdpr_locations \<Longrightarrow>
+ owns (Igraph gdpr_scenario) l (Actor h) d \<Longrightarrow>
+ gdpr_Kripke \<turnstile> AG {x. \<forall> l \<in> gdpr_locations. owns (Igraph x) l (Actor h) d }"
+proof (simp add: gdpr_Kripke_def check_def, rule conjI)
+ show "gdpr_scenario \<in> gdpr_states" by (simp add: gdpr_states_def state_transition_refl_def)
+next
+ show "h \<in> gdpr_actors \<Longrightarrow>
+ l \<in> gdpr_locations \<Longrightarrow>
+ owns (Igraph gdpr_scenario) l (Actor h) d \<Longrightarrow>
+ gdpr_scenario \<in> AG {x::infrastructure. \<forall>l\<in>gdpr_locations. owns (Igraph x) l (Actor h) d}"
+ apply (simp add: AG_def gfp_def)
+ apply (rule_tac x = "{x::infrastructure. \<forall>l\<in>gdpr_locations. owns (Igraph x) l (Actor h) d}" in exI)
+ by (auto simp: AX_def gdpr_scenario_def owns_def)
+qed
+
+text \<open>The final application example uses the contraposition of Correctness
+  to show that no attack on ownership is possible.
+The proved meta-theory for attack trees can be applied to facilitate the proof.
+The contraposition of the Correctness property guarantees that if there is no attack on
+@{text \<open>(I,\<not>f)\<close>}, then @{text \<open>(EF \<not>f)\<close>} does not hold in the Kripke structure. This
+yields the theorem since the @{text \<open>AG f\<close>} statement corresponds to @{text \<open>\<not>(EF \<not>f)\<close>}.
+\<close>
+theorem no_attack_gdpr_three:
+"h \<in> gdpr_actors \<Longrightarrow> l \<in> gdpr_locations \<Longrightarrow>
+ owns (Igraph gdpr_scenario) l (Actor h) d \<Longrightarrow>
+attack A = (Igdpr, -{x. \<forall> l \<in> gdpr_locations. owns (Igraph x) l (Actor h) d })
+\<Longrightarrow> \<not> (\<turnstile> A)"
+proof (rule_tac I = Igdpr and
+ s = "-{x::infrastructure. \<forall>l\<in>gdpr_locations. owns (Igraph x) l (Actor h) d}"
+ in contrapos_corr)
+ show "h \<in> gdpr_actors \<Longrightarrow>
+ l \<in> gdpr_locations \<Longrightarrow>
+ owns (Igraph gdpr_scenario) l (Actor h) d \<Longrightarrow>
+ attack A = (Igdpr, - {x::infrastructure. \<forall>l\<in>gdpr_locations. owns (Igraph x) l (Actor h) d}) \<Longrightarrow>
+ \<not> (Kripke {s::infrastructure. \<exists>i::infrastructure\<in>Igdpr. i \<rightarrow>\<^sub>i* s}
+ Igdpr \<turnstile> EF (- {x::infrastructure. \<forall>l\<in>gdpr_locations. owns (Igraph x) l (Actor h) d}))"
+ apply (rule AG_imp_notnotEF)
+ apply (simp add: gdpr_Kripke_def Igdpr_def gdpr_states_def)
+ using Igdpr_def gdpr_Kripke_def gdpr_states_def gdpr_three by auto
+qed
+end
+end
\ No newline at end of file
diff --git a/thys/Attack_Trees/Infrastructure.thy b/thys/Attack_Trees/Infrastructure.thy
new file mode 100644
--- /dev/null
+++ b/thys/Attack_Trees/Infrastructure.thy
@@ -0,0 +1,273 @@
+section \<open>Infrastructures\<close>
+text \<open>The Isabelle Infrastructure framework supports the representation of infrastructures
+as graphs with actors and policies attached to nodes. These infrastructures
+are the {\it states} of the Kripke structure.
+The transition between states is triggered by non-parametrized
+actions @{text \<open>get, move, eval, put\<close>} executed by actors.
+Actors are given by an abstract type @{text \<open>actor\<close>} and a function
+@{text \<open>Actor\<close>} that creates elements of that type from identities
+(of type @{text \<open>string\<close>}). Policies are given by pairs of predicates
+(conditions) and sets of (enabled) actions.\<close>
+subsection \<open>Actors, actions, and data labels\<close>
+theory Infrastructure
+ imports AT
+begin
+datatype action = get | move | eval | put
+
+typedecl actor
+type_synonym identity = string
+consts Actor :: "string \<Rightarrow> actor"
+type_synonym policy = "((actor \<Rightarrow> bool) * action set)"
+
+definition ID :: "[actor, string] \<Rightarrow> bool"
+ where "ID a s \<equiv> (a = Actor s)"
+text \<open>The Decentralised Label Model (DLM) \cite{ml:98} introduced the idea to
+label data by owners and readers. We pick up this idea and formalize
+a new type to encode the owner and the set of readers as a pair.
+The first element is the owner of a data item, the second one is the
+set of all actors that may access the data item.
+This enables the unique security
+labelling of data within the system additionally taking the ownership into
+account.\<close>
+type_synonym data = nat
+type_synonym dlm = "actor * actor set"
+
+subsection \<open>Infrastructure graphs and policies\<close>
+text\<open>Actors are contained in an infrastructure graph. An @{text \<open>igraph\<close>} contains
+a set of location pairs representing the topology of the infrastructure
+as a graph of nodes and a set of actor identities associated to each node
+(location) in the graph.
+Also an @{text \<open>igraph\<close>} associates actors to a pair of string sets by
+a pair-valued function whose first range component is a set describing
+the credentials in the possession of an actor and the second component
+is a set defining the roles the actor can take on. More importantly in this
+context, an @{text \<open>igraph\<close>} assigns locations to a pair of a string that defines
+the state of the component and an element of type @{text \<open>(dlm * data) set\<close>}. This
+set of labelled data may represent a condition on that data.
+Corresponding projection functions for each of these components of an
+@{text \<open>igraph\<close>} are provided; they are named @{text \<open>gra\<close>} for the actual set of pairs of
+locations, @{text \<open>agra\<close>} for the actor map, @{text \<open>cgra\<close>} for the credentials,
+and @{text \<open>lgra\<close>} for the state of a location and the data at that location.\<close>
+datatype location = Location nat
+ datatype igraph = Lgraph "(location * location)set" "location \<Rightarrow> identity set"
+ "actor \<Rightarrow> (string set * string set)"
+ "location \<Rightarrow> string * (dlm * data) set"
+datatype infrastructure =
+ Infrastructure "igraph"
+ "[igraph, location] \<Rightarrow> policy set"
+
+primrec loc :: "location \<Rightarrow> nat"
+where "loc(Location n) = n"
+primrec gra :: "igraph \<Rightarrow> (location * location)set"
+where "gra(Lgraph g a c l) = g"
+primrec agra :: "igraph \<Rightarrow> (location \<Rightarrow> identity set)"
+where "agra(Lgraph g a c l) = a"
+primrec cgra :: "igraph \<Rightarrow> (actor \<Rightarrow> string set * string set)"
+where "cgra(Lgraph g a c l) = c"
+primrec lgra :: "igraph \<Rightarrow> (location \<Rightarrow> string * (dlm * data) set)"
+where "lgra(Lgraph g a c l) = l"
+
+definition nodes :: "igraph \<Rightarrow> location set"
+where "nodes g == { x. (? y. ((x,y): gra g) | ((y,x): gra g))}"
+
+definition actors_graph :: "igraph \<Rightarrow> identity set"
+where "actors_graph g == {x. ? y. y : nodes g \<and> x \<in> (agra g y)}"
+
+text \<open>There are projection functions @{text \<open>graphI\<close>} and @{text \<open>delta\<close>} which, when applied
+to an infrastructure, return the graph and the policy, respectively. Other projections
+are introduced for the labels, the credentials, and the roles, and to express their meaning.\<close>
+primrec graphI :: "infrastructure \<Rightarrow> igraph"
+where "graphI (Infrastructure g d) = g"
+primrec delta :: "[infrastructure, igraph, location] \<Rightarrow> policy set"
+where "delta (Infrastructure g d) = d"
+primrec tspace :: "[infrastructure, actor ] \<Rightarrow> string set * string set"
+ where "tspace (Infrastructure g d) = cgra g"
+primrec lspace :: "[infrastructure, location ] \<Rightarrow> string * (dlm * data)set"
+where "lspace (Infrastructure g d) = lgra g"
+
+definition credentials :: "string set * string set \<Rightarrow> string set"
+ where "credentials lxl \<equiv> (fst lxl)"
+definition has :: "[igraph, actor * string] \<Rightarrow> bool"
+ where "has G ac \<equiv> snd ac \<in> credentials(cgra G (fst ac))"
+definition roles :: "string set * string set \<Rightarrow> string set"
+ where "roles lxl \<equiv> (snd lxl)"
+definition role :: "[igraph, actor * string] \<Rightarrow> bool"
+ where "role G ac \<equiv> snd ac \<in> roles(cgra G (fst ac))"
+definition isin :: "[igraph,location, string] \<Rightarrow> bool"
+ where "isin G l s \<equiv> s = fst (lgra G l)"
+
+text \<open>Predicates and projections for the labels to encode their meaning.\<close>
+definition owner :: "dlm * data \<Rightarrow> actor" where "owner d \<equiv> fst(fst d)"
+definition owns :: "[igraph, location, actor, dlm * data] \<Rightarrow> bool"
+ where "owns G l a d \<equiv> owner d = a"
+definition readers :: "dlm * data \<Rightarrow> actor set"
+ where "readers d \<equiv> snd (fst d)"
+
+text \<open>The predicate @{text \<open>has_access\<close>} is true for owners or readers.\<close>
+definition has_access :: "[igraph, location, actor, dlm * data] \<Rightarrow> bool"
+where "has_access G l a d \<equiv> owns G l a d \<or> a \<in> readers d"
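+text \<open>As a small illustrative sanity check (the concrete data item below is hypothetical and
+  not used elsewhere): the owner of a labelled data item always has access to it, for instance
+  for the item @{text \<open>((Actor ''Patient'', {Actor ''Doctor''}), 42)\<close>}.\<close>
+lemma has_access_owner_example:
+  "has_access G l (Actor ''Patient'') ((Actor ''Patient'', {Actor ''Doctor''}), 42)"
+  by (simp add: has_access_def owns_def owner_def)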
+
+(*
+text \<open>Actors can delete data.\<close>
+definition actor_can_delete :: "[infrastructure, actor, location] \<Rightarrow> bool"
+where actor_can_delete_def: "actor_can_delete I h l \<equiv>
+ (\<forall> as n. ((h, as), n) \<notin> (snd (lgra (graphI I) l)))"
+*)
+text \<open>We define a type of functions that preserve the security labeling, together with a
+  corresponding function application operator.\<close>
+typedef label_fun = "{f :: dlm * data \<Rightarrow> dlm * data.
+ \<forall> x:: dlm * data. fst x = fst (f x)}"
+ by (fastforce)
+
+definition secure_process :: "label_fun \<Rightarrow> dlm * data \<Rightarrow> dlm * data" (infixr "\<Updown>" 50)
+ where "f \<Updown> d \<equiv> (Rep_label_fun f) d"
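+text \<open>As an illustration of the intended meaning (a small derived fact, stated here purely
+  for exposition): applying a @{text \<open>label_fun\<close>} via @{text \<open>\<Updown>\<close>} never changes the label of
+  a data item, only its contents.\<close>
+lemma secure_process_preserves_label:
+  "fst ((f :: label_fun) \<Updown> d) = fst d"
+proof -
+  have "Rep_label_fun f \<in> {g. \<forall> x. fst x = fst (g x)}"
+    by (rule Rep_label_fun)
+  hence "fst d = fst (Rep_label_fun f d)"
+    by blast
+  thus ?thesis
+    by (simp add: secure_process_def)
+qed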
+
+(* This part is relevant to model Insiders but is not needed for Infrastructures.
+
+datatype psy_states = happy | depressed | disgruntled | angry | stressed
+datatype motivations = financial | political | revenge | curious | competitive_advantage | power | peer_recognition
+
+datatype actor_state = Actor_state "psy_states" "motivations set"
+primrec motivation :: "actor_state \<Rightarrow> motivations set"
+where "motivation (Actor_state p m) = m"
+primrec psy_state :: "actor_state \<Rightarrow> psy_states"
+where "psy_state (Actor_state p m) = p"
+
+definition tipping_point :: "actor_state \<Rightarrow> bool" where
+ "tipping_point a \<equiv> ((motivation a \<noteq> {}) \<and> (happy \<noteq> psy_state a))"
+
+consts astate :: "identity \<Rightarrow> actor_state"
+
+(* Two versions of an impersonation predicate "a can act as b".
+ The first one is stronger and allows substitution of the insider in any context;
+ the second one is parameterized over a context predicate to describe this. *)
+definition UasI :: "[identity, identity] \<Rightarrow> bool "
+where "UasI a b \<equiv> (Actor a = Actor b) \<and> (\<forall> x y. x \<noteq> a \<and> y \<noteq> a \<and> Actor x = Actor y \<longrightarrow> x = y)"
+
+definition UasI' :: "[actor => bool, identity, identity] \<Rightarrow> bool "
+where "UasI' P a b \<equiv> P (Actor b) \<longrightarrow> P (Actor a)"
+
+definition Insider :: "[identity, identity set] \<Rightarrow> bool"
+where "Insider a C \<equiv> (tipping_point (astate a) \<longrightarrow> (\<forall> b\<in>C. UasI a b))"
+
+definition Insider' :: "[actor \<Rightarrow> bool, identity, identity set] \<Rightarrow> bool"
+where "Insider' P a C \<equiv> (tipping_point (astate a) \<longrightarrow> (\<forall> b\<in>C. UasI' P a b \<and> inj_on Actor C))"
+*)
+
+text \<open>The predicate atI -- mixfix syntax @{text \<open>@\<^bsub>G\<^esub>\<close>} -- expresses that an actor (identity)
+ is at a certain location in an igraph.\<close>
+definition atI :: "[identity, igraph, location] \<Rightarrow> bool" ("_ @\<^bsub>(_)\<^esub> _" 50)
+where "a @\<^bsub>G\<^esub> l \<equiv> a \<in> (agra G l)"
+
+text \<open>Policies specify the expected behaviour of actors of an infrastructure.
+They are defined by the @{text \<open>enables\<close>} predicate:
+an actor @{text \<open>h\<close>} is enabled to perform an action @{text \<open>a\<close>}
+in infrastructure @{text \<open>I\<close>}, at location @{text \<open>l\<close>}
+if there exists a pair @{text \<open>(p,e)\<close>} in the local policy of @{text \<open>l\<close>}
+(@{text \<open>delta I l\<close>} projects to the local policy) such that the action
+@{text \<open>a\<close>} is a member of the action set @{text \<open>e\<close>} and the policy
+predicate @{text \<open>p\<close>} holds for actor @{text \<open>h\<close>}.\<close>
+definition enables :: "[infrastructure, location, actor, action] \<Rightarrow> bool"
+where
+"enables I l a a' \<equiv> (\<exists> (p,e) \<in> delta I (graphI I) l. a' \<in> e \<and> p a)"
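+
+text \<open>A small sanity check (with a hypothetical policy entry, not one of the concrete application
+  scenarios): if the local policy of a location contains a pair whose predicate is vacuously true
+  and whose action set contains @{text \<open>move\<close>}, then any actor is enabled to move at that location.\<close>
+lemma enables_true_policy_example:
+  assumes "(\<lambda> y. True, {move}) \<in> delta I (graphI I) l"
+  shows "enables I l a move"
+  unfolding enables_def
+  by (rule bexI [OF _ assms]) simp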
+
+text \<open>The behaviour is the good behaviour, i.e. everything allowed by the policy of infrastructure I.\<close>
+definition behaviour :: "infrastructure \<Rightarrow> (location * actor * action)set"
+where "behaviour I \<equiv> {(t,a,a'). enables I t a a'}"
+
+text \<open>The misbehaviour is the complement of the behaviour of an infrastructure I.\<close>
+definition misbehaviour :: "infrastructure \<Rightarrow> (location * actor * action)set"
+where "misbehaviour I \<equiv> -(behaviour I)"
+
+subsection "State transition on infrastructures"
+text \<open>The state transition defines how actors may act on infrastructures through actions
+ within the boundaries of the policy. It is given as an inductive definition over the
+ states which are infrastructures. This state transition relation is dependent on actions but also on
+ enabledness and the current state of the infrastructure.
+
+ First we introduce some auxiliary functions dealing
+ with repetitions in lists and actors moving in an igraph.\<close>
+primrec jonce :: "['a, 'a list] \<Rightarrow> bool"
+where
+jonce_nil: "jonce a [] = False" |
+jonce_cons: "jonce a (x#ls) = (if x = a then (a \<notin> (set ls)) else jonce a ls)"
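+
+text \<open>For illustration (a purely hypothetical instance): @{text \<open>jonce a l\<close>} holds exactly when
+  @{text \<open>a\<close>} occurs exactly once in the list @{text \<open>l\<close>}.\<close>
+lemma jonce_example: "jonce (1::nat) [0, 1, 2] \<and> \<not> jonce (1::nat) [1, 1]"
+  by simp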
+(*
+primrec nodup :: "['a, 'a list] \<Rightarrow> bool"
+ where
+ nodup_nil: "nodup a [] = True" |
+ nodup_step: "nodup a (x # ls) = (if x = a then (a \<notin> (set ls)) else nodup a ls)"
+*)
+definition move_graph_a :: "[identity, location, location, igraph] \<Rightarrow> igraph"
+where "move_graph_a n l l' g \<equiv> Lgraph (gra g)
+ (if n \<in> ((agra g) l) & n \<notin> ((agra g) l') then
+ ((agra g)(l := (agra g l) - {n}))(l' := (insert n (agra g l')))
+ else (agra g))(cgra g)(lgra g)"
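+
+text \<open>A small sanity check for @{text \<open>move_graph_a\<close>} (purely illustrative): if an actor is at
+  @{text \<open>l\<close>} but not yet at @{text \<open>l'\<close>}, then after the move it is registered at @{text \<open>l'\<close>}.\<close>
+lemma move_graph_a_example:
+  "n \<in> agra g l \<Longrightarrow> n \<notin> agra g l' \<Longrightarrow> n \<in> agra (move_graph_a n l l' g) l'"
+  by (simp add: move_graph_a_def)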
+
+inductive state_transition_in :: "[infrastructure, infrastructure] \<Rightarrow> bool" ("(_ \<rightarrow>\<^sub>n _)" 50)
+where
+ move: "\<lbrakk> G = graphI I; a @\<^bsub>G\<^esub> l; l \<in> nodes G; l' \<in> nodes G;
+ (a) \<in> actors_graph(graphI I); enables I l' (Actor a) move;
+ I' = Infrastructure (move_graph_a a l l' (graphI I))(delta I) \<rbrakk> \<Longrightarrow> I \<rightarrow>\<^sub>n I'"
+| get : "\<lbrakk> G = graphI I; a @\<^bsub>G\<^esub> l; a' @\<^bsub>G\<^esub> l; has G (Actor a, z);
+ enables I l (Actor a) get;
+ I' = Infrastructure
+ (Lgraph (gra G)(agra G)
+ ((cgra G)(Actor a' :=
+ (insert z (fst(cgra G (Actor a'))), snd(cgra G (Actor a')))))
+ (lgra G))
+ (delta I)
+ \<rbrakk> \<Longrightarrow> I \<rightarrow>\<^sub>n I'"
+| get_data : "G = graphI I \<Longrightarrow> a @\<^bsub>G\<^esub> l \<Longrightarrow>
+ enables I l' (Actor a) get \<Longrightarrow>
+ ((Actor a', as), n) \<in> snd (lgra G l') \<Longrightarrow> Actor a \<in> as \<Longrightarrow>
+ I' = Infrastructure
+ (Lgraph (gra G)(agra G)(cgra G)
+ ((lgra G)(l := (fst (lgra G l),
+ snd (lgra G l) \<union> {((Actor a', as), n)}))))
+ (delta I)
+ \<Longrightarrow> I \<rightarrow>\<^sub>n I'"
+| process : "G = graphI I \<Longrightarrow> a @\<^bsub>G\<^esub> l \<Longrightarrow>
+ enables I l (Actor a) eval \<Longrightarrow>
+ ((Actor a', as), n) \<in> snd (lgra G l) \<Longrightarrow> Actor a \<in> as \<Longrightarrow>
+ I' = Infrastructure
+ (Lgraph (gra G)(agra G)(cgra G)
+ ((lgra G)(l := (fst (lgra G l),
+ snd (lgra G l) - {((Actor a', as), n)}
+ \<union> {(f :: label_fun) \<Updown> ((Actor a', as), n)}))))
+ (delta I)
+ \<Longrightarrow> I \<rightarrow>\<^sub>n I'"
+| del_data : "G = graphI I \<Longrightarrow> a \<in> actors G \<Longrightarrow> l \<in> nodes G \<Longrightarrow>
+ ((Actor a, as), n) \<in> snd (lgra G l) \<Longrightarrow>
+ I' = Infrastructure
+ (Lgraph (gra G)(agra G)(cgra G)
+ ((lgra G)(l := (fst (lgra G l), snd (lgra G l) - {((Actor a, as), n)}))))
+ (delta I)
+ \<Longrightarrow> I \<rightarrow>\<^sub>n I'"
+| put : "G = graphI I \<Longrightarrow> a @\<^bsub>G\<^esub> l \<Longrightarrow> enables I l (Actor a) put \<Longrightarrow>
+ I' = Infrastructure
+ (Lgraph (gra G)(agra G)(cgra G)
+ ((lgra G)(l := (s, snd (lgra G l) \<union> {((Actor a, as), n)}))))
+ (delta I)
+ \<Longrightarrow> I \<rightarrow>\<^sub>n I'"
+
+text \<open>Note that the type infrastructure can now be instantiated to the axiomatic type class
+ @{text\<open>state\<close>} which enables the use of the underlying Kripke structures and CTL.\<close>
+instantiation "infrastructure" :: state
+begin
+definition
+ state_transition_infra_def: "(i \<rightarrow>\<^sub>i i') = (i \<rightarrow>\<^sub>n (i' :: infrastructure))"
+
+instance
+ by (rule MC.class.MC.state.of_class.intro)
+
+definition state_transition_in_refl ("(_ \<rightarrow>\<^sub>n* _)" 50)
+where "s \<rightarrow>\<^sub>n* s' \<equiv> ((s,s') \<in> {(x,y). state_transition_in x y}\<^sup>*)"
+
+end
+
+lemma move_graph_eq: "move_graph_a a l l g = g"
+ by (simp add: move_graph_a_def, case_tac g, force)
+
+end
+
+
\ No newline at end of file
diff --git a/thys/Attack_Trees/MC.thy b/thys/Attack_Trees/MC.thy
new file mode 100644
--- /dev/null
+++ b/thys/Attack_Trees/MC.thy
@@ -0,0 +1,467 @@
+section "Kripke structures and CTL"
+
+text \<open>We apply Kripke structures and CTL to model state based systems and analyse properties under
+dynamic state changes. Snapshots of systems are the states on which we define a state transition.
+Temporal logic is then employed to express security and privacy properties.\<close>
+theory MC
+imports Main
+begin
+
+subsection "Lemmas to support least and greatest fixpoints"
+
+lemma predtrans_empty:
+ assumes "mono (\<tau> :: 'a set \<Rightarrow> 'a set)"
+ shows "\<forall> i. (\<tau> ^^ i) ({}) \<subseteq> (\<tau> ^^(i + 1))({})"
+ using assms funpow_decreasing le_add1 by blast
+
+lemma ex_card: "finite S \<Longrightarrow> \<exists> n:: nat. card S = n"
+by simp
+
+lemma less_not_le: "\<lbrakk>(x:: nat) < y; y \<le> x\<rbrakk> \<Longrightarrow> False"
+by arith
+
+lemma infchain_outruns_all:
+ assumes "finite (UNIV :: 'a set)"
+ and "\<forall>i :: nat. ((\<tau> :: 'a set \<Rightarrow> 'a set)^^ i) ({}:: 'a set) \<subset> (\<tau> ^^ (i + 1)) {}"
+ shows "\<forall>j :: nat. \<exists>i :: nat. j < card ((\<tau> ^^ i) {})"
+proof (rule allI, induct_tac j)
+ show "\<exists>i. 0 < card ((\<tau> ^^ i) {})" using assms
+ by (metis bot.not_eq_extremum card_gt_0_iff finite_subset subset_UNIV)
+next show "\<And>j n. \<exists>i. n < card ((\<tau> ^^ i) {})
+ \<Longrightarrow> \<exists>i. Suc n < card ((\<tau> ^^ i) {})"
+ proof -
+ fix j n
+ assume a: "\<exists>i. n < card ((\<tau> ^^ i) {})"
+ obtain i where "n < card ((\<tau> ^^ i) {})"
+ using a by blast
+ thus "\<exists> i. Suc n < card ((\<tau> ^^ i) {})" using assms
+ by (meson finite_subset le_less_trans le_simps(3) psubset_card_mono subset_UNIV)
+ qed
+ qed
+
+lemma no_infinite_subset_chain:
+ assumes "finite (UNIV :: 'a set)"
+ and "mono (\<tau> :: ('a set \<Rightarrow> 'a set))"
+ and "\<forall>i :: nat. ((\<tau> :: 'a set \<Rightarrow> 'a set) ^^ i) {} \<subset> (\<tau> ^^ (i + (1 :: nat))) ({} :: 'a set)"
+ shows "False"
+text \<open>Proof idea: since @{term "UNIV"} is finite, we have from @{text \<open>ex_card\<close>} that there is
+    an n with @{term "card UNIV = n"}. Now, use @{text \<open>infchain_outruns_all\<close>} to show,
+    towards a contradiction, that
+    @{term "\<exists> i :: nat. card UNIV < card ((\<tau> ^^ i) {})"}.
+    Since all sets are subsets of @{term "UNIV"}, we also have
+    @{term "card ((\<tau> ^^ i) {}) \<le> card UNIV"}:
+    a contradiction, i.e. a proof of False.\<close>
+proof -
+ have a: "\<forall> (j :: nat). (\<exists> (i :: nat). (j :: nat) < card((\<tau> ^^ i)({} :: 'a set)))" using assms
+ by (erule_tac \<tau> = \<tau> in infchain_outruns_all)
+ hence b: "\<exists> (n :: nat). card(UNIV :: 'a set) = n" using assms
+ by (erule_tac S = UNIV in ex_card)
+ from this obtain n where c: "card(UNIV :: 'a set) = n" by (erule exE)
+ hence d: "\<exists>i. card UNIV < card ((\<tau> ^^ i) {})" using a
+ by (drule_tac x = "card UNIV" in spec)
+ from this obtain i where e: "card (UNIV :: 'a set) < card ((\<tau> ^^ i) {})"
+ by (erule exE)
+ hence f: "(card((\<tau> ^^ i){})) \<le> (card (UNIV :: 'a set))" using assms
+ apply (erule_tac A = "((\<tau> ^^ i){})" in Finite_Set.card_mono)
+ by (rule subset_UNIV)
+ thus "False" using e
+ by (erule_tac y = "card((\<tau> ^^ i){})" in less_not_le)
+qed
+
+lemma finite_fixp:
+ assumes "finite(UNIV :: 'a set)"
+ and "mono (\<tau> :: ('a set \<Rightarrow> 'a set))"
+ shows "\<exists> i. (\<tau> ^^ i) ({}) = (\<tau> ^^(i + 1))({})"
+text \<open>Proof idea:
+with @{text predtrans_empty} we know
+
+@{term "\<forall> i. (\<tau> ^^ i){} \<subseteq> (\<tau> ^^(i + 1))({})"} (1).
+
+If we can additionally show
+
+@{term "\<exists> i. (\<tau> ^^ i)({}) \<supseteq> (\<tau> ^^(i + 1))({})"} (2),
+
+we can get the goal together with equalityI
+@{text "\<subseteq> + \<supseteq> \<longrightarrow> ="}.
+To prove (2) we observe that
+@{term "(\<tau> ^^ i)({}) \<supseteq> (\<tau> ^^(i + 1))({})"}
+can be inferred from
+@{term "\<not>((\<tau> ^^ i)({}) \<subset> (\<tau> ^^(i + 1))({}))"}
+and (1).
+Finally, the latter is solved directly by @{text \<open>no_infinite_subset_chain\<close>}.\<close>
+proof -
+ have a: "\<forall>i. (\<tau> ^^ i) ({}:: 'a set) \<subseteq> (\<tau> ^^ (i + (1))) {}"
+ by(rule predtrans_empty, rule assms(2))
+ have a3: "\<not> (\<forall> i :: nat. (\<tau> ^^ i) {} \<subset> (\<tau> ^^(i + 1)) {})"
+ by (rule notI, rule no_infinite_subset_chain, (rule assms)+)
+ hence b: "(\<exists> i :: nat. \<not>((\<tau> ^^ i) {} \<subset> (\<tau> ^^(i + 1)) {}))" using assms a3
+ by blast
+ thus "\<exists> i. (\<tau> ^^ i) ({}) = (\<tau> ^^(i + 1))({})" using a
+ by blast
+qed
+
+lemma predtrans_UNIV:
+ assumes "mono (\<tau> :: ('a set \<Rightarrow> 'a set))"
+ shows "\<forall> i. (\<tau> ^^ i) (UNIV) \<supseteq> (\<tau> ^^(i + 1))(UNIV)"
+proof (rule allI, induct_tac i)
+ show "(\<tau> ^^ ((0) + (1))) UNIV \<subseteq> (\<tau> ^^ (0)) UNIV"
+ by simp
+next show "\<And>(i) n.
+ (\<tau> ^^ (n + (1))) UNIV \<subseteq> (\<tau> ^^ n) UNIV \<Longrightarrow> (\<tau> ^^ (Suc n + (1))) UNIV \<subseteq> (\<tau> ^^ Suc n) UNIV"
+ proof -
+ fix i n
+ assume a: "(\<tau> ^^ (n + (1))) UNIV \<subseteq> (\<tau> ^^ n) UNIV"
+ have "(\<tau> ((\<tau> ^^ n) UNIV)) \<supseteq> (\<tau> ((\<tau> ^^ (n + (1 :: nat))) UNIV))" using assms a
+ by (rule monoE)
+ thus "(\<tau> ^^ (Suc n + (1))) UNIV \<subseteq> (\<tau> ^^ Suc n) UNIV" by simp
+ qed
+ qed
+
+lemma Suc_less_le: "x < (y - n) \<Longrightarrow> x \<le> (y - (Suc n))"
+ by simp
+
+lemma card_univ_subtract:
+ assumes "finite (UNIV :: 'a set)" and "mono \<tau>"
+ and "(\<forall>i :: nat. ((\<tau> :: 'a set \<Rightarrow> 'a set) ^^ (i + (1 :: nat)))(UNIV :: 'a set) \<subset> (\<tau> ^^ i) UNIV)"
+ shows "(\<forall> i :: nat. card((\<tau> ^^ i) (UNIV ::'a set)) \<le> (card (UNIV :: 'a set)) - i)"
+proof (rule allI, induct_tac i)
+ show "card ((\<tau> ^^ (0)) UNIV) \<le> card (UNIV :: 'a set) - (0)" using assms
+ by (simp)
+next show "\<And>(i) n.
+ card ((\<tau> ^^ n) (UNIV :: 'a set)) \<le> card (UNIV :: 'a set) - n \<Longrightarrow>
+ card ((\<tau> ^^ Suc n) (UNIV :: 'a set)) \<le> card (UNIV :: 'a set) - Suc n" using assms
+ proof -
+ fix i n
+ assume a: "card ((\<tau> ^^ n) (UNIV :: 'a set)) \<le> card (UNIV :: 'a set) - n"
+ have b: "(\<tau> ^^ (n + (1)))(UNIV :: 'a set) \<subset> (\<tau> ^^ n) UNIV" using assms
+ by (erule_tac x = n in spec)
+ have "card((\<tau> ^^ (n + (1 :: nat)))(UNIV :: 'a set)) < card((\<tau> ^^ n) (UNIV :: 'a set))"
+ by (rule psubset_card_mono, rule finite_subset, rule subset_UNIV, rule assms(1), rule b)
+ thus "card ((\<tau> ^^ Suc n) (UNIV :: 'a set)) \<le> card (UNIV :: 'a set) - Suc n" using a
+ by simp
+ qed
+ qed
+
+lemma card_UNIV_tau_i_below_zero:
+ assumes "finite (UNIV :: 'a set)" and "mono \<tau>"
+ and "(\<forall>i :: nat. ((\<tau> :: ('a set \<Rightarrow> 'a set)) ^^ (i + (1 :: nat)))(UNIV :: 'a set) \<subset> (\<tau> ^^ i) UNIV)"
+ shows "card((\<tau> ^^ (card (UNIV ::'a set))) (UNIV ::'a set)) \<le> 0"
+proof -
+ have "(\<forall> i :: nat. card((\<tau> ^^ i) (UNIV ::'a set)) \<le> (card (UNIV :: 'a set)) - i)" using assms
+ by (rule card_univ_subtract)
+ thus "card((\<tau> ^^ (card (UNIV ::'a set))) (UNIV ::'a set)) \<le> 0"
+ by (drule_tac x = "card (UNIV ::'a set)" in spec, simp)
+qed
+
+lemma finite_card_zero_empty: "\<lbrakk> finite S; card S \<le> 0\<rbrakk> \<Longrightarrow> S = {}"
+by simp
+
+lemma UNIV_tau_i_is_empty:
+ assumes "finite (UNIV :: 'a set)" and "mono (\<tau> :: ('a set \<Rightarrow> 'a set))"
+ and "(\<forall>i :: nat. ((\<tau> :: 'a set \<Rightarrow> 'a set) ^^ (i + (1 :: nat)))(UNIV :: 'a set) \<subset> (\<tau> ^^ i) UNIV)"
+ shows "(\<tau> ^^ (card (UNIV ::'a set))) (UNIV ::'a set) = {}"
+ by (meson assms card_UNIV_tau_i_below_zero finite_card_zero_empty finite_subset subset_UNIV)
+
+lemma down_chain_reaches_empty:
+ assumes "finite (UNIV :: 'a set)" and "mono (\<tau> :: 'a set \<Rightarrow> 'a set)"
+ and "(\<forall>i :: nat. ((\<tau> :: 'a set \<Rightarrow> 'a set) ^^ (i + (1 :: nat))) UNIV \<subset> (\<tau> ^^ i) UNIV)"
+ shows "\<exists> (j :: nat). (\<tau> ^^ j) UNIV = {}"
+ using UNIV_tau_i_is_empty assms by blast
+
+lemma no_infinite_subset_chain2:
+ assumes "finite (UNIV :: 'a set)" and "mono (\<tau> :: ('a set \<Rightarrow> 'a set))"
+ and "\<forall>i :: nat. (\<tau> ^^ i) UNIV \<supset> (\<tau> ^^ (i + (1 :: nat))) UNIV"
+ shows "False"
+proof -
+ have "\<exists> j :: nat. (\<tau> ^^ j) UNIV = {}" using assms
+ by (rule down_chain_reaches_empty)
+ from this obtain j where a: "(\<tau> ^^ j) UNIV = {}" by (erule exE)
+ have "(\<tau> ^^ (j + (1))) UNIV \<subset> (\<tau> ^^ j) UNIV" using assms
+ by (erule_tac x = j in spec)
+ thus False using a by simp
+qed
+
+lemma finite_fixp2:
+ assumes "finite(UNIV :: 'a set)" and "mono (\<tau> :: ('a set \<Rightarrow> 'a set))"
+ shows "\<exists> i. (\<tau> ^^ i) UNIV = (\<tau> ^^(i + 1)) UNIV"
+proof -
+ have "\<forall>i. (\<tau> ^^ (i + (1))) UNIV \<subseteq> (\<tau> ^^ i) UNIV"
+ by (rule predtrans_UNIV , simp add: assms(2))
+ moreover have "\<exists>i. \<not> (\<tau> ^^ (i + (1))) UNIV \<subset> (\<tau> ^^ i) UNIV" using assms
+ proof -
+ have "\<not> (\<forall> i :: nat. (\<tau> ^^ i) UNIV \<supset> (\<tau> ^^(i + 1)) UNIV)"
+ using assms(1) assms(2) no_infinite_subset_chain2 by blast
+ thus "\<exists>i. \<not> (\<tau> ^^ (i + (1))) UNIV \<subset> (\<tau> ^^ i) UNIV" by blast
+ qed
+ ultimately show "\<exists> i. (\<tau> ^^ i) UNIV = (\<tau> ^^(i + 1)) UNIV"
+ by blast
+qed
+
+lemma lfp_loop:
+ assumes "finite (UNIV :: 'b set)" and "mono (\<tau> :: ('b set \<Rightarrow> 'b set))"
+ shows "\<exists> n . lfp \<tau> = (\<tau> ^^ n) {}"
+proof -
+ have "\<exists>i. (\<tau> ^^ i) {} = (\<tau> ^^ (i + (1))) {}" using assms
+ by (rule finite_fixp)
+ from this obtain i where " (\<tau> ^^ i) {} = (\<tau> ^^ (i + (1))) {}"
+ by (erule exE)
+ hence "(\<tau> ^^ i) {} = (\<tau> ^^ Suc i) {}"
+ by simp
+ hence "(\<tau> ^^ Suc i) {} = (\<tau> ^^ i) {}"
+ by (rule sym)
+ hence "lfp \<tau> = (\<tau> ^^ i) {}"
+ by (simp add: assms(2) lfp_Kleene_iter)
+ thus "\<exists> n . lfp \<tau> = (\<tau> ^^ n) {}"
+ by (rule exI)
+qed
+
+text \<open>These next two are repeated from the corresponding
+ theorems in HOL/ZF/Nat.thy for the sake of self-containedness of the exposition.\<close>
+lemma Kleene_iter_gpfp:
+ assumes "mono f" and "p \<le> f p" shows "p \<le> (f^^k) (top::'a::order_top)"
+proof(induction k)
+ case 0 show ?case by simp
+next
+ case Suc
+ from monoD[OF assms(1) Suc] assms(2)
+ show ?case by simp
+qed
+
+lemma gfp_loop:
+ assumes "finite (UNIV :: 'b set)"
+ and "mono (\<tau> :: ('b set \<Rightarrow> 'b set))"
+ shows "\<exists> n . gfp \<tau> = (\<tau> ^^ n)UNIV"
+proof -
+ have " \<exists>i. (\<tau> ^^ i)(UNIV :: 'b set) = (\<tau> ^^ (i + (1))) UNIV" using assms
+ by (rule finite_fixp2)
+ from this obtain i where "(\<tau> ^^ i)UNIV = (\<tau> ^^ (i + (1))) UNIV" by (erule exE)
+ thus "\<exists> n . gfp \<tau> = (\<tau> ^^ n)UNIV" using assms
+ by (metis Suc_eq_plus1 gfp_Kleene_iter)
+qed
+
+subsection \<open>Generic type of state with state transition and CTL operators\<close>
+text \<open>The system states and their transition relation are defined as a class called
+  @{text \<open>state\<close>} containing an abstract constant @{text \<open>state_transition\<close>}. It introduces the
+syntactic infix notation @{text \<open>I \<rightarrow>\<^sub>i I'\<close>} to denote that system states @{text \<open>I\<close>} and @{text \<open>I'\<close>}
+are in this relation over an arbitrary (polymorphic) type @{text \<open>'a\<close>}.\<close>
+class state =
+ fixes state_transition :: "['a :: type, 'a] \<Rightarrow> bool" (infixr "\<rightarrow>\<^sub>i" 50)
+
+text \<open>The above class definition lifts Kripke structures and CTL to a general level.
+The definition of the inductive relation is given by a set of specific rules which are,
+however, part of an application like infrastructures. Branching time temporal logic CTL
+is defined in general over Kripke structures with arbitrary state transitions and can later
+be applied to suitable theories, like infrastructures.
+Based on the generic state transition @{text \<open>\<rightarrow>\<^sub>i\<close>} of the type class state, the CTL operators
+EX and AX express that property f holds in some or all next states, respectively.\<close>
+
+definition AX where "AX f \<equiv> {s. {f0. s \<rightarrow>\<^sub>i f0} \<subseteq> f}"
+definition EX' where "EX' f \<equiv> {s . \<exists> f0 \<in> f. s \<rightarrow>\<^sub>i f0 }"
+
+text \<open>The CTL formula @{text \<open>AG f\<close>} means that on all paths branching from a state @{text \<open>s\<close>}
+the formula @{text \<open>f\<close>} is always true (@{text \<open>G\<close>} stands for ‘globally’). It can be defined
+using the Tarski fixpoint theory by applying the greatest fixpoint operator. In a similar way,
+the other CTL operators are defined.\<close>
+definition AF where "AF f \<equiv> lfp (\<lambda> Z. f \<union> AX Z)"
+definition EF where "EF f \<equiv> lfp (\<lambda> Z. f \<union> EX' Z)"
+definition AG where "AG f \<equiv> gfp (\<lambda> Z. f \<inter> AX Z)"
+definition EG where "EG f \<equiv> gfp (\<lambda> Z. f \<inter> EX' Z)"
+definition AU where "AU f1 f2 \<equiv> lfp(\<lambda> Z. f2 \<union> (f1 \<inter> AX Z))"
+definition EU where "EU f1 f2 \<equiv> lfp(\<lambda> Z. f2 \<union> (f1 \<inter> EX' Z))"
+definition AR where "AR f1 f2 \<equiv> gfp(\<lambda> Z. f2 \<inter> (f1 \<union> AX Z))"
+definition ER where "ER f1 f2 \<equiv> gfp(\<lambda> Z. f2 \<inter> (f1 \<union> EX' Z))"
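+
+text \<open>The fixpoint operators above only yield the intended CTL semantics because the underlying
+  functionals are monotone. As a small sanity check (purely illustrative; the proofs below
+  establish monotonicity inline where it is needed), monotonicity of @{text \<open>EX'\<close>} and
+  @{text \<open>AX\<close>} follows directly from the definitions.\<close>
+lemma EX'_mono_example: "f \<subseteq> g \<Longrightarrow> EX' f \<subseteq> EX' g"
+  by (force simp add: EX'_def)
+
+lemma AX_mono_example: "f \<subseteq> g \<Longrightarrow> AX f \<subseteq> AX g"
+  by (force simp add: AX_def)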
+
+subsection \<open>Kripke structures and Modelchecking\<close>
+datatype 'a kripke =
+ Kripke "'a set" "'a set"
+
+primrec states where "states (Kripke S I) = S"
+primrec init where "init (Kripke S I) = I"
+
+text \<open>The formal Isabelle definition of what it means that formula f holds
+in a Kripke structure M can be stated as follows: the initial states @{text \<open>init M\<close>} of the
+Kripke structure need to be contained in the set of all those states in @{text \<open>states M\<close>} that
+satisfy f.\<close>
+definition check ("_ \<turnstile> _" 50)
+ where "M \<turnstile> f \<equiv> (init M) \<subseteq> {s \<in> (states M). s \<in> f }"
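+
+text \<open>A minimal illustration of the @{text \<open>check\<close>} definition (with a hypothetical one-state
+  structure): a Kripke structure whose only state is also its only initial state satisfies
+  every property containing that state.\<close>
+lemma check_single_state_example: "s \<in> f \<Longrightarrow> Kripke {s} {s} \<turnstile> f"
+  by (auto simp add: check_def)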
+
+definition state_transition_refl (infixr "\<rightarrow>\<^sub>i*" 50)
+where "s \<rightarrow>\<^sub>i* s' \<equiv> ((s,s') \<in> {(x,y). state_transition x y}\<^sup>*)"
+
+subsection \<open>Lemmas for CTL operators\<close>
+
+subsubsection \<open>EF lemmas\<close>
+lemma EF_lem0: "(x \<in> EF f) = (x \<in> f \<union> EX' (lfp (\<lambda>Z :: ('a :: state) set. f \<union> EX' Z)))"
+proof -
+ have "lfp (\<lambda>Z :: ('a :: state) set. f \<union> EX' Z) =
+ f \<union> (EX' (lfp (\<lambda>Z :: 'a set. f \<union> EX' Z)))"
+ by (rule def_lfp_unfold, rule reflexive, unfold mono_def EX'_def, auto)
+ thus "(x \<in> EF (f :: ('a :: state) set)) = (x \<in> f \<union> EX' (lfp (\<lambda>Z :: ('a :: state) set. f \<union> EX' Z)))"
+ by (simp add: EF_def)
+qed
+
+lemma EF_lem00: "(EF f) = (f \<union> EX' (lfp (\<lambda> Z :: ('a :: state) set. f \<union> EX' Z)))"
+ by (auto simp: EF_lem0)
+
+lemma EF_lem000: "(EF f) = (f \<union> EX' (EF f))"
+ by (metis EF_def EF_lem00)
+
+lemma EF_lem1: "x \<in> f \<or> x \<in> (EX' (EF f)) \<Longrightarrow> x \<in> EF f"
+proof (simp add: EF_def)
+ assume a: "x \<in> f \<or> x \<in> EX' (lfp (\<lambda>Z::'a set. f \<union> EX' Z))"
+ show "x \<in> lfp (\<lambda>Z::'a set. f \<union> EX' Z)"
+ proof -
+ have b: "lfp (\<lambda>Z :: ('a :: state) set. f \<union> EX' Z) =
+ f \<union> (EX' (lfp (\<lambda>Z :: ('a :: state) set. f \<union> EX' Z)))"
+ using EF_def EF_lem00 by blast
+ thus "x \<in> lfp (\<lambda>Z::'a set. f \<union> EX' Z)" using a
+ by (subst b, blast)
+ qed
+qed
+
+lemma EF_lem2b:
+ assumes "x \<in> (EX' (EF f))"
+ shows "x \<in> EF f"
+ by (simp add: EF_lem1 assms)
+
+lemma EF_lem2a: assumes "x \<in> f" shows "x \<in> EF f"
+ by (simp add: EF_lem1 assms)
+
+lemma EF_lem2c: assumes "x \<notin> f" shows "x \<in> EF (- f)"
+ by (simp add: EF_lem1 assms)
+
+lemma EF_lem2d: assumes "x \<notin> EF f" shows "x \<notin> f"
+ using EF_lem1 assms by auto
+
+lemma EF_lem3b: assumes "x \<in> EX' (f \<union> EX' (EF f))" shows "x \<in> (EF f)"
+ by (metis EF_lem000 EF_lem2b assms)
+
+lemma EX_lem0l: "x \<in> (EX' f) \<Longrightarrow> x \<in> (EX' (f \<union> g))"
+ by (auto simp: EX'_def)
+
+lemma EX_lem0r: "x \<in> (EX' g) \<Longrightarrow> x \<in> (EX' (f \<union> g))"
+ by (auto simp: EX'_def)
+
+lemma EX_step: assumes "x \<rightarrow>\<^sub>i y" and "y \<in> f" shows "x \<in> EX' f"
+ using assms by (auto simp: EX'_def)
+
+lemma EF_E[rule_format]: "\<forall> f. x \<in> (EF f) \<longrightarrow> x \<in> (f \<union> EX' (EF f))"
+ using EF_lem000 by blast
+
+lemma EF_step: assumes "x \<rightarrow>\<^sub>i y" and "y \<in> f" shows "x \<in> EF f"
+ using EF_lem3b EX_step assms by blast
+
+lemma EF_step_step: assumes "x \<rightarrow>\<^sub>i y" and "y \<in> EF f" shows "x \<in> EF f"
+ using EF_lem2b EX_step assms by blast
+
+lemma EF_step_star: "\<lbrakk> x \<rightarrow>\<^sub>i* y; y \<in> f \<rbrakk> \<Longrightarrow> x \<in> EF f"
+proof (simp add: state_transition_refl_def)
+ show "(x, y) \<in> {(x::'a, y::'a). x \<rightarrow>\<^sub>i y}\<^sup>* \<Longrightarrow> y \<in> f \<Longrightarrow> x \<in> EF f"
+ proof (erule converse_rtrancl_induct)
+ show "y \<in> f \<Longrightarrow> y \<in> EF f"
+ by (erule EF_lem2a)
+ next show "\<And>ya z::'a. y \<in> f \<Longrightarrow>
+ (ya, z) \<in> {(x,y). x \<rightarrow>\<^sub>i y} \<Longrightarrow>
+ (z, y) \<in> {(x,y). x \<rightarrow>\<^sub>i y}\<^sup>* \<Longrightarrow> z \<in> EF f \<Longrightarrow> ya \<in> EF f"
+ by (simp add: EF_step_step)
+ qed
+ qed
+
+lemma EF_induct: "(a::'a::state) \<in> EF f \<Longrightarrow>
+ mono (\<lambda> Z. f \<union> EX' Z) \<Longrightarrow>
+ (\<And>x. x \<in> ((\<lambda> Z. f \<union> EX' Z)(EF f \<inter> {x::'a::state. P x})) \<Longrightarrow> P x) \<Longrightarrow>
+ P a"
+ by (metis (mono_tags, lifting) EF_def def_lfp_induct_set)
+
+lemma valEF_E: "M \<turnstile> EF f \<Longrightarrow> x \<in> init M \<Longrightarrow> x \<in> EF f"
+ by (auto simp: check_def)
+
+lemma EF_step_star_rev[rule_format]: "x \<in> EF s \<Longrightarrow> (\<exists> y \<in> s. x \<rightarrow>\<^sub>i* y)"
+proof (erule EF_induct)
+ show "mono (\<lambda>Z::'a set. s \<union> EX' Z)"
+ by (force simp add: mono_def EX'_def)
+next show "\<And>x::'a. x \<in> s \<union> EX' (EF s \<inter> {x::'a. \<exists>y::'a\<in>s. x \<rightarrow>\<^sub>i* y}) \<Longrightarrow> \<exists>y::'a\<in>s. x \<rightarrow>\<^sub>i* y"
+ apply (erule UnE)
+ using state_transition_refl_def apply auto[1]
+ by (auto simp add: EX'_def state_transition_refl_def intro: converse_rtrancl_into_rtrancl)
+qed
+
+lemma EF_step_inv: "(I \<subseteq> {sa::'s :: state. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> EF s})
+ \<Longrightarrow> \<forall> x \<in> I. \<exists> y \<in> s. x \<rightarrow>\<^sub>i* y"
+ using EF_step_star_rev by fastforce
+
+subsubsection \<open>AG lemmas\<close>
+
+lemma AG_in_lem: "x \<in> AG s \<Longrightarrow> x \<in> s"
+ by (auto simp add: AG_def gfp_def)
+
+lemma AG_lem1: "x \<in> s \<and> x \<in> (AX (AG s)) \<Longrightarrow> x \<in> AG s"
+proof (simp add: AG_def)
+ have "gfp (\<lambda>Z::'a set. s \<inter> AX Z) = s \<inter> (AX (gfp (\<lambda>Z::'a set. s \<inter> AX Z)))"
+ by (rule def_gfp_unfold) (auto simp: mono_def AX_def)
+ then show "x \<in> s \<and> x \<in> AX (gfp (\<lambda>Z::'a set. s \<inter> AX Z)) \<Longrightarrow> x \<in> gfp (\<lambda>Z::'a set. s \<inter> AX Z)"
+ by blast
+qed
+
+lemma AG_lem2: "x \<in> AG s \<Longrightarrow> x \<in> (s \<inter> (AX (AG s)))"
+proof -
+ have a: "AG s = s \<inter> (AX (AG s))"
+ unfolding AG_def
+ by (rule def_gfp_unfold) (auto simp: mono_def AX_def)
+ thus "x \<in> AG s \<Longrightarrow> x \<in> (s \<inter> (AX (AG s)))"
+ by (erule subst)
+qed
+
+lemma AG_lem3: "AG s = (s \<inter> (AX (AG s)))"
+ using AG_lem1 AG_lem2 by blast
+
+lemma AG_step: "y \<rightarrow>\<^sub>i z \<Longrightarrow> y \<in> AG s \<Longrightarrow> z \<in> AG s"
+ using AG_lem2 AX_def by blast
+
+lemma AG_all_s: " x \<rightarrow>\<^sub>i* y \<Longrightarrow> x \<in> AG s \<Longrightarrow> y \<in> AG s"
+proof (simp add: state_transition_refl_def)
+ show "(x, y) \<in> {(x,y). x \<rightarrow>\<^sub>i y}\<^sup>* \<Longrightarrow> x \<in> AG s \<Longrightarrow> y \<in> AG s"
+ by (erule rtrancl_induct) (auto simp add: AG_step)
+qed
+
+lemma AG_imp_notnotEF:
+"I \<noteq> {} \<Longrightarrow> ((Kripke {s. \<exists> i \<in> I. (i \<rightarrow>\<^sub>i* s)} I \<turnstile> AG s)) \<Longrightarrow>
+ (\<not>(Kripke {s. \<exists> i \<in> I. (i \<rightarrow>\<^sub>i* s)} (I :: ('s :: state)set) \<turnstile> EF (- s)))"
+proof (rule notI, simp add: check_def)
+ assume a0: "I \<noteq> {}" and
+ a1: "I \<subseteq> {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> AG s}" and
+ a2: "I \<subseteq> {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> EF (- s)}"
+ show "False"
+ proof -
+ have a3: "{sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> AG s} \<inter>
+ {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> EF (- s)} = {}"
+ proof -
+ have "(? x. x \<in> {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> AG s} \<and>
+ x \<in> {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> EF (- s)}) \<Longrightarrow> False"
+ proof -
+ assume a4: "(? x. x \<in> {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> AG s} \<and>
+ x \<in> {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> EF (- s)})"
+ from a4 obtain x where a5: "x \<in> {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> AG s} \<and>
+ x \<in> {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> EF (- s)}"
+ by (erule exE)
+ thus "False"
+ by (metis (mono_tags, lifting) AG_all_s AG_in_lem ComplD EF_step_star_rev a5 mem_Collect_eq)
+ qed
+ thus "{sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> AG s} \<inter>
+ {sa::'s. (\<exists>i\<in>I. i \<rightarrow>\<^sub>i* sa) \<and> sa \<in> EF (- s)} = {}"
+ by blast
+ qed
+ moreover have b: "? x. x : I" using a0
+ by blast
+ moreover obtain x where "x \<in> I"
+ using b by blast
+ ultimately show "False" using a0 a1 a2
+ by blast
+ qed
+qed
+
+text \<open>A simplified way of Modelchecking is given by the following lemma.\<close>
+lemma check2_def: "(Kripke S I \<turnstile> f) = (I \<subseteq> S \<inter> f)"
+ by (auto simp add: check_def)
+
+end
\ No newline at end of file
diff --git a/thys/Attack_Trees/ROOT b/thys/Attack_Trees/ROOT
new file mode 100644
--- /dev/null
+++ b/thys/Attack_Trees/ROOT
@@ -0,0 +1,12 @@
+chapter AFP
+
+session "Attack_Trees" (AFP) = "HOL" +
+ options [timeout = 300]
+theories
+ "MC"
+ "AT"
+ "Infrastructure"
+ "GDPRhealthcare"
+document_files
+ "root.bib"
+ "root.tex"
diff --git a/thys/Attack_Trees/document/root.bib b/thys/Attack_Trees/document/root.bib
new file mode 100644
--- /dev/null
+++ b/thys/Attack_Trees/document/root.bib
@@ -0,0 +1,92 @@
+@book{npw:02,
+author={Tobias Nipkow and Lawrence Paulson and Markus Wenzel},
+title="Isabelle/HOL --- A Proof Assistant for Higher-Order Logic",
+publisher={Springer},
+series= {LNCS},
+volume= {2283},
+year= {2002},
+note={\url{http://www.in.tum.de/~nipkow/LNCS2283/}}}
+@article{kp:16,
+author = {F. Kamm\"uller and C. W. Probst},
+title = {Modeling and Verification of Insider Threats Using Logical Analysis},
+journal = {IEEE Systems Journal, Special issue on Insider Threats to
+ Information Security, Digital Espionage, and Counter
+ Intelligence},
+volume = {11},
+number = {2},
+pages = {534--545},
+ doi = {10.1109/JSYST.2015.2453215},
+ url = {http://dx.doi.org/10.1109/JSYST.2015.2453215},
+ year = 2017,
+}
+@inproceedings{kk:16,
+ author = {Florian Kamm\"uller and
+ Manfred Kerber},
+ title = {Investigating Airplane Safety and Security against Insider Threats Using Logical Modeling},
+ booktitle = {IEEE Security and Privacy Workshops, Workshop on Research in Insider Threats, WRIT'16},
+ year = {2016},
+ publisher = {IEEE}
+}
+@proceedings{writ16,
+ title = {Proceedings of the fourth IEEE Workshop on Research in Insider Threats, WRIT'16},
+ booktitle = {WRIT'16},
+ publisher = {IEEE},
+ year = {2016}
+}
+@misc{mw:09,
+author = {Makarius Wenzel},
+title = {Re: [isabelle] typedecl versus explicit type parameters},
+note = {Isabelle users mailing list},
+url = {https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2009-July/msg00111.html},
+year = {2009}
+}
+@misc{kk:20,
+ title={Applying the Isabelle Insider Framework to Airplane Security},
+ author={Florian Kamm\"uller and Manfred Kerber},
+ year={2020},
+ eprint={2003.11838},
+ archivePrefix={arXiv},
+ primaryClass={cs.SE},
+ note = {arXiv preprint 2003.11838},
+ url = {https://arxiv.org/abs/2003.11838}
+}
+@inproceedings{kam:16b,
+author = {F. Kamm\"uller},
+title = {Isabelle Modelchecking for Insider Threats},
+booktitle = {Data Privacy Management, DPM'16, 11th Int. Workshop},
+note = {Co-located with ESORICS'16},
+series = {LNCS},
+volume = {9963},
+publisher = {Springer},
+year = {2016}
+}
+@Inproceedings{kam:18a,
+author = {F. Kamm\"uller},
+title = {Formal Modeling and Analysis of Data Protection
+ for GDPR Compliance of IoT Healthcare Systems},
+booktitle = {IEEE Systems, Man and Cybernetics, SMC2018},
+publisher = {IEEE},
+year = {2018}
+}
+@Inproceedings{kam:18b,
+author = {F. Kamm\"uller},
+title = {Attack Trees in Isabelle},
+booktitle = {20th International Conference on Information and Communications Security, ICICS2018},
+publisher = {Springer},
+series = {LNCS},
+volume = {11149},
+year = {2018}
+}
+@InProceedings{ml:98,
+ author = {A. C. Myers and B. Liskov},
+ title = {Complete, Safe Information Flow with Decentralized Labels},
+ booktitle = {Proceedings of the IEEE Symposium on Security and Privacy},
+ year = {1998},
+ publisher = {IEEE}
+}
+@misc{suc:16,
+author = {CHIST-ERA},
+title = {SUCCESS: SecUre aCCESSibility for the internet of things},
+note = {http://www.chistera.eu/projects/success},
+year = {2016}
+}
diff --git a/thys/Attack_Trees/document/root.tex b/thys/Attack_Trees/document/root.tex
new file mode 100644
--- /dev/null
+++ b/thys/Attack_Trees/document/root.tex
@@ -0,0 +1,78 @@
+\documentclass[11pt,a4paper]{article}
+\usepackage{isabelle,isabellesym}
+
+% further packages required for unusual symbols (see also
+% isabellesym.sty), use only when needed
+
+%\usepackage{amssymb}
+ %for \<leadsto>, \<box>, \<diamond>, \<sqsupset>, \<mho>, \<Join>,
+ %\<lhd>, \<lesssim>, \<greatersim>, \<lessapprox>, \<greaterapprox>,
+ %\<triangleq>, \<yen>, \<lozenge>
+
+%\usepackage{eurosym}
+ %for \<euro>
+
+%\usepackage[only,bigsqcap]{stmaryrd}
+ %for \<Sqinter>
+
+%\usepackage{eufrak}
+ %for \<AA> ... \<ZZ>, \<aa> ... \<zz> (also included in amssymb)
+
+%\usepackage{textcomp}
+ %for \<onequarter>, \<onehalf>, \<threequarters>, \<degree>, \<cent>,
+ %\<currency>
+
+% this should be the last package used
+\usepackage{pdfsetup}
+
+% urls in roman style, theory text in math-similar italics
+\urlstyle{rm}
+\isabellestyle{it}
+
+% for uniform font size
+%\renewcommand{\isastyle}{\isastyleminor}
+
+
+\begin{document}
+
+
+\title{Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems}
+\author{Florian Kamm\"uller}
+
+\maketitle
+
+\begin{abstract}
+In this article, we present a proof theory for Attack Trees. Attack Trees are a well-established and
+useful model for the construction of attacks on systems, since they allow a stepwise exploration of
+high-level attacks in application scenarios. Using the expressiveness of Higher-Order Logic in Isabelle,
+we succeed in developing a generic theory of Attack Trees with a state-based semantics based on Kripke
+structures and CTL (see \cite{kam:16b} for more details).
+The resulting framework allows mechanically supported logic analysis of the meta-theory
+of the proof calculus of Attack Trees, and at the same time the developed proof theory enables application
+to case studies.
+A central correctness and completeness result proved in Isabelle establishes a connection
+between the notion of Attack Tree validity and CTL.
+The application is illustrated on the example of a healthcare IoT system and GDPR compliance verification.
+A more detailed account of the Attack Tree formalisation is given in \cite{kam:18b} and the case study
+is described in detail in \cite{kam:18a}.
+%bla \cite{kk:16}\cite{kp:16}\cite{mw:09}\cite{kk:20}
+\end{abstract}
+\tableofcontents
+
+% sane default for proof documents
+\parindent 0pt\parskip 0.5ex
+
+% generated text of all theories
+\input{session}
+
+% optional bibliography
+\bibliographystyle{abbrv}
+\bibliography{root}
+
+
+\end{document}
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: t
+%%% End:
diff --git a/thys/Buchi_Complementation/Complementation_Final.thy b/thys/Buchi_Complementation/Complementation_Final.thy
--- a/thys/Buchi_Complementation/Complementation_Final.thy
+++ b/thys/Buchi_Complementation/Complementation_Final.thy
@@ -1,183 +1,183 @@
section \<open>Final Instantiation of Algorithms Related to Complementation\<close>
theory Complementation_Final
imports
"Complementation_Implement"
"Formula"
"Transition_Systems_and_Automata.NBA_Translate"
"Transition_Systems_and_Automata.NGBA_Algorithms"
"HOL-Library.Permutation"
begin
subsection \<open>Syntax\<close>
(* TODO: this syntax has unnecessarily high inner binding strength, requiring extra parentheses
the regular let syntax correctly uses inner binding strength 0: ("(2_ =/ _)" 10) *)
no_syntax "_do_let" :: "[pttrn, 'a] \<Rightarrow> do_bind" ("(2let _ =/ _)" [1000, 13] 13)
syntax "_do_let" :: "[pttrn, 'a] \<Rightarrow> do_bind" ("(2let _ =/ _)" 13)
subsection \<open>Hashcodes on Complement States\<close>
definition "hci k \<equiv> uint32_of_nat k * 1103515245 + 12345"
definition "hc \<equiv> \<lambda> (p, q, b). hci p + hci q * 31 + (if b then 1 else 0)"
- definition "list_hash xs \<equiv> fold (bitXOR \<circ> hc) xs 0"
+ definition "list_hash xs \<equiv> fold ((XOR) \<circ> hc) xs 0"
lemma list_hash_eq:
assumes "distinct xs" "distinct ys" "set xs = set ys"
shows "list_hash xs = list_hash ys"
proof -
have "remdups xs <~~> remdups ys" using eq_set_perm_remdups assms(3) by this
then have "xs <~~> ys" using assms(1, 2) by (simp add: distinct_remdups_id)
- then have "fold (bitXOR \<circ> hc) xs a = fold (bitXOR \<circ> hc) ys a" for a
+ then have "fold ((XOR) \<circ> hc) xs a = fold ((XOR) \<circ> hc) ys a" for a
proof (induct arbitrary: a)
case (swap y x l)
have "x XOR y XOR a = y XOR x XOR a" for x y by (transfer) (simp add: word_bw_lcs(3))
then show ?case by simp
qed simp+
then show ?thesis unfolding list_hash_def by this
qed
definition state_hash :: "nat \<Rightarrow> Complementation_Implement.state \<Rightarrow> nat" where
"state_hash n p \<equiv> nat_of_hashcode (list_hash p) mod n"
lemma state_hash_bounded_hashcode[autoref_ga_rules]: "is_bounded_hashcode state_rel
(gen_equals (Gen_Map.gen_ball (foldli \<circ> list_map_to_list)) (list_map_lookup (=))
(prod_eq (=) (\<longleftrightarrow>))) state_hash"
proof
show [param]: "(gen_equals (Gen_Map.gen_ball (foldli \<circ> list_map_to_list)) (list_map_lookup (=))
(prod_eq (=) (\<longleftrightarrow>)), (=)) \<in> state_rel \<rightarrow> state_rel \<rightarrow> bool_rel" by autoref
show "state_hash n xs = state_hash n ys" if "xs \<in> Domain state_rel" "ys \<in> Domain state_rel"
"gen_equals (Gen_Map.gen_ball (foldli \<circ> list_map_to_list))
(list_map_lookup (=)) (prod_eq (=) (=)) xs ys" for xs ys n
proof -
have 1: "distinct (map fst xs)" "distinct (map fst ys)"
using that(1, 2) unfolding list_map_rel_def list_map_invar_def by (auto simp: in_br_conv)
have 2: "distinct xs" "distinct ys" using 1 by (auto intro: distinct_mapI)
have 3: "(xs, map_of xs) \<in> state_rel" "(ys, map_of ys) \<in> state_rel"
using 1 unfolding list_map_rel_def list_map_invar_def by (auto simp: in_br_conv)
have 4: "(gen_equals (Gen_Map.gen_ball (foldli \<circ> list_map_to_list)) (list_map_lookup (=))
(prod_eq (=) (\<longleftrightarrow>)) xs ys, map_of xs = map_of ys) \<in> bool_rel" using 3 by parametricity
have 5: "map_to_set (map_of xs) = map_to_set (map_of ys)" using that(3) 4 by simp
have 6: "set xs = set ys" using map_to_set_map_of 1 5 by blast
show "state_hash n xs = state_hash n ys" unfolding state_hash_def using list_hash_eq 2 6 by metis
qed
show "state_hash n x < n" if "1 < n" for n x using that unfolding state_hash_def by simp
qed
subsection \<open>Complementation\<close>
schematic_goal complement_impl:
assumes [simp]: "finite (NBA.nodes A)"
assumes [autoref_rules]: "(Ai, A) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
shows "(?f :: ?'c, op_translate (complement_4 A)) \<in> ?R"
by (autoref_monadic (plain))
concrete_definition complement_impl uses complement_impl
theorem complement_impl_correct:
assumes "finite (NBA.nodes A)"
assumes "(Ai, A) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
shows "NBA.language (nbae_nba (nbaei_nbae (complement_impl Ai))) =
streams (nba.alphabet A) - NBA.language A"
using op_translate_language[OF complement_impl.refine[OF assms]]
using complement_4_correct[OF assms(1)]
by simp
subsection \<open>Language Subset\<close>
definition [simp]: "op_language_subset A B \<equiv> NBA.language A \<subseteq> NBA.language B"
lemmas [autoref_op_pat] = op_language_subset_def[symmetric]
schematic_goal language_subset_impl:
assumes [simp]: "finite (NBA.nodes B)"
assumes [autoref_rules]: "(Ai, A) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
assumes [autoref_rules]: "(Bi, B) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
shows "(?f :: ?'c, do {
let AB' = intersect' A (complement_4 B);
ASSERT (finite (NGBA.nodes AB'));
RETURN (NGBA.language AB' = {})
}) \<in> ?R"
by (autoref_monadic (plain))
concrete_definition language_subset_impl uses language_subset_impl
lemma language_subset_impl_refine[autoref_rules]:
assumes "SIDE_PRECOND (finite (NBA.nodes A))"
assumes "SIDE_PRECOND (finite (NBA.nodes B))"
assumes "SIDE_PRECOND (nba.alphabet A \<subseteq> nba.alphabet B)"
assumes "(Ai, A) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
assumes "(Bi, B) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
shows "(language_subset_impl Ai Bi, (OP op_language_subset :::
\<langle>Id, nat_rel\<rangle> nbai_nba_rel \<rightarrow> \<langle>Id, nat_rel\<rangle> nbai_nba_rel \<rightarrow> bool_rel) $ A $ B) \<in> bool_rel"
proof -
have "(RETURN (language_subset_impl Ai Bi), do {
let AB' = intersect' A (complement_4 B);
ASSERT (finite (NGBA.nodes AB'));
RETURN (NGBA.language AB' = {})
}) \<in> \<langle>bool_rel\<rangle> nres_rel"
using language_subset_impl.refine assms(2, 4, 5) unfolding autoref_tag_defs by this
also have "(do {
let AB' = intersect' A (complement_4 B);
ASSERT (finite (NGBA.nodes AB'));
RETURN (NGBA.language AB' = {})
}, RETURN (NBA.language A \<subseteq> NBA.language B)) \<in> \<langle>bool_rel\<rangle> nres_rel"
proof refine_vcg
show "finite (NGBA.nodes (intersect' A (complement_4 B)))" using assms(1, 2) by auto
have 1: "NBA.language A \<subseteq> streams (nba.alphabet B)"
using nba.language_alphabet streams_mono2 assms(3) unfolding autoref_tag_defs by blast
have 2: "NBA.language (complement_4 B) = streams (nba.alphabet B) - NBA.language B"
using complement_4_correct assms(2) by auto
show "(NGBA.language (intersect' A (complement_4 B)) = {},
NBA.language A \<subseteq> NBA.language B) \<in> bool_rel" using 1 2 by auto
qed
finally show ?thesis using RETURN_nres_relD unfolding nres_rel_comp by force
qed
subsection \<open>Language Equality\<close>
definition [simp]: "op_language_equal A B \<equiv> NBA.language A = NBA.language B"
lemmas [autoref_op_pat] = op_language_equal_def[symmetric]
schematic_goal language_equal_impl:
assumes [simp]: "finite (NBA.nodes A)"
assumes [simp]: "finite (NBA.nodes B)"
assumes [simp]: "nba.alphabet A = nba.alphabet B"
assumes [autoref_rules]: "(Ai, A) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
assumes [autoref_rules]: "(Bi, B) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
shows "(?f :: ?'c, NBA.language A \<subseteq> NBA.language B \<and> NBA.language B \<subseteq> NBA.language A) \<in> ?R"
by autoref
concrete_definition language_equal_impl uses language_equal_impl
lemma language_equal_impl_refine[autoref_rules]:
assumes "SIDE_PRECOND (finite (NBA.nodes A))"
assumes "SIDE_PRECOND (finite (NBA.nodes B))"
assumes "SIDE_PRECOND (nba.alphabet A = nba.alphabet B)"
assumes "(Ai, A) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
assumes "(Bi, B) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
shows "(language_equal_impl Ai Bi, (OP op_language_equal :::
\<langle>Id, nat_rel\<rangle> nbai_nba_rel \<rightarrow> \<langle>Id, nat_rel\<rangle> nbai_nba_rel \<rightarrow> bool_rel) $ A $ B) \<in> bool_rel"
using language_equal_impl.refine[OF assms[unfolded autoref_tag_defs]] by auto
schematic_goal product_impl:
assumes [simp]: "finite (NBA.nodes B)"
assumes [autoref_rules]: "(Ai, A) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
assumes [autoref_rules]: "(Bi, B) \<in> \<langle>Id, nat_rel\<rangle> nbai_nba_rel"
shows "(?f :: ?'c, do {
let AB' = intersect A (complement_4 B);
ASSERT (finite (NBA.nodes AB'));
op_translate AB'
}) \<in> ?R"
by (autoref_monadic (plain))
concrete_definition product_impl uses product_impl
(* TODO: possible optimizations:
- introduce op_map_map operation for maps instead of manually iterating via FOREACH
- consolidate various binds and maps in expand_map_get_7 *)
export_code
Set.empty Set.insert Set.member
"Inf :: 'a set set \<Rightarrow> 'a set" "Sup :: 'a set set \<Rightarrow> 'a set" image Pow set
nat_of_integer integer_of_nat
Variable Negation Conjunction Disjunction satisfies map_formula
nbaei alphabetei initialei transitionei acceptingei
nbae_nba_impl complement_impl language_equal_impl product_impl
in SML module_name Complementation file_prefix Complementation
end
\ No newline at end of file
diff --git a/thys/BytecodeLogicJmlTypes/Sound.thy b/thys/BytecodeLogicJmlTypes/Sound.thy
--- a/thys/BytecodeLogicJmlTypes/Sound.thy
+++ b/thys/BytecodeLogicJmlTypes/Sound.thy
@@ -1,1081 +1,1076 @@
(*File: Sound.thy
Author: L Beringer & M Hofmann, LMU Munich
Date: 05/12/2008
Purpose: Interpretation of judgements, and soundness proof,
of the program logic with invariants, pre-
and post-conditions, and local assertions, but no
 exploitation of successful bytecode verification.
*)
(*<*)
theory Sound imports Logic MultiStep Reachability begin
(*>*)
-(*<*)
-lemma MaxZero[rule_format]: "max n k = (0::nat) \<Longrightarrow> n=0 \<and> k=0"
-by (simp add: max_def, case_tac "n\<le>k", clarsimp,clarsimp)
-(*>*)
-
section\<open>Soundness\<close>
text\<open>This section contains the soundness proof of the program logic.
In the first subsection, we define our notion of validity, thus
formalising our intuitive explanation of the terms preconditions,
specifications, and invariants. The following two subsections contain
the details of the proof and can easily be skipped during a first pass
through the document.\<close>
subsection\<open>Validity\<close>
text\<open>A judgement is valid at the program point \<open>C.m.l\<close>
(i.e.~at label \<open>l\<close> in method \<open>m\<close> of class \<open>C\<close>),
written $\mathit{valid}\; C\; m\; l\; A\; B\; I$ or, in symbols, $$\vDash\,
\lbrace A \rbrace\, C,m,l\, \lbrace B \rbrace\, I,$$ if $A$ is a
precondition for $B$ and for all local annotations following $l$ in an
execution of \<open>m\<close>, and all reachable states in the current frame
or yet-to-be created subframes satisfy $I$. More precisely,
whenever an execution of the method starting in an initial state $s_0$
reaches the label \<open>l\<close> with state \<open>s\<close>, the following
properties are implied by $A(s_0,s)$.
\begin{enumerate}
\item If the continued execution from \<open>s\<close> reaches a final
state \<open>t\<close> (i.e.~the method terminates), then that final state
\<open>t\<close> satisfies $B(s_0,s,t)$.
\item Any state $s'$ visited in the current frame during the remaining
program execution whose label carries an annotation \<open>Q\<close> will
satisfy $Q(s_0,s')$, even if the execution of the frame does not
terminate.
\item Any state $s'$ visited in the current frame or a subframe of
the current frame will satisfy $I(s_0,s,\mathit{heap}(s'))$, again even if the
execution does not terminate.
\end{enumerate}
Formally, this interpretation is expressed as follows.
\<close>
definition valid::"Class \<Rightarrow> Method \<Rightarrow> Label \<Rightarrow> Assn \<Rightarrow> Post \<Rightarrow> Inv \<Rightarrow> bool" where
"valid C m l A B I =
(\<forall> M. mbody_is C m M \<longrightarrow>
(\<forall> Mspec Minv Anno . MST\<down>(C,m) = Some(Mspec,Minv,Anno) \<longrightarrow>
(\<forall> par code l0 . M = (par,code,l0) \<longrightarrow>
(\<forall> s0 s . MS M l0 (mkState s0) l s \<longrightarrow> A s0 s \<longrightarrow>
((\<forall> h v. Opsem M l s h v \<longrightarrow> B s0 s (h,v)) \<and>
(\<forall> ll r . (MS M l s ll r \<longrightarrow> (\<forall> Q . Anno\<down>(ll) = Some Q \<longrightarrow> Q s0 r)) \<and>
(Reach M l s r \<longrightarrow> I s0 s (heap r))))))))"
abbreviation valid_syntax :: "Assn \<Rightarrow> Class \<Rightarrow> Method \<Rightarrow>
Label \<Rightarrow> Post \<Rightarrow> Inv \<Rightarrow> bool"
(" \<Turnstile> \<lbrace> _ \<rbrace> _ , _ , _ \<lbrace> _ \<rbrace> _" [200,200,200,200,200,200] 200)
where "valid_syntax A C m l B I == valid C m l A B I"
text\<open>This notion of validity extends that of Bannwart-M\"uller by
allowing the post-condition to differ from the method specification and to
refer to the initial state, and by including invariants. In
the logic of Bannwart and M\"uller, the validity of a method
specification is given by a partial correctness (Hoare-style)
interpretation, while the validity of preconditions of
individual instructions is such that a precondition at $l$ implies the
preconditions of its immediate control flow successors.\<close>
text\<open>Validity is lifted to contexts and the method specification
table. In the case of the former, we simply require that all entries
be valid.\<close>
definition G_valid::"CTXT \<Rightarrow> bool" where
"G_valid G = (\<forall> C m l A B I. G\<down>(C,m,l) = Some (A,B,I)\<longrightarrow>
\<Turnstile> \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I)"
text\<open>Regarding the specification table, we require that the initial
label of each method satisfies an assertion that ties the method
precondition to the current state.\<close>
definition MST_valid ::bool where
"MST_valid = (\<forall> C m par code l0 T MI Anno.
mbody_is C m (par,code,l0) \<longrightarrow> MST\<down>(C, m) = Some (T,MI,Anno) \<longrightarrow>
\<Turnstile> \<lbrace>(\<lambda> s0 s. s = mkState s0)\<rbrace> C, m, l0 \<lbrace>(mkPost T)\<rbrace> (mkInv MI))"
definition Prog_valid::bool where
"Prog_valid = (\<exists> G . G_valid G \<and> MST_valid)"
text\<open>The remainder of this section contains a proof of soundness,
i.e.~of the property $$\<open>VP\<close> \Longrightarrow \<open>Prog_valid\<close>,$$ and is structured into two parts. The first step
(Section \ref{SoundnessUnderValidContexts}) establishes a soundness
result where the \<open>VP\<close> property is replaced by validity
assumptions regarding the method specification table and the
context. In the second step (Section \ref{SectionContextElimination}),
we show that these validity assumptions are satisfied by verified
programs, which implies the overall soundness theorem.\<close>
subsection\<open>Soundness under valid contexts
\label{SoundnessUnderValidContexts}\<close>
text\<open>The soundness proof proceeds by induction on the axiomatic
semantics, based on an auxiliary lemma for method invocations that is
proven by induction on the derivation height of the operational
semantics. For the latter induction, relativised notions of validity
are employed that restrict the derivation height of the program
continuations affected by an assertion. The appropriate definitions of
relativised validity for judgements, for the precondition table, and
for the method specification table are as follows.\<close>
definition validn::
"nat \<Rightarrow> Class \<Rightarrow> Method \<Rightarrow> Label \<Rightarrow> Assn \<Rightarrow> Post \<Rightarrow> Inv \<Rightarrow> bool" where
"validn K C m l A B I =
(\<forall> M. mbody_is C m M \<longrightarrow>
(\<forall> Mspec Minv Anno . MST\<down>(C,m) = Some(Mspec,Minv,Anno) \<longrightarrow>
(\<forall> par code l0 . M = (par,code,l0) \<longrightarrow>
(\<forall> s0 s . MS M l0 (mkState s0) l s \<longrightarrow> A s0 s \<longrightarrow>
(\<forall> k . k \<le> K \<longrightarrow>
((\<forall> h v. (M,l,s,k,h,v):Exec \<longrightarrow> B s0 s (h,v)) \<and>
(\<forall> ll r . ((M,l,s,k,ll,r):MStep \<longrightarrow>
(\<forall> Q . Anno\<down>(ll) = Some Q \<longrightarrow> Q s0 r)) \<and>
((M,l,s,k,r):Reachable \<longrightarrow> I s0 s (heap r)))))))))"
abbreviation validn_syntax :: "nat \<Rightarrow> Assn \<Rightarrow> Class \<Rightarrow> Method \<Rightarrow>
Label \<Rightarrow> Post \<Rightarrow> Inv \<Rightarrow> bool"
("\<Turnstile>\<^sub>_ \<lbrace> _ \<rbrace> _ , _ , _ \<lbrace> _ \<rbrace> _ " [200,200,200,200,200,200,200] 200)
where "validn_syntax K A C m l B I == validn K C m l A B I"
definition G_validn::"nat \<Rightarrow> CTXT \<Rightarrow> bool" where
"G_validn K G = (\<forall> C m l A B I. G\<down>(C,m,l) = Some (A,B,I) \<longrightarrow>
\<Turnstile>\<^sub>K \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I)"
definition MST_validn::"nat \<Rightarrow> bool" where
"MST_validn K = (\<forall> C m par code l0 T MI Anno.
mbody_is C m (par,code,l0) \<longrightarrow> MST\<down>(C, m) = Some (T,MI,Anno) \<longrightarrow>
\<Turnstile>\<^sub>K \<lbrace>(\<lambda> s0 s. s = mkState s0)\<rbrace> C, m, l0 \<lbrace>(mkPost T)\<rbrace> (mkInv MI))"
definition Prog_validn::"nat \<Rightarrow> bool" where
"Prog_validn K = (\<exists> G . G_validn K G \<and> MST_validn K)"
text\<open>The relativised notions are related to each other, and to the
native notions of validity as follows.\<close>
lemma valid_validn: "\<Turnstile> \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I \<Longrightarrow> \<Turnstile>\<^sub>K \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I"
(*<*)
apply (simp add: valid_def validn_def Opsem_def MS_def Reach_def)
apply clarsimp
apply (erule_tac x=a in allE)
apply (erule_tac x=aa in allE)
apply (erule_tac x=b in allE, clarsimp)
apply (erule_tac x=ab in allE)
apply (erule_tac x=ba in allE)
apply (erule_tac x=ac in allE)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE, erule impE) apply fast
apply fastforce
done
(*>*)
lemma validn_valid: "\<lbrakk>\<forall> K . \<Turnstile>\<^sub>K \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I\<rbrakk> \<Longrightarrow> \<Turnstile> \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I"
(*<*)
apply (unfold valid_def validn_def)(*apply ( Opsem_def MS_def Reach_def)*)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply (rule, rule)
apply (rule,rule,rule, rule)
apply (unfold Opsem_def, erule exE)
apply (erule_tac x=n in allE)
apply (erule_tac x=M in allE, erule impE, assumption)
apply (erule_tac x=Mspec in allE, erule_tac x=Minv in allE, erule_tac x=Anno in allE, erule impE, assumption)
apply (erule_tac x=par in allE, erule_tac x=code in allE)
apply (erule_tac x=l0 in allE, erule impE, assumption)
apply (erule_tac x=s0 in allE, erule_tac x=s in allE)
apply (erule impE, assumption)
apply (erule impE, assumption)
apply (erule_tac x=n in allE, erule impE, simp)
apply fast
apply (rule, rule, rule)
apply (rule, rule, rule)
apply (unfold MS_def, erule exE, erule exE)
apply (erule_tac x=ka in allE)
apply (erule_tac x=M in allE, erule impE, assumption)
apply (erule_tac x=Mspec in allE, erule_tac x=Minv in allE, erule_tac x=Anno in allE, erule impE, assumption)
apply (erule_tac x=par in allE, erule_tac x=code in allE)
apply (erule_tac x=l0 in allE, erule impE, assumption)
apply (erule_tac x=s0 in allE, erule_tac x=s in allE)
apply (erule impE, fast)
apply (erule impE, assumption)
apply (erule_tac x=ka in allE, erule impE, simp)
apply fast
apply rule
apply (unfold Reach_def, erule exE, erule exE)
apply (erule_tac x=ka in allE)
apply (erule_tac x=M in allE, erule impE, assumption)
apply (erule_tac x=Mspec in allE, erule_tac x=Minv in allE, erule_tac x=Anno in allE, erule impE, assumption)
apply (erule_tac x=par in allE, erule_tac x=code in allE)
apply (erule_tac x=l0 in allE, erule impE, assumption)
apply (erule_tac x=s0 in allE, erule_tac x=s in allE)
apply (erule impE, fast)
apply (erule impE, assumption)
apply (erule_tac x=ka in allE, erule impE, simp)
apply fast
done
(*>*)
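text\<open>As an illustrative consequence (a small sketch combining the two lemmas above), validity
coincides with validity at all derivation heights.\<close>
lemma validn_valid_iff: "(\<forall> K . \<Turnstile>\<^sub>K \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I) = (\<Turnstile> \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I)"
(*<*)
  using valid_validn validn_valid by blast
(*>*)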
lemma validn_lower:
"\<lbrakk>\<Turnstile>\<^sub>K \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I; L \<le> K\<rbrakk> \<Longrightarrow> \<Turnstile>\<^sub>L \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I"
(*<*)
apply (unfold validn_def)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply rule
apply (erule_tac x=M in allE, erule impE, assumption)
apply (erule_tac x=Mspec in allE, erule_tac x=Minv in allE, erule_tac x=Anno in allE, erule impE, assumption)
apply (erule_tac x=par in allE)
apply (erule_tac x=code in allE)
apply (erule_tac x=l0 in allE, erule impE, assumption)
apply (erule_tac x=s0 in allE)
apply (erule_tac x=s in allE, erule impE, assumption, erule impE, assumption)
apply (erule_tac x=k in allE, erule impE, simp)
apply assumption
done
(*>*)
lemma G_valid_validn: "G_valid G \<Longrightarrow> G_validn K G"
(*<*)
apply (simp add: G_valid_def G_validn_def, clarsimp)
apply (rule valid_validn) apply fast
done
(*>*)
lemma G_validn_valid:"\<lbrakk>\<forall> K . G_validn K G\<rbrakk> \<Longrightarrow> G_valid G"
(*<*)
apply (simp add: G_valid_def G_validn_def, clarsimp)
apply (rule validn_valid) apply clarsimp
done
(*>*)
lemma G_validn_lower: "\<lbrakk>G_validn K G; L \<le> K\<rbrakk> \<Longrightarrow> G_validn L G"
(*<*)
apply (simp add: G_validn_def, clarsimp)
apply (rule validn_lower) apply fast apply assumption
done
(*>*)
lemma MST_validn_valid:"\<lbrakk>\<forall> K. MST_validn K\<rbrakk> \<Longrightarrow> MST_valid"
(*<*)
apply (simp add: MST_validn_def MST_valid_def, clarsimp)
apply (rule validn_valid, clarsimp)
done
(*>*)
lemma MST_valid_validn:"MST_valid \<Longrightarrow> MST_validn K"
(*<*)
apply (unfold MST_validn_def MST_valid_def)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply rule
apply (rule valid_validn)
apply fast
done
(*>*)
lemma MST_validn_lower: "\<lbrakk>MST_validn K; L \<le> K\<rbrakk> \<Longrightarrow> MST_validn L"
(*<*)
apply (simp add: MST_validn_def, clarsimp)
apply (erule_tac x=C in allE)
apply (erule_tac x=m in allE)
apply (erule_tac x=par in allE)
apply (erule_tac x=code in allE)
apply (erule_tac x=l0 in allE, erule impE, assumption)
apply (erule_tac x=T in allE)
apply (erule_tac x=MI in allE, clarsimp)
apply (erule validn_lower) apply assumption
done
(*>*)
text\<open>We define an abbreviation for the side conditions of the rule
for static method invocations\ldots\<close>
definition INVS_SC::
"Class \<Rightarrow> Method \<Rightarrow> Label \<Rightarrow> Class \<Rightarrow> Method \<Rightarrow> MethSpec \<Rightarrow> MethInv \<Rightarrow>
ANNO \<Rightarrow> ANNO \<Rightarrow> Mbody \<Rightarrow> Assn \<Rightarrow> Inv \<Rightarrow> bool" where
"INVS_SC C m l D m' T MI Anno Anno2 M' A I = (\<exists> M par code l0 T1 MI1.
mbody_is C m M \<and> get_ins M l = Some (invokeS D m') \<and>
MST\<down>(C,m) = Some (T1,MI1,Anno) \<and>
MST\<down>(D,m') = Some (T,MI,Anno2) \<and>
mbody_is D m' M' \<and> M'=(par,code,l0) \<and>
(\<forall> Q . Anno\<down>(l) = Some Q \<longrightarrow> (\<forall> s0 s . A s0 s \<longrightarrow> Q s0 s)) \<and>
(\<forall> s0 s . A s0 s \<longrightarrow> I s0 s (heap s)) \<and>
(\<forall> s0 ops1 ops2 S R h t . (ops1,par,R,ops2) : Frame \<longrightarrow>
A s0 (ops1,S,h) \<longrightarrow> MI (R,h) t \<longrightarrow> I s0 (ops1,S,h) t))"
text\<open>\ldots and another abbreviation for the soundness property of
the same rule.\<close>
definition INVS_soundK::
"nat \<Rightarrow> CTXT \<Rightarrow> Class \<Rightarrow> Method \<Rightarrow> Label \<Rightarrow> Class \<Rightarrow> Method \<Rightarrow>
MethSpec \<Rightarrow> MethInv \<Rightarrow> ANNO \<Rightarrow> ANNO \<Rightarrow> Mbody \<Rightarrow> Assn \<Rightarrow>
Post \<Rightarrow> Inv \<Rightarrow> bool" where
"INVS_soundK K G C m l D m' T MI Anno Anno2 M' A B I =
(INVS_SC C m l D m' T MI Anno Anno2 M' A I \<longrightarrow>
G_validn K G \<longrightarrow> MST_validn K \<longrightarrow>
\<Turnstile>\<^sub>K \<lbrace>(SINV_pre (fst M') T A)\<rbrace> C,m,(l+1)
\<lbrace>(SINV_post (fst M') T B)\<rbrace> (SINV_inv (fst M') T I)
\<longrightarrow> \<Turnstile>\<^sub>(K+1) \<lbrace> A \<rbrace> C,m,l \<lbrace> B \<rbrace> I)"
text\<open>The proof that this property holds for all $K$ proceeds by
induction on $K$.\<close>
lemma INVS_soundK_all:
"INVS_soundK K G C m l D m' T MI Anno Anno2 M' A B I"
(*<*)
apply (induct K)
(*K=0*)
apply (simp add: INVS_soundK_def , clarsimp)
apply (unfold validn_def)
apply (rule, rule) apply (erule_tac x=M in allE, erule impE, assumption)
apply (rule, rule, rule, rule)
apply (erule_tac x=Mspec in allE, erule_tac x=Minv in allE, erule_tac x=Annoa in allE, erule impE, assumption)
apply (rule, rule, rule, rule)
apply (erule_tac x=par in allE, erule_tac x=code in allE, erule_tac x=l0 in allE, erule impE, assumption)
apply (rule, rule, rule)
apply (rule, rule, rule)
apply rule
apply (rule, rule, rule) apply (case_tac k, clarsimp) apply (drule no_zero_height_Exec_derivs, simp) apply clarsimp
- apply (erule eval_cases) apply (simp add: INVS_SC_def mbody_is_def,clarsimp) apply clarsimp apply (drule MaxZero, clarsimp)
+ apply (erule eval_cases) apply (simp add: INVS_SC_def mbody_is_def,clarsimp) apply clarsimp
apply (drule no_zero_height_Exec_derivs, simp)
apply (rule, rule, rule, rule)
apply clarsimp
apply (case_tac k, clarsimp) apply (drule ZeroHeightMultiElim, clarsimp)
apply (simp add: INVS_SC_def mbody_is_def,clarsimp)
apply clarsimp
apply (drule MultiSplit, simp, clarsimp) apply (drule no_zero_height_Step_derivs, simp)
apply rule apply (case_tac k, clarsimp) apply (drule ZeroHeightReachableElim, clarsimp)
apply (simp add: INVS_SC_def mbody_is_def,clarsimp)
apply clarsimp apply (drule ReachableSplit, simp, clarsimp)
apply (erule disjE)
apply clarsimp apply (drule no_zero_height_Step_derivs, simp)
apply clarsimp apply (drule ZeroHeightReachableElim, clarsimp)
apply (rotate_tac 5, erule thin_rl)
apply (simp add: INVS_SC_def mbody_is_def,clarsimp)
apply (simp add: MST_validn_def)
apply (erule_tac x=D in allE, rotate_tac -1)
apply (erule_tac x=m' in allE, rotate_tac -1)
apply (erule_tac x=para in allE, rotate_tac -1)
apply (erule_tac x=codea in allE, rotate_tac -1)
apply (erule_tac x=l0a in allE, rotate_tac -1,erule impE) apply (simp add: mbody_is_def)
apply (rotate_tac -1, erule_tac x=T in allE)
apply (rotate_tac -1, erule_tac x=MI in allE, clarsimp)
apply (simp add: validn_def)
apply (simp add: mbody_is_def)
apply (simp add: heap_def)
(*k>0*)
apply (simp add: INVS_soundK_def , clarsimp)
apply (rotate_tac -1, erule thin_rl)
apply (unfold validn_def)
apply (rule, rule) apply (erule_tac x=M in allE, erule impE, assumption)
apply (rule, rule) apply (rule, rule) apply (rule, rule) apply (rule, rule) apply (rule, rule)
apply (erule_tac x=Mspec in allE, erule_tac x=Minv in allE, erule_tac x=Annoa in allE, erule impE, assumption)
apply (erule_tac x=par in allE, erule_tac x=code in allE)
apply (erule_tac x=l0 in allE, erule impE, assumption)
apply (rule, rule)
apply (rule, rule)
apply (erule_tac x=s0 in allE)
apply rule
(*Exec*)
apply (rule, rule, rule)
apply (erule eval_cases) apply clarsimp apply (simp add: INVS_SC_def mbody_is_def) apply clarsimp
apply (erule_tac x=t in allE, erule impE)
apply (frule InvokeElim) apply (simp add: INVS_SC_def mbody_is_def) apply clarsimp apply fastforce
apply (simp add: MS_def, erule exE, rule) apply clarsimp apply (erule MultiApp) apply assumption
apply (erule impE, drule InvokeElim, simp add: INVS_SC_def mbody_is_def) apply clarsimp apply fastforce
apply clarsimp apply (simp add: INVS_SC_def mbody_is_def SINV_pre_def) apply clarsimp apply (rule, rule)
apply (rule, rule, rule, assumption) apply (rule, rule) prefer 2 apply (rule, assumption) apply simp
apply (simp add: MST_validn_def) apply (erule_tac x=D in allE, erule_tac x=m' in allE)
apply (erule_tac x=parb in allE, erule_tac x=codeb in allE)
apply (erule_tac x=l0b in allE, simp add: mbody_is_def)
apply (simp add: validn_def)
apply (erule_tac x=parb in allE, erule_tac x=codeb in allE)
apply (erule_tac x=l0b in allE, simp add: mbody_is_def)
apply (rotate_tac -1, erule_tac x=R in allE, erule_tac x=bb in allE)
apply (rotate_tac -1, erule_tac x="[]" in allE, rotate_tac -1, erule_tac x=R in allE, erule_tac x=bb in allE)
apply (simp add: mkState_def)
apply (erule impE) apply (simp add: MS_def, rule) apply (rule MS_zero) apply simp apply simp apply simp
apply (erule_tac x=ka in allE, erule impE, simp)
apply (simp add: mkPost_def, erule conjE)
apply (erule_tac x=hh in allE, erule_tac x=va in allE, simp add: mkState_def)
apply (erule_tac x=ma in allE, erule impE) apply (simp add: max_def) apply (case_tac "n \<le> ma") apply clarsimp apply clarsimp
apply (erule conjE) apply (erule_tac x=h in allE, erule_tac x=v in allE, erule impE)
apply (drule InvokeElim) apply (simp add: INVS_SC_def mbody_is_def) apply clarsimp apply fastforce
apply clarsimp
apply (simp add: SINV_post_def) apply (simp add: INVS_SC_def mbody_is_def, clarsimp)
apply (drule InvokeElim, clarsimp) apply fastforce apply clarsimp
apply (simp add: mbody_is_def, clarsimp)
apply (erule_tac x=ac in allE, erule_tac x=ops in allE, rotate_tac -1)
apply (erule_tac x=ad in allE, erule_tac x=R in allE, rotate_tac -1, erule impE, assumption)
apply (erule_tac x=bb in allE, simp, erule mp)
apply (simp add: MST_validn_def)
apply (erule_tac x=D in allE, erule_tac x=m' in allE, rotate_tac -1)
apply (erule_tac x=par in allE, erule_tac x=code in allE, erule_tac x=l0 in allE, simp add: mbody_is_def)
apply (simp add: validn_def)
apply (rotate_tac -1) apply (erule thin_rl) apply (rotate_tac -1)
apply (simp add: mbody_is_def)
apply (erule_tac x=R in allE, rotate_tac -1, erule_tac x=bb in allE, rotate_tac -1)
apply (erule_tac x="[]" in allE, rotate_tac -1, erule_tac x=R in allE, rotate_tac -1)
apply (erule_tac x=bb in allE, erule impE)
apply (simp add: MS_def, rule, rule MS_zero) apply (simp, simp add: mkState_def, simp)
apply (simp add: mkState_def)
apply (erule_tac x=k in allE, clarsimp)
apply (erule_tac x=bc in allE, erule_tac x=va in allE, simp add: mkPost_def mkState_def)
(*MStep*)
apply (rule, rule)
apply (rule, rule)
apply (rule, rule)
apply (case_tac k, clarsimp) apply (drule ZeroHeightMultiElim, clarsimp) apply (simp add: INVS_SC_def) apply clarsimp
apply clarsimp
apply (frule MultiSplit) apply clarsimp apply clarsimp
apply (frule InvokeElim) apply (simp add: INVS_SC_def mbody_is_def) apply clarsimp apply fastforce
apply clarsimp
apply (erule_tac x="v # ops" in allE, erule_tac x=ad in allE, erule_tac x=b in allE, erule impE)
apply (simp add: MS_def, erule exE, rule) apply (erule MultiApp) apply assumption
apply (erule impE, simp add: SINV_pre_def INVS_SC_def mbody_is_def) apply clarsimp
apply (rule, rule, rule, rule, rule, assumption) apply (rule, rule)
prefer 2 apply (rule, assumption) apply simp
apply (simp add: MST_validn_def) apply (erule_tac x=D in allE, erule_tac x=m' in allE)
apply (erule_tac x=parb in allE, erule_tac x=codeb in allE)
apply (erule_tac x=l0b in allE, simp add: mbody_is_def)
apply (simp add: validn_def)
apply (erule_tac x=parb in allE, erule_tac x=codeb in allE)
apply (erule_tac x=l0b in allE, simp add: mbody_is_def)
apply (rotate_tac -1, erule_tac x=R in allE, erule_tac x=bb in allE)
apply (rotate_tac -1, erule_tac x="[]" in allE, rotate_tac -1, erule_tac x=R in allE, erule_tac x=bb in allE)
apply (simp add: mkState_def)
apply (erule impE) apply (simp add: MS_def, rule) apply (rule MS_zero) apply simp apply simp apply simp
apply (erule_tac x=k in allE, erule impE, simp)
apply (simp add: mkPost_def, erule conjE)
apply (erule_tac x=hh in allE, erule_tac x=va in allE, simp add: mkState_def)
apply (erule_tac x=ma in allE, erule impE) apply simp
apply (erule conjE) apply (erule_tac x=ll in allE, erule_tac x=ae in allE)
apply (erule_tac x=af in allE, rotate_tac -1, erule_tac x=bc in allE, clarsimp)
(*Reach*)
apply rule
apply (case_tac k, clarsimp) apply (drule ZeroHeightReachableElim, clarsimp)
apply (simp add: INVS_SC_def mbody_is_def, clarsimp)
apply clarsimp
apply (drule ReachableSplit) apply simp apply clarsimp
apply (erule disjE)
apply clarsimp
apply (frule InvokeElim) apply (simp add: INVS_SC_def mbody_is_def) apply clarsimp apply fastforce
apply clarsimp
apply (erule_tac x="v#ops" in allE, erule_tac x=ad in allE, erule_tac x=b in allE, erule impE)
apply (simp add: MS_def, clarsimp, rule) apply (erule MultiApp) apply assumption
apply (erule impE) apply (simp add: SINV_pre_def) apply (simp add: INVS_SC_def mbody_is_def, clarsimp)
apply (rule, rule, rule, rule, rule) apply assumption
apply (rule, rule) prefer 2 apply (rule, assumption, simp)
apply (simp add: MST_validn_def)
apply (erule_tac x=D in allE, erule_tac x=m' in allE, rotate_tac -1)
apply (erule_tac x=parb in allE, erule_tac x=codeb in allE, erule_tac x=l0b in allE, simp add: mbody_is_def)
apply (simp add: validn_def)
apply (simp add: mbody_is_def) apply (rotate_tac -1)
apply (erule_tac x=R in allE, rotate_tac -1, erule_tac x=bb in allE, rotate_tac -1)
apply (erule_tac x="[]" in allE, rotate_tac -1, erule_tac x=R in allE, rotate_tac -1)
apply (erule_tac x=bb in allE, erule impE)
apply (simp add: MS_def, rule, rule MS_zero) apply (simp, simp add: mkState_def, simp)
apply (simp add: mkState_def)
apply (erule_tac x=k in allE, clarsimp)
apply (erule_tac x=b in allE, erule_tac x=v in allE, simp add: mkPost_def mkState_def)
apply (erule_tac x=ma in allE, clarsimp)
apply (rotate_tac -1)
apply (simp add: SINV_inv_def)
apply (erule_tac x="l+1" in allE, erule_tac x=ae in allE)
apply (erule_tac x=af in allE, rotate_tac -1, erule_tac x=bc in allE, clarsimp)
apply (rotate_tac -2, erule thin_rl)
apply (simp add: SINV_inv_def INVS_SC_def mbody_is_def) apply clarsimp
apply (rotate_tac -1, erule_tac x=ac in allE)
apply (rotate_tac -1, erule_tac x=ops in allE)
apply (rotate_tac -1, erule_tac x=ad in allE)
apply (rotate_tac -1, erule_tac x=R in allE, clarsimp)
apply (rotate_tac -1, erule_tac x=bb in allE, erule mp)
apply (erule thin_rl)
apply (simp add: MST_validn_def)
apply (erule_tac x=D in allE, erule_tac x=m' in allE, rotate_tac -1)
apply (erule_tac x=parb in allE, erule_tac x=codeb in allE, erule_tac x=l0b in allE, simp add: mbody_is_def)
apply (simp add: validn_def)
apply (simp add: mbody_is_def) apply (rotate_tac -1)
apply (erule_tac x=R in allE, rotate_tac -1, erule_tac x=bb in allE, rotate_tac -1)
apply (erule_tac x="[]" in allE, rotate_tac -1, erule_tac x=R in allE, rotate_tac -1)
apply (erule_tac x=bb in allE, erule impE)
apply (simp add: MS_def, rule, rule MS_zero) apply (simp, simp add: mkState_def, simp)
apply (simp add: mkState_def)
apply (erule_tac x=k in allE, clarsimp)
apply (erule_tac x=b in allE, erule_tac x=v in allE, simp add: mkPost_def mkState_def)
apply clarsimp
apply (simp add: INVS_SC_def mbody_is_def, clarsimp)
apply (rotate_tac -1, erule_tac x=ab in allE)
apply (rotate_tac -1, erule_tac x=ba in allE)
apply (rotate_tac -1, erule_tac x=ac in allE)
apply (rotate_tac -1, erule_tac x=ops1 in allE)
apply (rotate_tac -1, erule_tac x=ad in allE)
apply (rotate_tac -1, erule_tac x=R in allE, clarsimp)
apply (rotate_tac -1, erule_tac x=bb in allE, clarsimp)
apply (rotate_tac -1, erule_tac x="heap (ae,af,bc)" in allE, erule mp)
apply (rotate_tac 6) apply (erule thin_rl)
apply (simp add: MST_validn_def)
apply (erule_tac x=D in allE, erule_tac x=m' in allE)
apply (simp add: mbody_is_def)
apply (simp add: validn_def)
apply (simp add: mbody_is_def mkInv_def)
apply (rotate_tac -1)
apply (erule_tac x=R in allE, erule_tac x=bb in allE)
apply (rotate_tac -1) apply (erule_tac x="[]" in allE)
apply (rotate_tac -1)
apply (erule_tac x=R in allE, erule_tac x=bb in allE, simp add: mkState_def, erule impE)
apply (simp add: MS_def, rule, rule MS_zero) apply (simp, simp, simp)
apply (erule_tac x=nat in allE, simp)
done
(*>*)
text\<open>The heart of the soundness proof: the induction on the
axiomatic semantics.\<close>
lemma SOUND_Aux[rule_format]:
"(b,G,C,m,l,A,B,I):SP_Judgement \<Longrightarrow> G_validn K G \<longrightarrow> MST_validn K \<longrightarrow>
((b \<longrightarrow> \<Turnstile>\<^sub>K \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I) \<and>
((\<not> b) \<longrightarrow> \<Turnstile>\<^sub>(Suc K) \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I))"
(*<*)
apply (erule SP_Judgement.induct)
(*INSTR*)
apply clarsimp
apply (rotate_tac -4) apply (erule thin_rl)
apply (simp add: mbody_is_def validn_def Opsem_def MS_def Reach_def, clarsimp)
apply rule
apply clarsimp apply (erule eval_cases) apply simp
apply clarsimp apply (frule InstrElimNext, simp, assumption, clarsimp)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=aa in allE)
apply (erule_tac x=b in allE, erule impE)
apply rule
apply (erule MultiApp)
apply assumption
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule, assumption)
apply (rule, rule,assumption)
apply (erule_tac x=ma in allE, clarsimp)
apply (erule_tac x=h in allE, erule_tac x=v in allE, clarsimp)
apply (simp add: SP_post_def)
apply (rotate_tac -1)
apply (erule_tac x=ae in allE, rotate_tac -1)
apply (erule_tac x=af in allE, rotate_tac -1)
apply (erule_tac x=bc in allE, erule mp)
apply (rule, rule, assumption)
apply clarsimp
apply rule
apply clarsimp
apply (case_tac ka) apply clarsimp apply (drule ZeroHeightMultiElim) apply clarsimp
apply clarsimp
apply (rotate_tac -2, drule MultiSplit) apply simp apply clarsimp
apply (frule InstrElimNext, simp, assumption, clarsimp)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ag in allE, rotate_tac -1)
apply (erule_tac x=ah in allE)
apply (erule_tac x=bd in allE, erule impE)
apply rule
apply (erule MultiApp)
apply assumption
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule, assumption)
apply (rule, rule, assumption)
apply (erule_tac x=ma in allE, clarsimp)
apply (erule_tac x=ll in allE, rotate_tac -1)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=aa in allE, rotate_tac -1)
apply (erule_tac x=b in allE, clarsimp)
apply clarsimp
apply (case_tac ka) apply clarsimp apply (drule ZeroHeightReachableElim) apply clarsimp
apply clarsimp
apply (rotate_tac -2, drule ReachableSplit) apply simp apply clarsimp
apply (rotate_tac -1, erule disjE)
apply clarsimp
apply (frule InstrElimNext, simp, assumption, clarsimp)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ag in allE, rotate_tac -1)
apply (erule_tac x=ah in allE)
apply (erule_tac x=bd in allE, erule impE)
apply rule
apply (erule MultiApp)
apply assumption
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule, assumption)
apply (rule, rule, assumption)
apply (erule_tac x=ma in allE, clarsimp)
apply (rotate_tac -1, erule_tac x=ll in allE)
apply (rotate_tac -1, erule_tac x=a in allE)
apply (rotate_tac -1, erule_tac x=aa in allE)
apply (rotate_tac -1, erule_tac x=b in allE, clarsimp)
apply (simp add: SP_inv_def)
(* apply (rotate_tac -1, erule_tac x="l+1" in allE)*)
apply (rotate_tac -1, erule_tac x=ae in allE)
apply (rotate_tac -1, erule_tac x=af in allE)
apply (rotate_tac -1, erule_tac x=bc in allE, erule mp) apply (rule, rule, assumption)
apply clarsimp
(*GOTO*)
apply clarsimp
apply (rotate_tac 5) apply (erule thin_rl)
apply (simp add: mbody_is_def validn_def Opsem_def MS_def Reach_def, clarsimp)
apply rule
apply clarsimp apply (erule eval_cases) apply simp
apply clarsimp apply (drule GotoElim1) apply (simp, clarsimp)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ae in allE, rotate_tac- 1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (erule Goto)
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule, assumption)
apply (rule, rule, erule Goto)
apply (erule_tac x=ma in allE, clarsimp)
apply (erule_tac x=h in allE, erule_tac x=v in allE, clarsimp)
apply(simp add: SP_post_def, rotate_tac -1)
apply (erule_tac x=ae in allE, rotate_tac -1)
apply (erule_tac x=af in allE, rotate_tac -1)
apply (erule_tac x=bc in allE, erule mp)
apply (rule, rule, erule Goto)
apply clarsimp
apply rule
apply clarsimp
apply (case_tac ka) apply clarsimp apply (drule ZeroHeightMultiElim) apply clarsimp
apply clarsimp
apply (rotate_tac -2, drule MultiSplit) apply simp apply clarsimp
apply (drule GotoElim1) apply simp apply clarsimp
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ae in allE,rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (erule Goto)
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule, assumption)
apply (rule, rule, erule Goto)
apply (erule_tac x=ma in allE, clarsimp)
apply (erule_tac x=ll in allE, rotate_tac -1)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=aa in allE)
apply (erule_tac x=b in allE, clarsimp)
apply clarsimp
apply (case_tac ka) apply clarsimp apply (drule ZeroHeightReachableElim) apply clarsimp
apply clarsimp
apply (rotate_tac -2, drule ReachableSplit) apply simp apply clarsimp
apply (drule GotoElim1) apply simp apply clarsimp
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ae in allE,rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (erule Goto)
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule, assumption)
apply (rule, rule, erule Goto)
apply (erule_tac x=ma in allE, clarsimp)
apply (rotate_tac -1, erule_tac x=ll in allE)
apply (rotate_tac -1, erule_tac x=a in allE)
apply (rotate_tac -1, erule_tac x=aa in allE)
apply (rotate_tac -1, erule_tac x=b in allE, clarsimp)
apply (simp add: SP_inv_def)
(* apply (rotate_tac -1, erule_tac x=pc in allE)*)
apply (rotate_tac -1, erule_tac x=ae in allE)
apply (rotate_tac -1, erule_tac x=af in allE)
apply (rotate_tac -1, erule_tac x=bc in allE, erule mp)
apply (rule, rule)
apply (erule Goto)
(*If*)
apply clarsimp
apply (rotate_tac 5) apply (erule thin_rl,erule thin_rl)
apply (simp add: mbody_is_def validn_def Opsem_def MS_def Reach_def, clarsimp)
apply rule
apply clarsimp apply (erule eval_cases) apply simp
apply clarsimp apply (drule IfElim1) apply (simp, clarsimp)
apply (erule disjE)
apply clarsimp
apply (rotate_tac 3) apply (erule thin_rl)
apply (rotate_tac -1)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (erule IfT, simp)
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule) prefer 2
apply (rule, rule, erule IfT, simp) apply fastforce
apply (erule_tac x=ma in allE, clarsimp)
apply (erule_tac x=h in allE, erule_tac x=v in allE, clarsimp)
apply (simp add: SP_post_def)
apply (rotate_tac -1)
apply (erule_tac x="TRUE # a" in allE, erule_tac x=af in allE, erule_tac x=bc in allE, erule impE)
apply (rule, rule, rule IfT, simp,simp)
apply clarsimp
apply clarsimp
apply (rotate_tac 2) apply (erule thin_rl)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (rule IfF) apply (simp, assumption, simp) apply simp
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule) prefer 2
apply (rule, rule, rule IfF) apply (simp, assumption) apply fastforce
apply simp
apply clarsimp
apply (erule_tac x=ma in allE, clarsimp)
apply (erule_tac x=h in allE, erule_tac x=v in allE, clarsimp)
apply (simp add: SP_post_def)
apply (rotate_tac -1)
apply (erule_tac x="va # a" in allE, erule_tac x=af in allE, erule_tac x=bc in allE, erule impE)
apply (rule, rule, rule IfF) apply (simp, assumption) apply fastforce apply simp
apply clarsimp
apply clarsimp
apply rule
apply clarsimp
apply (case_tac ka) apply clarsimp apply (drule ZeroHeightMultiElim) apply clarsimp
apply clarsimp
apply (rotate_tac -2, drule MultiSplit) apply simp apply clarsimp
apply (drule IfElim1) apply simp apply clarsimp
apply (erule disjE)
apply clarsimp
apply (rotate_tac 4) apply (erule thin_rl)
apply (rotate_tac -1)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ag in allE, rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (rule IfT) apply (simp, fastforce)
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule,rule) prefer 2
apply (rule, rule, rule IfT) apply simp apply fastforce
apply clarsimp
apply (erule_tac x=ma in allE, clarsimp)
apply (erule_tac x=ll in allE, rotate_tac -1)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=aa in allE, rotate_tac -1)
apply (erule_tac x=b in allE, rotate_tac -1, clarsimp)
apply clarsimp
apply (rotate_tac 3) apply (erule thin_rl)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ag in allE, rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (rule IfF) apply (simp, assumption, simp) apply simp
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule) prefer 2
apply (rule, rule, rule IfF) apply (simp, assumption) apply fastforce apply simp
apply clarsimp
apply (erule_tac x=ma in allE, clarsimp)
apply (erule_tac x=ll in allE, rotate_tac -1)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=aa in allE, rotate_tac -1)
apply (erule_tac x=b in allE, rotate_tac -1, clarsimp)
apply clarsimp
apply (case_tac ka) apply clarsimp apply (drule ZeroHeightReachableElim) apply clarsimp
apply clarsimp
apply (rotate_tac -2, drule ReachableSplit) apply simp apply clarsimp
apply (drule IfElim1) apply simp apply clarsimp
apply (erule disjE)
apply clarsimp
apply (rotate_tac 4) apply (erule thin_rl)
apply (rotate_tac -1)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ag in allE, rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (rule IfT) apply (simp, fastforce)
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule) prefer 2
apply (rule, rule, rule IfT) apply simp apply fastforce
apply clarsimp
apply (erule_tac x=ma in allE, clarsimp)
apply (rotate_tac -1, erule_tac x=ll in allE)
apply (rotate_tac -1, erule_tac x=a in allE)
apply (rotate_tac -1, erule_tac x=aa in allE)
apply (rotate_tac -1, erule_tac x=b in allE, clarsimp)
apply (simp add: SP_inv_def)
(* apply (rotate_tac -1, erule_tac x=pc in allE)*)
apply (rotate_tac -1, erule_tac x="TRUE#ag" in allE)
apply (rotate_tac -1, erule_tac x=af in allE)
apply (rotate_tac -1, erule_tac x=bc in allE)
apply (rotate_tac -1, erule impE)
apply (rule, rule, erule IfT) apply simp
apply clarsimp
apply clarsimp
apply (rotate_tac 3) apply (erule thin_rl)
apply (erule_tac x=ad in allE)
apply (erule_tac x=bb in allE)
apply (erule_tac x=ag in allE, rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bc in allE, erule impE)
apply rule
apply (erule MultiApp)
apply (erule IfF) apply (assumption, simp, simp)
apply (erule impE, simp add: SP_pre_def) apply (rule, rule, rule, rule) prefer 2
apply (rule, rule, erule IfF) apply (assumption, fastforce,simp)
apply clarsimp
apply (erule_tac x=ma in allE, clarsimp)
apply (rotate_tac -1, erule_tac x=ll in allE)
apply (rotate_tac -1, erule_tac x=a in allE)
apply (rotate_tac -1, erule_tac x=aa in allE)
apply (rotate_tac -1, erule_tac x=b in allE, clarsimp)
apply (simp add: SP_inv_def)
(* apply (rotate_tac -1, erule_tac x="Suc l" in allE)*)
apply (rotate_tac -1, erule_tac x="v#ag" in allE)
apply (rotate_tac -1, erule_tac x=af in allE)
apply (rotate_tac -1, erule_tac x=bc in allE)
apply (rotate_tac -1, erule impE)
apply (rule, rule)
apply (erule IfF) apply (assumption, simp, simp)
apply clarsimp
(*RET*)
apply clarsimp
apply (simp add: mbody_is_def validn_def Opsem_def MS_def Reach_def, clarsimp)
apply rule
apply clarsimp apply (erule eval_cases) apply clarsimp
apply (drule RetElim1) apply simp apply simp
apply clarsimp
apply rule
apply clarsimp
apply (case_tac ka) apply clarsimp apply (drule ZeroHeightMultiElim) apply clarsimp
apply clarsimp
apply (rotate_tac -2, drule MultiSplit) apply simp apply clarsimp
apply (drule RetElim1, clarsimp) apply simp
apply clarsimp
apply (case_tac ka) apply clarsimp apply (drule ZeroHeightReachableElim) apply clarsimp
apply clarsimp
apply (rotate_tac -2, drule ReachableSplit) apply simp apply clarsimp
apply (drule RetElim1, clarsimp) apply simp
(*INVS*)
apply clarsimp
apply (subgoal_tac "INVS_soundK K G C m l D m' T MI Anno Anno2 (par,code,l0) A B I")
apply (simp add: INVS_soundK_def)
apply (erule impE) apply (simp add: INVS_SC_def) apply (rule, rule, rule, rule) apply assumption apply (rule, assumption)
apply assumption
apply assumption
apply (rule INVS_soundK_all)
(*CONSEQ*)
apply clarsimp
apply rule
apply clarsimp
apply (simp add: validn_def,clarsimp)
apply (erule thin_rl)
apply (rotate_tac 5, erule thin_rl)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=aa in allE, rotate_tac -1)
apply (erule_tac x=ba in allE, clarsimp, rotate_tac -1)
apply (erule_tac x=ab in allE)
apply (erule_tac x=bb in allE)
apply (rotate_tac -1)
apply (erule_tac x=ac in allE, rotate_tac -1)
apply (erule_tac x=ad in allE, rotate_tac -1)
apply (erule_tac x=bc in allE, clarsimp)
apply (erule_tac x=k in allE, clarsimp)
apply (rotate_tac -1)
apply (erule_tac x=ll in allE, rotate_tac -1)
apply (erule_tac x=ae in allE, rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bd in allE, clarsimp)
apply clarsimp
apply (simp add: validn_def,clarsimp)
apply (erule thin_rl)
apply (rotate_tac 5, erule thin_rl)
apply (erule_tac x=a in allE, rotate_tac -1)
apply (erule_tac x=aa in allE, rotate_tac -1)
apply (erule_tac x=ba in allE, clarsimp, rotate_tac -1)
apply (erule_tac x=ab in allE)
apply (erule_tac x=bb in allE)
apply (rotate_tac -1)
apply (erule_tac x=ac in allE, rotate_tac -1)
apply (erule_tac x=ad in allE, rotate_tac -1)
apply (erule_tac x=bc in allE, clarsimp)
apply (erule_tac x=k in allE, clarsimp)
apply (rotate_tac -1)
apply (erule_tac x=ll in allE, rotate_tac -1)
apply (erule_tac x=ae in allE, rotate_tac -1)
apply (erule_tac x=af in allE)
apply (erule_tac x=bd in allE, clarsimp)
(*INJECT*)
apply clarsimp apply (rule validn_lower) apply fast apply simp
(*AX*)
apply clarsimp apply (simp add: G_validn_def)
done
(*>*)
text\<open>The statement of this lemma gives a semantic interpretation of
the two judgement forms, as \<open>SP_Assum\<close>-judgements enjoy validity
up to execution height $K$, while \<open>SP_Deriv\<close>-judgements are
valid up to level $K+1$.\<close>
(*<*)
lemma SOUND_K:
"\<lbrakk> G \<rhd> \<lbrace>A\<rbrace> C,m,l \<lbrace>B\<rbrace> I; G_validn K G ; MST_validn K\<rbrakk>
\<Longrightarrow> \<Turnstile>\<^sub>(Suc K) \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I"
apply (drule SOUND_Aux) apply assumption+ apply simp
done
(*>*)
text\<open>From this, we obtain a soundness result that still involves
context validity.\<close>
theorem SOUND_in_CTXT:
"\<lbrakk>G \<rhd> \<lbrace>A\<rbrace> C,m,l \<lbrace>B\<rbrace> I; G_valid G; MST_valid\<rbrakk> \<Longrightarrow> \<Turnstile> \<lbrace>A\<rbrace> C, m, l \<lbrace>B\<rbrace> I"
(*<*)
apply (rule validn_valid)
apply clarsimp
apply (rule validn_lower)
apply (erule SOUND_K)
prefer 3 apply (subgoal_tac "K \<le> Suc K", assumption) apply simp
apply (erule G_valid_validn)
apply (erule MST_valid_validn)
done
(*>*)
text\<open>We will now show that the two semantic assumptions can be replaced by
the verified-program property.\<close>
subsection\<open>Soundness of verified programs \label{SectionContextElimination}\<close>
text\<open>In order to obtain a soundness result that does not require
validity assumptions for the context or the specification table,
we show that the \<open>VP\<close> property implies context validity.
First, the elimination of contexts. By induction on
\<open>k\<close>, we prove\<close>
lemma VPG_MSTn_Gn[rule_format]:
"VP_G G \<longrightarrow> MST_validn k \<longrightarrow> G_validn k G"
(*<*)
apply (induct k)
(*k=0*)
apply clarsimp
apply (simp add: VP_G_def, clarsimp)
apply (simp add: G_validn_def, clarsimp)
apply (simp add: validn_def) apply clarsimp
apply rule
apply clarsimp apply (drule no_zero_height_Exec_derivs) apply simp
apply clarsimp
apply rule
apply clarsimp apply (drule ZeroHeightMultiElim) apply clarsimp
apply (rotate_tac 2)
apply (erule_tac x=C in allE, erule_tac x=m in allE)
apply (erule_tac x=a in allE, erule_tac x=aa in allE, erule_tac x=b in allE, clarsimp)
apply (erule_tac x=C in allE, erule_tac x=m in allE)
apply (erule_tac x=l in allE, erule_tac x=A in allE, erule_tac x=B in allE, clarsimp)
apply (rule AssertionsImplyAnnoInvariants)
prefer 3 apply assumption
apply assumption+
apply clarsimp apply (drule ZeroHeightReachableElim) apply clarsimp
apply (rotate_tac 2)
apply (erule_tac x=C in allE, erule_tac x=m in allE)
apply (erule_tac x=a in allE, erule_tac x=aa in allE, erule_tac x=b in allE, clarsimp)
apply (erule_tac x=C in allE, erule_tac x=m in allE,
erule_tac x=l in allE, erule_tac x=A in allE, erule_tac x=B in allE, clarsimp)
apply (erule AssertionsImplyMethInvariants, assumption+)
(*k>0*)
apply clarsimp apply (simp add: VP_G_def)
apply (simp (no_asm) add: G_validn_def, clarsimp)
apply (subgoal_tac "MST_validn k", clarsimp)
apply (rule SOUND_K) apply fast
apply assumption
apply assumption
apply (erule MST_validn_lower) apply simp
done
(*>*)
text\<open>which implies\<close>
lemma VPG_MST_G: "\<lbrakk>VP_G G; MST_valid\<rbrakk> \<Longrightarrow> G_valid G"
(*<*)
apply (rule G_validn_valid, clarsimp)
apply (erule VPG_MSTn_Gn)
apply (erule MST_valid_validn)
done
(*>*)
text\<open>Next, the elimination of \<open>MST_valid\<close>. Again by induction on
\<open>k\<close>, we prove\<close>
lemma VPG_MSTn[rule_format]: "VP_G G \<longrightarrow> MST_validn k"
(*<*)
apply (induct k)
apply (simp add: MST_validn_def, clarsimp)
apply (simp add: validn_def, clarsimp)
apply rule
apply clarsimp apply (drule no_zero_height_Exec_derivs) apply simp
apply clarsimp
apply rule
apply clarsimp apply (drule ZeroHeightMultiElim) apply clarsimp
apply (simp add: VP_G_def, clarsimp, rotate_tac -1)
apply (erule_tac x=C in allE, erule_tac x=m in allE)
apply (erule_tac x=par in allE, erule_tac x=code in allE, erule_tac x=l0 in allE, clarsimp)
apply (rule AssertionsImplyAnnoInvariants) apply fast apply assumption+ apply simp
apply clarsimp apply (drule ZeroHeightReachableElim) apply clarsimp
apply (simp add: VP_G_def, clarsimp, rotate_tac -1)
apply (erule_tac x=C in allE, erule_tac x=m in allE)
apply (erule_tac x=par in allE, erule_tac x=code in allE, erule_tac x=l0 in allE, clarsimp)
apply (frule AssertionsImplyMethInvariants) apply assumption apply (simp add: mkState_def)
apply clarsimp
apply (frule VPG_MSTn_Gn, assumption)
apply (simp add: VP_G_def)
apply (simp (no_asm) add: MST_validn_def, clarsimp)
apply (rule SOUND_K)
apply (rotate_tac 3)
apply (erule_tac x=C in allE, erule_tac x=m in allE)
apply (erule_tac x=par in allE, erule_tac x=code in allE, erule_tac x=l0 in allE, clarsimp) apply assumption
apply assumption
apply assumption
done
(*>*)
text\<open>which yields\<close>
lemma VPG_MST:"VP_G G \<Longrightarrow> MST_valid"
(*<*)
apply (rule MST_validn_valid, clarsimp)
apply (erule VPG_MSTn)
done
(*>*)
text\<open>Combining these two results and unfolding the definition of
program validity yields the final soundness result.\<close>
theorem VP_VALID: "VP \<Longrightarrow> Prog_valid"
(*<*)
apply (simp add: VP_def Prog_valid_def, clarsimp)
apply (frule VPG_MST, simp)
apply (drule VPG_MST_G, assumption) apply fast
done
(*>*)
(*<*)
text \<open>In particular, the $\mathit{VP}$ property implies that all
method specifications are honoured by their respective method
implementations.\<close>
theorem "VP \<Longrightarrow> MST_valid"
(*<*)
by (drule VP_VALID, simp add: Prog_valid_def)
(*>*)
end
(*>*)
diff --git a/thys/CakeML/generated/CakeML/SemanticPrimitives.thy b/thys/CakeML/generated/CakeML/SemanticPrimitives.thy
--- a/thys/CakeML/generated/CakeML/SemanticPrimitives.thy
+++ b/thys/CakeML/generated/CakeML/SemanticPrimitives.thy
@@ -1,998 +1,998 @@
chapter \<open>Generated by Lem from \<open>semantics/semanticPrimitives.lem\<close>.\<close>
theory "SemanticPrimitives"
imports
Main
"HOL-Library.Datatype_Records"
"LEM.Lem_pervasives"
"LEM.Lem_list_extra"
"LEM.Lem_string"
"Lib"
"Namespace"
"Ast"
"Ffi"
"FpSem"
"LEM.Lem_string_extra"
begin
\<comment> \<open>\<open>open import Pervasives\<close>\<close>
\<comment> \<open>\<open>open import Lib\<close>\<close>
\<comment> \<open>\<open>import List_extra\<close>\<close>
\<comment> \<open>\<open>import String\<close>\<close>
\<comment> \<open>\<open>import String_extra\<close>\<close>
\<comment> \<open>\<open>open import Ast\<close>\<close>
\<comment> \<open>\<open>open import Namespace\<close>\<close>
\<comment> \<open>\<open>open import Ffi\<close>\<close>
\<comment> \<open>\<open>open import FpSem\<close>\<close>
\<comment> \<open>\<open> The type that a constructor builds is either a named datatype or an exception.
* For exceptions, we also keep the module that the exception was declared in. \<close>\<close>
datatype tid_or_exn =
TypeId " (modN, typeN) id0 "
| TypeExn " (modN, conN) id0 "
\<comment> \<open>\<open>val type_defs_to_new_tdecs : list modN -> type_def -> set tid_or_exn\<close>\<close>
definition type_defs_to_new_tdecs :: "(string)list \<Rightarrow>((tvarN)list*string*(conN*(t)list)list)list \<Rightarrow>(tid_or_exn)set " where
" type_defs_to_new_tdecs mn tdefs = (
List.set (List.map ( \<lambda>x .
(case x of (tvs,tn,ctors) => TypeId (mk_id mn tn) )) tdefs))"
datatype_record 'v sem_env =
v ::" (modN, varN, 'v) namespace "
c ::" (modN, conN, (nat * tid_or_exn)) namespace "
\<comment> \<open>\<open> Value forms \<close>\<close>
datatype v =
Litv " lit "
\<comment> \<open>\<open> Constructor application. \<close>\<close>
| Conv " (conN * tid_or_exn)option " " v list "
\<comment> \<open>\<open> Function closures
The environment is used for the free variables in the function \<close>\<close>
| Closure " v sem_env " " varN " " exp0 "
\<comment> \<open>\<open> Function closure for recursive functions
* See Closure and Letrec above
* The last variable name indicates which function from the mutually
* recursive bundle this closure value represents \<close>\<close>
| Recclosure " v sem_env " " (varN * varN * exp0) list " " varN "
| Loc " nat "
| Vectorv " v list "
type_synonym env_ctor =" (modN, conN, (nat * tid_or_exn)) namespace "
type_synonym env_val =" (modN, varN, v) namespace "
definition Bindv :: " v " where
" Bindv = ( Conv (Some((''Bind''),TypeExn(Short(''Bind'')))) [])"
\<comment> \<open>\<open> The result of evaluation \<close>\<close>
datatype abort =
Rtype_error
| Rtimeout_error
datatype 'a error_result =
Rraise " 'a " \<comment> \<open>\<open> Should only be a value of type exn \<close>\<close>
| Rabort " abort "
datatype( 'a, 'b) result =
Rval " 'a "
| Rerr " 'b error_result "
\<comment> \<open>\<open> Stores \<close>\<close>
datatype 'a store_v =
\<comment> \<open>\<open> A ref cell \<close>\<close>
Refv " 'a "
\<comment> \<open>\<open> A byte array \<close>\<close>
| W8array " 8 word list "
\<comment> \<open>\<open> An array of values \<close>\<close>
| Varray " 'a list "
\<comment> \<open>\<open>val store_v_same_type : forall 'a. store_v 'a -> store_v 'a -> bool\<close>\<close>
definition store_v_same_type :: " 'a store_v \<Rightarrow> 'a store_v \<Rightarrow> bool " where
" store_v_same_type v1 v2 = (
(case (v1,v2) of
(Refv _, Refv _) => True
| (W8array _,W8array _) => True
| (Varray _,Varray _) => True
| _ => False
))"
\<comment> \<open>\<open> The nth item in the list is the value at location n \<close>\<close>
type_synonym 'a store =" ( 'a store_v) list "
\<comment> \<open>\<open>val empty_store : forall 'a. store 'a\<close>\<close>
definition empty_store :: "('a store_v)list " where
" empty_store = ( [])"
\<comment> \<open>\<open>val store_lookup : forall 'a. nat -> store 'a -> maybe (store_v 'a)\<close>\<close>
definition store_lookup :: " nat \<Rightarrow>('a store_v)list \<Rightarrow>('a store_v)option " where
" store_lookup l st = (
if l < List.length st then
Some (List.nth st l)
else
None )"
\<comment> \<open>\<open>val store_alloc : forall 'a. store_v 'a -> store 'a -> store 'a * nat\<close>\<close>
definition store_alloc :: " 'a store_v \<Rightarrow>('a store_v)list \<Rightarrow>('a store_v)list*nat " where
" store_alloc v2 st = (
((st @ [v2]), List.length st))"
\<comment> \<open>\<open>val store_assign : forall 'a. nat -> store_v 'a -> store 'a -> maybe (store 'a)\<close>\<close>
definition store_assign :: " nat \<Rightarrow> 'a store_v \<Rightarrow>('a store_v)list \<Rightarrow>(('a store_v)list)option " where
" store_assign n v2 st = (
if (n < List.length st) \<and>
store_v_same_type (List.nth st n) v2
then
Some (List.list_update st n v2)
else
None )"
datatype_record 'ffi state =
clock ::" nat "
refs ::" v store "
ffi ::" 'ffi ffi_state "
defined_types ::" tid_or_exn set "
defined_mods ::" ( modN list) set "
\<comment> \<open>\<open> Other primitives \<close>\<close>
\<comment> \<open>\<open> Check that a constructor is properly applied \<close>\<close>
\<comment> \<open>\<open>val do_con_check : env_ctor -> maybe (id modN conN) -> nat -> bool\<close>\<close>
fun do_con_check :: "((string),(string),(nat*tid_or_exn))namespace \<Rightarrow>(((string),(string))id0)option \<Rightarrow> nat \<Rightarrow> bool " where
" do_con_check cenv None l = ( True )"
|" do_con_check cenv (Some n) l = (
(case nsLookup cenv n of
None => False
| Some (l',ns) => l = l'
))"
\<comment> \<open>\<open>val build_conv : env_ctor -> maybe (id modN conN) -> list v -> maybe v\<close>\<close>
fun build_conv :: "((string),(string),(nat*tid_or_exn))namespace \<Rightarrow>(((string),(string))id0)option \<Rightarrow>(v)list \<Rightarrow>(v)option " where
" build_conv envC None vs = (
Some (Conv None vs))"
|" build_conv envC (Some id1) vs = (
(case nsLookup envC id1 of
None => None
| Some (len,t1) => Some (Conv (Some (id_to_n id1, t1)) vs)
))"
\<comment> \<open>\<open>val lit_same_type : lit -> lit -> bool\<close>\<close>
definition lit_same_type :: " lit \<Rightarrow> lit \<Rightarrow> bool " where
" lit_same_type l1 l2 = (
(case (l1,l2) of
(IntLit _, IntLit _) => True
| (Char _, Char _) => True
| (StrLit _, StrLit _) => True
| (Word8 _, Word8 _) => True
| (Word64 _, Word64 _) => True
| _ => False
))"
datatype 'a match_result =
No_match
| Match_type_error
| Match " 'a "
\<comment> \<open>\<open>val same_tid : tid_or_exn -> tid_or_exn -> bool\<close>\<close>
fun same_tid :: " tid_or_exn \<Rightarrow> tid_or_exn \<Rightarrow> bool " where
" same_tid (TypeId tn1) (TypeId tn2) = ( tn1 = tn2 )"
|" same_tid (TypeExn _) (TypeExn _) = ( True )"
|" same_tid _ _ = ( False )"
\<comment> \<open>\<open>val same_ctor : conN * tid_or_exn -> conN * tid_or_exn -> bool\<close>\<close>
fun same_ctor :: " string*tid_or_exn \<Rightarrow> string*tid_or_exn \<Rightarrow> bool " where
" same_ctor (cn1, TypeExn mn1) (cn2, TypeExn mn2) = ( (cn1 = cn2) \<and> (mn1 = mn2))"
|" same_ctor (cn1, _) (cn2, _) = ( cn1 = cn2 )"
\<comment> \<open>\<open>val ctor_same_type : maybe (conN * tid_or_exn) -> maybe (conN * tid_or_exn) -> bool\<close>\<close>
definition ctor_same_type :: "(string*tid_or_exn)option \<Rightarrow>(string*tid_or_exn)option \<Rightarrow> bool " where
" ctor_same_type c1 c2 = (
(case (c1,c2) of
(None, None) => True
| (Some (_,t1), Some (_,t2)) => same_tid t1 t2
| _ => False
))"
\<comment> \<open>\<open> A big-step pattern matcher. If the value matches the pattern, return an
* environment with the pattern variables bound to the corresponding sub-terms
* of the value; this environment extends the environment given as an argument.
* No_match is returned when there is no match, but any constructors
* encountered in determining the match failure are applied to the correct
* number of arguments, and constructors in corresponding positions in the
* pattern and value come from the same type. Match_type_error is returned
* when one of these conditions is violated \<close>\<close>
\<comment> \<open>\<open>val pmatch : env_ctor -> store v -> pat -> v -> alist varN v -> match_result (alist varN v)\<close>\<close>
function (sequential,domintros)
pmatch_list :: "((string),(string),(nat*tid_or_exn))namespace \<Rightarrow>((v)store_v)list \<Rightarrow>(pat)list \<Rightarrow>(v)list \<Rightarrow>(string*v)list \<Rightarrow>((string*v)list)match_result "
and
pmatch :: "((string),(string),(nat*tid_or_exn))namespace \<Rightarrow>((v)store_v)list \<Rightarrow> pat \<Rightarrow> v \<Rightarrow>(string*v)list \<Rightarrow>((string*v)list)match_result " where
"
pmatch envC s Pany v' env = ( Match env )"
|"
pmatch envC s (Pvar x) v' env = ( Match ((x,v')# env))"
|"
pmatch envC s (Plit l) (Litv l') env = (
if l = l' then
Match env
else if lit_same_type l l' then
No_match
else
Match_type_error )"
|"
pmatch envC s (Pcon (Some n) ps) (Conv (Some (n', t')) vs) env = (
(case nsLookup envC n of
Some (l, t1) =>
if same_tid t1 t' \<and> (List.length ps = l) then
if same_ctor (id_to_n n, t1) (n',t') then
if List.length vs = l then
pmatch_list envC s ps vs env
else
Match_type_error
else
No_match
else
Match_type_error
| _ => Match_type_error
))"
|"
pmatch envC s (Pcon None ps) (Conv None vs) env = (
if List.length ps = List.length vs then
pmatch_list envC s ps vs env
else
Match_type_error )"
|"
pmatch envC s (Pref p) (Loc lnum) env = (
(case store_lookup lnum s of
Some (Refv v2) => pmatch envC s p v2 env
| Some _ => Match_type_error
| None => Match_type_error
))"
|"
pmatch envC s (Ptannot p t1) v2 env = (
pmatch envC s p v2 env )"
|"
pmatch envC _ _ _ env = ( Match_type_error )"
|"
pmatch_list envC s [] [] env = ( Match env )"
|"
pmatch_list envC s (p # ps) (v2 # vs) env = (
(case pmatch envC s p v2 env of
No_match => No_match
| Match_type_error => Match_type_error
| Match env' => pmatch_list envC s ps vs env'
))"
|"
pmatch_list envC s _ _ env = ( Match_type_error )"
by pat_completeness auto
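\<comment> \<open>Illustrative note (not from the Lem source): unfolding the equations above,
   matching the literal pattern \<open>Plit (IntLit 1)\<close> against \<open>Litv (IntLit 1)\<close> yields
   \<open>Match env\<close>, against \<open>Litv (IntLit 2)\<close> it yields \<open>No_match\<close>, and against a literal
   of a different kind, such as \<open>Litv (Char c)\<close>, it yields \<open>Match_type_error\<close>,
   because \<open>lit_same_type\<close> only holds for literals of the same kind.\<close>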
\<comment> \<open>\<open> Bind each function of a mutually recursive set of functions to its closure \<close>\<close>
\<comment> \<open>\<open>val build_rec_env : list (varN * varN * exp) -> sem_env v -> env_val -> env_val\<close>\<close>
definition build_rec_env :: "(varN*varN*exp0)list \<Rightarrow>(v)sem_env \<Rightarrow>((string),(string),(v))namespace \<Rightarrow>((string),(string),(v))namespace " where
" build_rec_env funs cl_env add_to_env = (
List.foldr ( \<lambda>x .
(case x of
(f,x,e) => \<lambda> env' . nsBind f (Recclosure cl_env funs f) env'
)) funs add_to_env )"
\<comment> \<open>\<open> Lookup in the list of mutually recursive functions \<close>\<close>
\<comment> \<open>\<open>val find_recfun : forall 'a 'b. varN -> list (varN * 'a * 'b) -> maybe ('a * 'b)\<close>\<close>
fun find_recfun :: " string \<Rightarrow>(string*'a*'b)list \<Rightarrow>('a*'b)option " where
" find_recfun n ([]) = ( None )"
|" find_recfun n ((f,x,e) # funs) = (
if f = n then
Some (x,e)
else
find_recfun n funs )"
datatype eq_result =
Eq_val " bool "
| Eq_type_error
\<comment> \<open>\<open>val do_eq : v -> v -> eq_result\<close>\<close>
function (sequential,domintros)
do_eq_list :: "(v)list \<Rightarrow>(v)list \<Rightarrow> eq_result "
and
do_eq :: " v \<Rightarrow> v \<Rightarrow> eq_result " where
"
do_eq (Litv l1) (Litv l2) = (
if lit_same_type l1 l2 then Eq_val (l1 = l2)
else Eq_type_error )"
|"
do_eq (Loc l1) (Loc l2) = ( Eq_val (l1 = l2))"
|"
do_eq (Conv cn1 vs1) (Conv cn2 vs2) = (
if (cn1 = cn2) \<and> (List.length vs1 = List.length vs2) then
do_eq_list vs1 vs2
else if ctor_same_type cn1 cn2 then
Eq_val False
else
Eq_type_error )"
|"
do_eq (Vectorv vs1) (Vectorv vs2) = (
if List.length vs1 = List.length vs2 then
do_eq_list vs1 vs2
else
Eq_val False )"
|"
do_eq (Closure _ _ _) (Closure _ _ _) = ( Eq_val True )"
|"
do_eq (Closure _ _ _) (Recclosure _ _ _) = ( Eq_val True )"
|"
do_eq (Recclosure _ _ _) (Closure _ _ _) = ( Eq_val True )"
|"
do_eq (Recclosure _ _ _) (Recclosure _ _ _) = ( Eq_val True )"
|"
do_eq _ _ = ( Eq_type_error )"
|"
do_eq_list [] [] = ( Eq_val True )"
|"
do_eq_list (v1 # vs1) (v2 # vs2) = (
(case do_eq v1 v2 of
Eq_type_error => Eq_type_error
| Eq_val r =>
if \<not> r then
Eq_val False
else
do_eq_list vs1 vs2
))"
|"
do_eq_list _ _ = ( Eq_val False )"
by pat_completeness auto
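\<comment> \<open>Illustrative note (not from the Lem source): by the equations above,
   \<open>do_eq (Litv (IntLit 1)) (Litv (IntLit 1)) = Eq_val True\<close> and
   \<open>do_eq (Litv (IntLit 1)) (Litv (IntLit 2)) = Eq_val False\<close>, whereas comparing
   literals of different kinds (e.g. an \<open>IntLit\<close> with a \<open>Char\<close>) gives
   \<open>Eq_type_error\<close>; any two closure values are treated as equal (\<open>Eq_val True\<close>).\<close>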
\<comment> \<open>\<open>val prim_exn : conN -> v\<close>\<close>
definition prim_exn :: " string \<Rightarrow> v " where
" prim_exn cn = ( Conv (Some (cn, TypeExn (Short cn))) [])"
\<comment> \<open>\<open> Do an application \<close>\<close>
\<comment> \<open>\<open>val do_opapp : list v -> maybe (sem_env v * exp)\<close>\<close>
fun do_opapp :: "(v)list \<Rightarrow>((v)sem_env*exp0)option " where
" do_opapp ([Closure env n e, v2]) = (
Some (( env (| v := (nsBind n v2(v env)) |)), e))"
|" do_opapp ([Recclosure env funs n, v2]) = (
if allDistinct (List.map ( \<lambda>x .
(case x of (f,x,e) => f )) funs) then
(case find_recfun n funs of
Some (n,e) => Some (( env (| v := (nsBind n v2 (build_rec_env funs env(v env))) |)), e)
| None => None
)
else
None )"
|" do_opapp _ = ( None )"
\<comment> \<open>\<open> If a value represents a list, get that list. Otherwise return Nothing \<close>\<close>
\<comment> \<open>\<open>val v_to_list : v -> maybe (list v)\<close>\<close>
function (sequential,domintros) v_to_list :: " v \<Rightarrow>((v)list)option " where
" v_to_list (Conv (Some (cn, TypeId (Short tn))) []) = (
if (cn = (''nil'')) \<and> (tn = (''list'')) then
Some []
else
None )"
|" v_to_list (Conv (Some (cn,TypeId (Short tn))) [v1,v2]) = (
if (cn = (''::'')) \<and> (tn = (''list'')) then
(case v_to_list v2 of
Some vs => Some (v1 # vs)
| None => None
)
else
None )"
|" v_to_list _ = ( None )"
by pat_completeness auto
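\<comment> \<open>Illustrative note (not from the Lem source): for a value encoding the
   one-element list \<open>[x]\<close>, i.e.
   \<open>Conv (Some (''::'', TypeId (Short ''list''))) [x, Conv (Some (''nil'', TypeId (Short ''list''))) []]\<close>,
   the equations above give \<open>Some [x]\<close>; any value not built from \<open>::\<close> and \<open>nil\<close>
   of type \<open>list\<close> yields \<open>None\<close>.\<close>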
\<comment> \<open>\<open>val v_to_char_list : v -> maybe (list char)\<close>\<close>
function (sequential,domintros) v_to_char_list :: " v \<Rightarrow>((char)list)option " where
" v_to_char_list (Conv (Some (cn, TypeId (Short tn))) []) = (
if (cn = (''nil'')) \<and> (tn = (''list'')) then
Some []
else
None )"
|" v_to_char_list (Conv (Some (cn,TypeId (Short tn))) [Litv (Char c2),v2]) = (
if (cn = (''::'')) \<and> (tn = (''list'')) then
(case v_to_char_list v2 of
Some cs => Some (c2 # cs)
| None => None
)
else
None )"
|" v_to_char_list _ = ( None )"
by pat_completeness auto
\<comment> \<open>\<open>val vs_to_string : list v -> maybe string\<close>\<close>
function (sequential,domintros) vs_to_string :: "(v)list \<Rightarrow>(string)option " where
" vs_to_string [] = ( Some (''''))"
|" vs_to_string (Litv(StrLit s1)# vs) = (
(case vs_to_string vs of
Some s2 => Some (s1 @ s2)
| _ => None
))"
|" vs_to_string _ = ( None )"
by pat_completeness auto
\<comment> \<open>\<open>val copy_array : forall 'a. list 'a * integer -> integer -> maybe (list 'a * integer) -> maybe (list 'a)\<close>\<close>
fun copy_array :: " 'a list*int \<Rightarrow> int \<Rightarrow>('a list*int)option \<Rightarrow>('a list)option " where
" copy_array (src,srcoff) len d = (
if (srcoff <( 0 :: int)) \<or> ((len <( 0 :: int)) \<or> (List.length src < nat (abs ( (srcoff + len))))) then None else
(let copied = (List.take (nat (abs ( len))) (List.drop (nat (abs ( srcoff))) src)) in
(case d of
Some (dst,dstoff) =>
if (dstoff <( 0 :: int)) \<or> (List.length dst < nat (abs ( (dstoff + len)))) then None else
Some ((List.take (nat (abs ( dstoff))) dst @
copied) @
List.drop (nat (abs ( (dstoff + len)))) dst)
| None => Some copied
)))"
\<comment> \<open>\<open>val ws_to_chars : list word8 -> list char\<close>\<close>
definition ws_to_chars :: "(8 word)list \<Rightarrow>(char)list " where
" ws_to_chars ws = ( List.map (\<lambda> w . (%n. char_of (n::nat))(unat w)) ws )"
\<comment> \<open>\<open>val chars_to_ws : list char -> list word8\<close>\<close>
definition chars_to_ws :: "(char)list \<Rightarrow>(8 word)list " where
" chars_to_ws cs = ( List.map (\<lambda> c2 . word_of_int(int(of_char c2))) cs )"
\<comment> \<open>\<open>val opn_lookup : opn -> integer -> integer -> integer\<close>\<close>
fun opn_lookup :: " opn \<Rightarrow> int \<Rightarrow> int \<Rightarrow> int " where
" opn_lookup Plus = ( (+))"
|" opn_lookup Minus = ( (-))"
|" opn_lookup Times = ( (*))"
|" opn_lookup Divide = ( (div))"
|" opn_lookup Modulo = ( (mod))"
\<comment> \<open>\<open>val opb_lookup : opb -> integer -> integer -> bool\<close>\<close>
fun opb_lookup :: " opb \<Rightarrow> int \<Rightarrow> int \<Rightarrow> bool " where
" opb_lookup Lt = ( (<))"
|" opb_lookup Gt = ( (>))"
|" opb_lookup Leq = ( (\<le>))"
|" opb_lookup Geq = ( (\<ge>))"
\<comment> \<open>\<open>val opw8_lookup : opw -> word8 -> word8 -> word8\<close>\<close>
fun opw8_lookup :: " opw \<Rightarrow> 8 word \<Rightarrow> 8 word \<Rightarrow> 8 word " where
- " opw8_lookup Andw = ( Bits.bitAND )"
-|" opw8_lookup Orw = ( Bits.bitOR )"
-|" opw8_lookup Xor = ( Bits.bitXOR )"
+ " opw8_lookup Andw = ( (AND) )"
+|" opw8_lookup Orw = ( (OR) )"
+|" opw8_lookup Xor = ( (XOR) )"
|" opw8_lookup Add = ( Groups.plus )"
|" opw8_lookup Sub = ( Groups.minus )"
\<comment> \<open>\<open>val opw64_lookup : opw -> word64 -> word64 -> word64\<close>\<close>
fun opw64_lookup :: " opw \<Rightarrow> 64 word \<Rightarrow> 64 word \<Rightarrow> 64 word " where
- " opw64_lookup Andw = ( Bits.bitAND )"
-|" opw64_lookup Orw = ( Bits.bitOR )"
-|" opw64_lookup Xor = ( Bits.bitXOR )"
+ " opw64_lookup Andw = ( (AND) )"
+|" opw64_lookup Orw = ( (OR) )"
+|" opw64_lookup Xor = ( (XOR) )"
|" opw64_lookup Add = ( Groups.plus )"
|" opw64_lookup Sub = ( Groups.minus )"
\<comment> \<open>\<open>val shift8_lookup : shift -> word8 -> nat -> word8\<close>\<close>
fun shift8_lookup :: " shift \<Rightarrow> 8 word \<Rightarrow> nat \<Rightarrow> 8 word " where
" shift8_lookup Lsl = ( shiftl )"
|" shift8_lookup Lsr = ( shiftr )"
|" shift8_lookup Asr = ( sshiftr )"
|" shift8_lookup Ror = ( (% a b. word_rotr b a) )"
\<comment> \<open>\<open>val shift64_lookup : shift -> word64 -> nat -> word64\<close>\<close>
fun shift64_lookup :: " shift \<Rightarrow> 64 word \<Rightarrow> nat \<Rightarrow> 64 word " where
" shift64_lookup Lsl = ( shiftl )"
|" shift64_lookup Lsr = ( shiftr )"
|" shift64_lookup Asr = ( sshiftr )"
|" shift64_lookup Ror = ( (% a b. word_rotr b a) )"
\<comment> \<open>\<open>val Boolv : bool -> v\<close>\<close>
definition Boolv :: " bool \<Rightarrow> v " where
" Boolv b = ( if b
then Conv (Some ((''true''), TypeId (Short (''bool'')))) []
else Conv (Some ((''false''), TypeId (Short (''bool'')))) [])"
datatype exp_or_val =
Exp " exp0 "
| Val " v "
type_synonym( 'ffi, 'v) store_ffi =" 'v store * 'ffi ffi_state "
\<comment> \<open>\<open>val do_app : forall 'ffi. store_ffi 'ffi v -> op -> list v -> maybe (store_ffi 'ffi v * result v v)\<close>\<close>
fun do_app :: "((v)store_v)list*'ffi ffi_state \<Rightarrow> op0 \<Rightarrow>(v)list \<Rightarrow>((((v)store_v)list*'ffi ffi_state)*((v),(v))result)option " where
" do_app ((s:: v store),(t1:: 'ffi ffi_state)) op1 vs = (
(case (op1, vs) of
(Opn op1, [Litv (IntLit n1), Litv (IntLit n2)]) =>
if ((op1 = Divide) \<or> (op1 = Modulo)) \<and> (n2 =( 0 :: int)) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Div''))))
else
Some ((s,t1), Rval (Litv (IntLit (opn_lookup op1 n1 n2))))
| (Opb op1, [Litv (IntLit n1), Litv (IntLit n2)]) =>
Some ((s,t1), Rval (Boolv (opb_lookup op1 n1 n2)))
| (Opw W8 op1, [Litv (Word8 w1), Litv (Word8 w2)]) =>
Some ((s,t1), Rval (Litv (Word8 (opw8_lookup op1 w1 w2))))
| (Opw W64 op1, [Litv (Word64 w1), Litv (Word64 w2)]) =>
Some ((s,t1), Rval (Litv (Word64 (opw64_lookup op1 w1 w2))))
| (FP_bop bop, [Litv (Word64 w1), Litv (Word64 w2)]) =>
Some ((s,t1),Rval (Litv (Word64 (fp_bop bop w1 w2))))
| (FP_uop uop, [Litv (Word64 w)]) =>
Some ((s,t1),Rval (Litv (Word64 (fp_uop uop w))))
| (FP_cmp cmp, [Litv (Word64 w1), Litv (Word64 w2)]) =>
Some ((s,t1),Rval (Boolv (fp_cmp cmp w1 w2)))
| (Shift W8 op1 n, [Litv (Word8 w)]) =>
Some ((s,t1), Rval (Litv (Word8 (shift8_lookup op1 w n))))
| (Shift W64 op1 n, [Litv (Word64 w)]) =>
Some ((s,t1), Rval (Litv (Word64 (shift64_lookup op1 w n))))
| (Equality, [v1, v2]) =>
(case do_eq v1 v2 of
Eq_type_error => None
| Eq_val b => Some ((s,t1), Rval (Boolv b))
)
| (Opassign, [Loc lnum, v2]) =>
(case store_assign lnum (Refv v2) s of
Some s' => Some ((s',t1), Rval (Conv None []))
| None => None
)
| (Opref, [v2]) =>
(let (s',n) = (store_alloc (Refv v2) s) in
Some ((s',t1), Rval (Loc n)))
| (Opderef, [Loc n]) =>
(case store_lookup n s of
Some (Refv v2) => Some ((s,t1),Rval v2)
| _ => None
)
| (Aw8alloc, [Litv (IntLit n), Litv (Word8 w)]) =>
if n <( 0 :: int) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(let (s',lnum) =
(store_alloc (W8array (List.replicate (nat (abs ( n))) w)) s)
in
Some ((s',t1), Rval (Loc lnum)))
| (Aw8sub, [Loc lnum, Litv (IntLit i)]) =>
(case store_lookup lnum s of
Some (W8array ws) =>
if i <( 0 :: int) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(let n = (nat (abs ( i))) in
if n \<ge> List.length ws then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
Some ((s,t1), Rval (Litv (Word8 (List.nth ws n)))))
| _ => None
)
| (Aw8length, [Loc n]) =>
(case store_lookup n s of
Some (W8array ws) =>
Some ((s,t1),Rval (Litv(IntLit(int(List.length ws)))))
| _ => None
)
| (Aw8update, [Loc lnum, Litv(IntLit i), Litv(Word8 w)]) =>
(case store_lookup lnum s of
Some (W8array ws) =>
if i <( 0 :: int) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(let n = (nat (abs ( i))) in
if n \<ge> List.length ws then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(case store_assign lnum (W8array (List.list_update ws n w)) s of
None => None
| Some s' => Some ((s',t1), Rval (Conv None []))
))
| _ => None
)
| (WordFromInt W8, [Litv(IntLit i)]) =>
Some ((s,t1), Rval (Litv (Word8 (word_of_int i))))
| (WordFromInt W64, [Litv(IntLit i)]) =>
Some ((s,t1), Rval (Litv (Word64 (word_of_int i))))
| (WordToInt W8, [Litv (Word8 w)]) =>
Some ((s,t1), Rval (Litv (IntLit (int(unat w)))))
| (WordToInt W64, [Litv (Word64 w)]) =>
Some ((s,t1), Rval (Litv (IntLit (int(unat w)))))
| (CopyStrStr, [Litv(StrLit str),Litv(IntLit off),Litv(IntLit len)]) =>
Some ((s,t1),
(case copy_array ( str,off) len None of
None => Rerr (Rraise (prim_exn (''Subscript'')))
| Some cs => Rval (Litv(StrLit((cs))))
))
| (CopyStrAw8, [Litv(StrLit str),Litv(IntLit off),Litv(IntLit len),
Loc dst,Litv(IntLit dstoff)]) =>
(case store_lookup dst s of
Some (W8array ws) =>
(case copy_array ( str,off) len (Some(ws_to_chars ws,dstoff)) of
None => Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
| Some cs =>
(case store_assign dst (W8array (chars_to_ws cs)) s of
Some s' => Some ((s',t1), Rval (Conv None []))
| _ => None
)
)
| _ => None
)
| (CopyAw8Str, [Loc src,Litv(IntLit off),Litv(IntLit len)]) =>
(case store_lookup src s of
Some (W8array ws) =>
Some ((s,t1),
(case copy_array (ws,off) len None of
None => Rerr (Rraise (prim_exn (''Subscript'')))
| Some ws => Rval (Litv(StrLit((ws_to_chars ws))))
))
| _ => None
)
| (CopyAw8Aw8, [Loc src,Litv(IntLit off),Litv(IntLit len),
Loc dst,Litv(IntLit dstoff)]) =>
(case (store_lookup src s, store_lookup dst s) of
(Some (W8array ws), Some (W8array ds)) =>
(case copy_array (ws,off) len (Some(ds,dstoff)) of
None => Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
| Some ws =>
(case store_assign dst (W8array ws) s of
Some s' => Some ((s',t1), Rval (Conv None []))
| _ => None
)
)
| _ => None
)
| (Ord, [Litv (Char c2)]) =>
Some ((s,t1), Rval (Litv(IntLit(int(of_char c2)))))
| (Chr, [Litv (IntLit i)]) =>
Some ((s,t1),
(if (i <( 0 :: int)) \<or> (i >( 255 :: int)) then
Rerr (Rraise (prim_exn (''Chr'')))
else
Rval (Litv(Char((%n. char_of (n::nat))(nat (abs ( i))))))))
| (Chopb op1, [Litv (Char c1), Litv (Char c2)]) =>
Some ((s,t1), Rval (Boolv (opb_lookup op1 (int(of_char c1)) (int(of_char c2)))))
| (Implode, [v2]) =>
(case v_to_char_list v2 of
Some ls =>
Some ((s,t1), Rval (Litv (StrLit ( ls))))
| None => None
)
| (Strsub, [Litv (StrLit str), Litv (IntLit i)]) =>
if i <( 0 :: int) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(let n = (nat (abs ( i))) in
if n \<ge> List.length str then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
Some ((s,t1), Rval (Litv (Char (List.nth ( str) n)))))
| (Strlen, [Litv (StrLit str)]) =>
Some ((s,t1), Rval (Litv(IntLit(int(List.length str)))))
| (Strcat, [v2]) =>
(case v_to_list v2 of
Some vs =>
(case vs_to_string vs of
Some str =>
Some ((s,t1), Rval (Litv(StrLit str)))
| _ => None
)
| _ => None
)
| (VfromList, [v2]) =>
(case v_to_list v2 of
Some vs =>
Some ((s,t1), Rval (Vectorv vs))
| None => None
)
| (Vsub, [Vectorv vs, Litv (IntLit i)]) =>
if i <( 0 :: int) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(let n = (nat (abs ( i))) in
if n \<ge> List.length vs then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
Some ((s,t1), Rval (List.nth vs n)))
| (Vlength, [Vectorv vs]) =>
Some ((s,t1), Rval (Litv (IntLit (int (List.length vs)))))
| (Aalloc, [Litv (IntLit n), v2]) =>
if n <( 0 :: int) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(let (s',lnum) =
(store_alloc (Varray (List.replicate (nat (abs ( n))) v2)) s)
in
Some ((s',t1), Rval (Loc lnum)))
| (AallocEmpty, [Conv None []]) =>
(let (s',lnum) = (store_alloc (Varray []) s) in
Some ((s',t1), Rval (Loc lnum)))
| (Asub, [Loc lnum, Litv (IntLit i)]) =>
(case store_lookup lnum s of
Some (Varray vs) =>
if i <( 0 :: int) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(let n = (nat (abs ( i))) in
if n \<ge> List.length vs then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
Some ((s,t1), Rval (List.nth vs n)))
| _ => None
)
| (Alength, [Loc n]) =>
(case store_lookup n s of
Some (Varray ws) =>
Some ((s,t1),Rval (Litv(IntLit(int(List.length ws)))))
| _ => None
)
| (Aupdate, [Loc lnum, Litv (IntLit i), v2]) =>
(case store_lookup lnum s of
Some (Varray vs) =>
if i <( 0 :: int) then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(let n = (nat (abs ( i))) in
if n \<ge> List.length vs then
Some ((s,t1), Rerr (Rraise (prim_exn (''Subscript''))))
else
(case store_assign lnum (Varray (List.list_update vs n v2)) s of
None => None
| Some s' => Some ((s',t1), Rval (Conv None []))
))
| _ => None
)
| (ConfigGC, [Litv (IntLit i), Litv (IntLit j)]) =>
Some ((s,t1), Rval (Conv None []))
| (FFI n, [Litv(StrLit conf), Loc lnum]) =>
(case store_lookup lnum s of
Some (W8array ws) =>
(case call_FFI t1 n (List.map (\<lambda> c2 . of_nat(of_char c2)) ( conf)) ws of
(t', ws') =>
(case store_assign lnum (W8array ws') s of
Some s' => Some ((s', t'), Rval (Conv None []))
| None => None
)
)
| _ => None
)
| _ => None
))"
\<comment> \<open>\<open> Do a logical operation \<close>\<close>
\<comment> \<open>\<open>val do_log : lop -> v -> exp -> maybe exp_or_val\<close>\<close>
fun do_log :: " lop \<Rightarrow> v \<Rightarrow> exp0 \<Rightarrow>(exp_or_val)option " where
" do_log And v2 e = (
(case v2 of
Litv _ => None
| Conv m l2 => (case m of
None => None
| Some p => (case p of
(s1,t1) =>
if(s1 = (''true'')) then
((case t1 of
TypeId i => (case i of
Short s2 =>
if(s2 = (''bool'')) then
((case l2 of
[] => Some (Exp e)
| _ => None
)) else None
| Long _ _ => None
)
| TypeExn _ => None
)) else
(
if(s1 = (''false'')) then
((case t1 of
TypeId i2 => (case i2 of
Short s4 =>
if(s4 = (''bool'')) then
((case l2 of
[] => Some
(Val v2)
| _ => None
)) else None
| Long _ _ =>
None
)
| TypeExn _ => None
)) else None)
)
)
| Closure _ _ _ => None
| Recclosure _ _ _ => None
| Loc _ => None
| Vectorv _ => None
) )"
|" do_log Or v2 e = (
(case v2 of
Litv _ => None
| Conv m0 l6 => (case m0 of
None => None
| Some p0 => (case p0 of
(s8,t0) =>
if(s8 = (''false'')) then
((case t0 of
TypeId i5 => (case i5 of
Short s9 =>
if(s9 = (''bool'')) then
((case l6 of
[] => Some
(Exp e)
| _ => None
)) else None
| Long _ _ =>
None
)
| TypeExn _ => None
)) else
(
if(s8 = (''true'')) then
((case t0 of
TypeId i8 => (case i8 of
Short s11 =>
if(s11 = (''bool'')) then
((case l6 of
[] =>
Some (Val v2)
| _ =>
None
)) else None
| Long _ _ =>
None
)
| TypeExn _ => None
)) else None)
)
)
| Closure _ _ _ => None
| Recclosure _ _ _ => None
| Loc _ => None
| Vectorv _ => None
) )"
\<comment> \<open>\<open> Do an if-then-else \<close>\<close>
\<comment> \<open>\<open>val do_if : v -> exp -> exp -> maybe exp\<close>\<close>
definition do_if :: " v \<Rightarrow> exp0 \<Rightarrow> exp0 \<Rightarrow>(exp0)option " where
" do_if v2 e1 e2 = (
if v2 = (Boolv True) then
Some e1
else if v2 = (Boolv False) then
Some e2
else
None )"
\<comment> \<open>\<open> Semantic helpers for definitions \<close>\<close>
\<comment> \<open>\<open> Build a constructor environment for the type definition tds \<close>\<close>
\<comment> \<open>\<open>val build_tdefs : list modN -> list (list tvarN * typeN * list (conN * list t)) -> env_ctor\<close>\<close>
definition build_tdefs :: "(string)list \<Rightarrow>((tvarN)list*string*(string*(t)list)list)list \<Rightarrow>((string),(string),(nat*tid_or_exn))namespace " where
" build_tdefs mn tds = (
alist_to_ns
(List.rev
(List.concat
(List.map
( \<lambda>x .
(case x of
(tvs, tn, condefs) =>
List.map
( \<lambda>x . (case x of
(conN, ts) =>
(conN, (List.length ts, TypeId (mk_id mn tn)))
)) condefs
))
tds))))"
\<comment> \<open>\<open> Checks that no constructor is defined twice in a type \<close>\<close>
\<comment> \<open>\<open>val check_dup_ctors : list (list tvarN * typeN * list (conN * list t)) -> bool\<close>\<close>
definition check_dup_ctors :: "((tvarN)list*string*(string*(t)list)list)list \<Rightarrow> bool " where
" check_dup_ctors tds = (
Lem_list.allDistinct ((let x2 =
([]) in List.foldr
(\<lambda>x . (case x of
(tvs, tn, condefs) => \<lambda> x2 . List.foldr
(\<lambda>x .
(case x of
(n, ts) =>
\<lambda> x2 .
if True then
n # x2
else
x2
)) condefs
x2
)) tds x2)))"
\<comment> \<open>\<open>val combine_dec_result : forall 'a. sem_env v -> result (sem_env v) 'a -> result (sem_env v) 'a\<close>\<close>
fun combine_dec_result :: "(v)sem_env \<Rightarrow>(((v)sem_env),'a)result \<Rightarrow>(((v)sem_env),'a)result " where
" combine_dec_result env (Rerr e) = ( Rerr e )"
|" combine_dec_result env (Rval env') = ( Rval (| v = (nsAppend(v env')(v env)), c = (nsAppend(c env')(c env)) |) )"
\<comment> \<open>\<open>val extend_dec_env : sem_env v -> sem_env v -> sem_env v\<close>\<close>
definition extend_dec_env :: "(v)sem_env \<Rightarrow>(v)sem_env \<Rightarrow>(v)sem_env " where
" extend_dec_env new_env env = (
(| v = (nsAppend(v new_env)(v env)), c = (nsAppend(c new_env)(c env)) |) )"
\<comment> \<open>\<open>val decs_to_types : list dec -> list typeN\<close>\<close>
definition decs_to_types :: "(dec)list \<Rightarrow>(string)list " where
" decs_to_types ds = (
List.concat (List.map (\<lambda> d .
(case d of
Dtype locs tds => List.map ( \<lambda>x .
(case x of (tvs,tn,ctors) => tn )) tds
| _ => [] ))
ds))"
\<comment> \<open>\<open>val no_dup_types : list dec -> bool\<close>\<close>
definition no_dup_types :: "(dec)list \<Rightarrow> bool " where
" no_dup_types ds = (
Lem_list.allDistinct (decs_to_types ds))"
\<comment> \<open>\<open>val prog_to_mods : list top -> list (list modN)\<close>\<close>
definition prog_to_mods :: "(top0)list \<Rightarrow>((string)list)list " where
" prog_to_mods tops = (
List.concat (List.map (\<lambda> top1 .
(case top1 of
Tmod mn _ _ => [[mn]]
| _ => [] ))
tops))"
\<comment> \<open>\<open>val no_dup_mods : list top -> set (list modN) -> bool\<close>\<close>
definition no_dup_mods :: "(top0)list \<Rightarrow>((modN)list)set \<Rightarrow> bool " where
" no_dup_mods tops defined_mods2 = (
Lem_list.allDistinct (prog_to_mods tops) \<and>
disjnt (List.set (prog_to_mods tops)) defined_mods2 )"
\<comment> \<open>\<open>val prog_to_top_types : list top -> list typeN\<close>\<close>
definition prog_to_top_types :: "(top0)list \<Rightarrow>(string)list " where
" prog_to_top_types tops = (
List.concat (List.map (\<lambda> top1 .
(case top1 of
Tdec d => decs_to_types [d]
| _ => [] ))
tops))"
\<comment> \<open>\<open>val no_dup_top_types : list top -> set tid_or_exn -> bool\<close>\<close>
definition no_dup_top_types :: "(top0)list \<Rightarrow>(tid_or_exn)set \<Rightarrow> bool " where
" no_dup_top_types tops defined_types2 = (
Lem_list.allDistinct (prog_to_top_types tops) \<and>
disjnt (List.set (List.map (\<lambda> tn . TypeId (Short tn)) (prog_to_top_types tops))) defined_types2 )"
end
diff --git a/thys/Closest_Pair_Points/Common.thy b/thys/Closest_Pair_Points/Common.thy
--- a/thys/Closest_Pair_Points/Common.thy
+++ b/thys/Closest_Pair_Points/Common.thy
@@ -1,1229 +1,1215 @@
section "Common"
theory Common
imports
"HOL-Library.Going_To_Filter"
"Akra_Bazzi.Akra_Bazzi_Method"
"Akra_Bazzi.Akra_Bazzi_Approximation"
"HOL-Library.Code_Target_Numeral"
"Root_Balanced_Tree.Time_Monad"
begin
type_synonym point = "int * int"
subsection "Auxiliary Functions and Lemmas"
subsubsection "Time Monad"
lemma time_distrib_bind:
"time (bind_tm tm f) = time tm + time (f (val tm))"
unfolding bind_tm_def by (simp split: tm.split)
lemmas time_simps = time_distrib_bind tick_def
lemma bind_tm_cong[fundef_cong]:
assumes "\<And>v. v = val n \<Longrightarrow> f v = g v" "m = n"
shows "bind_tm m f = bind_tm n g"
using assms unfolding bind_tm_def by (auto split: tm.split)
subsubsection "Landau Auxiliary"
text \<open>
The following lemma expresses a procedure for deriving complexity properties of
the form @{prop"t \<in> O[m going_to at_top within A](f o m)"} where
\<^item> \<open>t\<close> is a (timing) function on some data domain (e.g. lists),
\<^item> \<open>m\<close> is a measure function on that data domain (e.g. length),
\<^item> \<open>t'\<close> is a function on @{typ nat},
\<^item> \<open>A\<close> is the set of valid inputs for the data domain.
One needs to show that
\<^item> \<open>t\<close> is bounded by @{term "t' o m"} for valid inputs
\<^item> @{prop"t' \<in> O(f)"}
to conclude the overall property @{prop"t \<in> O[m going_to at_top within A](f o m)"}.
\<close>
lemma bigo_measure_trans:
fixes t :: "'a \<Rightarrow> real" and t' :: "nat \<Rightarrow> real" and m :: "'a \<Rightarrow> nat" and f ::"nat \<Rightarrow> real"
assumes "\<And>x. x \<in> A \<Longrightarrow> t x \<le> (t' o m) x"
and "t' \<in> O(f)"
and "\<And>x. x \<in> A \<Longrightarrow> 0 \<le> t x"
shows "t \<in> O[m going_to at_top within A](f o m)"
proof -
have 0: "\<And>x. x \<in> A \<Longrightarrow> 0 \<le> (t' o m) x" by (meson assms(1,3) order_trans)
have 1: "t \<in> O[m going_to at_top within A](t' o m)"
apply(rule bigoI[where c=1]) using assms 0
by (simp add: eventually_inf_principal going_to_within_def)
have 2: "t' o m \<in> O[m going_to at_top](f o m)"
unfolding o_def going_to_def
by(rule landau_o.big.filtercomap[OF assms(2)])
have 3: "t' o m \<in> O[m going_to at_top within A](f o m)"
using landau_o.big.filter_mono[OF _ 2] going_to_mono[OF _ subset_UNIV] by blast
show ?thesis by(rule landau_o.big_trans[OF 1 3])
qed
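text\<open>This lemma is the glue between the concrete timing bounds and the Landau
statements below. For instance, in theorem \<open>time_mergesort_tm_bigo\<close> at the end of
the mergesort time complexity proof it is, essentially, instantiated with
\<open>t = (\<lambda>xs. time (mergesort_tm f xs))\<close>, \<open>m = length\<close>, \<open>t' = mergesort_recurrence\<close>,
\<open>f = (\<lambda>n. n * ln n)\<close> and \<open>A = UNIV\<close>.\<close>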
lemma const_1_bigo_n_ln_n:
"(\<lambda>(n::nat). 1) \<in> O(\<lambda>n. n * ln n)"
proof -
have "\<exists>N. \<forall>(n::nat) \<ge> N. (\<lambda>x. 1 \<le> x * ln x) n"
proof -
have "\<forall>(n::nat) \<ge> 3. (\<lambda>x. 1 \<le> x * ln x) n"
proof standard
fix n
show "3 \<le> n \<longrightarrow> 1 \<le> real n * ln (real n)"
proof standard
assume "3 \<le> n"
hence "1 \<le> real n"
by simp
moreover have "1 \<le> ln (real n)"
using ln_ln_nonneg' \<open>3 \<le> n\<close> by simp
ultimately show "1 \<le> real n * ln (real n)"
by (auto simp: order_trans)
qed
qed
thus ?thesis
by blast
qed
thus ?thesis
by auto
qed
subsubsection "Miscellaneous Lemmas"
lemma set_take_drop_i_le_j:
"i \<le> j \<Longrightarrow> set xs = set (take j xs) \<union> set (drop i xs)"
proof (induction xs arbitrary: i j)
case (Cons x xs)
show ?case
proof (cases "i = 0")
case True
thus ?thesis
using set_take_subset by force
next
case False
hence "set xs = set (take (j - 1) xs) \<union> set (drop (i - 1) xs)"
by (simp add: Cons diff_le_mono)
moreover have "set (take j (x # xs)) = insert x (set (take (j - 1) xs))"
using False Cons.prems by (auto simp: take_Cons')
moreover have "set (drop i (x # xs)) = set (drop (i - 1) xs)"
using False Cons.prems by (auto simp: drop_Cons')
ultimately show ?thesis
by auto
qed
qed simp
lemma set_take_drop:
"set xs = set (take n xs) \<union> set (drop n xs)"
using set_take_drop_i_le_j by fast
-lemma
- assumes "sorted_wrt f xs"
- shows sorted_wrt_take: "sorted_wrt f (take n xs)"
- and sorted_wrt_drop: "sorted_wrt f (drop n xs)"
-proof -
- from assms have "sorted_wrt f (take n xs @ drop n xs)" by simp
- then show "sorted_wrt f (take n xs)" and "sorted_wrt f (drop n xs)"
- unfolding sorted_wrt_append by simp_all
-qed
-
-lemma sorted_wrt_filter:
- "sorted_wrt f xs \<Longrightarrow> sorted_wrt f (filter P xs)"
- by (induction xs) auto
-
lemma sorted_wrt_take_drop:
"sorted_wrt f xs \<Longrightarrow> \<forall>x \<in> set (take n xs). \<forall>y \<in> set (drop n xs). f x y"
using sorted_wrt_append[of f "take n xs" "drop n xs"] by simp
lemma sorted_wrt_hd_less:
assumes "sorted_wrt f xs" "\<And>x. f x x"
shows "\<forall>x \<in> set xs. f (hd xs) x"
using assms by (cases xs) auto
lemma sorted_wrt_hd_less_take:
assumes "sorted_wrt f (x # xs)" "\<And>x. f x x"
shows "\<forall>y \<in> set (take n (x # xs)). f x y"
using assms sorted_wrt_hd_less in_set_takeD by fastforce
lemma sorted_wrt_take_less_hd_drop:
assumes "sorted_wrt f xs" "n < length xs"
shows "\<forall>x \<in> set (take n xs). f x (hd (drop n xs))"
using sorted_wrt_take_drop assms by fastforce
lemma sorted_wrt_hd_drop_less_drop:
assumes "sorted_wrt f xs" "\<And>x. f x x"
shows "\<forall>x \<in> set (drop n xs). f (hd (drop n xs)) x"
using assms sorted_wrt_drop sorted_wrt_hd_less by blast
lemma length_filter_P_impl_Q:
"(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> length (filter P xs) \<le> length (filter Q xs)"
by (induction xs) auto
lemma filter_Un:
"set xs = A \<union> B \<Longrightarrow> set (filter P xs) = { x \<in> A. P x } \<union> { x \<in> B. P x }"
by (induction xs) (auto, metis UnI1 insert_iff, metis UnI2 insert_iff)
subsubsection \<open>@{const length}\<close>
fun length_tm :: "'a list \<Rightarrow> nat tm" where
"length_tm [] =1 return 0"
| "length_tm (x # xs) =1
do {
l <- length_tm xs;
return (1 + l)
}"
lemma length_eq_val_length_tm:
"val (length_tm xs) = length xs"
by (induction xs) auto
lemma time_length_tm:
"time (length_tm xs) = length xs + 1"
by (induction xs) (auto simp: time_simps)
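text\<open>For example, \<open>time (length_tm [a, b]) = 3\<close>: the two \<open>Cons\<close> steps and the final
\<open>Nil\<close> step each contribute one unit of time, in line with the equation
\<open>time (length_tm xs) = length xs + 1\<close> just proved.\<close>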
fun length_it' :: "nat \<Rightarrow> 'a list \<Rightarrow> nat" where
"length_it' acc [] = acc"
| "length_it' acc (x#xs) = length_it' (acc+1) xs"
definition length_it :: "'a list \<Rightarrow> nat" where
"length_it xs = length_it' 0 xs"
lemma length_conv_length_it':
"length xs + acc = length_it' acc xs"
by (induction acc xs rule: length_it'.induct) auto
lemma length_conv_length_it[code_unfold]:
"length xs = length_it xs"
unfolding length_it_def using length_conv_length_it' add_0_right by metis
subsubsection \<open>@{const rev}\<close>
fun rev_it' :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"rev_it' acc [] = acc"
| "rev_it' acc (x#xs) = rev_it' (x#acc) xs"
definition rev_it :: "'a list \<Rightarrow> 'a list" where
"rev_it xs = rev_it' [] xs"
lemma rev_conv_rev_it':
"rev xs @ acc = rev_it' acc xs"
by (induction acc xs rule: rev_it'.induct) auto
lemma rev_conv_rev_it[code_unfold]:
"rev xs = rev_it xs"
unfolding rev_it_def using rev_conv_rev_it' append_Nil2 by metis
subsubsection \<open>@{const take}\<close>
fun take_tm :: "nat \<Rightarrow> 'a list \<Rightarrow> 'a list tm" where
"take_tm n [] =1 return []"
| "take_tm n (x # xs) =1
(case n of
0 \<Rightarrow> return []
| Suc m \<Rightarrow> do {
ys <- take_tm m xs;
return (x # ys)
}
)"
lemma take_eq_val_take_tm:
"val (take_tm n xs) = take n xs"
by (induction xs arbitrary: n) (auto split: nat.split)
lemma time_take_tm:
"time (take_tm n xs) = min n (length xs) + 1"
by (induction xs arbitrary: n) (auto simp: time_simps split: nat.split)
subsubsection \<open>@{const filter}\<close>
fun filter_tm :: "('a \<Rightarrow> bool) \<Rightarrow> 'a list \<Rightarrow> 'a list tm" where
"filter_tm P [] =1 return []"
| "filter_tm P (x # xs) =1
(if P x then
do {
ys <- filter_tm P xs;
return (x # ys)
}
else
filter_tm P xs
)"
lemma filter_eq_val_filter_tm:
"val (filter_tm P xs) = filter P xs"
by (induction xs) auto
lemma time_filter_tm:
"time (filter_tm P xs) = length xs + 1"
by (induction xs) (auto simp: time_simps)
fun filter_it' :: "'a list \<Rightarrow> ('a \<Rightarrow> bool) \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"filter_it' acc P [] = rev acc"
| "filter_it' acc P (x#xs) = (
if P x then
filter_it' (x#acc) P xs
else
filter_it' acc P xs
)"
definition filter_it :: "('a \<Rightarrow> bool) \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"filter_it P xs = filter_it' [] P xs"
lemma filter_conv_filter_it':
"rev acc @ filter P xs = filter_it' acc P xs"
by (induction acc P xs rule: filter_it'.induct) auto
lemma filter_conv_filter_it[code_unfold]:
"filter P xs = filter_it P xs"
unfolding filter_it_def using filter_conv_filter_it' append_Nil rev.simps(1) by metis
subsubsection \<open>\<open>split_at\<close>\<close>
fun split_at_tm :: "nat \<Rightarrow> 'a list \<Rightarrow> ('a list \<times> 'a list) tm" where
"split_at_tm n [] =1 return ([], [])"
| "split_at_tm n (x # xs) =1 (
case n of
0 \<Rightarrow> return ([], x # xs)
| Suc m \<Rightarrow>
do {
(xs', ys') <- split_at_tm m xs;
return (x # xs', ys')
}
)"
fun split_at :: "nat \<Rightarrow> 'a list \<Rightarrow> 'a list \<times> 'a list" where
"split_at n [] = ([], [])"
| "split_at n (x # xs) = (
case n of
0 \<Rightarrow> ([], x # xs)
| Suc m \<Rightarrow>
let (xs', ys') = split_at m xs in
(x # xs', ys')
)"
lemma split_at_eq_val_split_at_tm:
"val (split_at_tm n xs) = split_at n xs"
by (induction xs arbitrary: n) (auto split: nat.split prod.split)
lemma split_at_take_drop_conv:
"split_at n xs = (take n xs, drop n xs)"
by (induction xs arbitrary: n) (auto simp: split: nat.split)
lemma time_split_at_tm:
"time (split_at_tm n xs) = min n (length xs) + 1"
by (induction xs arbitrary: n) (auto simp: time_simps split: nat.split prod.split)
fun split_at_it' :: "'a list \<Rightarrow> nat \<Rightarrow> 'a list \<Rightarrow> ('a list * 'a list)" where
"split_at_it' acc n [] = (rev acc, [])"
| "split_at_it' acc n (x#xs) = (
case n of
0 \<Rightarrow> (rev acc, x#xs)
| Suc m \<Rightarrow> split_at_it' (x#acc) m xs
)"
definition split_at_it :: "nat \<Rightarrow> 'a list \<Rightarrow> ('a list * 'a list)" where
"split_at_it n xs = split_at_it' [] n xs"
lemma split_at_conv_split_at_it':
assumes "(ts, ds) = split_at n xs" "(ts', ds') = split_at_it' acc n xs"
shows "rev acc @ ts = ts'"
and "ds = ds'"
using assms
by (induction acc n xs arbitrary: ts rule: split_at_it'.induct)
(auto simp: split: prod.splits nat.splits)
lemma split_at_conv_split_at_it_prod:
assumes "(ts, ds) = split_at n xs" "(ts', ds') = split_at_it n xs"
shows "(ts, ds) = (ts', ds')"
using assms unfolding split_at_it_def
using split_at_conv_split_at_it' rev.simps(1) append_Nil by fast+
lemma split_at_conv_split_at_it[code_unfold]:
"split_at n xs = split_at_it n xs"
using split_at_conv_split_at_it_prod surj_pair by metis
declare split_at_tm.simps [simp del]
declare split_at.simps [simp del]
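text\<open>For example, \<open>split_at 2 [a, b, c, d] = ([a, b], [c, d])\<close>, which by
\<open>split_at_take_drop_conv\<close> is just \<open>(take 2 [a, b, c, d], drop 2 [a, b, c, d])\<close>.\<close>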
subsection "Mergesort"
subsubsection "Functional Correctness Proof"
definition sorted_fst :: "point list \<Rightarrow> bool" where
"sorted_fst ps = sorted_wrt (\<lambda>p\<^sub>0 p\<^sub>1. fst p\<^sub>0 \<le> fst p\<^sub>1) ps"
definition sorted_snd :: "point list \<Rightarrow> bool" where
"sorted_snd ps = sorted_wrt (\<lambda>p\<^sub>0 p\<^sub>1. snd p\<^sub>0 \<le> snd p\<^sub>1) ps"
fun merge_tm :: "('b \<Rightarrow> 'a::linorder) \<Rightarrow> 'b list \<Rightarrow> 'b list \<Rightarrow> 'b list tm" where
"merge_tm f (x # xs) (y # ys) =1 (
if f x \<le> f y then
do {
tl <- merge_tm f xs (y # ys);
return (x # tl)
}
else
do {
tl <- merge_tm f (x # xs) ys;
return (y # tl)
}
)"
| "merge_tm f [] ys =1 return ys"
| "merge_tm f xs [] =1 return xs"
fun merge :: "('b \<Rightarrow> 'a::linorder) \<Rightarrow> 'b list \<Rightarrow> 'b list \<Rightarrow> 'b list" where
"merge f (x # xs) (y # ys) = (
if f x \<le> f y then
x # merge f xs (y # ys)
else
y # merge f (x # xs) ys
)"
| "merge f [] ys = ys"
| "merge f xs [] = xs"
lemma merge_eq_val_merge_tm:
"val (merge_tm f xs ys) = merge f xs ys"
by (induction f xs ys rule: merge.induct) auto
lemma length_merge:
"length (merge f xs ys) = length xs + length ys"
by (induction f xs ys rule: merge.induct) auto
lemma set_merge:
"set (merge f xs ys) = set xs \<union> set ys"
by (induction f xs ys rule: merge.induct) auto
lemma distinct_merge:
assumes "set xs \<inter> set ys = {}" "distinct xs" "distinct ys"
shows "distinct (merge f xs ys)"
using assms by (induction f xs ys rule: merge.induct) (auto simp: set_merge)
lemma sorted_merge:
assumes "P = (\<lambda>x y. f x \<le> f y)"
shows "sorted_wrt P (merge f xs ys) \<longleftrightarrow> sorted_wrt P xs \<and> sorted_wrt P ys"
using assms by (induction f xs ys rule: merge.induct) (auto simp: set_merge)
declare split_at_take_drop_conv [simp]
function (sequential) mergesort_tm :: "('b \<Rightarrow> 'a::linorder) \<Rightarrow> 'b list \<Rightarrow> 'b list tm" where
"mergesort_tm f [] =1 return []"
| "mergesort_tm f [x] =1 return [x]"
| "mergesort_tm f xs =1 (
do {
n <- length_tm xs;
(xs\<^sub>l, xs\<^sub>r) <- split_at_tm (n div 2) xs;
l <- mergesort_tm f xs\<^sub>l;
r <- mergesort_tm f xs\<^sub>r;
merge_tm f l r
}
)"
by pat_completeness auto
termination mergesort_tm
by (relation "Wellfounded.measure (\<lambda>(_, xs). length xs)")
(auto simp add: length_eq_val_length_tm split_at_eq_val_split_at_tm)
fun mergesort :: "('b \<Rightarrow> 'a::linorder) \<Rightarrow> 'b list \<Rightarrow> 'b list" where
"mergesort f [] = []"
| "mergesort f [x] = [x]"
| "mergesort f xs = (
let n = length xs div 2 in
let (l, r) = split_at n xs in
merge f (mergesort f l) (mergesort f r)
)"
declare split_at_take_drop_conv [simp del]
lemma mergesort_eq_val_mergesort_tm:
"val (mergesort_tm f xs) = mergesort f xs"
by (induction f xs rule: mergesort.induct)
(auto simp add: length_eq_val_length_tm split_at_eq_val_split_at_tm merge_eq_val_merge_tm split: prod.split)
lemma sorted_wrt_mergesort:
"sorted_wrt (\<lambda>x y. f x \<le> f y) (mergesort f xs)"
by (induction f xs rule: mergesort.induct) (auto simp: split_at_take_drop_conv sorted_merge)
lemma set_mergesort:
"set (mergesort f xs) = set xs"
by (induction f xs rule: mergesort.induct)
(simp_all add: set_merge split_at_take_drop_conv, metis list.simps(15) set_take_drop)
lemma length_mergesort:
"length (mergesort f xs) = length xs"
by (induction f xs rule: mergesort.induct) (auto simp: length_merge split_at_take_drop_conv)
lemma distinct_mergesort:
"distinct xs \<Longrightarrow> distinct (mergesort f xs)"
proof (induction f xs rule: mergesort.induct)
case (3 f x y xs)
let ?xs' = "x # y # xs"
obtain l r where lr_def: "(l, r) = split_at (length ?xs' div 2) ?xs'"
by (metis surj_pair)
have "distinct l" "distinct r"
using "3.prems" split_at_take_drop_conv distinct_take distinct_drop lr_def by (metis prod.sel)+
hence "distinct (mergesort f l)" "distinct (mergesort f r)"
using "3.IH" lr_def by auto
moreover have "set l \<inter> set r = {}"
using "3.prems" split_at_take_drop_conv lr_def by (metis append_take_drop_id distinct_append prod.sel)
ultimately show ?case
using lr_def by (auto simp: distinct_merge set_mergesort split: prod.splits)
qed auto
lemmas mergesort = sorted_wrt_mergesort set_mergesort length_mergesort distinct_mergesort
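text\<open>For example, \<open>mergesort fst [(3, 0), (1, 9), (2, 5)] = [(1, 9), (2, 5), (3, 0)]\<close>:
the list is split into \<open>[(3, 0)]\<close> and \<open>[(1, 9), (2, 5)]\<close>, both halves are sorted
recursively, and the results are merged.\<close>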
lemma sorted_fst_take_less_hd_drop:
assumes "sorted_fst ps" "n < length ps"
shows "\<forall>p \<in> set (take n ps). fst p \<le> fst (hd (drop n ps))"
using assms sorted_wrt_take_less_hd_drop[of "\<lambda>p\<^sub>0 p\<^sub>1. fst p\<^sub>0 \<le> fst p\<^sub>1"] sorted_fst_def by fastforce
lemma sorted_fst_hd_drop_less_drop:
assumes "sorted_fst ps"
shows "\<forall>p \<in> set (drop n ps). fst (hd (drop n ps)) \<le> fst p"
using assms sorted_wrt_hd_drop_less_drop[of "\<lambda>p\<^sub>0 p\<^sub>1. fst p\<^sub>0 \<le> fst p\<^sub>1"] sorted_fst_def by fastforce
subsubsection "Time Complexity Proof"
lemma time_merge_tm:
"time (merge_tm f xs ys) \<le> length xs + length ys + 1"
by (induction f xs ys rule: merge_tm.induct) (auto simp: time_simps)
function mergesort_recurrence :: "nat \<Rightarrow> real" where
"mergesort_recurrence 0 = 1"
| "mergesort_recurrence 1 = 1"
| "2 \<le> n \<Longrightarrow> mergesort_recurrence n = 4 + 3 * n + mergesort_recurrence (nat \<lfloor>real n / 2\<rfloor>) +
mergesort_recurrence (nat \<lceil>real n / 2\<rceil>)"
by force simp_all
termination by akra_bazzi_termination simp_all
lemma mergesort_recurrence_nonneg[simp]:
"0 \<le> mergesort_recurrence n"
by (induction n rule: mergesort_recurrence.induct) (auto simp del: One_nat_def)
lemma time_mergesort_conv_mergesort_recurrence:
"time (mergesort_tm f xs) \<le> mergesort_recurrence (length xs)"
proof (induction f xs rule: mergesort_tm.induct)
case (1 f)
thus ?case by (auto simp: time_simps)
next
case (2 f x)
thus ?case using mergesort_recurrence.simps(2) by (auto simp: time_simps)
next
case (3 f x y xs')
define xs where "xs = x # y # xs'"
define n where "n = length xs"
obtain l r where lr_def: "(l, r) = split_at (n div 2) xs"
using prod.collapse by blast
define l' where "l' = mergesort f l"
define r' where "r' = mergesort f r"
note defs = xs_def n_def lr_def l'_def r'_def
have IHL: "time (mergesort_tm f l) \<le> mergesort_recurrence (length l)"
using defs "3.IH"(1) by (auto simp: length_eq_val_length_tm split_at_eq_val_split_at_tm)
have IHR: "time (mergesort_tm f r) \<le> mergesort_recurrence (length r)"
using defs "3.IH"(2) by (auto simp: length_eq_val_length_tm split_at_eq_val_split_at_tm)
have *: "length l = n div 2" "length r = n - n div 2"
using defs by (auto simp: split_at_take_drop_conv)
hence "(nat \<lfloor>real n / 2\<rfloor>) = length l" "(nat \<lceil>real n / 2\<rceil>) = length r"
by linarith+
hence IH: "time (mergesort_tm f l) \<le> mergesort_recurrence (nat \<lfloor>real n / 2\<rfloor>)"
"time (mergesort_tm f r) \<le> mergesort_recurrence (nat \<lceil>real n / 2\<rceil>)"
using IHL IHR by simp_all
have "n = length l + length r"
using * by linarith
hence "time (merge_tm f l' r') \<le> n + 1"
using time_merge_tm defs by (metis length_mergesort)
have "time (mergesort_tm f xs) = 1 + time (length_tm xs) + time (split_at_tm (n div 2) xs) +
time (mergesort_tm f l) + time (mergesort_tm f r) + time (merge_tm f l' r')"
using defs by (auto simp add: time_simps length_eq_val_length_tm mergesort_eq_val_mergesort_tm
split_at_eq_val_split_at_tm
split: prod.split)
also have "... \<le> 4 + 3 * n + time (mergesort_tm f l) + time (mergesort_tm f r)"
using time_length_tm[of xs] time_split_at_tm[of "n div 2" xs] n_def \<open>time (merge_tm f l' r') \<le> n + 1\<close> by simp
also have "... \<le> 4 + 3 * n + mergesort_recurrence (nat \<lfloor>real n / 2\<rfloor>) + mergesort_recurrence (nat \<lceil>real n / 2\<rceil>)"
using IH by simp
also have "... = mergesort_recurrence n"
using defs by simp
finally show ?case
using defs by simp
qed
theorem mergesort_recurrence:
"mergesort_recurrence \<in> \<Theta>(\<lambda>n. n * ln n)"
by (master_theorem) auto
theorem time_mergesort_tm_bigo:
"(\<lambda>xs. time (mergesort_tm f xs)) \<in> O[length going_to at_top]((\<lambda>n. n * ln n) o length)"
proof -
have 0: "\<And>xs. time (mergesort_tm f xs) \<le> (mergesort_recurrence o length) xs"
unfolding comp_def using time_mergesort_conv_mergesort_recurrence by blast
show ?thesis
using bigo_measure_trans[OF 0] by (simp add: bigthetaD1 mergesort_recurrence)
qed
subsubsection "Code Export"
lemma merge_xs_Nil[simp]:
"merge f xs [] = xs"
by (cases xs) auto
fun merge_it' :: "('b \<Rightarrow> 'a::linorder) \<Rightarrow> 'b list \<Rightarrow> 'b list \<Rightarrow> 'b list \<Rightarrow> 'b list" where
"merge_it' f acc [] [] = rev acc"
| "merge_it' f acc (x#xs) [] = merge_it' f (x#acc) xs []"
| "merge_it' f acc [] (y#ys) = merge_it' f (y#acc) ys []"
| "merge_it' f acc (x#xs) (y#ys) = (
if f x \<le> f y then
merge_it' f (x#acc) xs (y#ys)
else
merge_it' f (y#acc) (x#xs) ys
)"
definition merge_it :: "('b \<Rightarrow> 'a::linorder) \<Rightarrow> 'b list \<Rightarrow> 'b list \<Rightarrow> 'b list" where
"merge_it f xs ys = merge_it' f [] xs ys"
lemma merge_conv_merge_it':
"rev acc @ merge f xs ys = merge_it' f acc xs ys"
by (induction f acc xs ys rule: merge_it'.induct) auto
lemma merge_conv_merge_it[code_unfold]:
"merge f xs ys = merge_it f xs ys"
unfolding merge_it_def using merge_conv_merge_it' rev.simps(1) append_Nil by metis
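(* merge_it' is a tail-recursive merge carrying an accumulator, and merge_it starts it with [].
   The [code_unfold] lemma above makes the code generator use merge_it in place of merge, so a
   hypothetical export_code of the sorting functions would emit iterative merging. *)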
subsection "Minimal Distance"
definition sparse :: "real \<Rightarrow> point set \<Rightarrow> bool" where
"sparse \<delta> ps \<longleftrightarrow> (\<forall>p\<^sub>0 \<in> ps. \<forall>p\<^sub>1 \<in> ps. p\<^sub>0 \<noteq> p\<^sub>1 \<longrightarrow> \<delta> \<le> dist p\<^sub>0 p\<^sub>1)"
lemma sparse_identity:
assumes "sparse \<delta> (set ps)" "\<forall>p \<in> set ps. \<delta> \<le> dist p\<^sub>0 p"
shows "sparse \<delta> (set (p\<^sub>0 # ps))"
using assms by (simp add: dist_commute sparse_def)
lemma sparse_update:
assumes "sparse \<delta> (set ps)"
assumes "dist p\<^sub>0 p\<^sub>1 \<le> \<delta>" "\<forall>p \<in> set ps. dist p\<^sub>0 p\<^sub>1 \<le> dist p\<^sub>0 p"
shows "sparse (dist p\<^sub>0 p\<^sub>1) (set (p\<^sub>0 # ps))"
using assms by (auto simp: dist_commute sparse_def, force+)
lemma sparse_mono:
"sparse \<Delta> P \<Longrightarrow> \<delta> \<le> \<Delta> \<Longrightarrow> sparse \<delta> P"
unfolding sparse_def by fastforce
subsection "Distance"
lemma dist_transform:
fixes p :: point and \<delta> :: real and l :: int
shows "dist p (l, snd p) < \<delta> \<longleftrightarrow> l - \<delta> < fst p \<and> fst p < l + \<delta>"
proof -
have "dist p (l, snd p) = sqrt ((real_of_int (fst p) - l)\<^sup>2)"
by (auto simp add: dist_prod_def dist_real_def prod.case_eq_if)
thus ?thesis
by auto
qed
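(* A point is closer than delta to the vertical line x = l iff its x-coordinate lies in the
   open band (l - delta, l + delta); this is the geometric fact behind the band filter below. *)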
fun dist_code :: "point \<Rightarrow> point \<Rightarrow> int" where
"dist_code p\<^sub>0 p\<^sub>1 = (fst p\<^sub>0 - fst p\<^sub>1)\<^sup>2 + (snd p\<^sub>0 - snd p\<^sub>1)\<^sup>2"
lemma dist_eq_sqrt_dist_code:
fixes p\<^sub>0 :: point
shows "dist p\<^sub>0 p\<^sub>1 = sqrt (dist_code p\<^sub>0 p\<^sub>1)"
by (auto simp: dist_prod_def dist_real_def split: prod.splits)
lemma dist_eq_dist_code_lt:
fixes p\<^sub>0 :: point
shows "dist p\<^sub>0 p\<^sub>1 < dist p\<^sub>2 p\<^sub>3 \<longleftrightarrow> dist_code p\<^sub>0 p\<^sub>1 < dist_code p\<^sub>2 p\<^sub>3"
using dist_eq_sqrt_dist_code real_sqrt_less_iff by presburger
lemma dist_eq_dist_code_le:
fixes p\<^sub>0 :: point
shows "dist p\<^sub>0 p\<^sub>1 \<le> dist p\<^sub>2 p\<^sub>3 \<longleftrightarrow> dist_code p\<^sub>0 p\<^sub>1 \<le> dist_code p\<^sub>2 p\<^sub>3"
using dist_eq_sqrt_dist_code real_sqrt_le_iff by presburger
lemma dist_eq_dist_code_abs_lt:
fixes p\<^sub>0 :: point
shows "\<bar>c\<bar> < dist p\<^sub>0 p\<^sub>1 \<longleftrightarrow> c\<^sup>2 < dist_code p\<^sub>0 p\<^sub>1"
using dist_eq_sqrt_dist_code
by (metis of_int_less_of_int_power_cancel_iff real_sqrt_abs real_sqrt_less_iff)
lemma dist_eq_dist_code_abs_le:
fixes p\<^sub>0 :: point
shows "dist p\<^sub>0 p\<^sub>1 \<le> \<bar>c\<bar> \<longleftrightarrow> dist_code p\<^sub>0 p\<^sub>1 \<le> c\<^sup>2"
using dist_eq_sqrt_dist_code
by (metis of_int_power_le_of_int_cancel_iff real_sqrt_abs real_sqrt_le_iff)
lemma dist_fst_abs:
fixes p :: point and l :: int
shows "dist p (l, snd p) = \<bar>fst p - l\<bar>"
proof -
have "dist p (l, snd p) = sqrt ((real_of_int (fst p) - l)\<^sup>2)"
by (simp add: dist_prod_def dist_real_def prod.case_eq_if)
thus ?thesis
by simp
qed
declare dist_code.simps [simp del]
subsection "Brute Force Closest Pair Algorithm"
subsubsection "Functional Correctness Proof"
fun find_closest_bf_tm :: "point \<Rightarrow> point list \<Rightarrow> point tm" where
"find_closest_bf_tm _ [] =1 return undefined"
| "find_closest_bf_tm _ [p] =1 return p"
| "find_closest_bf_tm p (p\<^sub>0 # ps) =1 (
do {
p\<^sub>1 <- find_closest_bf_tm p ps;
if dist p p\<^sub>0 < dist p p\<^sub>1 then
return p\<^sub>0
else
return p\<^sub>1
}
)"
fun find_closest_bf :: "point \<Rightarrow> point list \<Rightarrow> point" where
"find_closest_bf _ [] = undefined"
| "find_closest_bf _ [p] = p"
| "find_closest_bf p (p\<^sub>0 # ps) = (
let p\<^sub>1 = find_closest_bf p ps in
if dist p p\<^sub>0 < dist p p\<^sub>1 then
p\<^sub>0
else
p\<^sub>1
)"
lemma find_closest_bf_eq_val_find_closest_bf_tm:
"val (find_closest_bf_tm p ps) = find_closest_bf p ps"
by (induction p ps rule: find_closest_bf.induct) (auto simp: Let_def)
lemma find_closest_bf_set:
"0 < length ps \<Longrightarrow> find_closest_bf p ps \<in> set ps"
by (induction p ps rule: find_closest_bf.induct)
(auto simp: Let_def split: prod.splits if_splits)
lemma find_closest_bf_dist:
"\<forall>q \<in> set ps. dist p (find_closest_bf p ps) \<le> dist p q"
by (induction p ps rule: find_closest_bf.induct)
(auto split: prod.splits)
fun closest_pair_bf_tm :: "point list \<Rightarrow> (point \<times> point) tm" where
"closest_pair_bf_tm [] =1 return undefined"
| "closest_pair_bf_tm [_] =1 return undefined"
| "closest_pair_bf_tm [p\<^sub>0, p\<^sub>1] =1 return (p\<^sub>0, p\<^sub>1)"
| "closest_pair_bf_tm (p\<^sub>0 # ps) =1 (
do {
(c\<^sub>0::point, c\<^sub>1::point) <- closest_pair_bf_tm ps;
p\<^sub>1 <- find_closest_bf_tm p\<^sub>0 ps;
if dist c\<^sub>0 c\<^sub>1 \<le> dist p\<^sub>0 p\<^sub>1 then
return (c\<^sub>0, c\<^sub>1)
else
return (p\<^sub>0, p\<^sub>1)
}
)"
fun closest_pair_bf :: "point list \<Rightarrow> (point * point)" where
"closest_pair_bf [] = undefined"
| "closest_pair_bf [_] = undefined"
| "closest_pair_bf [p\<^sub>0, p\<^sub>1] = (p\<^sub>0, p\<^sub>1)"
| "closest_pair_bf (p\<^sub>0 # ps) = (
let (c\<^sub>0, c\<^sub>1) = closest_pair_bf ps in
let p\<^sub>1 = find_closest_bf p\<^sub>0 ps in
if dist c\<^sub>0 c\<^sub>1 \<le> dist p\<^sub>0 p\<^sub>1 then
(c\<^sub>0, c\<^sub>1)
else
(p\<^sub>0, p\<^sub>1)
)"
lemma closest_pair_bf_eq_val_closest_pair_bf_tm:
"val (closest_pair_bf_tm ps) = closest_pair_bf ps"
by (induction ps rule: closest_pair_bf.induct)
(auto simp: Let_def find_closest_bf_eq_val_find_closest_bf_tm split: prod.split)
lemma closest_pair_bf_c0:
"1 < length ps \<Longrightarrow> (c\<^sub>0, c\<^sub>1) = closest_pair_bf ps \<Longrightarrow> c\<^sub>0 \<in> set ps"
by (induction ps arbitrary: c\<^sub>0 c\<^sub>1 rule: closest_pair_bf.induct)
(auto simp: Let_def find_closest_bf_set split: if_splits prod.splits)
lemma closest_pair_bf_c1:
"1 < length ps \<Longrightarrow> (c\<^sub>0, c\<^sub>1) = closest_pair_bf ps \<Longrightarrow> c\<^sub>1 \<in> set ps"
proof (induction ps arbitrary: c\<^sub>0 c\<^sub>1 rule: closest_pair_bf.induct)
case (4 p\<^sub>0 p\<^sub>2 p\<^sub>3 ps)
let ?ps = "p\<^sub>2 # p\<^sub>3 # ps"
obtain c\<^sub>0 c\<^sub>1 where c\<^sub>0_def: "(c\<^sub>0, c\<^sub>1) = closest_pair_bf ?ps"
using prod.collapse by blast
define p\<^sub>1 where p\<^sub>1_def: "p\<^sub>1 = find_closest_bf p\<^sub>0 ?ps"
note defs = c\<^sub>0_def p\<^sub>1_def
have "c\<^sub>1 \<in> set ?ps"
using "4.IH" defs by simp
moreover have "p\<^sub>1 \<in> set ?ps"
using find_closest_bf_set defs by blast
ultimately show ?case
using "4.prems"(2) defs by (auto simp: Let_def split: prod.splits if_splits)
qed auto
lemma closest_pair_bf_c0_ne_c1:
"1 < length ps \<Longrightarrow> distinct ps \<Longrightarrow> (c\<^sub>0, c\<^sub>1) = closest_pair_bf ps \<Longrightarrow> c\<^sub>0 \<noteq> c\<^sub>1"
proof (induction ps arbitrary: c\<^sub>0 c\<^sub>1 rule: closest_pair_bf.induct)
case (4 p\<^sub>0 p\<^sub>2 p\<^sub>3 ps)
let ?ps = "p\<^sub>2 # p\<^sub>3 # ps"
obtain c\<^sub>0 c\<^sub>1 where c\<^sub>0_def: "(c\<^sub>0, c\<^sub>1) = closest_pair_bf ?ps"
using prod.collapse by blast
define p\<^sub>1 where p\<^sub>1_def: "p\<^sub>1 = find_closest_bf p\<^sub>0 ?ps"
note defs = c\<^sub>0_def p\<^sub>1_def
have "c\<^sub>0 \<noteq> c\<^sub>1"
using "4.IH" "4.prems"(2) defs by simp
moreover have "p\<^sub>0 \<noteq> p\<^sub>1"
using find_closest_bf_set "4.prems"(2) defs
by (metis distinct.simps(2) length_pos_if_in_set list.set_intros(1))
ultimately show ?case
using "4.prems"(3) defs by (auto simp: Let_def split: prod.splits if_splits)
qed auto
lemmas closest_pair_bf_c0_c1 = closest_pair_bf_c0 closest_pair_bf_c1 closest_pair_bf_c0_ne_c1
lemma closest_pair_bf_dist:
assumes "1 < length ps" "(c\<^sub>0, c\<^sub>1) = closest_pair_bf ps"
shows "sparse (dist c\<^sub>0 c\<^sub>1) (set ps)"
using assms
proof (induction ps arbitrary: c\<^sub>0 c\<^sub>1 rule: closest_pair_bf.induct)
case (4 p\<^sub>0 p\<^sub>2 p\<^sub>3 ps)
let ?ps = "p\<^sub>2 # p\<^sub>3 # ps"
obtain c\<^sub>0 c\<^sub>1 where c\<^sub>0_def: "(c\<^sub>0, c\<^sub>1) = closest_pair_bf ?ps"
using prod.collapse by blast
define p\<^sub>1 where p\<^sub>1_def: "p\<^sub>1 = find_closest_bf p\<^sub>0 ?ps"
note defs = c\<^sub>0_def p\<^sub>1_def
hence IH: "sparse (dist c\<^sub>0 c\<^sub>1) (set ?ps)"
using 4 c\<^sub>0_def by simp
have *: "\<forall>p \<in> set ?ps. (dist p\<^sub>0 p\<^sub>1) \<le> dist p\<^sub>0 p"
using find_closest_bf_dist defs by blast
show ?case
proof (cases "dist c\<^sub>0 c\<^sub>1 \<le> dist p\<^sub>0 p\<^sub>1")
case True
hence "\<forall>p \<in> set ?ps. dist c\<^sub>0 c\<^sub>1 \<le> dist p\<^sub>0 p"
using * by auto
hence "sparse (dist c\<^sub>0 c\<^sub>1) (set (p\<^sub>0 # ?ps))"
using sparse_identity IH by blast
thus ?thesis
using True "4.prems" defs by (auto split: prod.splits)
next
case False
hence "sparse (dist p\<^sub>0 p\<^sub>1) (set (p\<^sub>0 # ?ps))"
using sparse_update[of "dist c\<^sub>0 c\<^sub>1" ?ps p\<^sub>0 p\<^sub>1] IH * defs by argo
thus ?thesis
using False "4.prems" defs by (auto split: prod.splits)
qed
qed (auto simp: dist_commute sparse_def)
subsubsection "Time Complexity Proof"
lemma time_find_closest_bf_tm:
"time (find_closest_bf_tm p ps) \<le> length ps + 1"
by (induction p ps rule: find_closest_bf_tm.induct) (auto simp: time_simps)
lemma time_closest_pair_bf_tm:
"time (closest_pair_bf_tm ps) \<le> length ps * length ps + 1"
proof (induction ps rule: closest_pair_bf_tm.induct)
case (4 p\<^sub>0 p\<^sub>2 p\<^sub>3 ps)
let ?ps = "p\<^sub>2 # p\<^sub>3 # ps"
have "time (closest_pair_bf_tm (p\<^sub>0 # ?ps)) = 1 + time (find_closest_bf_tm p\<^sub>0 ?ps) + time (closest_pair_bf_tm ?ps)"
by (auto simp: time_simps split: prod.split)
also have "... \<le> 2 + length ?ps + time (closest_pair_bf_tm ?ps)"
using time_find_closest_bf_tm[of p\<^sub>0 ?ps] by simp
also have "... \<le> 2 + length ?ps + length ?ps * length ?ps + 1"
using "4.IH" by simp
also have "... \<le> length (p\<^sub>0 # ?ps) * length (p\<^sub>0 # ?ps) + 1"
by auto
finally show ?case
by blast
qed (auto simp: time_simps)
subsubsection "Code Export"
fun find_closest_bf_code :: "point \<Rightarrow> point list \<Rightarrow> (int * point)" where
"find_closest_bf_code p [] = undefined"
| "find_closest_bf_code p [p\<^sub>0] = (dist_code p p\<^sub>0, p\<^sub>0)"
| "find_closest_bf_code p (p\<^sub>0 # ps) = (
let (\<delta>\<^sub>1, p\<^sub>1) = find_closest_bf_code p ps in
let \<delta>\<^sub>0 = dist_code p p\<^sub>0 in
if \<delta>\<^sub>0 < \<delta>\<^sub>1 then
(\<delta>\<^sub>0, p\<^sub>0)
else
(\<delta>\<^sub>1, p\<^sub>1)
)"
lemma find_closest_bf_code_dist_eq:
"0 < length ps \<Longrightarrow> (\<delta>, c) = find_closest_bf_code p ps \<Longrightarrow> \<delta> = dist_code p c"
by (induction p ps rule: find_closest_bf_code.induct)
(auto simp: Let_def split: prod.splits if_splits)
lemma find_closest_bf_code_eq:
"0 < length ps \<Longrightarrow> c = find_closest_bf p ps \<Longrightarrow> (\<delta>', c') = find_closest_bf_code p ps \<Longrightarrow> c = c'"
proof (induction p ps arbitrary: c \<delta>' c' rule: find_closest_bf.induct)
case (3 p p\<^sub>0 p\<^sub>2 ps)
define \<delta>\<^sub>0 \<delta>\<^sub>0' where \<delta>\<^sub>0_def: "\<delta>\<^sub>0 = dist p p\<^sub>0" "\<delta>\<^sub>0' = dist_code p p\<^sub>0"
obtain \<delta>\<^sub>1 p\<^sub>1 \<delta>\<^sub>1' p\<^sub>1' where \<delta>\<^sub>1_def: "\<delta>\<^sub>1 = dist p p\<^sub>1" "p\<^sub>1 = find_closest_bf p (p\<^sub>2 # ps)"
"(\<delta>\<^sub>1', p\<^sub>1') = find_closest_bf_code p (p\<^sub>2 # ps)"
using prod.collapse by blast+
note defs = \<delta>\<^sub>0_def \<delta>\<^sub>1_def
have *: "p\<^sub>1 = p\<^sub>1'"
using "3.IH" defs by simp
hence "\<delta>\<^sub>0 < \<delta>\<^sub>1 \<longleftrightarrow> \<delta>\<^sub>0' < \<delta>\<^sub>1'"
using find_closest_bf_code_dist_eq[of "p\<^sub>2 # ps" \<delta>\<^sub>1' p\<^sub>1' p]
dist_eq_dist_code_lt defs
by simp
thus ?case
using "3.prems"(2,3) * defs by (auto split: prod.splits)
qed auto
declare find_closest_bf_code.simps [simp del]
fun closest_pair_bf_code :: "point list \<Rightarrow> (int * point * point)" where
"closest_pair_bf_code [] = undefined"
| "closest_pair_bf_code [p\<^sub>0] = undefined"
| "closest_pair_bf_code [p\<^sub>0, p\<^sub>1] = (dist_code p\<^sub>0 p\<^sub>1, p\<^sub>0, p\<^sub>1)"
| "closest_pair_bf_code (p\<^sub>0 # ps) = (
let (\<delta>\<^sub>c, c\<^sub>0, c\<^sub>1) = closest_pair_bf_code ps in
let (\<delta>\<^sub>p, p\<^sub>1) = find_closest_bf_code p\<^sub>0 ps in
if \<delta>\<^sub>c \<le> \<delta>\<^sub>p then
(\<delta>\<^sub>c, c\<^sub>0, c\<^sub>1)
else
(\<delta>\<^sub>p, p\<^sub>0, p\<^sub>1)
)"
lemma closest_pair_bf_code_dist_eq:
"1 < length ps \<Longrightarrow> (\<delta>, c\<^sub>0, c\<^sub>1) = closest_pair_bf_code ps \<Longrightarrow> \<delta> = dist_code c\<^sub>0 c\<^sub>1"
proof (induction ps arbitrary: \<delta> c\<^sub>0 c\<^sub>1 rule: closest_pair_bf_code.induct)
case (4 p\<^sub>0 p\<^sub>2 p\<^sub>3 ps)
let ?ps = "p\<^sub>2 # p\<^sub>3 # ps"
obtain \<delta>\<^sub>c c\<^sub>0 c\<^sub>1 where \<delta>\<^sub>c_def: "(\<delta>\<^sub>c, c\<^sub>0, c\<^sub>1) = closest_pair_bf_code ?ps"
by (metis prod_cases3)
obtain \<delta>\<^sub>p p\<^sub>1 where \<delta>\<^sub>p_def: "(\<delta>\<^sub>p, p\<^sub>1) = find_closest_bf_code p\<^sub>0 ?ps"
using prod.collapse by blast
note defs = \<delta>\<^sub>c_def \<delta>\<^sub>p_def
have "\<delta>\<^sub>c = dist_code c\<^sub>0 c\<^sub>1"
using "4.IH" defs by simp
moreover have "\<delta>\<^sub>p = dist_code p\<^sub>0 p\<^sub>1"
using find_closest_bf_code_dist_eq defs by blast
ultimately show ?case
using "4.prems"(2) defs by (auto split: prod.splits if_splits)
qed auto
lemma closest_pair_bf_code_eq:
assumes "1 < length ps"
assumes "(c\<^sub>0, c\<^sub>1) = closest_pair_bf ps" "(\<delta>', c\<^sub>0', c\<^sub>1') = closest_pair_bf_code ps"
shows "c\<^sub>0 = c\<^sub>0' \<and> c\<^sub>1 = c\<^sub>1'"
using assms
proof (induction ps arbitrary: c\<^sub>0 c\<^sub>1 \<delta>' c\<^sub>0' c\<^sub>1' rule: closest_pair_bf_code.induct)
case (4 p\<^sub>0 p\<^sub>2 p\<^sub>3 ps)
let ?ps = "p\<^sub>2 # p\<^sub>3 # ps"
obtain c\<^sub>0 c\<^sub>1 \<delta>\<^sub>c' c\<^sub>0' c\<^sub>1' where \<delta>\<^sub>c_def: "(c\<^sub>0, c\<^sub>1) = closest_pair_bf ?ps"
"(\<delta>\<^sub>c', c\<^sub>0', c\<^sub>1') = closest_pair_bf_code ?ps"
by (metis prod_cases3)
obtain p\<^sub>1 \<delta>\<^sub>p' p\<^sub>1' where \<delta>\<^sub>p_def: "p\<^sub>1 = find_closest_bf p\<^sub>0 ?ps"
"(\<delta>\<^sub>p', p\<^sub>1') = find_closest_bf_code p\<^sub>0 ?ps"
using prod.collapse by blast
note defs = \<delta>\<^sub>c_def \<delta>\<^sub>p_def
have A: "c\<^sub>0 = c\<^sub>0' \<and> c\<^sub>1 = c\<^sub>1'"
using "4.IH" defs by simp
moreover have B: "p\<^sub>1 = p\<^sub>1'"
using find_closest_bf_code_eq defs by blast
moreover have "\<delta>\<^sub>c' = dist_code c\<^sub>0' c\<^sub>1'"
using defs closest_pair_bf_code_dist_eq[of ?ps] by simp
moreover have "\<delta>\<^sub>p' = dist_code p\<^sub>0 p\<^sub>1'"
using defs find_closest_bf_code_dist_eq by blast
ultimately have "dist c\<^sub>0 c\<^sub>1 \<le> dist p\<^sub>0 p\<^sub>1 \<longleftrightarrow> \<delta>\<^sub>c' \<le> \<delta>\<^sub>p'"
by (simp add: dist_eq_dist_code_le)
thus ?case
using "4.prems"(2,3) defs A B by (auto simp: Let_def split: prod.splits)
qed auto
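(* closest_pair_bf_code computes the same pair as closest_pair_bf (closest_pair_bf_code_eq)
   while using only integer arithmetic; a hypothetical
     export_code closest_pair_bf_code in SML
   would therefore yield executable code free of real-number operations. *)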
subsection "Geometry"
subsubsection "Band Filter"
lemma set_band_filter_aux:
fixes \<delta> :: real and ps :: "point list"
assumes "p\<^sub>0 \<in> ps\<^sub>L" "p\<^sub>1 \<in> ps\<^sub>R" "p\<^sub>0 \<noteq> p\<^sub>1" "dist p\<^sub>0 p\<^sub>1 < \<delta>" "set ps = ps\<^sub>L \<union> ps\<^sub>R"
assumes "\<forall>p \<in> ps\<^sub>L. fst p \<le> l" "\<forall>p \<in> ps\<^sub>R. l \<le> fst p"
assumes "ps' = filter (\<lambda>p. l - \<delta> < fst p \<and> fst p < l + \<delta>) ps"
shows "p\<^sub>0 \<in> set ps' \<and> p\<^sub>1 \<in> set ps'"
proof (rule ccontr)
assume "\<not> (p\<^sub>0 \<in> set ps' \<and> p\<^sub>1 \<in> set ps')"
then consider (A) "p\<^sub>0 \<notin> set ps' \<and> p\<^sub>1 \<notin> set ps'"
| (B) "p\<^sub>0 \<in> set ps' \<and> p\<^sub>1 \<notin> set ps'"
| (C) "p\<^sub>0 \<notin> set ps' \<and> p\<^sub>1 \<in> set ps'"
by blast
thus False
proof cases
case A
hence "fst p\<^sub>0 \<le> l - \<delta> \<or> l + \<delta> \<le> fst p\<^sub>0" "fst p\<^sub>1 \<le> l - \<delta> \<or> l + \<delta> \<le> fst p\<^sub>1"
using assms(1,2,5,8) by auto
hence "fst p\<^sub>0 \<le> l - \<delta>" "l + \<delta> \<le> fst p\<^sub>1"
using assms(1,2,6,7) by force+
hence "\<delta> \<le> dist (fst p\<^sub>0) (fst p\<^sub>1)"
using dist_real_def by simp
hence "\<delta> \<le> dist p\<^sub>0 p\<^sub>1"
using dist_fst_le[of p\<^sub>0 p\<^sub>1] by (auto split: prod.splits)
then show ?thesis
using assms(4) by fastforce
next
case B
hence "fst p\<^sub>1 \<le> l - \<delta> \<or> l + \<delta> \<le> fst p\<^sub>1"
using assms(2,5,8) by auto
hence "l + \<delta> \<le> fst p\<^sub>1"
using assms(2,7) by auto
moreover have "fst p\<^sub>0 \<le> l"
using assms(1,6) by simp
ultimately have "\<delta> \<le> dist (fst p\<^sub>0) (fst p\<^sub>1)"
using dist_real_def by simp
hence "\<delta> \<le> dist p\<^sub>0 p\<^sub>1"
using dist_fst_le[of p\<^sub>0 p\<^sub>1] less_le_trans by (auto split: prod.splits)
thus ?thesis
using assms(4) by simp
next
case C
hence "fst p\<^sub>0 \<le> l - \<delta> \<or> l + \<delta> \<le> fst p\<^sub>0"
using assms(1,2,5,8) by auto
hence "fst p\<^sub>0 \<le> l - \<delta>"
using assms(1,6) by auto
moreover have "l \<le> fst p\<^sub>1"
using assms(2,7) by simp
ultimately have "\<delta> \<le> dist (fst p\<^sub>0) (fst p\<^sub>1)"
using dist_real_def by simp
hence "\<delta> \<le> dist p\<^sub>0 p\<^sub>1"
using dist_fst_le[of p\<^sub>0 p\<^sub>1] less_le_trans by (auto split: prod.splits)
thus ?thesis
using assms(4) by simp
qed
qed
lemma set_band_filter:
fixes \<delta> :: real and ps :: "point list"
assumes "p\<^sub>0 \<in> set ps" "p\<^sub>1 \<in> set ps" "p\<^sub>0 \<noteq> p\<^sub>1" "dist p\<^sub>0 p\<^sub>1 < \<delta>" "set ps = ps\<^sub>L \<union> ps\<^sub>R"
assumes "sparse \<delta> ps\<^sub>L" "sparse \<delta> ps\<^sub>R"
assumes "\<forall>p \<in> ps\<^sub>L. fst p \<le> l" "\<forall>p \<in> ps\<^sub>R. l \<le> fst p"
assumes "ps' = filter (\<lambda>p. l - \<delta> < fst p \<and> fst p < l + \<delta>) ps"
shows "p\<^sub>0 \<in> set ps' \<and> p\<^sub>1 \<in> set ps'"
proof -
have "p\<^sub>0 \<notin> ps\<^sub>L \<or> p\<^sub>1 \<notin> ps\<^sub>L" "p\<^sub>0 \<notin> ps\<^sub>R \<or> p\<^sub>1 \<notin> ps\<^sub>R"
using assms(3,4,6,7) sparse_def by force+
then consider (A) "p\<^sub>0 \<in> ps\<^sub>L \<and> p\<^sub>1 \<in> ps\<^sub>R" | (B) "p\<^sub>0 \<in> ps\<^sub>R \<and> p\<^sub>1 \<in> ps\<^sub>L"
using assms(1,2,5) by auto
thus ?thesis
proof cases
case A
thus ?thesis
using set_band_filter_aux assms(3,4,5,8,9,10) by (auto split: prod.splits)
next
case B
moreover have "dist p\<^sub>1 p\<^sub>0 < \<delta>"
using assms(4) dist_commute by metis
ultimately show ?thesis
using set_band_filter_aux assms(3)[symmetric] assms(5,8,9,10) by (auto split: prod.splits)
qed
qed
subsubsection "2D-Boxes and Points"
lemma cbox_2D:
fixes x\<^sub>0 :: real and y\<^sub>0 :: real
shows "cbox (x\<^sub>0, y\<^sub>0) (x\<^sub>1, y\<^sub>1) = { (x, y). x\<^sub>0 \<le> x \<and> x \<le> x\<^sub>1 \<and> y\<^sub>0 \<le> y \<and> y \<le> y\<^sub>1 }"
by fastforce
lemma mem_cbox_2D:
fixes x :: real and y :: real
shows "x\<^sub>0 \<le> x \<and> x \<le> x\<^sub>1 \<and> y\<^sub>0 \<le> y \<and> y \<le> y\<^sub>1 \<longleftrightarrow> (x, y) \<in> cbox (x\<^sub>0, y\<^sub>0) (x\<^sub>1, y\<^sub>1)"
by fastforce
lemma cbox_top_un:
fixes x\<^sub>0 :: real and y\<^sub>0 :: real
assumes "y\<^sub>0 \<le> y\<^sub>1" "y\<^sub>1 \<le> y\<^sub>2"
shows "cbox (x\<^sub>0, y\<^sub>0) (x\<^sub>1, y\<^sub>1) \<union> cbox (x\<^sub>0, y\<^sub>1) (x\<^sub>1, y\<^sub>2) = cbox (x\<^sub>0, y\<^sub>0) (x\<^sub>1, y\<^sub>2)"
using assms by auto
lemma cbox_right_un:
fixes x\<^sub>0 :: real and y\<^sub>0 :: real
assumes "x\<^sub>0 \<le> x\<^sub>1" "x\<^sub>1 \<le> x\<^sub>2"
shows "cbox (x\<^sub>0, y\<^sub>0) (x\<^sub>1, y\<^sub>1) \<union> cbox (x\<^sub>1, y\<^sub>0) (x\<^sub>2, y\<^sub>1) = cbox (x\<^sub>0, y\<^sub>0) (x\<^sub>2, y\<^sub>1)"
using assms by auto
lemma cbox_max_dist:
assumes "p\<^sub>0 = (x, y)" "p\<^sub>1 = (x + \<delta>, y + \<delta>)"
assumes "(x\<^sub>0, y\<^sub>0) \<in> cbox p\<^sub>0 p\<^sub>1" "(x\<^sub>1, y\<^sub>1) \<in> cbox p\<^sub>0 p\<^sub>1" "0 \<le> \<delta>"
shows "dist (x\<^sub>0, y\<^sub>0) (x\<^sub>1, y\<^sub>1) \<le> sqrt 2 * \<delta>"
proof -
have X: "dist x\<^sub>0 x\<^sub>1 \<le> \<delta>"
using assms dist_real_def by auto
have Y: "dist y\<^sub>0 y\<^sub>1 \<le> \<delta>"
using assms dist_real_def by auto
have "dist (x\<^sub>0, y\<^sub>0) (x\<^sub>1, y\<^sub>1) = sqrt ((dist x\<^sub>0 x\<^sub>1)\<^sup>2 + (dist y\<^sub>0 y\<^sub>1)\<^sup>2)"
using dist_Pair_Pair by auto
also have "... \<le> sqrt (\<delta>\<^sup>2 + (dist y\<^sub>0 y\<^sub>1)\<^sup>2)"
using X power_mono by fastforce
also have "... \<le> sqrt (\<delta>\<^sup>2 + \<delta>\<^sup>2)"
using Y power_mono by fastforce
also have "... = sqrt 2 * sqrt (\<delta>\<^sup>2)"
using real_sqrt_mult by simp
also have "... = sqrt 2 * \<delta>"
using assms(5) by simp
finally show ?thesis .
qed
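(* Any two points inside the square cbox (x, y) (x + delta, y + delta) are at distance at most
   sqrt 2 * delta, the length of its diagonal. *)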
subsubsection "Pigeonhole Argument"
lemma card_le_1_if_pairwise_eq:
assumes "\<forall>x \<in> S. \<forall>y \<in> S. x = y"
shows "card S \<le> 1"
proof (rule ccontr)
assume "\<not> card S \<le> 1"
hence "2 \<le> card S"
by simp
then obtain T where *: "T \<subseteq> S \<and> card T = 2"
using ex_card by metis
then obtain x y where "x \<in> T \<and> y \<in> T \<and> x \<noteq> y"
by (meson card_2_iff')
then show False
using * assms by blast
qed
lemma card_Int_if_either_in:
assumes "\<forall>x \<in> S. \<forall>y \<in> S. x = y \<or> x \<notin> T \<or> y \<notin> T"
shows "card (S \<inter> T) \<le> 1"
proof (rule ccontr)
assume "\<not> (card (S \<inter> T) \<le> 1)"
then obtain x y where *: "x \<in> S \<inter> T \<and> y \<in> S \<inter> T \<and> x \<noteq> y"
by (meson card_le_1_if_pairwise_eq)
hence "x \<in> T" "y \<in> T"
by simp_all
moreover have "x \<notin> T \<or> y \<notin> T"
using assms * by auto
ultimately show False
by blast
qed
lemma card_Int_Un_le_Sum_card_Int:
assumes "finite S"
shows "card (A \<inter> \<Union>S) \<le> (\<Sum>B \<in> S. card (A \<inter> B))"
using assms
proof (induction "card S" arbitrary: S)
case (Suc n)
then obtain B T where *: "S = { B } \<union> T" "card T = n" "B \<notin> T"
by (metis card_Suc_eq Suc_eq_plus1 insert_is_Un)
hence "card (A \<inter> \<Union>S) = card (A \<inter> \<Union>({ B } \<union> T))"
by blast
also have "... \<le> card (A \<inter> B) + card (A \<inter> \<Union>T)"
by (simp add: card_Un_le inf_sup_distrib1)
also have "... \<le> card (A \<inter> B) + (\<Sum>B \<in> T. card (A \<inter> B))"
using Suc * by simp
also have "... \<le> (\<Sum>B \<in> S. card (A \<inter> B))"
using Suc.prems * by simp
finally show ?case .
qed simp
lemma pigeonhole:
assumes "finite T" "S \<subseteq> \<Union>T" "card T < card S"
shows "\<exists>x \<in> S. \<exists>y \<in> S. \<exists>X \<in> T. x \<noteq> y \<and> x \<in> X \<and> y \<in> X"
proof (rule ccontr)
assume "\<not> (\<exists>x \<in> S. \<exists>y \<in> S. \<exists>X \<in> T. x \<noteq> y \<and> x \<in> X \<and> y \<in> X)"
hence *: "\<forall>X \<in> T. card (S \<inter> X) \<le> 1"
using card_Int_if_either_in by metis
have "card T < card (S \<inter> \<Union>T)"
using Int_absorb2 assms by fastforce
also have "... \<le> (\<Sum>X \<in> T. card (S \<inter> X))"
using assms(1) card_Int_Un_le_Sum_card_Int by blast
also have "... \<le> card T"
using * sum_mono by fastforce
finally show False by simp
qed
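(* Pigeonhole: if S is covered by the finite family T and has more elements than T has members,
   then some member of T contains two distinct elements of S. *)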
subsubsection "Delta Sparse Points within a Square"
lemma max_points_square:
assumes "\<forall>p \<in> ps. p \<in> cbox (x, y) (x + \<delta>, y + \<delta>)" "sparse \<delta> ps" "0 \<le> \<delta>"
shows "card ps \<le> 4"
proof (cases "\<delta> = 0")
case True
hence "{ (x, y) } = cbox (x, y) (x + \<delta>, y + \<delta>)"
using cbox_def by simp
hence "\<forall>p \<in> ps. p = (x, y)"
using assms(1) by blast
hence "\<forall>p \<in> ps. \<forall>q \<in> ps. p = q"
apply (auto split: prod.splits)
by (metis of_int_eq_iff)+
thus ?thesis
using card_le_1_if_pairwise_eq by force
next
case False
hence \<delta>: "0 < \<delta>"
using assms(3) by simp
show ?thesis
proof (rule ccontr)
assume A: "\<not> (card ps \<le> 4)"
define PS where PS_def: "PS = (\<lambda>(x, y). (real_of_int x, real_of_int y)) ` ps"
have "inj_on (\<lambda>(x, y). (real_of_int x, real_of_int y)) ps"
using inj_on_def by fastforce
hence *: "\<not> (card PS \<le> 4)"
using A PS_def by (simp add: card_image)
let ?x' = "x + \<delta> / 2"
let ?y' = "y + \<delta> / 2"
let ?ll = "cbox ( x , y ) (?x' , ?y' )"
let ?lu = "cbox ( x , ?y') (?x' , y + \<delta>)"
let ?rl = "cbox (?x', y ) ( x + \<delta>, ?y' )"
let ?ru = "cbox (?x', ?y') ( x + \<delta>, y + \<delta>)"
let ?sq = "{ ?ll, ?lu, ?rl, ?ru }"
have card_le_4: "card ?sq \<le> 4"
by (simp add: card_insert_le_m1)
have "cbox (x, y) (?x', y + \<delta>) = ?ll \<union> ?lu"
using cbox_top_un assms(3) by auto
moreover have "cbox (?x', y) (x + \<delta>, y + \<delta>) = ?rl \<union> ?ru"
using cbox_top_un assms(3) by auto
moreover have "cbox (x, y) (?x', y + \<delta>) \<union> cbox (?x', y) (x + \<delta>, y + \<delta>) = cbox (x, y) (x + \<delta>, y + \<delta>)"
using cbox_right_un assms(3) by simp
ultimately have "?ll \<union> ?lu \<union> ?rl \<union> ?ru = cbox (x, y) (x + \<delta>, y + \<delta>)"
by blast
hence "PS \<subseteq> \<Union>(?sq)"
using assms(1) PS_def by (auto split: prod.splits)
moreover have "card ?sq < card PS"
using * card_insert_le_m1 card_le_4 by linarith
moreover have "finite ?sq"
by simp
ultimately have "\<exists>p\<^sub>0 \<in> PS. \<exists>p\<^sub>1 \<in> PS. \<exists>S \<in> ?sq. (p\<^sub>0 \<noteq> p\<^sub>1 \<and> p\<^sub>0 \<in> S \<and> p\<^sub>1 \<in> S)"
using pigeonhole[of ?sq PS] by blast
then obtain S p\<^sub>0 p\<^sub>1 where #: "p\<^sub>0 \<in> PS" "p\<^sub>1 \<in> PS" "S \<in> ?sq" "p\<^sub>0 \<noteq> p\<^sub>1" "p\<^sub>0 \<in> S" "p\<^sub>1 \<in> S"
by blast
have D: "0 \<le> \<delta> / 2"
using assms(3) by simp
have LL: "\<forall>p\<^sub>0 \<in> ?ll. \<forall>p\<^sub>1 \<in> ?ll. dist p\<^sub>0 p\<^sub>1 \<le> sqrt 2 * (\<delta> / 2)"
using cbox_max_dist[of "(x, y)" x y "(?x', ?y')" "\<delta> / 2"] D by auto
have LU: "\<forall>p\<^sub>0 \<in> ?lu. \<forall>p\<^sub>1 \<in> ?lu. dist p\<^sub>0 p\<^sub>1 \<le> sqrt 2 * (\<delta> / 2)"
using cbox_max_dist[of "(x, ?y')" x ?y' "(?x', y + \<delta>)" "\<delta> / 2"] D by auto
have RL: "\<forall>p\<^sub>0 \<in> ?rl. \<forall>p\<^sub>1 \<in> ?rl. dist p\<^sub>0 p\<^sub>1 \<le> sqrt 2 * (\<delta> / 2)"
using cbox_max_dist[of "(?x', y)" ?x' y "(x + \<delta>, ?y')" "\<delta> / 2"] D by auto
have RU: "\<forall>p\<^sub>0 \<in> ?ru. \<forall>p\<^sub>1 \<in> ?ru. dist p\<^sub>0 p\<^sub>1 \<le> sqrt 2 * (\<delta> / 2)"
using cbox_max_dist[of "(?x', ?y')" ?x' ?y' "(x + \<delta>, y + \<delta>)" "\<delta> / 2"] D by auto
have "\<forall>p\<^sub>0 \<in> S. \<forall>p\<^sub>1 \<in> S. dist p\<^sub>0 p\<^sub>1 \<le> sqrt 2 * (\<delta> / 2)"
using # LL LU RL RU by blast
hence "dist p\<^sub>0 p\<^sub>1 \<le> (sqrt 2 / 2) * \<delta>"
using # by simp
moreover have "(sqrt 2 / 2) * \<delta> < \<delta>"
using sqrt2_less_2 \<delta> by simp
ultimately have "dist p\<^sub>0 p\<^sub>1 < \<delta>"
by simp
moreover have "\<delta> \<le> dist p\<^sub>0 p\<^sub>1"
using assms(2) # sparse_def PS_def by auto
ultimately show False
by simp
qed
qed
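(* Summary of the argument above: splitting the delta-square into four half-sized squares, a
   fifth point would force two distinct points into the same sub-square, at distance at most
   sqrt 2 * delta / 2 < delta, contradicting delta-sparsity; hence at most 4 points. *)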
end
\ No newline at end of file
diff --git a/thys/Complx/OG_Soundness.thy b/thys/Complx/OG_Soundness.thy
--- a/thys/Complx/OG_Soundness.thy
+++ b/thys/Complx/OG_Soundness.thy
@@ -1,2032 +1,2029 @@
(*
* Copyright 2016, Data61, CSIRO
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* @TAG(DATA61_BSD)
*)
section \<open>Soundness proof of Owicki-Gries w.r.t.
COMPLX small-step semantics\<close>
theory OG_Soundness
imports
OG_Hoare
SeqCatch_decomp
begin
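(* The soundness proof below is organised in three stages: inversion lemmas
   (oghoare_Skip .. oghoare_Parallel) that recover the annotation structure from a derivation,
   semantic validity lemmas for the sequential constructs proved directly against the
   small-step semantics, and the step lemmas for parallel compositions building on them. *)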
lemma pre_weaken_pre:
" x \<in> pre P \<Longrightarrow> x \<in> pre (weaken_pre P P')"
by (induct P, clarsimp+)
lemma oghoare_Skip[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow> c = Skip \<longrightarrow>
(\<exists>P'. P = AnnExpr P' \<and> P' \<subseteq> Q)"
apply (induct rule: oghoare_induct, simp_all)
apply clarsimp
apply (rename_tac \<Gamma> \<Theta> F P Q A P' Q' A' P'')
apply(case_tac P, simp_all)
by force
lemma oghoare_Throw[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow> c = Throw \<longrightarrow>
(\<exists>P'. P = AnnExpr P' \<and> P' \<subseteq> A)"
apply (induct rule: oghoare_induct, simp_all)
apply clarsimp
apply (rename_tac \<Gamma> \<Theta> F P Q A P' Q' A' P'')
apply(case_tac P, simp_all)
by force
lemma oghoare_Basic[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow> c = Basic f \<longrightarrow>
(\<exists>P'. P = AnnExpr P' \<and> P' \<subseteq> {x. f x \<in> Q})"
apply (induct rule: oghoare_induct, simp_all)
apply clarsimp
apply (rename_tac \<Gamma> \<Theta> F P Q A P' Q' A' P'')
apply(case_tac P, simp_all)
by force
lemma oghoare_Spec[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow> c = Spec r \<longrightarrow>
(\<exists>P'. P = AnnExpr P' \<and> P' \<subseteq> {s. (\<forall>t. (s, t) \<in> r \<longrightarrow> t \<in> Q) \<and> (\<exists>t. (s, t) \<in> r)})"
apply (induct rule: oghoare_induct, simp_all)
apply clarsimp
apply (rename_tac \<Gamma> \<Theta> F P Q A P' Q' A' P'')
apply(case_tac P, simp_all)
by force
lemma oghoare_DynCom[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow> c = (DynCom c') \<longrightarrow>
(\<exists>r ad. P = (AnnRec r ad) \<and> r \<subseteq> pre ad \<and> (\<forall>s\<in>r. \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> ad (c' s) Q,A))"
apply (induct rule: oghoare_induct, simp_all)
apply clarsimp
apply clarsimp
apply (rename_tac \<Gamma> \<Theta> F P Q A P' Q' A' P'' x)
apply(case_tac P, simp_all)
apply clarsimp
apply (rename_tac P s)
apply (drule_tac x=s in bspec, simp)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (fastforce)
done
lemma oghoare_Guard[rule_format, OF _ refl]:
"\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow> c = Guard f g d \<longrightarrow>
(\<exists>P' r . P = AnnRec r P' \<and>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F\<^esub> P' d Q,A \<and>
r \<inter> g \<subseteq> pre P' \<and>
(r \<inter> -g \<noteq> {} \<longrightarrow> f \<in> F))"
apply (induct rule: oghoare_induct, simp_all)
apply force
apply clarsimp
apply (rename_tac \<Gamma> \<Theta> F P Q A P' Q' A' P'' r)
apply (case_tac P, simp_all)
apply clarsimp
apply (rename_tac r)
apply(rule conjI)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (rule_tac x="Q'" in exI)
apply (rule_tac x="A'" in exI)
apply (clarsimp)
apply (case_tac "(r \<union> P') \<inter> g \<noteq> {}")
apply fast
apply clarsimp
apply(drule equalityD1, force)
done
lemma oghoare_Await[rule_format, OF _ refl]:
"\<Gamma>, \<Theta>\<turnstile>\<^bsub>/F\<^esub> P x Q,A \<Longrightarrow> \<forall>b c. x = Await b c \<longrightarrow>
(\<exists>r P' Q' A'. P = AnnRec r P' \<and> \<Gamma>, \<Theta>\<tturnstile>\<^bsub>/F\<^esub>(r \<inter> b) P' c Q',A' \<and> atom_com c
\<and> Q' \<subseteq> Q \<and> A' \<subseteq> A)"
apply (induct rule: oghoare_induct, simp_all)
apply (rename_tac \<Gamma> \<Theta> F r P Q A)
apply (rule_tac x=Q in exI)
apply (rule_tac x=A in exI)
apply clarsimp
apply (rename_tac \<Gamma> \<Theta> F P c Q A)
apply clarsimp
apply(case_tac P, simp_all)
apply (rename_tac P'' Q'' A'' x y)
apply (rule_tac x=Q'' in exI)
apply (rule_tac x=A'' in exI)
apply clarsimp
apply (rule conjI[rotated])
apply blast
apply (erule SeqConseq[rotated])
apply simp
apply simp
apply blast
done
lemma oghoare_Seq[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P x Q,A \<Longrightarrow> \<forall>p1 p2. x = Seq p1 p2 \<longrightarrow>
(\<exists> P\<^sub>1 P\<^sub>2 P' Q' A'. P = AnnComp P\<^sub>1 P\<^sub>2 \<and> \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P\<^sub>1 p1 P', A' \<and> P' \<subseteq> pre P\<^sub>2 \<and>
\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P\<^sub>2 p2 Q',A' \<and>
Q' \<subseteq> Q \<and> A' \<subseteq> A)"
apply (induct rule: oghoare_induct, simp_all)
apply blast
apply (rename_tac \<Gamma> \<Theta> F P c Q A)
apply clarsimp
apply (rename_tac P'' Q'' A'')
apply(case_tac P, simp_all)
apply clarsimp
apply (rule_tac x="P''" in exI)
apply (rule_tac x="Q''" in exI)
apply (rule_tac x="A''" in exI)
apply clarsimp
apply (rule conjI)
apply (rule Conseq)
apply (rule_tac x="P'" in exI)
apply (rule_tac x="P''" in exI)
apply (rule_tac x="A''" in exI)
apply simp
apply fastforce
done
lemma oghoare_Catch[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P x Q,A \<Longrightarrow> \<forall>p1 p2. x = Catch p1 p2 \<longrightarrow>
(\<exists> P\<^sub>1 P\<^sub>2 P' Q' A'. P = AnnComp P\<^sub>1 P\<^sub>2 \<and> \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P\<^sub>1 p1 Q', P' \<and> P' \<subseteq> pre P\<^sub>2 \<and>
\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P\<^sub>2 p2 Q',A' \<and>
Q' \<subseteq> Q \<and> A' \<subseteq> A)"
apply (induct rule: oghoare_induct, simp_all)
apply blast
apply (rename_tac \<Gamma> \<Theta> F P c Q A)
apply clarsimp
apply(case_tac P, simp_all)
apply clarsimp
apply (rename_tac P'' Q'' A'' x)
apply (rule_tac x="P''" in exI)
apply (rule_tac x="Q''" in exI)
apply clarsimp
apply (rule conjI)
apply (rule Conseq)
apply (rule_tac x=P' in exI)
apply fastforce
apply (rule_tac x="A''" in exI)
apply clarsimp
apply fastforce
done
lemma oghoare_Cond[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P x Q,A \<Longrightarrow>
\<forall>c\<^sub>1 c\<^sub>2 b. x = (Cond b c\<^sub>1 c\<^sub>2) \<longrightarrow>
(\<exists>P' P\<^sub>1 P\<^sub>2 Q' A'. P = (AnnBin P' P\<^sub>1 P\<^sub>2) \<and>
P' \<subseteq> {s. (s\<in>b \<longrightarrow> s\<in>pre P\<^sub>1) \<and> (s\<notin>b \<longrightarrow> s\<in>pre P\<^sub>2)} \<and>
\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P\<^sub>1 c\<^sub>1 Q',A' \<and>
\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P\<^sub>2 c\<^sub>2 Q',A' \<and> Q' \<subseteq> Q \<and> A' \<subseteq> A)"
apply (induct rule: oghoare_induct, simp_all)
apply (rule conjI)
apply fastforce
apply (rename_tac Q A P\<^sub>2 c\<^sub>2 r b)
apply (rule_tac x=Q in exI)
apply (rule_tac x=A in exI)
apply fastforce
apply (rename_tac \<Gamma> \<Theta> F P c Q A)
apply clarsimp
apply (case_tac P, simp_all)
apply fastforce
done
lemma oghoare_While[rule_format, OF _ refl]:
"\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P x Q,A \<Longrightarrow>
\<forall> b c. x = While b c \<longrightarrow>
(\<exists> r i P' A' Q'. P = AnnWhile r i P' \<and>
\<Gamma>, \<Theta>\<turnstile>\<^bsub>/F\<^esub> P' c i,A' \<and>
r \<subseteq> i \<and>
i \<inter> b \<subseteq> pre P' \<and>
i \<inter> -b \<subseteq> Q' \<and>
Q' \<subseteq> Q \<and> A' \<subseteq> A)"
apply (induct rule: oghoare_induct, simp_all)
apply blast
apply (rename_tac \<Gamma> \<Theta> F P c Q A)
apply clarsimp
apply (rename_tac P' Q' A' b ca r i P'' A'' Q'')
apply (case_tac P; simp)
apply (rule_tac x= A'' in exI)
apply (rule conjI)
apply blast
apply clarsimp
apply (rule_tac x= "Q'" in exI)
by fast
lemma oghoare_Call:
"\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F\<^esub> P x Q,A \<Longrightarrow>
\<forall>p. x = Call p \<longrightarrow>
(\<exists>r n.
P = (AnnCall r n) \<and>
(\<exists>as P' f b.
\<Theta> p = Some as \<and>
(as ! n) = P' \<and>
r \<subseteq> pre P' \<and>
\<Gamma> p = Some b \<and>
n < length as \<and>
\<Gamma>,\<Theta> \<turnstile>\<^bsub>/F\<^esub> P' b Q,A))"
apply (induct rule: oghoare_induct, simp_all)
apply (rename_tac \<Gamma> \<Theta> F P c Q A)
apply clarsimp
apply (case_tac P, simp_all)
apply clarsimp
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply force
done
lemma oghoare_Parallel[rule_format, OF _ refl]:
"\<Gamma>, \<Theta>\<turnstile>\<^bsub>/F\<^esub> P x Q,A \<Longrightarrow> \<forall>cs. x = Parallel cs \<longrightarrow>
(\<exists>as. P = AnnPar as \<and>
length as = length cs \<and>
\<Inter>(set (map postcond as)) \<subseteq> Q \<and>
\<Union>(set (map abrcond as)) \<subseteq> A \<and>
(\<forall>i<length cs. \<exists>Q' A'. \<Gamma>,\<Theta>\<turnstile>\<^bsub>/F\<^esub> (pres (as!i)) (cs!i) Q', A' \<and>
Q' \<subseteq> postcond (as!i) \<and> A' \<subseteq> abrcond (as!i)) \<and>
interfree \<Gamma> \<Theta> F as cs)"
apply (induct rule: oghoare_induct, simp_all)
apply clarsimp
apply (drule_tac x=i in spec)
apply fastforce
apply clarsimp
apply (case_tac P, simp_all)
apply blast
done
lemma ann_matches_weaken[OF _ refl]:
" ann_matches \<Gamma> \<Theta> X c \<Longrightarrow> X = (weaken_pre P P') \<Longrightarrow> ann_matches \<Gamma> \<Theta> P c"
apply (induct arbitrary: P P' rule: ann_matches.induct)
apply (case_tac P, simp_all, fastforce simp add: ann_matches.intros)+
done
lemma oghoare_seq_imp_ann_matches:
" \<Gamma>,\<Theta>\<tturnstile>\<^bsub>/F\<^esub> P a c Q,A \<Longrightarrow> ann_matches \<Gamma> \<Theta> a c"
apply (induct rule: oghoare_seq_induct, simp_all add: ann_matches.intros)
apply (clarsimp, erule ann_matches_weaken)+
done
lemma oghoare_imp_ann_matches:
" \<Gamma>,\<Theta>\<turnstile>\<^bsub>/F\<^esub> a c Q,A \<Longrightarrow> ann_matches \<Gamma> \<Theta> a c"
apply (induct rule: oghoare_induct, simp_all add: ann_matches.intros oghoare_seq_imp_ann_matches)
apply (clarsimp, erule ann_matches_weaken)+
done
(* intros *)
lemma SkipRule: "P \<subseteq> Q \<Longrightarrow> \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> (AnnExpr P) Skip Q, A"
apply (rule Conseq, simp)
apply (rule exI, rule exI, rule exI)
apply (rule conjI, rule Skip, auto)
done
lemma ThrowRule: "P \<subseteq> A \<Longrightarrow> \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> (AnnExpr P) Throw Q, A"
apply (rule Conseq, simp)
apply (rule exI, rule exI, rule exI)
apply (rule conjI, rule Throw, auto)
done
lemma BasicRule: "P \<subseteq> {s. (f s) \<in> Q} \<Longrightarrow> \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> (AnnExpr P) (Basic f) Q, A"
apply (rule Conseq, simp, rule exI[where x="{s. (f s) \<in> Q}"])
apply (rule exI[where x=Q], rule exI[where x=A])
apply simp
apply (subgoal_tac "(P \<union> {s. f s \<in> Q}) = {s. f s \<in> Q}")
apply (auto intro: Basic)
done
lemma SpecRule:
"P \<subseteq> {s. (\<forall>t. (s, t) \<in> r \<longrightarrow> t \<in> Q) \<and> (\<exists>t. (s, t) \<in> r)}
\<Longrightarrow> \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> (AnnExpr P) (Spec r) Q, A"
apply (rule Conseq, simp, rule exI[where x="{s. (\<forall>t. (s, t) \<in> r \<longrightarrow> t \<in> Q) \<and> (\<exists>t. (s, t) \<in> r) }"])
apply (rule exI[where x=Q], rule exI[where x=A])
apply simp
apply (subgoal_tac "(P \<union> {s. (\<forall>t. (s, t) \<in> r \<longrightarrow> t \<in> Q) \<and> (\<exists>t. (s, t) \<in> r)}) = {s. (\<forall>t. (s, t) \<in> r \<longrightarrow> t \<in> Q) \<and> (\<exists>t. (s, t) \<in> r)}")
apply (auto intro: Spec)
done
lemma CondRule:
"\<lbrakk> P \<subseteq> {s. (s\<in>b \<longrightarrow> s\<in>pre P\<^sub>1) \<and> (s\<notin>b \<longrightarrow> s\<in>pre P\<^sub>2)};
\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P\<^sub>1 c\<^sub>1 Q,A;
\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P\<^sub>2 c\<^sub>2 Q,A \<rbrakk>
\<Longrightarrow> \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> (AnnBin P P\<^sub>1 P\<^sub>2) (Cond b c\<^sub>1 c\<^sub>2) Q,A"
by (auto intro: Cond)
lemma WhileRule: "\<lbrakk> r \<subseteq> I; I \<inter> b \<subseteq> pre P; (I \<inter> -b) \<subseteq> Q;
\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P c I,A \<rbrakk>
\<Longrightarrow> \<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> (AnnWhile r I P) (While b c) Q,A"
by (simp add: While)
lemma AwaitRule:
"\<lbrakk>atom_com c ; \<Gamma>, \<Theta> \<tturnstile>\<^bsub>/F\<^esub>P a c Q,A ; (r \<inter> b) \<subseteq> P\<rbrakk> \<Longrightarrow>
\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> (AnnRec r a) (Await b c) Q,A"
apply (erule Await[rotated])
apply (erule (1) SeqConseq, simp+)
done
lemma rtranclp_1n_induct [consumes 1, case_names base step]:
"\<lbrakk>r\<^sup>*\<^sup>* a b; P a; \<And>y z. \<lbrakk>r y z; r\<^sup>*\<^sup>* z b; P y\<rbrakk> \<Longrightarrow> P z\<rbrakk> \<Longrightarrow> P b"
by (induct rule: rtranclp.induct)
(simp add: rtranclp.rtrancl_into_rtrancl)+
lemmas rtranclp_1n_induct2[consumes 1, case_names base step] =
rtranclp_1n_induct[of _ "(ax,ay)" "(bx,by)", split_rule]
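(* The following lemmas establish semantic validity (valid_def) of the individual sequential
   constructs Seq, Cond, While and Catch directly from the small-step semantics; they are the
   building blocks of oghoare_atom_com_sound below. *)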
lemma oghoare_seq_valid:
" \<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P c\<^sub>1 R,A \<Longrightarrow>
\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> R c\<^sub>2 Q,A \<Longrightarrow>
\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P Seq c\<^sub>1 c\<^sub>2 Q,A"
apply (clarsimp simp add: valid_def)
apply (rename_tac t c' s)
apply (case_tac t)
apply simp
apply (drule (1) Seq_decomp_star)
apply (erule disjE)
apply fastforce
apply clarsimp
apply (rename_tac s1 t')
apply (drule_tac x="Normal s" and y="Normal t'" in spec2)
apply (erule_tac x="Skip" in allE)
apply (fastforce simp: final_def)
apply (clarsimp simp add: final_def)
apply (drule Seq_decomp_star_Fault)
apply (erule disjE)
apply (rename_tac s2)
apply (drule_tac x="Normal s" and y="Fault s2" in spec2)
apply (erule_tac x="Skip" in allE)
apply fastforce
apply clarsimp
apply (rename_tac s s2 s')
apply (drule_tac x="Normal s" and y="Normal s'" in spec2)
apply (erule_tac x="Skip" in allE)
apply clarsimp
apply (drule_tac x="Normal s'" and y="Fault s2" in spec2)
apply (erule_tac x="Skip" in allE)
apply clarsimp
apply clarsimp
apply (simp add: final_def)
apply (drule Seq_decomp_star_Stuck)
apply (erule disjE)
apply fastforce
apply clarsimp
apply fastforce
done
lemma oghoare_if_valid:
"\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P\<^sub>1 c\<^sub>1 Q,A \<Longrightarrow>
\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P\<^sub>2 c\<^sub>2 Q,A \<Longrightarrow>
r \<inter> b \<subseteq> P\<^sub>1 \<Longrightarrow> r \<inter> - b \<subseteq> P\<^sub>2 \<Longrightarrow>
\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> r Cond b c\<^sub>1 c\<^sub>2 Q,A"
apply (simp (no_asm) add: valid_def)
apply (clarsimp)
apply (erule converse_rtranclpE)
apply (clarsimp simp: final_def)
apply (erule step_Normal_elim_cases)
apply (fastforce simp: valid_def[where c=c\<^sub>1])
apply (fastforce simp: valid_def[where c=c\<^sub>2])
done
lemma Skip_normal_steps_end:
"\<Gamma> \<turnstile> (Skip, Normal s) \<rightarrow>\<^sup>* (c, s') \<Longrightarrow> c = Skip \<and> s' = Normal s"
apply (erule converse_rtranclpE)
apply simp
apply (erule step_Normal_elim_cases)
done
lemma Throw_normal_steps_end:
"\<Gamma> \<turnstile> (Throw, Normal s) \<rightarrow>\<^sup>* (c, s') \<Longrightarrow> c = Throw \<and> s' = Normal s"
apply (erule converse_rtranclpE)
apply simp
apply (erule step_Normal_elim_cases)
done
lemma while_relpower_induct:
"\<And>t c' x .
\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P c i,A \<Longrightarrow>
i \<inter> b \<subseteq> P \<Longrightarrow>
i \<inter> - b \<subseteq> Q \<Longrightarrow>
final (c', t) \<Longrightarrow>
x \<in> i \<Longrightarrow>
t \<notin> Fault ` F \<Longrightarrow>
c' = Throw \<longrightarrow> t \<notin> Normal ` A \<Longrightarrow>
(step \<Gamma> ^^ n) (While b c, Normal x) (c', t) \<Longrightarrow> c' = Skip \<and> t \<in> Normal ` Q"
apply (induct n rule:less_induct)
apply (rename_tac n t c' x)
apply (case_tac n)
apply (clarsimp simp: final_def)
apply clarify
apply (simp only: relpowp.simps)
apply (subst (asm) relpowp_commute[symmetric])
apply clarsimp
apply (erule step_Normal_elim_cases)
apply clarsimp
apply (rename_tac t c' x v)
apply(case_tac "t")
apply clarsimp
apply(drule Seq_decomp_relpow)
apply(simp add: final_def)
apply(erule disjE, erule conjE)
apply clarify
apply(drule relpowp_imp_rtranclp)
apply (simp add: valid_def)
apply (rename_tac x n t' n1)
apply (drule_tac x="Normal x" in spec)
apply (drule_tac x="Normal t'" in spec)
apply (drule spec[where x=Throw])
apply (fastforce simp add: final_def)
apply clarsimp
apply (rename_tac c' x n t' t n1 n2)
apply (drule_tac x=n2 and y="Normal t'" in meta_spec2)
apply (drule_tac x=c' and y="t" in meta_spec2)
apply (erule meta_impE, fastforce)
apply (erule meta_impE, fastforce)
apply (erule meta_impE)
apply(drule relpowp_imp_rtranclp)
apply (simp add: valid_def)
apply (drule_tac x="Normal x" and y="Normal t" in spec2)
apply (drule spec[where x=Skip])
apply (fastforce simp add: final_def)
apply assumption
apply clarsimp
apply (rename_tac c' s n f)
apply (subgoal_tac "c' = Skip", simp)
prefer 2
apply (case_tac c'; fastforce simp: final_def)
apply (drule Seq_decomp_relpowp_Fault)
apply (erule disjE)
apply (clarsimp simp: valid_def)
apply (drule_tac x="Normal s" and y="Fault f" in spec2)
apply (drule spec[where x=Skip])
apply(drule relpowp_imp_rtranclp)
apply (fastforce simp: final_def)
apply clarsimp
apply (rename_tac t n1 n2)
apply (subgoal_tac "t \<in> i")
prefer 2
apply (fastforce dest:relpowp_imp_rtranclp simp: final_def valid_def)
apply (drule_tac x=n2 in meta_spec)
apply (drule_tac x="Fault f" in meta_spec)
apply (drule meta_spec[where x=Skip])
apply (drule_tac x=t in meta_spec)
apply (fastforce simp: final_def)
apply clarsimp
apply (rename_tac c' s n)
apply (subgoal_tac "c' = Skip", simp)
prefer 2
apply (case_tac c'; fastforce simp: final_def)
apply (drule Seq_decomp_relpowp_Stuck)
apply clarsimp
apply (erule disjE)
apply (simp add:valid_def)
apply (drule_tac x="Normal s" and y="Stuck" in spec2)
apply clarsimp
apply (drule spec[where x=Skip])
apply(drule relpowp_imp_rtranclp)
apply (fastforce)
apply clarsimp
apply (rename_tac t n1 n2)
apply (subgoal_tac "t \<in> i")
prefer 2
apply (fastforce dest:relpowp_imp_rtranclp simp: final_def valid_def)
apply (drule_tac x=n2 in meta_spec)
apply (drule meta_spec[where x=Stuck])
apply (drule meta_spec[where x=Skip])
apply (drule_tac x=t in meta_spec)
apply (fastforce simp: final_def)
apply clarsimp
apply (drule relpowp_imp_rtranclp)
apply (drule Skip_normal_steps_end)
apply fastforce
done
lemma valid_weaken_pre:
"\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow>
P' \<subseteq> P \<Longrightarrow> \<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P' c Q,A"
by (fastforce simp: valid_def)
lemma valid_strengthen_post:
"\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow>
Q \<subseteq> Q' \<Longrightarrow> \<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P c Q',A"
by (fastforce simp: valid_def)
lemma valid_strengthen_abr:
"\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P c Q,A \<Longrightarrow>
A \<subseteq> A' \<Longrightarrow> \<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P c Q,A'"
by (fastforce simp: valid_def)
lemma oghoare_while_valid:
"\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P c i,A \<Longrightarrow>
i \<inter> b \<subseteq> P \<Longrightarrow>
i \<inter> - b \<subseteq> Q \<Longrightarrow>
\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> i While b c Q,A"
apply (simp (no_asm) add: valid_def)
apply (clarsimp simp add: )
apply (drule rtranclp_imp_relpowp)
apply (clarsimp)
by (erule while_relpower_induct)
lemma oghoare_catch_valid:
"\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P\<^sub>1 c\<^sub>1 Q,P\<^sub>2 \<Longrightarrow>
\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P\<^sub>2 c\<^sub>2 Q,A \<Longrightarrow>
\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> P\<^sub>1 Catch c\<^sub>1 c\<^sub>2 Q,A"
apply (clarsimp simp add: valid_def[where c="Catch _ _"])
apply (rename_tac t c' s)
apply (case_tac t)
apply simp
apply (drule Catch_decomp_star)
apply (fastforce simp: final_def)
apply clarsimp
apply (erule disjE)
apply (clarsimp simp add: valid_def[where c="c\<^sub>1"])
apply (rename_tac s x t)
apply (drule_tac x="Normal s" in spec)
apply (drule_tac x="Normal t" in spec)
apply (drule_tac x=Throw in spec)
apply (fastforce simp: final_def valid_def)
apply (clarsimp simp add: valid_def[where c="c\<^sub>1"])
apply (rename_tac s t)
apply (drule_tac x="Normal s" in spec)
apply (drule_tac x="Normal t" in spec)
apply (drule_tac x=Skip in spec)
apply (fastforce simp: final_def)
apply (rename_tac c' s t)
apply (simp add: final_def)
apply (drule Catch_decomp_star_Fault)
apply clarsimp
apply (erule disjE)
apply (clarsimp simp: valid_def[where c=c\<^sub>1] final_def)
apply (fastforce simp: valid_def final_def)
apply (simp add: final_def)
apply (drule Catch_decomp_star_Stuck)
apply clarsimp
apply (erule disjE)
apply (clarsimp simp: valid_def[where c=c\<^sub>1] final_def)
apply (fastforce simp: valid_def final_def)
done
lemma ann_matches_imp_assertionsR:
"ann_matches \<Gamma> \<Theta> a c \<Longrightarrow> \<not> pre_par a \<Longrightarrow>
assertionsR \<Gamma> \<Theta> Q A a c (pre a)"
by (induct arbitrary: Q A rule: ann_matches.induct , (fastforce intro: assertionsR.intros )+)
lemma ann_matches_imp_assertionsR':
"ann_matches \<Gamma> \<Theta> a c \<Longrightarrow> a' \<in> pre_set a \<Longrightarrow>
assertionsR \<Gamma> \<Theta> Q A a c a'"
apply (induct arbitrary: Q A rule: ann_matches.induct)
apply ((fastforce intro: assertionsR.intros )+)[12]
apply simp
apply (erule bexE)
apply (simp only: in_set_conv_nth)
apply (erule exE)
apply (drule_tac x=i in spec)
apply clarsimp
apply (erule AsParallelExprs)
apply simp
done
lemma rtranclp_conjD:
"(\<lambda>x1 x2. r1 x1 x2 \<and> r2 x1 x2)\<^sup>*\<^sup>* x1 x2 \<Longrightarrow>
r1\<^sup>*\<^sup>* x1 x2 \<and> r2\<^sup>*\<^sup>* x1 x2"
by (metis (no_types, lifting) rtrancl_mono_proof)
lemma rtranclp_mono' :
"r\<^sup>*\<^sup>* a b \<Longrightarrow> r \<le> s \<Longrightarrow> s\<^sup>*\<^sup>* a b"
by (metis rtrancl_mono_proof sup.orderE sup2CI)
lemma state_upd_in_atomicsR[rule_format, OF _ refl refl]:
"\<Gamma>\<turnstile> cf \<rightarrow> cf' \<Longrightarrow>
cf = (c, Normal s) \<Longrightarrow>
cf' = (c', Normal t) \<Longrightarrow>
s \<noteq> t \<Longrightarrow>
ann_matches \<Gamma> \<Theta> a c \<Longrightarrow>
s \<in> pre a \<Longrightarrow>
(\<exists>p cm x. atomicsR \<Gamma> \<Theta> a c (p, cm) \<and> s \<in> p \<and>
\<Gamma> \<turnstile> (cm, Normal s) \<rightarrow> (x, Normal t) \<and> final (x, Normal t))"
apply (induct arbitrary: c c' s t a rule: step.induct, simp_all)
apply clarsimp
apply (erule ann_matches.cases, simp_all)
apply (rule exI)+
apply (rule conjI)
apply (rule atomicsR.intros)
apply clarsimp
apply (rule_tac x="Skip" in exI)
apply (simp add: final_def)
apply (rule step.Basic)
apply clarsimp
apply (erule ann_matches.cases, simp_all)
apply (rule exI)+
apply (rule conjI)
apply (rule atomicsR.intros)
apply clarsimp
apply (rule_tac x="Skip" in exI)
apply (simp add: final_def)
apply (erule step.Spec)
apply clarsimp
apply (erule ann_matches.cases, simp_all)
apply clarsimp
apply (drule meta_spec)+
apply (erule meta_impE, rule conjI, (rule refl)+)+
apply clarsimp
apply (erule (1) meta_impE)
apply (erule meta_impE, fastforce)
apply clarsimp
apply (rule exI)+
apply (rule conjI)
apply (erule AtSeqExpr1)
apply fastforce
apply clarsimp
apply (erule ann_matches.cases, simp_all)
apply clarsimp
apply (drule meta_spec)+
apply (erule meta_impE, rule conjI, (rule refl)+)+
apply clarsimp
apply (erule (1) meta_impE)
apply (erule meta_impE, fastforce)
apply clarsimp
apply (rule exI)+
apply (rule conjI)
apply (erule AtCatchExpr1)
apply fastforce
apply (erule ann_matches.cases, simp_all)
apply clarsimp
apply (drule meta_spec)+
apply (erule meta_impE, rule conjI, (rule refl)+)+
apply clarsimp
apply (erule meta_impE)
apply fastforce
apply (erule meta_impE)
apply (case_tac "i=0"; fastforce)
apply clarsimp
apply (rule exI)+
apply (rule conjI)
apply (erule AtParallelExprs)
apply fastforce
apply (drule_tac x=i in spec)
apply clarsimp
apply fastforce
apply (erule ann_matches.cases, simp_all)
apply clarsimp
apply (rule exI)+
apply (rule conjI)
apply (rule AtAwait)
apply clarsimp
apply (rename_tac c' sa t aa e r ba)
apply (rule_tac x=c' in exI)
apply (rule conjI)
apply (erule step.Await)
apply (erule rtranclp_mono')
apply clarsimp
apply assumption+
apply (simp add: final_def)
done
lemma oghoare_atom_com_sound:
"\<Gamma>, \<Theta> \<tturnstile>\<^bsub>/F\<^esub>P a c Q,A \<Longrightarrow> atom_com c \<Longrightarrow> \<Gamma> \<Turnstile>\<^bsub>/F\<^esub> P c Q, A"
unfolding valid_def
proof (induct rule: oghoare_seq_induct)
case SeqSkip thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases(1))
next
case SeqThrow thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases)
next
case SeqBasic thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases
simp: final_def)
next
case (SeqSpec \<Gamma> \<Theta> F r Q A) thus ?case
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp: final_def)
apply (erule step_Normal_elim_cases)
apply (fastforce elim!: converse_rtranclpE step_Normal_elim_cases)
by clarsimp
next
case (SeqSeq \<Gamma> \<Theta> F P\<^sub>1 c\<^sub>1 P\<^sub>2 A c\<^sub>2 Q) show ?case
using SeqSeq
by (fold valid_def)
(fastforce intro: oghoare_seq_valid simp: valid_weaken_pre)
next
case (SeqCatch \<Gamma> \<Theta> F P\<^sub>1 c\<^sub>1 Q P\<^sub>2 c\<^sub>2 A) thus ?case
apply (fold valid_def)
apply simp
apply (fastforce elim: oghoare_catch_valid)+
done
next
case (SeqCond \<Gamma> \<Theta> F P b c1 Q A c2) thus ?case
by (fold valid_def)
(fastforce intro:oghoare_if_valid)
next
case (SeqWhile \<Gamma> \<Theta> F P c A b) thus ?case
by (fold valid_def)
(fastforce elim: valid_weaken_pre[rotated] oghoare_while_valid)
next
case (SeqGuard \<Gamma> \<Theta> F P c Q A r g f) thus ?case
apply (fold valid_def)
apply (simp (no_asm) add: valid_def)
apply clarsimp
apply (erule converse_rtranclpE)
apply (fastforce simp: final_def)
apply clarsimp
apply (erule step_Normal_elim_cases)
apply (case_tac "r \<inter> - g \<noteq> {}")
apply clarsimp
apply (fastforce simp: valid_def)
apply clarsimp
apply (fastforce simp: valid_def)
apply clarsimp
apply (case_tac "r \<inter> - g \<noteq> {}")
apply (fastforce dest: no_steps_final simp:final_def)
apply (fastforce dest: no_steps_final simp:final_def)
done
next
case (SeqCall \<Gamma> p f \<Theta> F P Q A) thus ?case
by simp
next
case (SeqDynCom r fa \<Gamma> \<Theta> F P c Q A) thus ?case
apply -
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp: final_def)
apply clarsimp
apply (erule step_Normal_elim_cases)
apply clarsimp
apply (rename_tac t c' x)
apply (drule_tac x=x in spec)
apply (drule_tac x=x in bspec, fastforce)
apply clarsimp
apply (drule_tac x="Normal x" in spec)
apply (drule_tac x="t" in spec)
apply (drule_tac x="c'" in spec)
apply fastforce+
done
next
case (SeqConseq \<Gamma> \<Theta> F P c Q A) thus ?case
apply (clarsimp)
apply (rename_tac t c' x)
apply (erule_tac x="Normal x" in allE)
apply (erule_tac x="t" in allE)
apply (erule_tac x="c'" in allE)
apply (clarsimp simp: pre_weaken_pre)
apply force
done
qed simp_all
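(* oghoare_atom_com_sound covers exactly the atomic fragment admitted inside Await
   (cf. the atom_com premise of oghoare_Await), so the soundness proof can treat Await bodies
   via the sequential calculus. *)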
lemma ParallelRuleAnn:
" length as = length cs \<Longrightarrow>
\<forall>i<length cs. \<Gamma>,\<Theta> \<turnstile>\<^bsub>/F \<^esub>(pres (as ! i)) (cs ! i) (postcond (as ! i)),(abrcond (as ! i)) \<Longrightarrow>
interfree \<Gamma> \<Theta> F as cs \<Longrightarrow>
\<Inter>(set (map postcond as)) \<subseteq> Q \<Longrightarrow>
\<Union>(set (map abrcond as)) \<subseteq> A \<Longrightarrow> \<Gamma>,\<Theta> \<turnstile>\<^bsub>/F \<^esub>(AnnPar as) (Parallel cs) Q,A"
apply (erule (3) Parallel)
apply auto
done
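\<comment> \<open>Preservation under a single small step: the successor command has a derivable annotation
  whose precondition holds in the new state and whose assertions and atomic commands are among
  those of the original annotation.\<close>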
lemma oghoare_step[rule_format, OF _ refl refl]:
shows
"\<Gamma> \<turnstile> cf \<rightarrow> cf' \<Longrightarrow> cf = (c, Normal s) \<Longrightarrow> cf' = (c', Normal t) \<Longrightarrow>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>a c Q,A \<Longrightarrow>
s \<in> pre a \<Longrightarrow>
\<exists>a'. \<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>a' c' Q,A \<and> t \<in> pre a' \<and>
(\<forall>as. assertionsR \<Gamma> \<Theta> Q A a' c' as \<longrightarrow> assertionsR \<Gamma> \<Theta> Q A a c as) \<and>
(\<forall>pm cm. atomicsR \<Gamma> \<Theta> a' c' (pm, cm) \<longrightarrow> atomicsR \<Gamma> \<Theta> a c (pm, cm))"
proof (induct arbitrary:c c' s a t Q A rule: step.induct)
case (Parallel i cs s c' s' ca c'a sa a t Q A) thus ?case
apply (clarsimp simp:)
apply (drule oghoare_Parallel)
apply clarsimp
apply (rename_tac as)
apply (frule_tac x=i in spec, erule (1) impE)
apply (elim exE conjE)
apply (drule meta_spec)+
apply (erule meta_impE, rule conjI, (rule refl)+)+
apply (erule meta_impE)
apply (rule_tac P="(pres (as ! i))" in Conseq)
apply (rule exI[where x="{}"])
apply (rule_tac x="Q'" in exI)
apply (rule_tac x="A'" in exI)
apply (simp)
apply (erule meta_impE, simp)
apply clarsimp
apply (rule_tac x="AnnPar (as[i:=(a',postcond(as!i), abrcond(as!i))])" in exI)
apply (rule conjI)
apply (rule ParallelRuleAnn, simp)
apply clarsimp
apply (rename_tac j)
apply (drule_tac x=j in spec)
apply clarsimp
apply (case_tac "i = j")
apply (clarsimp simp: )
apply (rule Conseq)
apply (rule exI[where x="{}"])
apply (fastforce)
apply (simp add: )
apply (clarsimp simp: interfree_def)
apply (rename_tac i' j')
apply (drule_tac x=i' and y=j' in spec2)
apply clarsimp
apply (case_tac "i = i'")
apply clarsimp
apply (simp add: interfree_aux_def prod.case_eq_if )
apply clarsimp
apply (case_tac "j' = i")
apply (clarsimp)
apply (simp add: interfree_aux_def prod.case_eq_if)
apply clarsimp
apply (clarsimp)
apply (erule subsetD)
apply (clarsimp simp: in_set_conv_nth)
apply (rename_tac a' x a b c i')
apply (case_tac "i' = i")
apply clarsimp
apply (drule_tac x="(a', b, c)" in bspec, simp)
apply (fastforce simp add: in_set_conv_nth)
apply fastforce
apply (drule_tac x="(a, b, c)" in bspec, simp)
apply (simp add: in_set_conv_nth)
apply (rule_tac x=i' in exI)
apply clarsimp
apply fastforce
apply clarsimp
apply (erule_tac A="(\<Union>x\<in>set as. abrcond x) " in subsetD)
apply (clarsimp simp: in_set_conv_nth)
apply (rename_tac a b c j)
apply (case_tac "j = i")
apply clarsimp
apply (rule_tac x="as!i" in bexI)
apply simp
apply clarsimp
apply clarsimp
apply (rule_tac x="(a,b,c)" in bexI)
apply simp
apply (clarsimp simp:in_set_conv_nth)
apply (rule_tac x=j in exI)
apply fastforce
apply (rule conjI)
apply (case_tac "s = Normal t")
apply clarsimp
apply (clarsimp simp: in_set_conv_nth)
apply (rename_tac a b c j)
apply (case_tac "j = i")
apply clarsimp
apply clarsimp
apply (drule_tac x="as!j" in bspec)
apply (clarsimp simp add: in_set_conv_nth)
apply (rule_tac x=j in exI)
apply fastforce
apply clarsimp
apply (frule state_upd_in_atomicsR, simp)
apply (erule oghoare_imp_ann_matches)
apply (clarsimp simp: in_set_conv_nth)
apply fastforce
apply (clarsimp simp: in_set_conv_nth)
apply (rename_tac j)
apply (case_tac "j = i")
apply clarsimp
apply clarsimp
apply (thin_tac "\<Gamma>,\<Theta> \<turnstile>\<^bsub>/F \<^esub>a' c' (postcond (as ! i)),(abrcond (as ! i))")
apply (simp add: interfree_def interfree_aux_def)
apply (drule_tac x="j" and y=i in spec2)
apply (simp add: prod.case_eq_if)
apply (drule spec2, drule (1) mp)
apply clarsimp
apply (case_tac "pre_par a")
apply (subst pre_set)
apply clarsimp
apply (drule_tac x="as!j" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x=j in exI)
apply fastforce
apply clarsimp
apply (frule (1) pre_imp_pre_set)
apply (rename_tac as Q' A' a' a b c p cm x j X)
apply (drule_tac x="X" in spec, erule_tac P="assertionsR \<Gamma> \<Theta> b c a (cs ! j) X" in impE)
apply (rule ann_matches_imp_assertionsR')
apply (drule_tac x=j in spec, clarsimp)
apply (erule (1) oghoare_imp_ann_matches)
apply (rename_tac a b c p cm x j X )
apply (thin_tac "\<Gamma>\<Turnstile>\<^bsub>/F\<^esub> (b \<inter> p) cm b,b")
apply (thin_tac " \<Gamma>\<Turnstile>\<^bsub>/F\<^esub> (c \<inter> p) cm c,c")
apply (simp add: valid_def)
apply (drule_tac x="Normal sa" in spec)
apply (drule_tac x="Normal t" in spec)
apply (drule_tac x=x in spec)
apply (erule impE, fastforce)
apply force
apply (drule_tac x=j in spec)
apply clarsimp
apply (rename_tac a b c p cm x j Q'' A'')
apply (drule_tac x="pre a" in spec,erule impE, rule ann_matches_imp_assertionsR)
apply (erule (1) oghoare_imp_ann_matches)
apply (thin_tac " \<Gamma>\<Turnstile>\<^bsub>/F\<^esub> (b \<inter> p) cm b,b")
apply (thin_tac " \<Gamma>\<Turnstile>\<^bsub>/F\<^esub> (c \<inter> p) cm c,c")
apply (simp add: valid_def)
apply (drule_tac x="Normal sa" in spec)
apply (drule_tac x="Normal t" in spec)
apply (drule_tac x=x in spec)
apply (erule impE, fastforce)
apply clarsimp
apply (drule_tac x="as ! j" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x=j in exI, fastforce)
apply clarsimp
apply fastforce
apply (rule conjI)
apply (clarsimp simp: )
apply (erule assertionsR.cases ; simp)
apply (clarsimp simp: )
apply (rename_tac j a)
apply (case_tac "j = i")
apply clarsimp
apply (drule_tac x=a in spec, erule (1) impE)
apply (erule (1) AsParallelExprs)
apply (subst (asm) nth_list_update_neq, simp)
apply (erule_tac i=j in AsParallelExprs)
apply fastforce
apply clarsimp
apply (rule AsParallelSkips)
apply (clarsimp simp:)
apply (rule equalityI)
apply (clarsimp simp: in_set_conv_nth)
apply (rename_tac a' x a b c j)
apply (case_tac "j = i")
apply (thin_tac "\<forall>a\<in>set as. sa \<in> precond a")
apply clarsimp
apply (drule_tac x="(a', b, c)" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x="i" in exI)
apply fastforce
apply fastforce
apply (drule_tac x="as ! j" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x=j in exI)
apply fastforce
apply clarsimp
apply (drule_tac x=" as ! j" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x=j in exI, fastforce)
apply fastforce
apply (clarsimp simp: in_set_conv_nth)
apply (rename_tac x a b c j)
apply (thin_tac "\<forall>a\<in>set as. sa \<in> precond a")
apply (case_tac "j = i")
apply clarsimp
apply (drule_tac x="as!i" in bspec, fastforce)
apply fastforce
apply clarsimp
apply (drule_tac x="as!j" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x=j in exI, fastforce)
apply fastforce
apply clarsimp
apply (erule atomicsR.cases ; simp)
apply clarsimp
apply (rename_tac j atc atp)
apply (case_tac "j = i")
apply clarsimp
apply (drule_tac x=atc and y=atp in spec2, erule impE)
apply (clarsimp)
apply (erule (1) AtParallelExprs)
apply (subst (asm) nth_list_update_neq, simp)
apply (erule_tac i=j in AtParallelExprs)
apply (clarsimp)
done
next
case (Basic f s c c' sa a t Q A) thus ?case
apply clarsimp
apply (drule oghoare_Basic)
apply clarsimp
apply (rule_tac x="AnnExpr Q" in exI)
apply clarsimp
apply (rule conjI)
apply (rule SkipRule)
apply fastforce
apply (rule conjI)
apply fastforce
apply clarsimp
apply (drule assertionsR.cases, simp_all)
apply (rule assertionsR.AsBasicSkip)
done
next
case (Spec s t r c c' sa a ta Q A) thus ?case
apply clarsimp
apply (drule oghoare_Spec)
apply clarsimp
apply (rule_tac x="AnnExpr Q" in exI)
apply clarsimp
apply (rule conjI)
apply (rule SkipRule)
apply fastforce
apply (rule conjI)
apply force
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply clarsimp
apply (rule assertionsR.AsSpecSkip)
done
next
case (Guard s g f c ca c' sa a t Q A) thus ?case
apply -
apply clarsimp
apply (drule oghoare_Guard)
apply clarsimp
apply (rule exI, rule conjI, assumption)
by (fastforce dest: oghoare_Guard
intro:assertionsR.intros atomicsR.intros)
next
case (GuardFault s g f c ca c' sa a t Q A) thus ?case
by (fastforce dest: oghoare_Guard
intro:assertionsR.intros atomicsR.intros)
next
case (Seq c\<^sub>1 s c\<^sub>1' s' c\<^sub>2 c c' sa a t A Q) thus ?case
apply (clarsimp simp:)
apply (drule oghoare_Seq)
apply clarsimp
apply (drule meta_spec)+
apply (erule meta_impE, rule conjI, (rule refl)+)+
apply (erule meta_impE)
apply (rule Conseq)
apply (rule exI[where x="{}"])
apply (rule exI)+
apply (rule conjI)
apply (simp)
apply (erule (1) conjI)
apply clarsimp
apply (rule_tac x="AnnComp a' P\<^sub>2" in exI, rule conjI)
apply (rule oghoare_oghoare_seq.Seq)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (fastforce)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (fastforce)
apply clarsimp
apply (rule conjI)
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply clarsimp
apply (drule_tac x=a in spec, simp)
apply (erule AsSeqExpr1)
apply clarsimp
apply (erule AsSeqExpr2)
apply clarsimp
apply (erule atomicsR.cases, simp_all)
apply clarsimp
apply (drule_tac x="a" and y=b in spec2, simp)
apply (erule AtSeqExpr1)
apply clarsimp
apply (erule AtSeqExpr2)
done
next
case (SeqSkip c\<^sub>2 s c c' sa a t Q A) thus ?case
apply clarsimp
apply (drule oghoare_Seq)
apply clarsimp
apply (rename_tac P\<^sub>1 P\<^sub>2 P' Q' A')
apply (rule_tac x=P\<^sub>2 in exI)
apply (rule conjI, rule Conseq)
apply (rule_tac x="{}" in exI)
apply (fastforce)
apply (rule conjI)
apply (drule oghoare_Skip)
apply fastforce
apply (rule conjI)
apply clarsimp
apply (erule assertionsR.AsSeqExpr2)
apply clarsimp
apply (fastforce intro: atomicsR.intros)
done
next
case (SeqThrow c\<^sub>2 s c c' sa a t Q A) thus ?case
apply clarsimp
apply (drule oghoare_Seq)
apply clarsimp
apply (rename_tac P\<^sub>1 P\<^sub>2 P' Q' A')
apply (rule_tac x=P\<^sub>1 in exI)
apply (drule oghoare_Throw)
apply clarsimp
apply (rename_tac P'')
apply (rule conjI, rule Conseq)
apply (rule_tac x="{}" in exI)
apply (rule_tac x="Q'" in exI)
apply (rule_tac x="P''" in exI)
apply (fastforce intro: Throw)
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply clarsimp
apply (rule AsSeqExpr1)
apply (rule AsThrow)
done
next
case (CondTrue s b c\<^sub>1 c\<^sub>2 c sa c' s' ann) thus ?case
apply (clarsimp)
apply (drule oghoare_Cond)
apply clarsimp
apply (rename_tac P' P\<^sub>1 P\<^sub>2 Q' A')
apply (rule_tac x= P\<^sub>1 in exI)
apply (rule conjI)
apply (rule Conseq, rule_tac x="{}" in exI, fastforce)
apply (rule conjI, fastforce)
apply (rule conjI)
apply (fastforce elim: assertionsR.cases intro: AsCondExpr1)
apply (fastforce elim: atomicsR.cases intro: AtCondExpr1)
done
next
case (CondFalse s b c\<^sub>1 c\<^sub>2 c sa c' s' ann) thus ?case
apply (clarsimp)
apply (drule oghoare_Cond)
apply clarsimp
apply (rename_tac P' P\<^sub>1 P\<^sub>2 Q' A')
apply (rule_tac x= P\<^sub>2 in exI)
apply (rule conjI)
apply (rule Conseq, rule_tac x="{}" in exI, fastforce)
apply (rule conjI, fastforce)
apply (rule conjI)
apply (fastforce elim: assertionsR.cases intro: AsCondExpr2)
apply (fastforce elim: atomicsR.cases intro: AtCondExpr2)
done
next
case (WhileTrue s b c ca sa c' s' ann) thus ?case
apply clarsimp
apply (frule oghoare_While)
apply clarsimp
apply (rename_tac r i P' A' Q')
apply (rule_tac x="AnnComp P' (AnnWhile i i P')" in exI)
apply (rule conjI)
apply (rule Seq)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (rule_tac x="i" in exI)
apply (rule_tac x="A'" in exI)
apply (subst weaken_pre_empty)
apply clarsimp
apply (rule While)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (rule_tac x="i" in exI)
apply (rule_tac x="A'" in exI)
apply (subst weaken_pre_empty)
apply clarsimp
apply clarsimp
apply force
apply simp
apply simp
apply (rule conjI)
apply blast
apply (rule conjI)
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply clarsimp
apply (rule AsWhileExpr)
apply clarsimp
apply (erule assertionsR.cases,simp_all)
apply clarsimp
apply (erule AsWhileExpr)
apply clarsimp
apply (rule AsWhileInv)
apply clarsimp
apply (rule AsWhileInv)
apply clarsimp
apply (rule AsWhileSkip)
apply clarsimp
apply (erule atomicsR.cases, simp_all)
apply clarsimp
apply (rule AtWhileExpr)
apply clarsimp+
apply (erule atomicsR.cases, simp_all)
apply clarsimp
apply (erule AtWhileExpr)
done
next
case (WhileFalse s b c ca sa c' ann s' Q A) thus ?case
apply clarsimp
apply (drule oghoare_While)
apply clarsimp
apply (rule_tac x="AnnExpr Q" in exI)
apply (rule conjI)
apply (rule SkipRule)
apply blast
apply (rule conjI)
apply fastforce
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply (drule sym, simp, rule AsWhileSkip)
done
next
case (Call p bs s c c' sa a t Q A) thus ?case
apply clarsimp
apply (drule oghoare_Call)
apply clarsimp
apply (rename_tac n as)
apply (rule_tac x="as ! n" in exI)
apply clarsimp
apply (rule conjI, fastforce)
apply (rule conjI)
apply clarsimp
apply (erule (2) AsCallExpr)
apply fastforce
apply clarsimp
apply (erule (2) AtCallExpr)
apply simp
done
next
case (DynCom c s ca c' sa a t Q A) thus ?case
apply -
apply clarsimp
apply (drule oghoare_DynCom)
apply clarsimp
apply (drule_tac x=t in bspec, assumption)
apply (rule exI)
apply (erule conjI)
apply (rule conjI, fastforce)
apply (rule conjI)
apply clarsimp
apply (erule (1) AsDynComExpr)
apply (clarsimp)
apply (erule (1) AtDynCom)
done
next
case (Catch c\<^sub>1 s c\<^sub>1' s' c\<^sub>2 c c' sa a t Q A) thus ?case
apply (clarsimp simp:)
apply (drule oghoare_Catch)
apply clarsimp
apply (drule meta_spec)+
apply (erule meta_impE, rule conjI, (rule refl)+)+
apply (erule meta_impE)
apply (rule Conseq)
apply (rule exI[where x="{}"])
apply (rule exI)+
apply (rule conjI)
apply (simp)
apply (erule (1) conjI)
apply clarsimp
apply (rename_tac P\<^sub>1 P\<^sub>2 P' Q' A' a')
apply (rule_tac x="AnnComp a' P\<^sub>2" in exI, rule conjI)
apply (rule oghoare_oghoare_seq.Catch)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (fastforce)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (fastforce)
apply clarsimp
apply (rule conjI)
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply clarsimp
apply (rename_tac a)
apply (drule_tac x=a in spec, simp)
apply (erule AsCatchExpr1)
apply clarsimp
apply (erule AsCatchExpr2)
apply clarsimp
apply (erule atomicsR.cases, simp_all)
apply clarsimp
apply (rename_tac a b a2)
apply (drule_tac x="a" and y=b in spec2, simp)
apply (erule AtCatchExpr1)
apply clarsimp
apply (erule AtCatchExpr2)
done
next
case (CatchSkip c\<^sub>2 s c c' sa a t Q A) thus ?case
apply clarsimp
apply (drule oghoare_Catch, clarsimp)
apply (rename_tac P\<^sub>1 P\<^sub>2 P' Q' A')
apply (rule_tac x=P\<^sub>1 in exI)
apply clarsimp
apply (rule conjI)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (drule oghoare_Skip)
apply clarsimp
apply (rule_tac x=Q' in exI)
apply (rule_tac x=A' in exI)
apply (rule conjI, erule SkipRule)
apply clarsimp
apply clarsimp
apply (rule AsCatchExpr1)
apply (erule assertionsR.cases, simp_all)
apply (rule assertionsR.AsSkip)
done
next
case (CatchThrow c\<^sub>2 s c c' sa a t Q A) thus ?case
apply clarsimp
apply (drule oghoare_Catch, clarsimp)
apply (rename_tac P\<^sub>1 P\<^sub>2 P' Q' A')
apply (rule_tac x=P\<^sub>2 in exI)
apply (rule conjI)
apply (rule Conseq)
apply (rule_tac x="{}" in exI)
apply (fastforce )
apply (rule conjI)
apply (drule oghoare_Throw)
apply clarsimp
apply fastforce
apply (rule conjI)
apply (clarsimp)
apply (erule AsCatchExpr2)
apply clarsimp
apply (erule AtCatchExpr2)
done
next
case (ParSkip cs s c c' sa a t Q A) thus ?case
apply clarsimp
apply (drule oghoare_Parallel)
apply clarsimp
apply (rename_tac as)
apply (rule_tac x="AnnExpr (\<Inter>x\<in>set as. postcond x)" in exI)
apply (rule conjI, rule SkipRule)
apply blast
apply (rule conjI)
apply simp
apply clarsimp
apply (simp only: in_set_conv_nth)
apply clarsimp
apply (drule_tac x="i" in spec)
apply clarsimp
apply (drule_tac x="cs!i" in bspec)
apply clarsimp
apply clarsimp
apply (drule oghoare_Skip)
apply clarsimp
apply (drule_tac x="as!i" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x=i in exI, fastforce)
apply clarsimp
apply blast
apply clarsimp
apply (erule assertionsR.cases; simp)
apply clarsimp
apply (rule AsParallelSkips; clarsimp)
done
next
case (ParThrow cs s c c' sa a t Q A) thus ?case
apply clarsimp
apply (drule oghoare_Parallel)
apply (clarsimp simp: in_set_conv_nth)
apply (drule_tac x=i in spec)
apply clarsimp
apply (drule oghoare_Throw)
apply clarsimp
apply (rename_tac i as Q' A' P')
apply (rule_tac x="AnnExpr P'" in exI)
apply (rule conjI)
apply (rule ThrowRule)
apply clarsimp
apply (erule_tac A="(\<Union>x\<in>set as. abrcond x)" in subsetD[where B=A], force)
apply (rule conjI)
apply (drule_tac x="as!i" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x=i in exI, fastforce)
apply (fastforce)
apply clarsimp
apply (erule AsParallelExprs)
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply (rule AsThrow)
done
next
case (Await x b c c' x' c'' c''' x'' a x''' Q A) thus ?case
apply (clarsimp)
apply (drule oghoare_Await)
apply clarsimp
apply (drule rtranclp_conjD)
apply clarsimp
apply (erule disjE)
apply clarsimp
apply (rename_tac P' Q' A')
apply (rule_tac x="AnnExpr Q" in exI)
apply clarsimp
apply (rule conjI)
apply (rule Skip)
apply (rule conjI)
apply (drule (1) oghoare_atom_com_sound)
apply (fastforce simp: final_def valid_def)
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply clarsimp
apply (rule AsAwaitSkip)
apply (rule_tac x="AnnExpr A" in exI)
apply clarsimp
apply (rule conjI)
apply (rule Throw)
apply (rule conjI)
apply (drule (1) oghoare_atom_com_sound)
apply (fastforce simp: final_def valid_def)
apply clarsimp
apply (erule assertionsR.cases, simp_all)
apply clarsimp
apply (rule AsAwaitThrow)
done
qed simp_all
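\<comment> \<open>Lifts oghoare_step to finitely many steps by induction over the reflexive-transitive
  closure.\<close>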
lemma oghoare_steps[rule_format, OF _ refl refl]:
"\<Gamma> \<turnstile> cf \<rightarrow>\<^sup>* cf' \<Longrightarrow> cf = (c, Normal s) \<Longrightarrow> cf' = (c', Normal t) \<Longrightarrow>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>a c Q,A \<Longrightarrow>
s \<in> pre a \<Longrightarrow>
\<exists>a'. \<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>a' c' Q,A \<and> t \<in> pre a' \<and>
(\<forall>as. assertionsR \<Gamma> \<Theta> Q A a' c' as \<longrightarrow> assertionsR \<Gamma> \<Theta> Q A a c as) \<and>
(\<forall>pm cm. atomicsR \<Gamma> \<Theta> a' c' (pm, cm) \<longrightarrow> atomicsR \<Gamma> \<Theta> a c (pm, cm))"
apply (induct arbitrary: a c s c' t rule: converse_rtranclp_induct)
apply fastforce
apply clarsimp
apply (frule Normal_pre_star)
apply clarsimp
apply (drule (2) oghoare_step)
apply clarsimp
apply ((drule meta_spec)+, (erule meta_impE, rule conjI, (rule refl)+)+)
apply (erule (1) meta_impE)+
apply clarsimp
apply (rule exI)
apply (rule conjI, fastforce)+
apply force
done
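\<comment> \<open>Soundness of Parallel for executions that reach a final configuration in a Normal state:
  the result is either Throw in a state of A or Skip in a state of Q.\<close>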
lemma oghoare_sound_Parallel_Normal_case[rule_format, OF _ refl refl]:
"\<Gamma> \<turnstile> (c, s) \<rightarrow>\<^sup>* (c', t) \<Longrightarrow>
\<forall>P x y cs. c = Parallel cs \<longrightarrow> s = Normal x \<longrightarrow>
t = Normal y \<longrightarrow>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>P c Q,A \<longrightarrow> final (c', t) \<longrightarrow>
x \<in> pre P \<longrightarrow> t \<notin> Fault ` F \<longrightarrow> (c' = Throw \<and> t \<in> Normal ` A) \<or> (c' = Skip \<and> t \<in> Normal ` Q)"
apply(erule converse_rtranclp_induct2)
apply (clarsimp simp: final_def)
apply(erule step.cases, simp_all)
\<comment> \<open>Parallel\<close>
apply clarsimp
apply (frule Normal_pre_star)
apply (drule oghoare_Parallel)
apply clarsimp
apply (rename_tac i cs c1' x y s' as)
apply (subgoal_tac "\<Gamma>\<turnstile> (Parallel cs, Normal x) \<rightarrow> (Parallel (cs[i := c1']), Normal s')")
apply (frule_tac c="Parallel cs" and
a="AnnPar as" and
Q="(\<Inter>x\<in>set as. postcond x)" and A ="(\<Union>x\<in>set as. abrcond x)"
in oghoare_step[where \<Theta>=\<Theta> and F=F])
apply(rule Parallel, simp)
apply clarsimp
apply (rule Conseq, rule exI[where x="{}"], fastforce)
apply clarsimp
apply force
apply force
apply clarsimp
apply clarsimp
apply (rename_tac a')
apply (drule_tac x=a' in spec)
apply (drule mp, rule Conseq)
apply (rule_tac x="{}" in exI)
apply (rule_tac x="(\<Inter>x\<in>set as. postcond x)" in exI)
apply (rule_tac x="(\<Union>x\<in>set as. abrcond x)" in exI)
apply (simp)
apply clarsimp
apply(erule (1) step.Parallel)
\<comment> \<open>ParSkip\<close>
apply (frule no_steps_final, simp add: final_def)
apply clarsimp
apply (drule oghoare_Parallel)
apply clarsimp
apply (rule imageI)
apply (erule subsetD)
apply clarsimp
apply (clarsimp simp: in_set_conv_nth)
apply (rename_tac i)
apply (frule_tac x="i" in spec)
apply clarsimp
apply (frule_tac x="cs!i" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x="i" in exI)
apply clarsimp
apply clarsimp
apply (drule_tac x="as ! i" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply fastforce
apply (drule oghoare_Skip)
apply fastforce
\<comment> \<open>ParThrow\<close>
apply clarsimp
apply (frule no_steps_final, simp add: final_def)
apply clarsimp
apply (drule oghoare_Parallel)
apply (clarsimp simp: in_set_conv_nth)
apply (drule_tac x=i in spec)
apply clarsimp
apply (drule oghoare_Throw)
apply clarsimp
apply (rename_tac i as Q' A' P')
apply (drule_tac x="as ! i" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x=i in exI, fastforce)
apply clarsimp
apply (rule imageI)
apply (erule_tac A="(\<Union>x\<in>set as. abrcond x)" in subsetD)
apply clarsimp
apply (rule_tac x="as!i" in bexI)
apply blast
apply clarsimp
done
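\<comment> \<open>A single step from a state in the precondition of a derivable command can only reach a
  Fault whose flag lies in F.\<close>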
lemma oghoare_step_Fault[rule_format, OF _ refl refl]:
"\<Gamma>\<turnstile> cf \<rightarrow> cf' \<Longrightarrow>
cf = (c, Normal x) \<Longrightarrow>
cf' = (c', Fault f) \<Longrightarrow>
x \<in> pre P \<Longrightarrow>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>P c Q,A \<Longrightarrow> f \<in> F"
apply (induct arbitrary: x c c' P Q A f rule: step.induct, simp_all)
apply clarsimp
apply (drule oghoare_Guard)
apply clarsimp
apply blast
apply clarsimp
apply (drule oghoare_Seq)
apply clarsimp
apply clarsimp
apply (drule oghoare_Catch)
apply clarsimp
apply clarsimp
apply (rename_tac i cs c' x P Q A f)
apply (drule oghoare_Parallel)
apply clarsimp
apply (rename_tac i cs c' x Q A f as)
apply (drule_tac x="i" in spec)
apply clarsimp
apply (drule meta_spec)+
apply (erule meta_impE, rule conjI, (rule refl)+)+
apply (drule_tac x="as!i" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x="i" in exI, fastforce)
apply (erule (1) meta_impE)
apply (erule (2) meta_impE)
apply clarsimp
apply (drule rtranclp_conjD[THEN conjunct1])
apply (drule oghoare_Await)
apply clarsimp
apply (rename_tac b c c' x Q A f r P' Q' A')
apply (drule (1) oghoare_atom_com_sound)
apply (simp add: valid_def)
apply (drule_tac x="Normal x" in spec)
apply (drule_tac x="Fault f" in spec)
apply (drule_tac x=Skip in spec)
apply clarsimp
apply (erule impE)
apply (cut_tac f=f and c=c' in steps_Fault[where \<Gamma>=\<Gamma>])
apply fastforce
apply (fastforce simp: final_def steps_Fault)
done
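\<comment> \<open>A single step from a state in the precondition of a derivable command can never get Stuck
  (the conclusion P' is arbitrary).\<close>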
lemma oghoare_step_Stuck[rule_format, OF _ refl refl]:
"\<Gamma>\<turnstile> cf \<rightarrow> cf' \<Longrightarrow>
cf = (c, Normal x) \<Longrightarrow>
cf' = (c', Stuck) \<Longrightarrow>
x \<in> pre P \<Longrightarrow>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>P c Q,A \<Longrightarrow> P'"
apply (induct arbitrary: x c c' P Q A rule: step.induct, simp_all)
apply clarsimp
apply (drule oghoare_Spec)
apply force
apply clarsimp
apply (drule oghoare_Seq)
apply clarsimp
apply clarsimp
apply (drule oghoare_Call)
apply clarsimp
apply clarsimp
apply (drule oghoare_Catch)
apply clarsimp
apply clarsimp
apply (drule oghoare_Parallel)
apply clarsimp
apply (rename_tac i cs c' x Q A as)
apply (drule_tac x="i" in spec)
apply clarsimp
apply (drule meta_spec)+
apply (erule meta_impE, rule conjI, (rule refl)+)+
apply (drule_tac x="as!i" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x="i" in exI, fastforce)
apply (erule meta_impE[OF _ refl])
apply (erule (1) meta_impE)
apply (erule (2) meta_impE)
apply clarsimp
apply (drule rtranclp_conjD[THEN conjunct1])
apply (drule oghoare_Await)
apply clarsimp
apply (rename_tac b c c' x Q A r P'' Q' A')
apply (drule (1) oghoare_atom_com_sound)
apply (simp add: valid_def)
apply (drule_tac x="Normal x" in spec)
apply (drule_tac x=Stuck in spec)
apply (drule_tac x=Skip in spec)
apply clarsimp
apply (erule impE)
apply (cut_tac c=c' in steps_Stuck[where \<Gamma>=\<Gamma>])
apply fastforce
apply (fastforce simp: final_def steps_Fault)
apply clarsimp
apply (drule oghoare_Await)
apply clarsimp
done
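\<comment> \<open>Multi-step version of oghoare_step_Fault.\<close>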
lemma oghoare_steps_Fault[rule_format, OF _ refl refl]:
"\<Gamma>\<turnstile> cf \<rightarrow>\<^sup>* cf' \<Longrightarrow>
cf = (c, Normal x) \<Longrightarrow>
cf' = (c', Fault f) \<Longrightarrow>
x \<in> pre P \<Longrightarrow>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>P c Q,A \<Longrightarrow> f \<in> F"
apply (induct arbitrary: x c c' f rule: rtranclp_induct)
apply fastforce
apply clarsimp
apply (rename_tac b x c c' f)
apply (case_tac b)
apply clarsimp
apply (drule (2) oghoare_steps)
apply clarsimp
apply (drule (3) oghoare_step_Fault)
apply clarsimp
apply (drule meta_spec)+
apply (erule meta_impE, (rule conjI, (rule refl)+))+
apply simp
apply (drule step_Fault_prop ; simp)
apply simp
apply clarsimp
apply (drule step_Stuck_prop ; simp)
done
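\<comment> \<open>Multi-step version of oghoare_step_Stuck.\<close>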
lemma oghoare_steps_Stuck[rule_format, OF _ refl refl]:
"\<Gamma>\<turnstile> cf \<rightarrow>\<^sup>* cf' \<Longrightarrow>
cf = (c, Normal x) \<Longrightarrow>
cf' = (c', Stuck) \<Longrightarrow>
x \<in> pre P \<Longrightarrow>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>P c Q,A \<Longrightarrow> P'"
apply (induct arbitrary: x c c' rule: rtranclp_induct)
apply fastforce
apply clarsimp
apply (rename_tac b x c c')
apply (case_tac b)
apply clarsimp
apply (drule (2) oghoare_steps)
apply clarsimp
apply (drule (3) oghoare_step_Stuck)
apply clarsimp
- apply (drule meta_spec)+
- apply (erule meta_impE, (rule conjI, (rule refl)+))+
- apply simp
apply (drule step_Fault_prop ; simp)
apply simp
done
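\<comment> \<open>Soundness of Parallel for executions that reach a final configuration with a Fault: the
  fault flag must lie in F.\<close>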
lemma oghoare_sound_Parallel_Fault_case[rule_format, OF _ refl refl]:
"\<Gamma> \<turnstile> (c, s) \<rightarrow>\<^sup>* (c', t) \<Longrightarrow>
\<forall>P x f cs. c = Parallel cs \<longrightarrow> s = Normal x \<longrightarrow>
x \<in> pre P \<longrightarrow> t = Fault f \<longrightarrow>
\<Gamma>,\<Theta>\<turnstile>\<^bsub>/F \<^esub>P c Q,A \<longrightarrow> final (c', t) \<longrightarrow>
f \<in> F"
apply(erule converse_rtranclp_induct2)
apply (clarsimp simp: final_def)
apply clarsimp
apply (rename_tac c s P x f cs)
apply (case_tac s)
apply clarsimp
apply(erule step.cases, simp_all)
apply (clarsimp simp: final_def)
apply (drule oghoare_Parallel)
apply clarsimp
apply (rename_tac x f s' i cs c1' as)
apply (subgoal_tac "\<Gamma>\<turnstile> (Parallel cs, Normal x) \<rightarrow> (Parallel (cs[i := c1']), Normal s')")
apply (frule_tac c="Parallel cs" and a="AnnPar as" and
Q="(\<Inter>x\<in>set as. postcond x)" and A="(\<Union>x\<in>set as. abrcond x)"
in oghoare_step[where \<Theta>=\<Theta> and F=F])
apply(rule Parallel)
apply simp
apply clarsimp
apply (rule Conseq, rule exI[where x="{}"], fastforce)
apply assumption
apply clarsimp
apply clarsimp
apply simp
apply clarsimp
apply (rename_tac a')
apply (drule_tac x=a' in spec)
apply clarsimp
apply (erule notE[where P="oghoare _ _ _ _ _ _ _"])
apply (rule Conseq, rule exI[where x="{}"])
apply (clarsimp)
apply (rule_tac x="(\<Inter>x\<in>set as. postcond x)" in exI)
apply (rule_tac x="(\<Union>x\<in>set as. abrcond x)" in exI ; simp)
apply(erule (1) step.Parallel)
apply clarsimp
apply (fastforce dest: no_steps_final simp: final_def)+
apply (clarsimp simp: final_def)
apply (drule oghoare_Parallel)
apply (erule step_Normal_elim_cases, simp_all)
apply clarsimp
apply (rename_tac f cs f' i c1' as)
apply (drule_tac x="i" in spec)
apply (erule impE, fastforce)
apply clarsimp
apply (drule_tac x="as!i" in bspec)
apply (clarsimp simp: in_set_conv_nth)
apply (rule_tac x="i" in exI, fastforce)
apply (drule_tac P="pres (as ! i)" in oghoare_step_Fault[where \<Theta>=\<Theta> and F=F])
apply assumption+
apply (drule steps_Fault_prop ; simp)
apply simp
apply (drule steps_Stuck_prop ;simp)
done
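\<comment> \<open>Main soundness theorem, proved simultaneously for the parallel judgement (oghoare) and the
  sequential judgement (oghoare_seq).\<close>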
lemma oghoare_soundness:
"(\<Gamma>, \<Theta> \<turnstile>\<^bsub>/F\<^esub> P c Q,A \<longrightarrow> \<Gamma> \<Turnstile>\<^bsub>/F\<^esub> (pre P) c Q, A) \<and>
(\<Gamma>, \<Theta> \<tturnstile>\<^bsub>/F\<^esub>P' P c Q,A \<longrightarrow> \<Gamma> \<Turnstile>\<^bsub>/F\<^esub> P' c Q, A)"
unfolding valid_def
proof (induct rule: oghoare_oghoare_seq.induct)
case SeqSkip thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases(1))
next
case SeqThrow thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases)
next
case SeqBasic thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases
simp: final_def)
next
case (SeqSpec \<Gamma> \<Theta> F r Q A) thus ?case
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp: final_def)
apply (erule step_Normal_elim_cases)
apply (fastforce elim!: converse_rtranclpE step_Normal_elim_cases)
by clarsimp
next
case (SeqSeq \<Gamma> \<Theta> F P\<^sub>1 c\<^sub>1 P\<^sub>2 A c\<^sub>2 Q) show ?case
using SeqSeq
by (fold valid_def)
(fastforce intro: oghoare_seq_valid simp: valid_weaken_pre)
next
case (SeqCatch \<Gamma> \<Theta> F P\<^sub>1 c\<^sub>1 Q P\<^sub>2 c\<^sub>2 A) thus ?case
by (fold valid_def)
(fastforce elim: oghoare_catch_valid)+
next
case (SeqCond \<Gamma> \<Theta> F P b c1 Q A c2) thus ?case
by (fold valid_def)
(fastforce intro:oghoare_if_valid)
next
case (SeqWhile \<Gamma> \<Theta> F P c A b) thus ?case
by (fold valid_def)
(fastforce elim: valid_weaken_pre[rotated] oghoare_while_valid)
next
case (SeqGuard \<Gamma> \<Theta> F P c Q A r g f) thus ?case
apply (fold valid_def)
apply (simp (no_asm) add: valid_def)
apply clarsimp
apply (erule converse_rtranclpE)
apply (fastforce simp: final_def)
apply clarsimp
apply (erule step_Normal_elim_cases)
apply (case_tac "r \<inter> - g \<noteq> {}")
apply clarsimp
apply (fastforce simp: valid_def)
apply clarsimp
apply (fastforce simp: valid_def)
apply clarsimp
apply (case_tac "r \<inter> - g \<noteq> {}")
apply (fastforce dest: no_steps_final simp:final_def)
apply (fastforce dest: no_steps_final simp:final_def)
done
next
case (SeqCall \<Gamma> p f \<Theta> F P Q A) thus ?case
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp add: final_def)
apply (erule step_Normal_elim_cases)
apply (clarsimp simp: final_def)
apply fastforce
apply fastforce
done
next
case (SeqDynCom r P fa \<Gamma> \<Theta> F c Q A) thus ?case
apply -
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp: final_def)
apply clarsimp
apply (erule step_Normal_elim_cases)
apply clarsimp
apply (rename_tac t c' x)
apply (drule_tac x=x in bspec, fastforce)
apply clarsimp
apply (drule_tac x="Normal x" in spec)
apply (drule_tac x="t" in spec)
apply (drule_tac x="c'" in spec)
apply fastforce+
done
next
case (SeqConseq \<Gamma> \<Theta> F P c Q A) thus ?case
apply (clarsimp)
apply (rename_tac t c' x)
apply (erule_tac x="Normal x" in allE)
apply (erule_tac x="t" in allE)
apply (erule_tac x="c'" in allE)
apply (clarsimp simp: pre_weaken_pre)
apply force
done
next
case (SeqParallel as P \<Gamma> \<Theta> F cs Q A) thus ?case
by (fold valid_def)
(erule (1) valid_weaken_pre)
next
case (Call \<Theta> p as n P Q A r \<Gamma> f F) thus ?case
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp add: final_def)
apply (erule step_Normal_elim_cases)
apply (clarsimp simp: final_def)
apply (erule disjE)
apply clarsimp
apply fastforce
apply fastforce
apply fastforce
done
next
case (Await \<Gamma> \<Theta> F P c Q A r b) thus ?case
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp add: final_def)
apply (erule step_Normal_elim_cases)
apply (erule converse_rtranclpE)
apply (fastforce simp add: final_def )
apply (force dest!:no_step_final simp: final_def)
apply clarsimp
apply (rename_tac x c'')
apply (drule_tac x="Normal x" in spec)
apply (drule_tac x="Stuck" in spec)
apply (drule_tac x="Skip" in spec)
apply (clarsimp simp: final_def)
apply (erule impE[where P="rtranclp _ _ _"])
apply (cut_tac c="c''" in steps_Stuck[where \<Gamma>=\<Gamma>])
apply fastforce
apply fastforce
apply clarsimp
apply (rename_tac x c'' f)
apply (drule_tac x="Normal x" in spec)
apply (drule_tac x="Fault f" in spec)
apply (drule_tac x="Skip" in spec)
apply (erule impE[where P="rtranclp _ _ _"])
apply (cut_tac c="c''" and f=f in steps_Fault[where \<Gamma>=\<Gamma>])
apply fastforce
apply clarsimp
apply (erule converse_rtranclpE)
apply fastforce
apply (erule step_elim_cases)
apply (fastforce)
done
next
case (Parallel as cs \<Gamma> \<Theta> F Q A ) thus ?case
apply (fold valid_def)
apply (simp only:pre.simps)
apply (simp (no_asm) only: valid_def)
apply clarsimp
apply (rename_tac t c' x')
apply (case_tac t)
apply clarsimp
apply (drule oghoare_sound_Parallel_Normal_case[where \<Theta>=\<Theta> and Q=Q and A=A and F=F and P="AnnPar as", OF _ refl])
apply (rule oghoare_oghoare_seq.Parallel)
apply simp
apply clarsimp
apply assumption
apply (clarsimp)
apply clarsimp
apply (clarsimp simp: final_def)
apply (clarsimp )
apply clarsimp
apply clarsimp
apply (drule oghoare_sound_Parallel_Fault_case[where \<Theta>=\<Theta> and Q=Q and A=A and F=F and P="AnnPar as", OF _ ])
apply clarsimp
apply assumption
apply (rule oghoare_oghoare_seq.Parallel)
apply simp
apply clarsimp
apply assumption
apply clarsimp
apply clarsimp
apply (simp add: final_def)
apply (fastforce simp add: final_def)
apply (clarsimp simp: final_def)
apply (erule oghoare_steps_Stuck[where \<Theta>=\<Theta> and F=F and Q=Q and A=A and P="AnnPar as"])
apply simp
apply (rule oghoare_oghoare_seq.Parallel)
apply simp
apply simp
apply simp
apply clarsimp
apply clarsimp
done
next
case Skip thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases(1))
next
case Basic thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases
simp: final_def)
next
case (Spec \<Gamma> \<Theta> F r Q A) thus ?case
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp: final_def)
apply (erule step_Normal_elim_cases)
apply (fastforce elim!: converse_rtranclpE step_Normal_elim_cases)
by clarsimp
next
case (Seq \<Gamma> \<Theta> F P\<^sub>1 c\<^sub>1 P\<^sub>2 A c\<^sub>2 Q) show ?case
using Seq
by (fold valid_def)
(fastforce intro: oghoare_seq_valid simp: valid_weaken_pre)
next
case (Cond \<Gamma> \<Theta> F P\<^sub>1 c\<^sub>1 Q A P\<^sub>2 c\<^sub>2 r b) thus ?case
by (fold valid_def)
(fastforce intro:oghoare_if_valid)
next
case (While \<Gamma> \<Theta> F P c i A b Q r) thus ?case
by (fold valid_def)
(fastforce elim: valid_weaken_pre[rotated] oghoare_while_valid)
next
case Throw thus ?case
by (fastforce
elim: converse_rtranclpE step_Normal_elim_cases)
next
case (Catch \<Gamma> \<Theta> F P\<^sub>1 c\<^sub>1 Q P\<^sub>2 c\<^sub>2 A) thus ?case
apply (fold valid_def)
apply (fastforce elim: oghoare_catch_valid)+
done
next
case (Guard \<Gamma> \<Theta> F P c Q A r g f) thus ?case
apply (fold valid_def)
apply (simp)
apply (frule (1) valid_weaken_pre[rotated])
apply (simp (no_asm) add: valid_def)
apply clarsimp
apply (erule converse_rtranclpE)
apply (fastforce simp: final_def)
apply clarsimp
apply (erule step_Normal_elim_cases)
apply (case_tac "r \<inter> - g \<noteq> {}")
apply clarsimp
apply (fastforce simp: valid_def)
apply clarsimp
apply (fastforce simp: valid_def)
apply clarsimp
apply (case_tac "r \<inter> - g \<noteq> {}")
apply clarsimp
apply (fastforce dest: no_steps_final simp:final_def)
apply (clarsimp simp: ex_in_conv[symmetric])
done
next
case (DynCom r \<Gamma> \<Theta> F P c Q A) thus ?case
apply clarsimp
apply (erule converse_rtranclpE)
apply (clarsimp simp: final_def)
apply clarsimp
apply (erule step_Normal_elim_cases)
apply clarsimp
apply (rename_tac t c' x)
apply (erule_tac x=x in ballE)
apply clarsimp
apply (drule_tac x="Normal x" in spec)
apply (drule_tac x="t" in spec)
apply (drule_tac x="c'" in spec)
apply fastforce+
done
next
case (Conseq \<Gamma> \<Theta> F P c Q A) thus ?case
apply (clarsimp)
apply (rename_tac P' Q' A' t c' x)
apply (erule_tac x="Normal x" in allE)
apply (erule_tac x="t" in allE)
apply (erule_tac x="c'" in allE)
apply (clarsimp simp: pre_weaken_pre)
apply force
done
qed
lemmas oghoare_sound = oghoare_soundness[THEN conjunct1, rule_format]
lemmas oghoare_seq_sound = oghoare_soundness[THEN conjunct2, rule_format]
end
diff --git a/thys/Factored_Transition_System_Bounding/AcycSspace.thy b/thys/Factored_Transition_System_Bounding/AcycSspace.thy
--- a/thys/Factored_Transition_System_Bounding/AcycSspace.thy
+++ b/thys/Factored_Transition_System_Bounding/AcycSspace.thy
@@ -1,906 +1,900 @@
theory AcycSspace
imports
FactoredSystem
ActionSeqProcess
SystemAbstraction
Acyclicity
FmapUtils
begin
section "Acyclic State Spaces"
\<comment> \<open>NOTE name shortened.\<close>
\<comment> \<open>NOTE the type of `max` had to be fixed to the natural-number maximum (due to a problem
with loose typing).\<close>
value "(state_successors (prob_proj PROB vs))"
definition S
where "S vs lss PROB s \<equiv> wlp
(\<lambda>x y. y \<in> (state_successors (prob_proj PROB vs) x))
(\<lambda>s. problem_plan_bound (snapshot PROB s))
(max :: nat \<Rightarrow> nat \<Rightarrow> nat) (\<lambda>x y. x + y + 1) s lss
"
\<comment> \<open>NOTE name shortened.\<close>
\<comment> \<open>NOTE using `fun` because of multiple defining equations.\<close>
fun vars_change where
"vars_change [] vs s = []"
| "vars_change (a # as) vs s = (if fmrestrict_set vs (state_succ s a) \<noteq> fmrestrict_set vs s
then state_succ s a # vars_change as vs (state_succ s a)
else vars_change as vs (state_succ s a)
)"
lemma vars_change_cat:
fixes s
shows "
vars_change (as1 @ as2) vs s
= (vars_change as1 vs s @ vars_change as2 vs (exec_plan s as1))
"
by (induction as1 arbitrary: s as2 vs) auto
lemma empty_change_no_change:
fixes s
assumes "(vars_change as vs s = [])"
shows "(fmrestrict_set vs (exec_plan s as) = fmrestrict_set vs s)"
using assms
proof (induction as arbitrary: s vs)
case (Cons a as)
then show ?case
proof (cases "fmrestrict_set vs (state_succ s a) \<noteq> fmrestrict_set vs s")
case True
\<comment> \<open>NOTE This case contradicts the induction premise @{term "vars_change (a # as) vs s = []"},
since here the change list cannot be empty.\<close>
then have "state_succ s a # vars_change as vs (state_succ s a) = []"
using Cons.prems True
by simp
then show "fmrestrict_set vs (exec_plan s (a # as)) = fmrestrict_set vs s"
by blast
next
case False
then have "vars_change as vs (state_succ s a) = []"
using Cons.prems False
by force
then have
"fmrestrict_set vs (exec_plan (state_succ s a) as) = fmrestrict_set vs (state_succ s a)"
using Cons.IH[of vs "(state_succ s a)"]
by blast
then show "fmrestrict_set vs (exec_plan s (a # as)) = fmrestrict_set vs s"
using False
by simp
qed
qed auto
\<comment> \<open>NOTE renamed variable `a` to `b` to not conflict with naming for list head in induction step.\<close>
lemma zero_change_imp_all_effects_submap:
fixes s s'
assumes "(vars_change as vs s = [])" "(sat_precond_as s as)" "(ListMem b as)"
"(fmrestrict_set vs s = fmrestrict_set vs s')"
shows "(fmrestrict_set vs (snd b) \<subseteq>\<^sub>f fmrestrict_set vs s')"
using assms
proof (induction as arbitrary: s s' vs b)
case (Cons a as)
\<comment> \<open>NOTE Having either @{term "fmrestrict_set vs (state_succ s a) \<noteq> fmrestrict_set vs s"} or
@{term "\<not>ListMem b as"} leads to simpler propositions so we split here.\<close>
then show "(fmrestrict_set vs (snd b) \<subseteq>\<^sub>f fmrestrict_set vs s')"
using Cons.prems(1)
proof (cases "fmrestrict_set vs (state_succ s a) = fmrestrict_set vs s \<and> ListMem b as")
case True
let ?s="state_succ s a"
have "vars_change as vs ?s = []"
using True Cons.prems(1)
by auto
moreover have "sat_precond_as ?s as"
using Cons.prems(2) sat_precond_as.simps(2)
by blast
ultimately show ?thesis
using True Cons.prems(4) Cons.IH
by auto
next
case False
then consider
(i) "fmrestrict_set vs (state_succ s a) \<noteq> fmrestrict_set vs s"
| (ii) "\<not>ListMem b as"
by blast
then show ?thesis
using Cons.prems(1)
proof (cases)
case ii
then have "a = b"
using Cons.prems(3) ListMem_iff set_ConsD
by metis
\<comment> \<open>NOTE Mysteriously, sledgehammer finds a proof here, although the premises of
`no\_change\_vs\_eff\_submap` cannot be proven individually.\<close>
then show ?thesis
using Cons.prems(1, 2, 4) no_change_vs_eff_submap
by (metis list.distinct(1) sat_precond_as.simps(2) vars_change.simps(2))
qed simp
qed
qed (simp add: ListMem_iff)
lemma zero_change_imp_all_preconds_submap:
fixes s s'
assumes "(vars_change as vs s = [])" "(sat_precond_as s as)" "(ListMem b as)"
"(fmrestrict_set vs s = fmrestrict_set vs s')"
shows "(fmrestrict_set vs (fst b) \<subseteq>\<^sub>f fmrestrict_set vs s')"
using assms
proof (induction as arbitrary: vs s s')
case (Cons a as)
\<comment> \<open>NOTE Having either @{term "fmrestrict_set vs (state_succ s a) \<noteq> fmrestrict_set vs s"} or
@{term "\<not>ListMem b as"} leads to simpler propositions so we split here.\<close>
then show "(fmrestrict_set vs (fst b) \<subseteq>\<^sub>f fmrestrict_set vs s')"
using Cons.prems(1)
proof (cases "fmrestrict_set vs (state_succ s a) = fmrestrict_set vs s \<and> ListMem b as")
case True
let ?s="state_succ s a"
have "vars_change as vs ?s = []"
using True Cons.prems(1)
by auto
moreover have "sat_precond_as ?s as"
using Cons.prems(2) sat_precond_as.simps(2)
by blast
ultimately show ?thesis
using True Cons.prems(4) Cons.IH
by auto
next
case False
then consider
(i) "fmrestrict_set vs (state_succ s a) \<noteq> fmrestrict_set vs s"
| (ii) "\<not>ListMem b as"
by blast
then show ?thesis
using Cons.prems(1)
proof (cases)
case ii
then have "a = b"
using Cons.prems(3) ListMem_iff set_ConsD
by metis
then show ?thesis
using Cons.prems(2, 4) fmsubset_restrict_set_mono
by (metis sat_precond_as.simps(2))
qed simp
qed
qed (simp add: ListMem_iff)
lemma no_vs_change_valid_in_snapshot:
assumes "(as \<in> valid_plans PROB)" "(sat_precond_as s as)" "(vars_change as vs s = [])"
shows "(as \<in> valid_plans (snapshot PROB (fmrestrict_set vs s)))"
proof -
{
fix a
assume P: "ListMem a as"
then have "agree (fst a) (fmrestrict_set vs s)"
by (metis agree_imp_submap assms(2) assms(3) fmdom'_restrict_set
restricted_agree_imp_agree zero_change_imp_all_preconds_submap)
moreover have "agree (snd a) (fmrestrict_set vs s)"
by (metis (no_types) P agree_imp_submap assms(2) assms(3) fmdom'_restrict_set
restricted_agree_imp_agree zero_change_imp_all_effects_submap)
ultimately have "agree (fst a) (fmrestrict_set vs s)" "agree (snd a) (fmrestrict_set vs s)"
by simp+
}
then show ?thesis
using assms(1) as_mem_agree_valid_in_snapshot
by blast
qed
\<comment> \<open>NOTE type of `PROB` had to be fixed for `problem\_plan\_bound\_works`.\<close>
lemma no_vs_change_obtain_snapshot_bound_1st_step:
fixes PROB :: "'a problem"
assumes "finite PROB" "(vars_change as vs s = [])" "(sat_precond_as s as)"
"(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(\<exists>as'.
(
exec_plan (fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s) as
= exec_plan (fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s) as'
)
\<and> (subseq as' as)
\<and> (length as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s)))
)"
proof -
let ?s="(fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s)"
let ?PROB="(snapshot PROB (fmrestrict_set vs s))"
{
have "finite (snapshot PROB (fmrestrict_set vs s))"
using assms(1) FINITE_snapshot
by blast
}
moreover {
have "
fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s
\<in> valid_states (snapshot PROB (fmrestrict_set vs s))"
using assms(4) graph_plan_not_eq_last_diff_paths valid_states_snapshot
by blast
}
moreover {
have "as \<in> valid_plans (snapshot PROB (fmrestrict_set vs s))"
using assms(2, 3, 5) no_vs_change_valid_in_snapshot
by blast
}
ultimately show ?thesis
using problem_plan_bound_works[of ?PROB ?s as]
by blast
qed
\<comment> \<open>NOTE type of `PROB` had to be fixed for `no\_vs\_change\_obtain\_snapshot\_bound\_1st\_step`.\<close>
lemma no_vs_change_obtain_snapshot_bound_2nd_step:
fixes PROB :: "'a problem"
assumes "finite PROB" "(vars_change as vs s = [])" "(sat_precond_as s as)"
"(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(\<exists>as'.
(
exec_plan (fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s) as
= exec_plan (fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s) as'
)
\<and> (subseq as' as)
\<and> (sat_precond_as s as')
\<and> (length as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s)))
)"
proof -
obtain as'' where 1:
"
exec_plan (fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s) as
= exec_plan (fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s) as''"
"subseq as'' as" "length as'' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s))"
using assms no_vs_change_obtain_snapshot_bound_1st_step
by blast
let ?s'="(fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s)"
let ?as'="rem_condless_act ?s' [] as''"
have "exec_plan ?s' as = exec_plan ?s' as''"
using 1(1) rem_condless_valid_1
by blast
moreover have "subseq ?as' as"
using 1(2) rem_condless_valid_8 sublist_trans
by blast
moreover have "sat_precond_as s ?as'"
using sat_precond_drest_sat_precond rem_condless_valid_2
by fast
moreover have "(length ?as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s)))"
using 1 rem_condless_valid_3 le_trans
by blast
ultimately show ?thesis
using 1 rem_condless_valid_1
by auto
qed
lemma no_vs_change_obtain_snapshot_bound_3rd_step:
assumes "finite (PROB :: 'a problem)" "(vars_change as vs s = [])" "(no_effectless_act as)"
"(sat_precond_as s as)" "(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(\<exists>as'.
(
fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) (exec_plan s as)
= fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) (exec_plan s as')
)
\<and> (subseq as' as)
\<and> (length as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s)))
)"
proof -
obtain as' :: "(('a, bool) fmap \<times> ('a, bool) fmap) list" where
"(
exec_plan (fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s) as
= exec_plan (fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) s) as'
)" "subseq as' as" "sat_precond_as s as'"
"length as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s))"
using assms(1, 2, 4, 5, 6) no_vs_change_obtain_snapshot_bound_2nd_step
by blast
moreover have
"exec_plan (fmrestrict_set vs s) (as_proj as vs) = fmrestrict_set vs (exec_plan s as)"
using assms(4) sat_precond_exec_as_proj_eq_proj_exec
by blast
moreover have "as_proj as (prob_dom (snapshot PROB (fmrestrict_set vs s))) = as"
using assms(2, 3, 4, 6) as_proj_eq_as no_vs_change_valid_in_snapshot
by blast
ultimately show ?thesis
using sublist_as_proj_eq_as proj_exec_proj_eq_exec_proj'
by metis
qed
\<comment> \<open>NOTE added lemma.\<close>
\<comment> \<open>TODO remove unused assumptions.\<close>
lemma no_vs_change_snapshot_s_vs_is_valid_bound_i:
fixes PROB :: "'a problem"
assumes "finite PROB" "(vars_change as vs s = [])" "(no_effectless_act as)"
"(sat_precond_as s as)" "(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
"fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) (exec_plan s as) =
fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) (exec_plan s as')"
"subseq as' as" "length as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s))"
shows
"fmrestrict_set (fmdom' (exec_plan s as) - prob_dom (snapshot PROB (fmrestrict_set vs s)))
(exec_plan s as)
= fmrestrict_set (fmdom' (exec_plan s as) - prob_dom (snapshot PROB (fmrestrict_set vs s)))
s
\<and> fmrestrict_set (fmdom' (exec_plan s as') - prob_dom (snapshot PROB (fmrestrict_set vs s)))
(exec_plan s as')
= fmrestrict_set (fmdom' (exec_plan s as') - prob_dom (snapshot PROB (fmrestrict_set vs s)))
s"
proof -
let ?vs="(prob_dom (snapshot PROB (fmrestrict_set vs s)))"
let ?vs'="(fmdom' (exec_plan s as) - prob_dom (snapshot PROB (fmrestrict_set vs s)))"
let ?vs''="(fmdom' (exec_plan s as') - prob_dom (snapshot PROB (fmrestrict_set vs s)))"
let ?s="(exec_plan s as)"
let ?s'="(exec_plan s as')"
have 1: "as \<in> valid_plans (snapshot PROB (fmrestrict_set vs s))"
using assms(2, 4, 6) no_vs_change_valid_in_snapshot
by blast
{
{
fix a
assume "ListMem a as"
then have "fmdom' (snd a) \<subseteq> prob_dom (snapshot PROB (fmrestrict_set vs s))"
using 1 FDOM_eff_subset_prob_dom_pair valid_plan_mems
by metis
then have "fmdom' (fmrestrict_set (fmdom' (exec_plan s as)
- prob_dom (snapshot PROB (fmrestrict_set vs s))) (snd a))
= {}"
using subset_inter_diff_empty[of "fmdom' (snd a)"
"prob_dom (snapshot PROB (fmrestrict_set vs s))"] fmdom'_restrict_set_precise
by metis
}
then have
"fmrestrict_set ?vs' (exec_plan s as) = fmrestrict_set ?vs' s"
using disjoint_effects_no_effects[of as ?vs' s]
by blast
}
moreover {
{
fix a
assume P: "ListMem a as'"
moreover have \<alpha>: "as' \<in> valid_plans (snapshot PROB (fmrestrict_set vs s))"
using assms(8) 1 sublist_valid_plan
by blast
moreover have "a \<in> PROB"
using P \<alpha> snapshot_subset subsetCE valid_plan_mems
by fast
ultimately have "fmdom' (snd a) \<subseteq> prob_dom (snapshot PROB (fmrestrict_set vs s))"
using FDOM_eff_subset_prob_dom_pair valid_plan_mems
by metis
then have "fmdom' (fmrestrict_set (fmdom' (exec_plan s as')
- prob_dom (snapshot PROB (fmrestrict_set vs s))) (snd a))
= {}"
using subset_inter_diff_empty[of "fmdom' (snd a)"
"prob_dom (snapshot PROB (fmrestrict_set vs s))"] fmdom'_restrict_set_precise
by metis
}
then have
"fmrestrict_set ?vs'' (exec_plan s as') = fmrestrict_set ?vs'' s"
using disjoint_effects_no_effects[of as' ?vs'' s]
by blast
}
ultimately show ?thesis
by blast
qed
\<comment> \<open>NOTE type for `PROB` had to be fixed.\<close>
lemma no_vs_change_snapshot_s_vs_is_valid_bound:
fixes PROB :: "'a problem"
assumes "finite PROB" "(vars_change as vs s = [])" "(no_effectless_act as)"
"(sat_precond_as s as)" "(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' <= problem_plan_bound (snapshot PROB (fmrestrict_set vs s)))
)"
proof -
obtain as' where 1:
"fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) (exec_plan s as) =
fmrestrict_set (prob_dom (snapshot PROB (fmrestrict_set vs s))) (exec_plan s as')"
"subseq as' as" "length as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s))"
using assms no_vs_change_obtain_snapshot_bound_3rd_step
by blast
{
have a: "fmrestrict_set (fmdom' (exec_plan s as) - prob_dom (snapshot PROB (fmrestrict_set vs s)))
(exec_plan s as)
= fmrestrict_set (fmdom' (exec_plan s as) - prob_dom (snapshot PROB (fmrestrict_set vs s)))
s "
"fmrestrict_set (fmdom' (exec_plan s as') - prob_dom (snapshot PROB (fmrestrict_set vs s)))
(exec_plan s as')
= fmrestrict_set (fmdom' (exec_plan s as') - prob_dom (snapshot PROB (fmrestrict_set vs s)))
s"
using assms 1 no_vs_change_snapshot_s_vs_is_valid_bound_i
by blast+
moreover have "as' \<in> valid_plans (snapshot PROB (fmrestrict_set vs s))"
using "1"(2) assms(2) assms(4) assms(6) no_vs_change_valid_in_snapshot sublist_valid_plan
by blast
moreover have "(exec_plan s as) \<in> valid_states PROB"
using assms(5, 6) valid_as_valid_exec
by blast
moreover have "(exec_plan s as') \<in> valid_states PROB"
using assms(5, 6) 1 valid_as_valid_exec sublist_valid_plan
by blast
ultimately have "exec_plan s as = exec_plan s as'"
using assms
unfolding valid_states_def
using graph_plan_lemma_5[where vs="prob_dom (snapshot PROB (fmrestrict_set vs s))", OF _ 1(1)]
by force
}
then show ?thesis
using 1
by blast
qed
\<comment> \<open>TODO showcase (problems with stronger typing): Isabelle requires strict typing for `max`,
whereas in HOL4 this is not required, possibly because 'MAX' is specific to natural numbers.\<close>
lemma snapshot_bound_leq_S:
shows "
problem_plan_bound (snapshot PROB (fmrestrict_set vs s))
\<le> S vs lss PROB (fmrestrict_set vs s)
"
proof -
have "geq_arg (max :: nat \<Rightarrow> nat \<Rightarrow> nat)"
unfolding geq_arg_def
using max.cobounded1
by simp
then show ?thesis
unfolding S_def
using individual_weight_less_eq_lp[where
g="max :: nat \<Rightarrow> nat \<Rightarrow> nat"
and x="(fmrestrict_set vs s)" and R="(\<lambda>x y. y \<in> state_successors (prob_proj PROB vs) x)"
and w="(\<lambda>s. problem_plan_bound (snapshot PROB s))" and f="(\<lambda>x y. x + y + 1)" and l=lss]
by blast
qed
\<comment> \<open>NOTE first argument of `top\_sorted\_abs` had to be wrapped into lambda.\<close>
\<comment> \<open>NOTE the type of `1` had to be restricted to `nat` to ensure the proofs for `geq\_arg` work.\<close>
lemma S_geq_S_succ_plus_ell:
assumes "(s \<in> valid_states PROB)"
"(top_sorted_abs (\<lambda>x y. y \<in> state_successors (prob_proj PROB vs) x) lss)"
"(s' \<in> state_successors (prob_proj PROB vs) s)" "(set lss = valid_states (prob_proj PROB vs))"
shows "(
problem_plan_bound (snapshot PROB (fmrestrict_set vs s))
+ S vs lss PROB (fmrestrict_set vs s')
+ (1 :: nat)
\<le> S vs lss PROB (fmrestrict_set vs s)
)"
proof -
let ?f="\<lambda>x y. x + y + (1 :: nat)"
let ?R="(\<lambda>x y. y \<in> state_successors (prob_proj PROB vs) x)"
let ?w="(\<lambda>s. problem_plan_bound (snapshot PROB s))"
let ?g="max :: nat \<Rightarrow> nat \<Rightarrow> nat"
let ?vtx1="(fmrestrict_set vs s')"
let ?G="lss"
let ?vtx2="(fmrestrict_set vs s)"
have "geq_arg ?f"
unfolding geq_arg_def
by simp
moreover have "geq_arg ?g"
unfolding geq_arg_def
by simp
moreover have "\<forall>x. ListMem x lss \<longrightarrow> \<not>?R x x"
unfolding state_successors_def
by blast
moreover have "?R ?vtx2 ?vtx1"
unfolding state_successors_def
using assms(3) state_in_successor_proj_in_state_in_successor state_successors_def
by blast
moreover have
"ListMem ?vtx1 ?G"
using assms(1, 3, 4)
by (metis ListMem_iff contra_subsetD graph_plan_not_eq_last_diff_paths proj_successors_of_valid_are_valid)
moreover have "top_sorted_abs ?R ?G"
using assms(2)
by simp
ultimately show ?thesis
unfolding S_def
using lp_geq_lp_from_successor[of ?f ?g ?G ?R ?vtx2 ?vtx1 ?w]
by blast
qed
lemma vars_change_cons:
fixes s s'
assumes "(vars_change as vs s = (s' # ss))"
shows "(\<exists>as1 act as2.
(as = as1 @ (act # as2))
\<and> (vars_change as1 vs s = [])
\<and> (state_succ (exec_plan s as1) act = s')
\<and> (vars_change as2 vs (state_succ (exec_plan s as1) act) = ss)
)"
using assms
proof (induction as arbitrary: s s' vs ss)
case (Cons a as)
then show ?case
proof (cases "fmrestrict_set vs (state_succ s a) \<noteq> fmrestrict_set vs s")
case True
then have "state_succ s a = s'" "vars_change as vs (state_succ s a) = ss"
using Cons.prems
by simp+
then show ?thesis
by fastforce
next
case False
then have "vars_change as vs (state_succ s a) = s' # ss"
using Cons.prems
by simp
then obtain as1 act as2 where
"as = as1 @ act # as2" "vars_change as1 vs (state_succ s a) = []"
"state_succ (exec_plan (state_succ s a) as1) act = s'"
"vars_change as2 vs (state_succ (exec_plan (state_succ s a) as1) act) = ss"
using Cons.IH
by blast
then show ?thesis
by (metis False append_Cons exec_plan.simps(2) vars_change.simps(2))
qed
qed simp
lemma vars_change_cons_2:
fixes s s'
assumes "(vars_change as vs s = (s' # ss))"
shows "(fmrestrict_set vs s' \<noteq> fmrestrict_set vs s)"
using assms
apply(induction as arbitrary: s s' vs ss)
apply(auto)
by (metis list.inject)
\<comment> \<open>NOTE first argument of `top\_sorted\_abs` had to be wrapped into lambda.\<close>
lemma problem_plan_bound_S_bound_1st_step:
fixes PROB :: "'a problem"
assumes "finite PROB" "(top_sorted_abs (\<lambda>x y. y \<in> state_successors (prob_proj PROB vs) x) lss)"
"(set lss = valid_states (prob_proj PROB vs))" "(s \<in> valid_states PROB)"
"(as \<in> valid_plans PROB)" "(no_effectless_act as)" "(sat_precond_as s as)"
shows "(\<exists>as'.
(exec_plan s as' = exec_plan s as)
\<and> (subseq as' as)
\<and> (length as' <= S vs lss PROB (fmrestrict_set vs s))
)"
using assms
proof (induction "vars_change as vs s" arbitrary: PROB as vs s lss)
case Nil
then obtain as' where
"exec_plan s as = exec_plan s as'" "subseq as' as"
"length as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s))"
using Nil(1) Nil.prems(1,4,5,6,7) no_vs_change_snapshot_s_vs_is_valid_bound
by metis
moreover have "
problem_plan_bound (snapshot PROB (fmrestrict_set vs s))
\<le> S vs lss PROB (fmrestrict_set vs s)
"
using snapshot_bound_leq_S le_trans
by fast
ultimately show ?case
using le_trans
by fastforce
next
case (Cons s' ss)
then obtain as1 act as2 where 1:
"as = as1 @ act # as2" "vars_change as1 vs s = []" "state_succ (exec_plan s as1) act = s'"
"vars_change as2 vs (state_succ (exec_plan s as1) act) = ss"
using vars_change_cons
by smt
text\<open> Obtain the conclusion of the induction hypothesis for 'as2' and
'(state\_succ (exec\_plan s as1) act)'. \<close>
{
{
have "as1 \<in> valid_plans PROB"
using Cons.prems(5) 1(1) valid_append_valid_pref
by blast
moreover have "act \<in> PROB"
using Cons.prems(5) 1 valid_append_valid_suff valid_plan_valid_head
by fast
ultimately have "state_succ (exec_plan s as1) act \<in> valid_states PROB"
using Cons.prems(4) valid_as_valid_exec lemma_1_i
by blast
}
moreover have "as2 \<in> valid_plans PROB"
using Cons.prems(5) 1(1) valid_append_valid_suff valid_plan_valid_tail
by fast
moreover have "no_effectless_act as2"
using Cons.prems(6) 1(1) rem_effectless_works_13 sublist_append_back
by blast
moreover have "sat_precond_as (state_succ (exec_plan s as1) act) as2"
using Cons.prems(7) 1(1) graph_plan_lemma_17 sat_precond_as.simps(2)
by blast
ultimately have "\<exists>as'.
exec_plan (state_succ (exec_plan s as1) act) as'
= exec_plan (state_succ (exec_plan s as1) act) as2
\<and> subseq as' as2
\<and> length as' \<le> S vs lss PROB (fmrestrict_set vs (state_succ (exec_plan s as1) act))"
using Cons.prems(1, 2, 3) 1(4)
Cons(1)[where as="as2" and s="(state_succ (exec_plan s as1) act)"]
by blast
}
note a=this
{
have "no_effectless_act as1"
using Cons.prems(6) 1(1) rem_effectless_works_12
by blast
moreover have "sat_precond_as s as1"
using Cons.prems(7) 1(1) sat_precond_as_pfx
by blast
moreover have "as1 \<in> valid_plans PROB"
using Cons.prems(5) 1(1) valid_append_valid_pref
by blast
ultimately have "\<exists>as'. exec_plan s as1 = exec_plan s as' \<and>
subseq as' as1 \<and> length as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s))"
using no_vs_change_snapshot_s_vs_is_valid_bound[of _ as1]
using Cons.prems(1, 4) 1(2)
by blast
}
then obtain as'' where b:
"exec_plan s as1 = exec_plan s as''" "subseq as'' as1"
"length as'' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s))"
by blast
{
obtain as' where i:
"exec_plan (state_succ (exec_plan s as1) act) as'
= exec_plan (state_succ (exec_plan s as1) act) as2"
"subseq as' as2"
"length as' \<le> S vs lss PROB (fmrestrict_set vs (state_succ (exec_plan s as1) act))"
using a
by blast
let ?as'="as'' @ act # as'"
have "exec_plan s ?as' = exec_plan s as"
using 1(1) b(1) i(1) exec_plan_Append exec_plan.simps(2)
by metis
moreover have "subseq ?as' as"
using 1(1) b(2) i(2) subseq_append_iff
by blast
moreover
{
{
\<comment> \<open>NOTE this is proved earlier in the original proof script. Moved here to improve
transparency.\<close>
have "sat_precond_as (exec_plan s as1) (act # as2)"
using empty_replace_proj_dual7
using 1(1) Cons.prems(7)
by blast
then have "fst act \<subseteq>\<^sub>f (exec_plan s as1)"
by simp
}
note A = this
{
have
"fmrestrict_set vs (state_succ (exec_plan s as1) act)
= (state_succ (fmrestrict_set vs (exec_plan s as'')) (action_proj act vs))"
using b(1) A drest_succ_proj_eq_drest_succ[where s="exec_plan s as1", symmetric]
by simp
also have "\<dots> = (state_succ (fmrestrict_set vs s) (action_proj act vs))"
using 1(2) b(1) empty_change_no_change
by fastforce
finally have "\<dots> = fmrestrict_set vs (state_succ s (action_proj act vs))"
using succ_drest_eq_drest_succ
by blast
}
note B = this
have C: "fmrestrict_set vs (exec_plan s as'') = fmrestrict_set vs s"
using 1(2) b(1) empty_change_no_change
by fastforce
{
have "act \<in> PROB"
using Cons.prems(5) 1 valid_append_valid_suff valid_plan_valid_head
by fast
then have \<aleph>: "action_proj act vs \<in> prob_proj PROB vs"
using action_proj_in_prob_proj
by blast
then have "(state_succ s (action_proj act vs)) \<in> (state_successors (prob_proj PROB vs) s)"
proof (cases "fst (action_proj act vs) \<subseteq>\<^sub>f s")
case True
then show ?thesis
unfolding state_successors_def
using Cons.hyps(2) 1(3) b(1) A B C \<aleph> DiffI imageI singletonD vars_change_cons_2
drest_succ_proj_eq_drest_succ
by metis
next
case False
then show ?thesis
unfolding state_successors_def
using Cons.hyps(2) 1(3) b(1) A B C \<aleph> DiffI imageI singletonD
drest_succ_proj_eq_drest_succ vars_change_cons_2
by metis
qed
}
then have D:
"problem_plan_bound (snapshot PROB (fmrestrict_set vs s))
+ S vs lss PROB (fmrestrict_set vs (state_succ s (action_proj act vs)))
+ 1
\<le> S vs lss PROB (fmrestrict_set vs s)"
using Cons.prems(2, 3, 4) S_geq_S_succ_plus_ell[where s'="state_succ s (action_proj act vs)"]
by blast
{
have
"length ?as' \<le> problem_plan_bound (snapshot PROB (fmrestrict_set vs s))
+ 1 + S vs lss PROB (fmrestrict_set vs (state_succ (exec_plan s as1) act))"
using b i
by fastforce
then have "length ?as' \<le> S vs lss PROB (fmrestrict_set vs s)"
using b(1) A B C D drest_succ_proj_eq_drest_succ
by (smt Suc_eq_plus1 add_Suc dual_order.trans)
}
}
ultimately have ?case
by blast
}
then show ?case
by blast
qed
\<comment> \<open>NOTE first argument of `top\_sorted\_abs` had to be wrapped into lambda.\<close>
lemma problem_plan_bound_S_bound_2nd_step:
assumes "finite (PROB :: 'a problem)"
"(top_sorted_abs (\<lambda>x y. y \<in> state_successors (prob_proj PROB vs) x) lss)"
"(set lss = valid_states (prob_proj PROB vs))" "(s \<in> valid_states PROB)"
"(as \<in> valid_plans PROB)"
shows "(\<exists>as'.
(exec_plan s as' = exec_plan s as)
\<and> (subseq as' as)
\<and> (length as' \<le> S vs lss PROB (fmrestrict_set vs s))
)"
proof -
\<comment> \<open>NOTE Prove the premises and obtain the conclusion of `problem\_plan\_bound\_S\_bound\_1st\_step`.\<close>
{
have a: "rem_condless_act s [] (rem_effectless_act as) \<in> valid_plans PROB"
using assms(5) rem_effectless_works_4' rem_condless_valid_10
by blast
then have b: "no_effectless_act (rem_condless_act s [] (rem_effectless_act as))"
using assms rem_effectless_works_6 rem_condless_valid_9
by fast
then have "sat_precond_as s (rem_condless_act s [] (rem_effectless_act as))"
using assms rem_condless_valid_2
by blast
then have "\<exists>as'.
exec_plan s as' = exec_plan s (rem_condless_act s [] (rem_effectless_act as))
\<and> subseq as' (rem_condless_act s [] (rem_effectless_act as))
\<and> length as' \<le> S vs lss PROB (fmrestrict_set vs s)
"
using assms a b problem_plan_bound_S_bound_1st_step
by blast
}
then obtain as' where 1:
"exec_plan s as' = exec_plan s (rem_condless_act s [] (rem_effectless_act as))"
"subseq as' (rem_condless_act s [] (rem_effectless_act as))"
"length as' \<le> S vs lss PROB (fmrestrict_set vs s)"
by blast
then have 2: "exec_plan s as' = exec_plan s as"
using rem_condless_valid_1 rem_effectless_works_14
by metis
then have "subseq as' as"
using 1(2) rem_condless_valid_8 rem_effectless_works_9 sublist_trans
by metis
then show ?thesis
using 1(3) 2
by blast
qed
\<comment> \<open>NOTE first argument of `top\_sorted\_abs` had to be wrapped into lambda.\<close>
lemma S_in_MPLS_leq_2_pow_n:
assumes "finite (PROB :: 'a problem)"
"(top_sorted_abs (\<lambda> x y. y \<in> state_successors (prob_proj PROB vs) x) lss)"
"(set lss = valid_states (prob_proj PROB vs))" "(s \<in> valid_states PROB)"
"(as \<in> valid_plans PROB)"
shows "(\<exists>as'.
(exec_plan s as' = exec_plan s as)
\<and> (subseq as' as)
\<and> (length as' \<le> Sup {S vs lss PROB s' | s'. s' \<in> valid_states (prob_proj PROB vs)})
)"
proof -
obtain as' where
"exec_plan s as' = exec_plan s as" "subseq as' as"
"length as' \<le> S vs lss PROB (fmrestrict_set vs s)"
using assms problem_plan_bound_S_bound_2nd_step
by blast
moreover {
\<comment> \<open>NOTE Derive sufficient conditions for inferring that `S vs lss PROB (fmrestrict\_set vs s)` is less
than or equal to the supremum of the set @{term "{S vs lss PROB s' | s'. s' \<in> valid_states (prob_proj PROB vs)}"}:
namely, that it is a member of the set and that the set is finite (so the supremum is attained).\<close>
let ?S="{S vs lss PROB s' | s'. s' \<in> valid_states (prob_proj PROB vs)}"
{
have "fmrestrict_set vs s \<in> valid_states (prob_proj PROB vs)"
using assms(4) graph_plan_not_eq_last_diff_paths
by blast
then have "S vs lss PROB (fmrestrict_set vs s) \<in> ?S"
using calculation(1)
by blast
}
- note 1 = this
+ moreover
{
have "finite (prob_proj PROB vs)"
- unfolding prob_proj_def valid_states_def
- using assms(1)
- by simp
+ by (simp add: assms(1) prob_proj_def)
then have "finite ?S"
using Setcompr_eq_image assms(3)
by (metis List.finite_set finite_imageI)
}
- then have "S vs lss PROB (fmrestrict_set vs s) \<le> Max ?S"
- using 1 Max.coboundedI
- by blast
- then have "S vs lss PROB (fmrestrict_set vs s) \<le> Sup ?S"
- using Sup_nat_def
- by presburger
+ ultimately have "S vs lss PROB (fmrestrict_set vs s) \<le> Sup ?S"
+ using le_cSup_finite by blast
}
ultimately show ?thesis
using le_trans
by blast
qed
\<comment> \<open>NOTE first argument of `top\_sorted\_abs` had to be wrapped into lambda.\<close>
lemma problem_plan_bound_S_bound:
fixes PROB :: "'a problem"
assumes "finite PROB" "(top_sorted_abs (\<lambda>x y. y \<in> state_successors (prob_proj PROB vs) x) lss)"
"(set lss = valid_states (prob_proj PROB vs))"
shows "
problem_plan_bound PROB
\<le> Sup {S vs lss PROB (s' :: 'a state) | s'. s' \<in> valid_states (prob_proj PROB vs)}
"
proof -
let ?f="\<lambda>PROB.
Sup {S vs lss PROB (s' :: 'a state) | s'. s' \<in> valid_states (prob_proj PROB vs)} + 1"
{
fix as and s :: "'a state"
assume "s \<in> valid_states PROB" "as \<in> valid_plans PROB"
then obtain as' where a:
"exec_plan s as' = exec_plan s as" "subseq as' as"
"length as' \<le> Sup {S vs lss PROB s' |s'. s' \<in> valid_states (prob_proj PROB vs)}"
using assms S_in_MPLS_leq_2_pow_n
by blast
then have "length as' < ?f PROB"
by linarith
moreover have "exec_plan s as = exec_plan s as'"
using a(1)
by simp
ultimately have
"\<exists>as'. exec_plan s as = exec_plan s as' \<and> subseq as' as \<and> length as' < ?f PROB"
using a(2)
by blast
}
then show ?thesis
using assms(1) problem_plan_bound_UBound[where f="?f"]
by fastforce
qed
subsection "State Space Acyclicity"
text \<open> State space acyclicity is again formalized using graphs to model the state space. However,
the relation inducing the graph is the successor relation on states. [Abdulaziz et al.,
Definition 15, HOL4 Definition 15, p.27]
With this, the acyclic system compositional bound `S` can be shown to be an upper bound on the
sublist diameter (theorem `problem\_plan\_bound\_S\_bound\_thesis`). [Abdulaziz et al., p.29] \<close>
\<comment> \<open>NOTE name shortened.\<close>
\<comment> \<open>NOTE first argument of 'top\_sorted\_abs' had to be wrapped into lambda.\<close>
definition sspace_DAG where
"sspace_DAG PROB lss \<equiv> (
(set lss = valid_states PROB)
\<and> (top_sorted_abs (\<lambda>x y. y \<in> state_successors PROB x) lss)
)"
lemma problem_plan_bound_S_bound_2nd_step_thesis:
assumes "finite (PROB :: 'a problem)" "(sspace_DAG (prob_proj PROB vs) lss)"
"(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(\<exists>as'. (exec_plan s as' = exec_plan s as)
\<and> (subseq as' as)
\<and> (length as' \<le> S vs lss PROB (fmrestrict_set vs s))
)"
using assms problem_plan_bound_S_bound_2nd_step sspace_DAG_def
by fast
text \<open>And finally, this is the main lemma about the upper bounding algorithm.\<close>
theorem problem_plan_bound_S_bound_thesis:
assumes "finite (PROB :: 'a problem)" "(sspace_DAG (prob_proj PROB vs) lss)"
shows "(
problem_plan_bound PROB
\<le> Sup {S vs lss PROB s' | s'. s' \<in> valid_states (prob_proj PROB vs)}
)"
using assms problem_plan_bound_S_bound sspace_DAG_def
by fast
end
\ No newline at end of file
diff --git a/thys/Factored_Transition_System_Bounding/TopologicalProps.thy b/thys/Factored_Transition_System_Bounding/TopologicalProps.thy
--- a/thys/Factored_Transition_System_Bounding/TopologicalProps.thy
+++ b/thys/Factored_Transition_System_Bounding/TopologicalProps.thy
@@ -1,2352 +1,2352 @@
theory TopologicalProps
imports Main FactoredSystem ActionSeqProcess SetUtils
begin
section "Topological Properties"
subsection "Basic Definitions and Properties"
definition PLS_charles where
"PLS_charles s as PROB \<equiv> {length as' | as'.
(as' \<in> valid_plans PROB) \<and> (exec_plan s as' = exec_plan s as)}"
definition MPLS_charles where
"MPLS_charles PROB \<equiv> {Inf (PLS_charles (fst p) (snd p) PROB) | p.
((fst p) \<in> valid_states PROB)
\<and> ((snd p) \<in> valid_plans PROB)
}"
\<comment> \<open>NOTE name shortened to 'problem\_plan\_bound\_charles'.\<close>
definition problem_plan_bound_charles where
"problem_plan_bound_charles PROB \<equiv> Sup (MPLS_charles PROB)"
\<comment> \<open>NOTE name shortened to 'PLS\_state\_1'.\<close>
definition PLS_state_1 where
"PLS_state_1 s as \<equiv> length ` {as'. (exec_plan s as' = exec_plan s as)}"
\<comment> \<open>NOTE name shortened to 'MPLS\_stage\_1'.\<close>
definition MPLS_stage_1 where
"MPLS_stage_1 PROB \<equiv>
(\<lambda> (s, as). Inf (PLS_state_1 s as))
` {(s, as). (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)}
"
\<comment> \<open>NOTE name shortened to 'problem\_plan\_bound\_stage\_1'.\<close>
definition problem_plan_bound_stage_1 where
"problem_plan_bound_stage_1 PROB \<equiv> Sup (MPLS_stage_1 PROB)"
for PROB :: "'a problem"
\<comment> \<open>NOTE name shortened.\<close>
definition PLS where
"PLS s as \<equiv> length ` {as'. (exec_plan s as' = exec_plan s as) \<and> (subseq as' as)}"
\<comment> \<open>NOTE added lemma.\<close>
\<comment> \<open>NOTE proof that `PLS` is finite, for use in the proof of 'in\_MPLS\_leq\_2\_pow\_n\_i'.\<close>
lemma finite_PLS: "finite (PLS s as)"
proof -
let ?S = "{as'. (exec_plan s as' = exec_plan s as) \<and> (subseq as' as)}"
let ?S1 = "length ` {as'. (exec_plan s as' = exec_plan s as) }"
let ?S2 = "length ` {as'. (subseq as' as)}"
let ?n = "length as + 1"
have "finite ?S2"
using bounded_nat_set_is_finite[where n = ?n and N = ?S2]
by fastforce
moreover have "length ` ?S \<subseteq> (?S1 \<inter> ?S2)"
by blast
ultimately have "finite (length ` ?S)"
using infinite_super
by auto
then show ?thesis
unfolding PLS_def
by blast
qed
\<comment> \<open>NOTE name shortened.\<close>
definition MPLS where
"MPLS PROB \<equiv>
(\<lambda> (s, as). Inf (PLS s as))
` {(s, as). (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)}
"
\<comment> \<open>NOTE name shortened.\<close>
definition problem_plan_bound where
"problem_plan_bound PROB \<equiv> Sup (MPLS PROB)"
lemma expanded_problem_plan_bound_thm_1:
fixes PROB
shows "
(problem_plan_bound PROB) = Sup (
(\<lambda>(s,as). Inf (PLS s as)) `
{(s, as). (s \<in> (valid_states PROB)) \<and> (as \<in> valid_plans PROB)}
)
"
unfolding problem_plan_bound_def MPLS_def
by blast
lemma expanded_problem_plan_bound_thm:
fixes PROB :: "(('a, 'b) fmap \<times> ('a, 'b) fmap) set"
shows "
problem_plan_bound PROB = Sup ({Inf (PLS s as) | s as.
(s \<in> valid_states PROB)
\<and> (as \<in> valid_plans PROB)
})
"
proof -
{
have "(
{Inf (PLS s as) | s as. (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)}
) = ((\<lambda>(s, as). Inf (PLS s as)) ` {(s, as).
(s \<in> valid_states PROB)
\<and> (as \<in> valid_plans PROB)
})
"
by fast
also have "\<dots> =
(\<lambda>(s, as). Inf (PLS s as)) `
({s. fmdom' s = prob_dom PROB} \<times> {as. set as \<subseteq> PROB})
"
unfolding valid_states_def valid_plans_def
by simp
finally have "
Sup ({Inf (PLS s as) | s as. (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)})
= Sup (
(\<lambda>(s, as). Inf (PLS s as)) `
({s. fmdom' s = prob_dom PROB} \<times> {as. set as \<subseteq> PROB})
)
"
by argo
}
moreover have "
problem_plan_bound PROB
=
Sup ((\<lambda>(s, as). Inf (PLS s as)) `
({s. fmdom' s = prob_dom PROB} \<times> {as. set as \<subseteq> PROB}))
"
unfolding problem_plan_bound_def MPLS_def valid_states_def valid_plans_def
by fastforce
ultimately show "
problem_plan_bound PROB
= Sup ({Inf (PLS s as) | s as.
(s \<in> valid_states PROB)
\<and> (as \<in> valid_plans PROB)
})
"
by argo
qed
subsection "Recurrence Diameter"
text \<open> The recurrence diameter---defined as the length of the longest simple path in the digraph
modelling the state space---provides a loose upper bound on the system diameter. [Abdulaziz et al.,
Definition 9, p.15] \<close>
\<comment> \<open>NOTE name shortened.\<close>
\<comment> \<open>NOTE 'fun' because of multiple defining equations and pattern matching.\<close>
fun valid_path where
"valid_path Pi [] = True"
| "valid_path Pi [s] = (s \<in> valid_states Pi)"
| "valid_path Pi (s1 # s2 # rest) = (
(s1 \<in> valid_states Pi)
\<and> (\<exists>a. (a \<in> Pi) \<and> (exec_plan s1 [a] = s2))
\<and> (valid_path Pi (s2 # rest))
)"
lemma valid_path_ITP2015: "
(valid_path Pi [] \<longleftrightarrow> True)
\<and> (valid_path Pi [s] \<longleftrightarrow> (s \<in> valid_states Pi))
\<and> (valid_path Pi (s1 # s2 # rest) \<longleftrightarrow>
(s1 \<in> valid_states Pi)
\<and> (\<exists>a.
(a \<in> Pi)
\<and> (exec_plan s1 [a] = s2)
)
\<and> (valid_path Pi (s2 # rest))
)
"
using valid_states_def
by simp
\<comment> \<open>NOTE name shortened.\<close>
\<comment> \<open>NOTE second declaration skipped (declared twice in source).\<close>
definition RD where
"RD Pi \<equiv> (Sup {length p - 1 | p. valid_path Pi p \<and> distinct p})"
for Pi :: "'a problem"
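text \<open> A sketch of the intended reading of `RD` in conventional notation, with paths as defined by
`valid\_path` above: the recurrence diameter is
\[
  rd(\delta) \;=\; \sup \{\, |p| - 1 \;\mid\; p \text{ a valid path in } \delta
    \text{ with pairwise distinct states} \,\},
\]
i.e. the number of edges on a longest simple path through the state space. \<close>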
lemma in_PLS_leq_2_pow_n:
fixes PROB :: "'a problem" and s :: "'a state" and as
assumes "finite PROB" "(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(\<exists>x.
(x \<in> PLS s as)
\<and> (x \<le> (2 ^ card (prob_dom PROB)) - 1)
)"
proof -
obtain as' where 1:
"exec_plan s as = exec_plan s as'" "subseq as' as" "length as' \<le> 2 ^ card (prob_dom PROB) - 1"
using assms main_lemma
by blast
let ?x="length as'"
have "?x \<in> PLS s as"
unfolding PLS_def
using 1
by simp
moreover have "?x \<le> 2 ^ card (prob_dom PROB) - 1"
using 1(3)
by blast
ultimately show "(\<exists>x.
(x \<in> PLS s as)
\<and> (x \<le> (2 ^ card (prob_dom PROB)) - 1)
)"
unfolding PLS_def
by blast
qed
lemma in_MPLS_leq_2_pow_n:
fixes PROB :: "'a problem" and x
assumes "finite PROB" "(x \<in> MPLS PROB)"
shows "(x \<le> 2 ^ card (prob_dom PROB) - 1)"
proof -
let ?mpls = "MPLS PROB"
\<comment> \<open>NOTE obtain p = (s, as) where 'x = Inf (PLS s as)' from premise.\<close>
have "?mpls =
(\<lambda> (s, as). Inf (PLS s as)) `
{(s, as). (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)}
"
using MPLS_def
by blast
then obtain s :: "('a, bool) fmap" and as :: "(('a, bool) fmap \<times> ('a, bool) fmap) list"
where obtain_s_as: "x \<in>
((\<lambda> (s, as). Inf (PLS s as)) `
{(s, as). (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)})
"
using assms(2)
by blast
then have
"x \<in> {Inf (PLS (fst p) (snd p)) | p. (fst p \<in> valid_states PROB) \<and> (snd p \<in> valid_plans PROB)}"
using assms(1) obtain_s_as
by auto
then have
"\<exists> p. x = Inf (PLS (fst p) (snd p)) \<and> (fst p \<in> valid_states PROB) \<and> (snd p \<in> valid_plans PROB)"
by blast
then obtain p :: "('a, bool) fmap \<times> (('a, bool) fmap \<times> ('a, bool) fmap) list" where obtain_p:
"x = Inf (PLS (fst p) (snd p))" "(fst p \<in> valid_states PROB)" "(snd p \<in> valid_plans PROB)"
by blast
then have "fst p \<in> valid_states PROB" "snd p \<in> valid_plans PROB"
using obtain_p
by blast+
then obtain x' :: nat where obtain_x':
"x' \<in> PLS (fst p) (snd p) \<and> x' \<le> 2 ^ card (prob_dom PROB) - 1"
using assms(1) in_PLS_leq_2_pow_n[where s = "fst p" and as = "snd p"]
by blast
then have 1: "x' \<le> 2 ^ card (prob_dom PROB) - 1" "x' \<in> PLS (fst p) (snd p)"
"x = Inf (PLS (fst p) (snd p))" "finite (PLS (fst p) (snd p))"
using obtain_x' obtain_p finite_PLS
by blast+
moreover have "x \<le> x'"
using 1(2, 4) obtain_p(1) cInf_le_finite
by blast
ultimately show "(x \<le> 2 ^ card (prob_dom PROB) - 1)"
by linarith
qed
lemma FINITE_MPLS:
assumes "finite (Pi :: 'a problem)"
shows "finite (MPLS Pi)"
proof -
have "\<forall>x \<in> MPLS Pi. x \<le> 2 ^ card (prob_dom Pi) - 1"
using assms in_MPLS_leq_2_pow_n
by blast
then show "finite (MPLS Pi)"
using mems_le_finite[of "MPLS Pi" "2 ^ card (prob_dom Pi) - 1"]
by blast
qed
\<comment> \<open>NOTE 'fun' because of multiple defining equations.\<close>
fun statelist' where
"statelist' s [] = [s]"
| "statelist' s (a # as) = (s # statelist' (state_succ s a) as)"
lemma LENGTH_statelist':
fixes as s
shows "length (statelist' s as) = (length as + 1)"
by (induction as arbitrary: s) auto
lemma valid_path_statelist':
fixes as and s :: "('a, 'b) fmap"
assumes "(as \<in> valid_plans Pi)" "(s \<in> valid_states Pi)"
shows "(valid_path Pi (statelist' s as))"
using assms
proof (induction as arbitrary: s Pi)
case cons: (Cons a as)
then have 1: "a \<in> Pi" "as \<in> valid_plans Pi"
using valid_plan_valid_head valid_plan_valid_tail
by metis+
then show ?case
proof (cases as)
case Nil
{
have "state_succ s a \<in> valid_states Pi"
using 1 cons.prems(2) valid_action_valid_succ
by blast
then have "valid_path Pi [state_succ s a]"
using 1 cons.prems(2) cons.IH
by force
moreover have "(\<exists>aa. aa \<in> Pi \<and> exec_plan s [aa] = state_succ s a)"
using 1(1)
by fastforce
ultimately have "valid_path Pi (statelist' s [a])"
using cons.prems(2)
by simp
}
then show ?thesis
using Nil
by blast
next
case (Cons b list)
{
have "s \<in> valid_states Pi"
using cons.prems(2)
by simp
\<comment> \<open>TODO this step is inefficient (~5s).\<close>
then have
"valid_path Pi (state_succ s a # statelist' (state_succ (state_succ s a) b) list)"
using 1 cons.IH cons.prems(2) Cons lemma_1_i
by fastforce
moreover have
"(\<exists>aa b. (aa, b) \<in> Pi \<and> state_succ s (aa, b) = state_succ s a)"
using 1(1) surjective_pairing
by metis
ultimately have "valid_path Pi (statelist' s (a # b # list))"
using cons.prems(2)
by auto
}
then show ?thesis
using Cons
by blast
qed
qed simp
\<comment> \<open>TODO explicit proof.\<close>
lemma statelist'_exec_plan:
fixes a s p
assumes "(statelist' s as = p)"
shows "(exec_plan s as = last p)"
using assms
apply(induction as arbitrary: s p)
apply(auto)
apply(cases "as")
by
(metis LENGTH_statelist' One_nat_def add_Suc_right list.size(3) nat.simps(3))
(metis (no_types) LENGTH_statelist' One_nat_def add_Suc_right list.size(3) nat.simps(3))
lemma statelist'_EQ_NIL: "statelist' s as \<noteq> []"
by (cases as) auto
\<comment> \<open>NOTE added lemma.\<close>
lemma statelist'_TAKE_i:
assumes "Suc m \<le> length (a # as)"
shows "m \<le> length as"
using assms
by (induction as arbitrary: a m) auto
lemma statelist'_TAKE:
fixes as s p
assumes "(statelist' s as = p)"
shows "(\<forall>n. n \<le> length as \<longrightarrow> (exec_plan s (take n as)) = (p ! n))"
using assms
proof (induction as arbitrary: s p)
case Nil
{
fix n
assume P1: "n \<le> length []"
then have "exec_plan s (take n []) = s"
by simp
moreover have "p ! 0 = s"
using Nil.prems
by force
ultimately have "exec_plan s (take n []) = p ! n"
using P1
by simp
}
then show ?case by blast
next
case (Cons a as)
{
fix n
assume P2: "n \<le> length (a # as)"
then have "exec_plan s (take n (a # as)) = p ! n"
using Cons.prems
proof (cases "n = 0")
case False
then obtain m where a: "n = Suc m"
using not0_implies_Suc
by presburger
moreover have b: "statelist' s (a # as) ! n = statelist' (state_succ s a) as ! m"
using a nth_Cons_Suc
by simp
moreover have c: "exec_plan s (take n (a # as)) = exec_plan (state_succ s a) (take m as)"
using a
by force
moreover have "m \<le> length as"
using a P2 statelist'_TAKE_i
by simp
moreover have
"exec_plan (state_succ s a) (take m as) = statelist' (state_succ s a) as ! m"
using calculation(2, 3, 4) Cons.IH
by blast
ultimately show ?thesis
using Cons.prems
by argo
qed fastforce
}
then show ?case by blast
qed
lemma MPLS_nempty:
fixes PROB :: "(('a, 'b) fmap \<times> ('a, 'b) fmap) set"
assumes "finite PROB"
shows "MPLS PROB \<noteq> {}"
proof -
let ?S="{(s, as). s \<in> valid_states PROB \<and> as \<in> valid_plans PROB}"
\<comment> \<open>NOTE type of 's' had to be fixed for 'valid\_states\_nempty'.\<close>
obtain s :: "('a, 'b) fmap" where "s \<in> valid_states PROB"
using assms valid_states_nempty
by blast
moreover have "[] \<in> valid_plans PROB"
using empty_plan_is_valid
by auto
ultimately have "(s, []) \<in> ?S"
by blast
then show ?thesis
unfolding MPLS_def
by blast
qed
theorem bound_main_lemma:
fixes PROB :: "'a problem"
assumes "finite PROB"
shows "(problem_plan_bound PROB \<le> (2 ^ (card (prob_dom PROB))) - 1)"
proof -
have "MPLS PROB \<noteq> {}"
using assms MPLS_nempty
by auto
moreover have "(\<forall>x. x \<in> MPLS PROB \<longrightarrow> x \<le> 2 ^ card (prob_dom PROB) - 1)"
using assms in_MPLS_leq_2_pow_n
by blast
ultimately show ?thesis
unfolding problem_plan_bound_def
using cSup_least
by blast
qed
\<comment> \<open>NOTE types in premise had to be fixed to be able to match `valid\_as\_valid\_exec`.\<close>
lemma bound_child_parent_card_state_set_cons:
fixes P f
assumes "(\<forall>(PROB :: 'a problem) as (s :: 'a state).
(P PROB)
\<and> (as \<in> valid_plans PROB)
\<and> (s \<in> valid_states PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)
)"
shows "(\<forall>PROB s as.
(P PROB)
\<and> (as \<in> valid_plans PROB)
\<and> (s \<in> (valid_states PROB))
\<longrightarrow> (\<exists>x.
(x \<in> PLS s as)
\<and> (x < f PROB)
)
)"
proof -
{
fix PROB :: "'a problem" and as and s :: "'a state"
assume P1: "(P PROB)"
"(as \<in> valid_plans PROB)"
"(s \<in> valid_states PROB)"
"(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)"
have "(exec_plan s as \<in> valid_states PROB)"
using assms P1 valid_as_valid_exec
by blast
then have "(P PROB)
\<and> (as \<in> valid_plans PROB)
\<and> (s \<in> (valid_states PROB))
\<longrightarrow> (\<exists>x.
(x \<in> PLS s as)
\<and> (x < f PROB)
)
"
unfolding PLS_def
using P1
by force
}
then show "(\<forall>PROB s as.
(P PROB)
\<and> (as \<in> valid_plans PROB)
\<and> (s \<in> (valid_states PROB))
\<longrightarrow> (\<exists>x.
(x \<in> PLS s as)
\<and> (x < f PROB)
)
)"
using assms
by simp
qed
\<comment> \<open>NOTE types of premise had to be fixed to be able to use lemma `bound\_child\_parent\_card\_state\_set\_cons`.\<close>
lemma bound_on_all_plans_bounds_MPLS:
fixes P f
assumes "(\<forall>(PROB :: 'a problem) as (s :: 'a state).
(P PROB)
\<and> (s \<in> valid_states PROB)
\<and> (as \<in> valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)
)"
shows "(\<forall>PROB x. P PROB
\<longrightarrow> (x \<in> MPLS(PROB))
\<longrightarrow> (x < f PROB)
)"
proof -
{
fix PROB :: "'a problem" and as and s :: "'a state"
assume "(P PROB)"
"(s \<in> valid_states PROB)"
"(as \<in> valid_plans PROB)"
"(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)"
then have "(\<exists>x. x \<in> PLS s as \<and> x < f PROB)"
using assms(1) bound_child_parent_card_state_set_cons[where P = P and f = f]
by presburger
}
note 1 = this
{
fix PROB x
assume P1: "P PROB" "x \<in> MPLS PROB"
\<comment> \<open>TODO refactor 'x\_in\_MPLS\_if' and use here.\<close>
then obtain s as where a:
"x = Inf (PLS s as)" "s \<in> valid_states PROB" "as \<in> valid_plans PROB"
unfolding MPLS_def
by auto
moreover have "(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)"
using P1(1) assms calculation(2, 3)
by blast
ultimately obtain x' where "x' \<in> PLS s as" "x' < f PROB"
using P1 1
by blast
then have "x < f PROB"
using a(1) mem_lt_imp_MIN_lt
by fastforce
}
then show ?thesis
by blast
qed
lemma bound_child_parent_card_state_set_cons_finite:
fixes P f
assumes "(\<forall>PROB as s.
P PROB \<and> finite PROB \<and> as \<in> (valid_plans PROB) \<and> s \<in> (valid_states PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> subseq as' as
\<and> length as' < f(PROB)
)
)"
shows "(\<forall>PROB s as.
P PROB \<and> finite PROB \<and> as \<in> (valid_plans PROB) \<and> (s \<in> (valid_states PROB))
\<longrightarrow> (\<exists>x. (x \<in> PLS s as) \<and> x < f PROB)
)"
proof -
{
fix PROB s as
assume "P PROB" "finite PROB" "as \<in> (valid_plans PROB)" "s \<in> (valid_states PROB)"
" (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> subseq as' as
\<and> length as' < f PROB
)"
(* NOTE[1]
moreover have "exec_plan s as \<in> valid_states PROB"
using calculation valid_as_valid_exec by blast
*)
then obtain as' where
"(exec_plan s as = exec_plan s as')" "subseq as' as" "length as' < f PROB"
by blast
moreover have "length as' \<in> PLS s as"
unfolding PLS_def
using calculation
by fastforce
ultimately have "(\<exists>x. (x \<in> PLS s as) \<and> x < f PROB)"
by blast
}
then show "(\<forall>PROB s as.
P PROB
\<and> finite PROB
\<and> as \<in> (valid_plans PROB)
\<and> (s \<in> (valid_states PROB))
\<longrightarrow> (\<exists>x. (x \<in> PLS s as) \<and> x < f PROB)
)"
using assms
by auto
qed
lemma bound_on_all_plans_bounds_MPLS_finite:
fixes P f
assumes "(\<forall>PROB as s.
P PROB \<and> finite PROB \<and> s \<in> (valid_states PROB) \<and> as \<in> (valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> subseq as' as
\<and> length as' < f(PROB)
)
)"
shows "(\<forall>PROB x.
P PROB \<and> finite PROB
\<longrightarrow> (x \<in> MPLS PROB)
\<longrightarrow> x < f PROB
)"
proof -
{
fix PROB x
assume P1: "P PROB" "finite PROB" "x \<in> MPLS PROB"
\<comment> \<open>TODO refactor 'x\_in\_MPLS\_if' and use here.\<close>
then obtain s as where a:
"x = Inf (PLS s as)" "s \<in> valid_states PROB" "as \<in> valid_plans PROB"
unfolding MPLS_def
by auto
moreover have "(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)"
using P1(1, 2) assms calculation(2, 3)
by blast
moreover obtain x' where "x' \<in> PLS s as" "x' < f PROB"
using PLS_def calculation(4)
by fastforce
then have "x < f PROB"
using a(1) mem_lt_imp_MIN_lt
by fastforce
}
then show ?thesis
using assms
by blast
qed
lemma bound_on_all_plans_bounds_problem_plan_bound:
fixes P f
assumes "(\<forall>PROB as s.
(P PROB)
\<and> finite PROB
\<and> (s \<in> valid_states PROB)
\<and> (as \<in> valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)
)"
shows "(\<forall>PROB.
(P PROB)
\<and> finite PROB
\<longrightarrow> (problem_plan_bound PROB < f PROB)
)"
proof -
have 1: "\<forall>PROB x.
P PROB
\<and> finite PROB
\<longrightarrow> x \<in> MPLS PROB
\<longrightarrow> x < f PROB
"
using assms bound_on_all_plans_bounds_MPLS_finite
by blast
{
fix PROB x
assume "P PROB \<and> finite PROB
\<longrightarrow> x \<in> MPLS PROB
\<longrightarrow> x < f PROB
"
then have "\<forall>PROB.
P PROB \<and> finite PROB
\<longrightarrow> problem_plan_bound PROB < f PROB
"
unfolding problem_plan_bound_def
using 1 bound_child_parent_not_eq_last_diff_paths 1 MPLS_nempty
by metis
then have "\<forall>PROB.
P PROB \<and> finite PROB
\<longrightarrow> problem_plan_bound PROB < f PROB
"
using MPLS_nempty
by blast
}
then show "(\<forall>PROB.
(P PROB)
\<and> finite PROB
\<longrightarrow> (problem_plan_bound PROB < f PROB)
)"
using 1
by blast
qed
lemma bound_child_parent_card_state_set_cons_thesis:
assumes "finite PROB" "(\<forall>as s.
as \<in> (valid_plans PROB)
\<and> s \<in> (valid_states PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> subseq as' as
\<and> length as' < k
)
)" "as \<in> (valid_plans PROB)" "(s \<in> (valid_states PROB))"
shows "(\<exists>x. (x \<in> PLS s as) \<and> x < k)"
unfolding PLS_def
using assms
by fastforce
\<comment> \<open>NOTE added lemma.\<close>
\<comment> \<open>TODO refactor/move up.\<close>
lemma x_in_MPLS_if:
fixes x PROB
assumes "x \<in> MPLS PROB"
shows "\<exists>s as. s \<in> valid_states PROB \<and> as \<in> valid_plans PROB \<and> x = Inf (PLS s as)"
using assms
unfolding MPLS_def
by fast
lemma bound_on_all_plans_bounds_MPLS_thesis:
assumes "finite PROB" "(\<forall>as s.
(s \<in> valid_states PROB)
\<and> (as \<in> valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < k)
)
)" "(x \<in> MPLS PROB)"
shows "(x < k)"
proof -
obtain s as where 1: "s \<in> valid_states PROB" "as \<in> valid_plans PROB" "x = Inf (PLS s as)"
using assms(3) x_in_MPLS_if
by blast
then obtain x' :: nat where "x' \<in> PLS s as" "x' < k"
using assms(1, 2) bound_child_parent_card_state_set_cons_thesis
by blast
then have "Inf (PLS s as) < k"
using mem_lt_imp_MIN_lt
by blast
then show "x < k"
using 1
by simp
qed
\<comment> \<open>NOTE added lemma.\<close>
lemma bounded_MPLS_contains_supremum:
fixes PROB
assumes "finite PROB" "(\<exists>k. \<forall>x \<in> MPLS PROB. x < k)"
shows "Sup (MPLS PROB) \<in> MPLS PROB"
proof -
obtain k where "\<forall>x \<in> MPLS PROB. x < k"
using assms(2)
by blast
moreover have "finite (MPLS PROB)"
using assms(2) finite_nat_set_iff_bounded
by presburger
moreover have "MPLS PROB \<noteq> {}"
using assms(1) MPLS_nempty
by auto
ultimately show "Sup (MPLS PROB) \<in> MPLS PROB"
unfolding Sup_nat_def
by simp
qed
lemma bound_on_all_plans_bounds_problem_plan_bound_thesis':
assumes "finite PROB" "(\<forall>as s.
s \<in> (valid_states PROB)
\<and> as \<in> (valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> subseq as' as
\<and> length as' < k
)
)"
shows "problem_plan_bound PROB < k"
proof -
have 1: "\<forall>x \<in> MPLS PROB. x < k"
using assms(1, 2) bound_on_all_plans_bounds_MPLS_thesis
by blast
then have "Sup (MPLS PROB) \<in> MPLS PROB"
using assms(1) bounded_MPLS_contains_supremum
by auto
then have "Sup (MPLS PROB) < k"
using 1
by blast
then show ?thesis
unfolding problem_plan_bound_def
by simp
qed
lemma bound_on_all_plans_bounds_problem_plan_bound_thesis:
assumes "finite PROB" "(\<forall>as s.
(s \<in> valid_states PROB)
\<and> (as \<in> valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' \<le> k)
)
)"
shows "(problem_plan_bound PROB \<le> k)"
proof -
have 1: "\<forall>x\<in>MPLS PROB. x < k + 1"
using assms(1, 2) bound_on_all_plans_bounds_MPLS_thesis[where k = "k + 1"] Suc_eq_plus1
less_Suc_eq_le
by metis
then have "Sup (MPLS PROB) \<in> MPLS PROB"
using assms(1) bounded_MPLS_contains_supremum
by fast
then show "(problem_plan_bound PROB \<le> k)"
unfolding problem_plan_bound_def
using 1
by fastforce
qed
lemma bound_on_all_plans_bounds_problem_plan_bound_:
fixes P f PROB
assumes "(\<forall>PROB' as s.
finite PROB \<and> (P PROB') \<and> (s \<in> valid_states PROB') \<and> (as \<in> valid_plans PROB')
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB')
)
)" "(P PROB)" "finite PROB"
shows "(problem_plan_bound PROB < f PROB)"
unfolding problem_plan_bound_def MPLS_def
using assms bound_on_all_plans_bounds_problem_plan_bound_thesis' expanded_problem_plan_bound_thm_1
by metis
lemma S_VALID_AS_VALID_IMP_MIN_IN_PLS:
fixes PROB s as
assumes "(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(Inf (PLS s as) \<in> (MPLS PROB))"
unfolding MPLS_def
using assms
by fast
\<comment> \<open>NOTE type of `s` had to be fixed (type mismatch in goal).\<close>
\<comment> \<open>NOTE premises rewritten to implications for the proof setup.\<close>
lemma problem_plan_bound_ge_min_pls:
fixes PROB :: "'a problem" and s :: "'a state" and as k
assumes "finite PROB" "(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
"(problem_plan_bound PROB \<le> k)"
shows "(Inf (PLS s as) \<le> problem_plan_bound PROB)"
proof -
have "Inf (PLS s as) \<in> MPLS PROB"
using assms(2, 3) S_VALID_AS_VALID_IMP_MIN_IN_PLS
by blast
moreover have "finite (MPLS PROB)"
using assms(1) FINITE_MPLS
by blast
ultimately have "Inf (PLS s as) \<le> Sup (MPLS PROB)"
using le_cSup_finite
by blast
then show ?thesis
unfolding problem_plan_bound_def
by simp
qed
lemma PLS_NEMPTY:
fixes s as
shows "PLS s as \<noteq> {}"
unfolding PLS_def
by blast
lemma PLS_nempty_and_has_min:
fixes s as
shows "(\<exists>x. (x \<in> PLS s as) \<and> (x = Inf (PLS s as)))"
proof -
have "PLS s as \<noteq> {}"
using PLS_NEMPTY
by blast
then have "Inf (PLS s as) \<in> PLS s as"
unfolding Inf_nat_def
using LeastI_ex Max_in finite_PLS
by metis
then show ?thesis
by blast
qed
lemma PLS_works:
fixes x s as
assumes "(x \<in> PLS s as)"
shows"(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (length as' = x)
\<and> (subseq as' as)
)"
using assms
unfolding PLS_def
by (smt imageE mem_Collect_eq)
\<comment> \<open>NOTE type of `s` had to be fixed (type mismatch in goal).\<close>
lemma problem_plan_bound_works:
fixes PROB :: "'a problem" and as and s :: "'a state"
assumes "finite PROB" "(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' \<le> problem_plan_bound PROB)
)"
proof -
have "problem_plan_bound PROB \<le> 2 ^ card (prob_dom PROB) - 1"
using assms(1) bound_main_lemma
by blast
then have 1: "Inf (PLS s as) \<le> problem_plan_bound PROB"
using
assms(1, 2, 3)
problem_plan_bound_ge_min_pls
by blast
then have "\<exists>x. x \<in> PLS s as \<and> x = Inf (PLS s as)"
using PLS_nempty_and_has_min
by blast
then have "Inf (PLS s as) \<in> (PLS s as)"
by blast
then obtain as' where 2:
"exec_plan s as = exec_plan s as'" "length as' = Inf (PLS s as)" "subseq as' as"
using PLS_works
by blast
then have "length as' \<le> problem_plan_bound PROB"
using 1
by argo
then show "(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' \<le> problem_plan_bound PROB)
)"
using 2(1) 2(3)
by blast
qed
\<comment> \<open>NOTE name shortened.\<close>
definition MPLS_s where
"MPLS_s PROB s \<equiv> (\<lambda> (s, as). Inf (PLS s as)) ` {(s, as) | as. as \<in> valid_plans PROB}"
\<comment> \<open>NOTE type of `PROB` had to be fixed (type mismatch in goal).\<close>
lemma bound_main_lemma_s_3:
fixes PROB :: "(('a, 'b) fmap \<times> ('a, 'b) fmap) set" and s
shows "MPLS_s PROB s \<noteq> {}"
proof -
\<comment> \<open>TODO @{term "(s, []) \<in> {}"} could be refactored (this is used in 'MPLS\_nempty' too).\<close>
have "[] \<in> valid_plans PROB"
using empty_plan_is_valid
by blast
then have "(s, []) \<in> {(s, as). as \<in> valid_plans PROB}"
by simp
then show "MPLS_s PROB s \<noteq> {}"
unfolding MPLS_s_def
by blast
qed
\<comment> \<open>NOTE name shortened.\<close>
definition problem_plan_bound_s where
"problem_plan_bound_s PROB s = Sup (MPLS_s PROB s)"
\<comment> \<open>NOTE removed typing from assumption due to matching problems in later proofs.\<close>
lemma bound_on_all_plans_bounds_PLS_s:
fixes P f
assumes "(\<forall>PROB as s.
finite PROB \<and> (P PROB) \<and> (as \<in> valid_plans PROB) \<and> (s \<in> valid_states PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB s)
)
)"
shows "(\<forall>PROB s as.
finite PROB \<and> (P PROB) \<and> (as \<in> valid_plans PROB) \<and> (s \<in> valid_states PROB)
\<longrightarrow> (\<exists>x.
(x \<in> PLS s as)
\<and> (x < f PROB s)
)
)"
using assms
unfolding PLS_def
by fastforce
\<comment> \<open>NOTE added lemma.\<close>
lemma bound_on_all_plans_bounds_MPLS_s_i:
fixes PROB s x
assumes "s \<in> valid_states PROB" "x \<in> MPLS_s PROB s"
shows "\<exists>as. x = Inf (PLS s as) \<and> as \<in> valid_plans PROB"
proof -
let ?S="{(s, as) | as. as \<in> valid_plans PROB}"
obtain x' where 1:
"x' \<in> ?S"
"x = (\<lambda> (s, as). Inf (PLS s as)) x'"
using assms
unfolding MPLS_s_def
by blast
let ?as="snd x'"
let ?s="fst x'"
have "?as \<in> valid_plans PROB"
using 1(1)
by auto
moreover have "?s = s"
using 1(1)
by fastforce
moreover have "x = Inf (PLS ?s ?as)"
using 1(2)
by (simp add: case_prod_unfold)
ultimately show ?thesis
by blast
qed
lemma bound_on_all_plans_bounds_MPLS_s:
fixes P f
assumes "(\<forall>PROB as s.
finite PROB \<and> (P PROB) \<and> (as \<in> valid_plans PROB) \<and> (s \<in> valid_states PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB s)
)
)"
shows "(\<forall>PROB x s.
finite PROB \<and> (P PROB) \<and> (s \<in> valid_states PROB) \<longrightarrow> (x \<in> MPLS_s PROB s)
\<longrightarrow> (x < f PROB s)
)"
using assms
unfolding MPLS_def
proof -
have 1: "\<forall>PROB s as.
finite PROB \<and> P PROB \<and> as \<in> valid_plans PROB \<and> s \<in> valid_states PROB \<longrightarrow>
(\<exists>x. x \<in> PLS s as \<and> x < f PROB s)"
using bound_on_all_plans_bounds_PLS_s[OF assms] .
{
fix PROB x and s :: "('a, 'b) fmap"
assume P1: "finite PROB" "(P PROB)" "(s \<in> valid_states PROB)"
{
assume "(x \<in> MPLS_s PROB s)"
then obtain as where i: "x = Inf (PLS s as)" "as \<in> valid_plans PROB"
using P1 bound_on_all_plans_bounds_MPLS_s_i
by blast
then obtain x' where "x' \<in> PLS s as" "x' < f PROB s"
using P1 i 1
by blast
then have "x < f PROB s"
using mem_lt_imp_MIN_lt i(1)
by blast
}
then have "(x \<in> MPLS_s PROB s) \<longrightarrow> (x < f PROB s)"
by blast
}
then show ?thesis
by blast
qed
\<comment> \<open>NOTE added lemma.\<close>
lemma Sup_MPLS_s_lt_if:
fixes PROB s k
assumes "(\<forall>x\<in>MPLS_s PROB s. x < k)"
shows "Sup (MPLS_s PROB s) < k"
proof -
have "MPLS_s PROB s \<noteq> {}"
using bound_main_lemma_s_3
by fast
then have "Sup (MPLS_s PROB s) \<in> MPLS_s PROB s"
using assms Sup_nat_def bounded_nat_set_is_finite
by force
then show "Sup (MPLS_s PROB s) < k"
using assms
by blast
qed
\<comment> \<open>NOTE type of `P` had to be fixed (type mismatch in goal).\<close>
lemma bound_child_parent_lemma_s_2:
fixes PROB :: "'a problem" and P :: "'a problem \<Rightarrow> bool" and s f
assumes "(\<forall>(PROB :: 'a problem) as s.
finite PROB \<and> (P PROB) \<and> (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB s)
)
)"
shows "(
finite PROB \<and> (P PROB) \<and> (s \<in> valid_states PROB)
\<longrightarrow> problem_plan_bound_s PROB s < f PROB s
)"
proof -
\<comment> \<open>NOTE manual instantiation is required (automation fails otherwise).\<close>
have "\<forall>(PROB :: 'a problem) x s.
finite PROB \<and> P PROB \<and> s \<in> valid_states PROB
\<longrightarrow> x \<in> MPLS_s PROB s
\<longrightarrow> x < f PROB s
"
using assms bound_on_all_plans_bounds_MPLS_s[of P f]
by simp
then show
"finite PROB \<and> (P PROB) \<and> (s \<in> valid_states PROB) \<longrightarrow> (problem_plan_bound_s PROB s < f PROB s)"
unfolding problem_plan_bound_s_def
using Sup_MPLS_s_lt_if problem_plan_bound_s_def
by metis
qed
theorem bound_main_lemma_reachability_s:
fixes PROB :: "'a problem" and s
assumes "finite PROB" "s \<in> valid_states PROB"
shows "(problem_plan_bound_s PROB s < card (reachable_s PROB s))"
proof -
\<comment> \<open>NOTE derive premise for MP of 'bound\_child\_parent\_lemma\_s\_2'.\<close>
\<comment> \<open>NOTE type of `s` had to be fixed (warning in assumption declaration).\<close>
{
fix PROB :: "'a problem" and s :: "'a state" and as
assume P1: "finite PROB" "s \<in> valid_states PROB" "as \<in> valid_plans PROB"
then obtain as' where a: "exec_plan s as = exec_plan s as'" "subseq as' as"
"length as' \<le> card (reachable_s PROB s) - 1"
using P1 main_lemma_reachability_s
by blast
then have "length as' < card (reachable_s PROB s)"
using P1(1, 2) card_reachable_s_non_zero
by fastforce
then have "(\<exists>as'.
exec_plan s as = exec_plan s as' \<and> subseq as' as \<and> length as' < card (reachable_s PROB s))
"
using a
by blast
}
then have "
finite PROB \<and> True \<and> s \<in> valid_states PROB
\<longrightarrow> problem_plan_bound_s PROB s < card (reachable_s PROB s)
"
using bound_child_parent_lemma_s_2[where PROB = PROB and P = "\<lambda>_. True" and s = s
and f = "\<lambda>PROB s. card (reachable_s PROB s)"]
by blast
then show ?thesis
using assms(1, 2)
by blast
qed
lemma problem_plan_bound_s_LESS_EQ_problem_plan_bound_thm:
fixes PROB :: "'a problem" and s :: "'a state"
assumes "finite PROB" "(s \<in> valid_states PROB)"
shows "(problem_plan_bound_s PROB s < problem_plan_bound PROB + 1)"
proof -
{
fix PROB :: "'a problem" and s :: "'a state" and as
assume "finite PROB" "s \<in> valid_states PROB" "as \<in> valid_plans PROB"
then obtain as' where a: "exec_plan s as = exec_plan s as'" "subseq as' as"
"length as' \<le> problem_plan_bound PROB"
using problem_plan_bound_works
by blast
then have "length as' < problem_plan_bound PROB + 1"
by linarith
then have "\<exists>as'.
exec_plan s as = exec_plan s as' \<and> subseq as' as \<and> length as' \<le> problem_plan_bound PROB + 1
"
using a
by fastforce
}
\<comment> \<open>TODO unsure why a proof is needed at all here.\<close>
then have "\<forall>(PROB :: 'a problem) as s.
finite PROB \<and> True \<and> s \<in> valid_states PROB \<and> as \<in> valid_plans PROB
\<longrightarrow> (\<exists>as'.
exec_plan s as = exec_plan s as' \<and> subseq as' as \<and> length as' < problem_plan_bound PROB + 1)
"
by (metis Suc_eq_plus1 problem_plan_bound_works le_imp_less_Suc)
then show "(problem_plan_bound_s PROB s < problem_plan_bound PROB + 1)"
using assms bound_child_parent_lemma_s_2[where PROB = PROB and s = s and P = "\<lambda>_. True"
and f = "\<lambda>PROB s. problem_plan_bound PROB + 1"]
by fast
qed
\<comment> \<open>NOTE lemma `bound\_main\_lemma\_s\_1` skipped (it is redeclared equivalently later).\<close>
lemma AS_VALID_MPLS_VALID:
fixes PROB as
assumes "(as \<in> valid_plans PROB)"
shows "(Inf (PLS s as) \<in> MPLS_s PROB s)"
using assms
unfolding MPLS_s_def
by fast
\<comment> \<open>NOTE moved up because it's used in the following lemma.\<close>
\<comment> \<open>NOTE type of `s` had to be fixed for 'in\_PLS\_leq\_2\_pow\_n'.\<close>
lemma bound_main_lemma_s_1:
fixes PROB :: "'a problem" and s :: "'a state" and x
assumes "finite PROB" "s \<in> (valid_states PROB)" "x \<in> MPLS_s PROB s"
shows "(x \<le> (2 ^ card (prob_dom PROB)) - 1)"
proof -
obtain as :: "(('a, bool) fmap \<times> ('a, bool) fmap) list" where "as \<in> valid_plans PROB"
using empty_plan_is_valid
by blast
then obtain x where 1: "x \<in> PLS s as" "x \<le> 2 ^ card (prob_dom PROB) - 1"
using assms in_PLS_leq_2_pow_n
by blast
then have "Inf (PLS s as) \<le> 2 ^ card (prob_dom PROB) - 1"
using mem_le_imp_MIN_le[where s = "PLS s as" and k = "2 ^ card (prob_dom PROB) - 1"]
by blast
then have "x \<le> 2 ^ card (prob_dom PROB) - 1"
using assms(3) 1
by blast
\<comment> \<open>TODO unsure why a proof is needed here (typing problem?).\<close>
then show ?thesis
using assms(1, 2, 3) S_VALID_AS_VALID_IMP_MIN_IN_PLS bound_on_all_plans_bounds_MPLS_s_i
in_MPLS_leq_2_pow_n
by metis
qed
lemma problem_plan_bound_s_ge_min_pls:
fixes PROB :: "'a problem" and as k s
assumes "finite PROB" "s \<in> (valid_states PROB)" "as \<in> (valid_plans PROB)"
"problem_plan_bound_s PROB s \<le> k"
shows "(Inf (PLS s as) \<le> problem_plan_bound_s PROB s)"
proof -
have "\<forall>x\<in>MPLS_s PROB s. x \<le> 2 ^ card (prob_dom PROB) - 1"
using assms(1, 2) bound_main_lemma_s_1 by blast
then have 1: "finite (MPLS_s PROB s)"
using mems_le_finite[where s = "MPLS_s PROB s" and k = "2 ^ card (prob_dom PROB) - 1"]
by blast
then have "MPLS_s PROB s \<noteq> {}"
using bound_main_lemma_s_3
by fast
then have "Inf (PLS s as) \<in> MPLS_s PROB s"
using assms AS_VALID_MPLS_VALID
by blast
then show "(Inf (PLS s as) \<le> problem_plan_bound_s PROB s)"
unfolding problem_plan_bound_s_def
using 1 le_cSup_finite
by blast
qed
theorem bound_main_lemma_s:
fixes PROB :: "'a problem" and s
assumes "finite PROB" "(s \<in> valid_states PROB)"
shows "(problem_plan_bound_s PROB s \<le> 2 ^ (card (prob_dom PROB)) - 1)"
proof -
have 1: "\<forall>x\<in>MPLS_s PROB s. x \<le> 2 ^ card (prob_dom PROB) - 1"
using assms bound_main_lemma_s_1
by metis
then have "MPLS_s PROB s \<noteq> {}"
using bound_main_lemma_s_3
by fast
then have "Sup (MPLS_s PROB s) \<le> 2 ^ card (prob_dom PROB) - 1"
using 1 bound_main_lemma_2[where s = "MPLS_s PROB s" and k = "2 ^ card (prob_dom PROB) - 1"]
by blast
then show "problem_plan_bound_s PROB s \<le> 2 ^ card (prob_dom PROB) - 1"
unfolding problem_plan_bound_s_def
by blast
qed
lemma problem_plan_bound_s_works:
fixes PROB :: "'a problem" and as s
assumes "finite PROB" "(as \<in> valid_plans PROB)" "(s \<in> valid_states PROB)"
shows "(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' \<le> problem_plan_bound_s PROB s)
)"
proof -
have "problem_plan_bound_s PROB s \<le> 2 ^ card (prob_dom PROB) - 1"
using assms(1, 3) bound_main_lemma_s
by blast
then have 1: "Inf (PLS s as) \<le> problem_plan_bound_s PROB s"
using assms problem_plan_bound_s_ge_min_pls[of PROB s as " 2 ^ card (prob_dom PROB) - 1"]
by blast
then obtain x where obtain_x: "x \<in> PLS s as \<and> x = Inf (PLS s as)"
using PLS_nempty_and_has_min
by blast
then have "\<exists>as'. exec_plan s as = exec_plan s as' \<and> length as' = Inf (PLS s as) \<and> subseq as' as"
using PLS_works[where s = s and as = as and x = "Inf (PLS s as)"]
obtain_x
by fastforce
then show "(\<exists>as'.
(exec_plan s as = exec_plan s as') \<and> (subseq as' as)
\<and> (length as' \<le> problem_plan_bound_s PROB s)
)"
using 1
by metis
qed
\<comment> \<open>NOTE skipped second declaration (declared twice in source).\<close>
lemma PLS_def_ITP2015:
fixes s as
shows "PLS s as = {length as' | as'. (exec_plan s as' = exec_plan s as) \<and> (subseq as' as)}"
using PLS_def
by blast
\<comment> \<open>NOTE Set comprehension had to be rewritten to an image (there is no pattern matching to the
left of the pipe symbol).\<close>
lemma expanded_problem_plan_bound_charles_thm:
fixes PROB :: "'a problem"
shows "
problem_plan_bound_charles PROB
= Sup (
{
Inf (PLS_charles (fst p) (snd p) PROB)
| p. (fst p \<in> valid_states PROB) \<and> (snd p \<in> valid_plans PROB)})
"
unfolding problem_plan_bound_charles_def MPLS_charles_def
by blast
lemma bound_main_lemma_charles_3:
fixes PROB :: "'a problem"
assumes "finite PROB"
shows "MPLS_charles PROB \<noteq> {}"
proof -
have 1: "[] \<in> valid_plans PROB"
using empty_plan_is_valid
by auto
then obtain s :: "'a state" where obtain_s: "s \<in> valid_states PROB"
using assms valid_states_nempty
by auto
then have "Inf (PLS_charles s [] PROB) \<in> MPLS_charles PROB"
unfolding MPLS_charles_def
using 1
by auto
then show "MPLS_charles PROB \<noteq> {}"
by blast
qed
lemma in_PLS_charles_leq_2_pow_n:
fixes PROB :: "'a problem" and s as
assumes "finite PROB" "s \<in> valid_states PROB" "as \<in> valid_plans PROB"
shows "(\<exists>x.
(x \<in> PLS_charles s as PROB)
\<and> (x \<le> 2 ^ card (prob_dom PROB) - 1))
"
proof -
obtain as' where 1:
"exec_plan s as = exec_plan s as'" "subseq as' as" "length as' \<le> 2 ^ card (prob_dom PROB) - 1"
using assms main_lemma
by blast
then have "as' \<in> valid_plans PROB"
using assms(3) sublist_valid_plan
by blast
then have "length as' \<in> PLS_charles s as PROB"
unfolding PLS_charles_def
using 1
by auto
then show ?thesis
using 1(3)
by fast
qed
\<comment> \<open>NOTE added lemma.\<close>
\<comment> \<open>NOTE this lemma retrieves `s`, `as` for a given @{term "x \<in> MPLS_charles PROB"} and characterizes it as
the minimum of 'PLS\_charles s as PROB'.\<close>
lemma x_in_MPLS_charles_then:
fixes PROB s as
assumes "x \<in> MPLS_charles PROB"
shows "\<exists>s as.
s \<in> valid_states PROB \<and> as \<in> valid_plans PROB \<and> x = Inf (PLS_charles s as PROB)
"
proof -
have "\<exists>p \<in> {p. (fst p) \<in> valid_states PROB \<and> (snd p) \<in> valid_plans PROB}. x = Inf (PLS_charles (fst p) (snd p) PROB)"
using MPLS_charles_def assms
by fast
then obtain p where 1:
"p \<in> {p. (fst p) \<in> valid_states PROB \<and> (snd p) \<in> valid_plans PROB}"
"x = Inf (PLS_charles (fst p) (snd p) PROB)"
by blast
then have "fst p \<in> valid_states PROB" "snd p \<in> valid_plans PROB"
by blast+
then show ?thesis
using 1
by fast
qed
lemma in_MPLS_charles_leq_2_pow_n:
fixes PROB :: "'a problem" and x
assumes "finite PROB" "x \<in> MPLS_charles PROB"
shows "x \<le> 2 ^ card (prob_dom PROB) - 1"
proof -
obtain s as where 1:
"s \<in> valid_states PROB" "as \<in> valid_plans PROB" "x = Inf (PLS_charles s as PROB)"
using assms(2) x_in_MPLS_charles_then
by blast
then obtain x' where 2: "x' \<in> PLS_charles s as PROB" "x' \<le> 2 ^ card (prob_dom PROB) - 1"
using assms(1) in_PLS_charles_leq_2_pow_n
by blast
then have "x \<le> x'"
using 1(3) mem_le_imp_MIN_le
by blast
then show ?thesis
using 1 2
by linarith
qed
lemma bound_main_lemma_charles:
fixes PROB :: "'a problem"
assumes "finite PROB"
shows "problem_plan_bound_charles PROB \<le> 2 ^ (card (prob_dom PROB)) - 1"
proof -
have 1: "\<forall>x\<in>MPLS_charles PROB. x \<le> 2 ^ (card (prob_dom PROB)) - 1"
using assms in_MPLS_charles_leq_2_pow_n
by blast
then have "MPLS_charles PROB \<noteq> {}"
using assms bound_main_lemma_charles_3
by blast
then have "Sup (MPLS_charles PROB) \<le> 2 ^ (card (prob_dom PROB)) - 1"
using 1 bound_main_lemma_2
by meson
then show ?thesis
using problem_plan_bound_charles_def
by metis
qed
lemma bound_on_all_plans_bounds_PLS_charles:
fixes P and f
assumes "\<forall>(PROB :: 'a problem) as s.
(P PROB) \<and> finite PROB \<and> (as \<in> valid_plans PROB) \<and> (s \<in> valid_states PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as') \<and> (subseq as' as) \<and> (length as' < f PROB))
"
shows "(\<forall>PROB s as.
(P PROB) \<and> finite PROB \<and> (as \<in> valid_plans PROB) \<and> (s \<in> valid_states PROB)
\<longrightarrow> (\<exists>x.
(x \<in> PLS_charles s as PROB)
\<and> (x < f PROB)))
"
proof -
{
\<comment> \<open>NOTE type for 's' had to be fixed (type mismatch in first proof step).\<close>
fix PROB :: "'a problem" and as and s :: "'a state"
assume P:
"P PROB" "finite PROB" "as \<in> valid_plans PROB" "s \<in> valid_states PROB"
"(\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)"
then obtain as' where 1:
"(exec_plan s as = exec_plan s as')" "(subseq as' as)" "(length as' < f PROB)"
using P(5)
by blast
then have 2: "as' \<in> valid_plans PROB"
using P(3) sublist_valid_plan
by blast
let ?x = "length as'"
have "?x \<in> PLS_charles s as PROB"
unfolding PLS_charles_def
using 1 2
by auto
then have "\<exists>x. x \<in> PLS_charles s as PROB \<and> x < f PROB"
using 1 2
by blast
}
then show ?thesis
using assms
by auto
qed
\<comment> \<open>NOTE added lemma (refactored from `bound\_on\_all\_plans\_bounds\_MPLS\_charles`).\<close>
lemma bound_on_all_plans_bounds_MPLS_charles_i:
assumes "\<forall>(PROB :: 'a problem) s as.
(P PROB) \<and> finite PROB \<and> (as \<in> valid_plans PROB) \<and> (s \<in> valid_states PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as') \<and> (subseq as' as) \<and> (length as' < f PROB))
"
shows "\<forall>(PROB :: 'a problem) s as.
P PROB \<and> finite PROB \<and> as \<in> valid_plans PROB \<and> s \<in> valid_states PROB
\<longrightarrow> Inf {n. n \<in> PLS_charles s as PROB} < f PROB
"
proof -
{
fix PROB :: "'a problem" and s as
have "P PROB \<and> finite PROB \<and> as \<in> valid_plans PROB \<and> s \<in> valid_states PROB
\<longrightarrow> (\<exists>x. x \<in> PLS_charles s as PROB \<and> x < f PROB)
"
using assms bound_on_all_plans_bounds_PLS_charles[of P f]
by blast
then have "
P PROB \<and> finite PROB \<and> as \<in> valid_plans PROB \<and> s \<in> valid_states PROB
\<longrightarrow> Inf {n. n \<in> PLS_charles s as PROB} < f PROB
"
using mem_lt_imp_MIN_lt CollectI
by metis
}
then show ?thesis
by blast
qed
lemma bound_on_all_plans_bounds_MPLS_charles:
fixes P f
assumes "(\<forall>(PROB :: 'a problem) as s.
(P PROB) \<and> finite PROB \<and> (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)
)
)"
shows "(\<forall>PROB x.
(P PROB) \<and> finite PROB
\<longrightarrow> (x \<in> MPLS_charles PROB)
\<longrightarrow> (x < f PROB)
)"
proof -
have 1: "\<forall>(PROB :: 'a problem) s as.
P PROB \<and> finite PROB \<and> as \<in> valid_plans PROB \<and> s \<in> valid_states PROB
\<longrightarrow> Inf {n. n \<in> PLS_charles s as PROB} < f PROB
"
using assms bound_on_all_plans_bounds_MPLS_charles_i
by blast
moreover
{
fix PROB :: "'a problem" and x
assume P1: "(P PROB)" "finite PROB" "x \<in> MPLS_charles PROB"
then obtain s as where a:
"as \<in> valid_plans PROB" "s \<in> valid_states PROB" "x = Inf (PLS_charles s as PROB)"
using x_in_MPLS_charles_then
by blast
then have "Inf {n. n \<in> PLS_charles s as PROB} < f PROB"
using 1 P1
by blast
then have "x < f PROB"
using a
by simp
}
ultimately show ?thesis
by blast
qed
\<comment> \<open>NOTE added lemma (refactored from 'bound\_on\_all\_plans\_bounds\_problem\_plan\_bound\_charles').\<close>
lemma bound_on_all_plans_bounds_problem_plan_bound_charles_i:
fixes PROB :: "'a problem"
assumes "finite PROB" "\<forall>x \<in> MPLS_charles PROB. x < k"
shows "Sup (MPLS_charles PROB) \<in> MPLS_charles PROB"
proof -
have 1: "MPLS_charles PROB \<noteq> {}"
using assms(1) bound_main_lemma_charles_3
by auto
then have "finite (MPLS_charles PROB)"
using assms(2) finite_nat_set_iff_bounded
by blast
then show ?thesis
unfolding Sup_nat_def
using 1
by simp
qed
lemma bound_on_all_plans_bounds_problem_plan_bound_charles:
fixes P f
assumes "(\<forall>(PROB :: 'a problem) as s.
(P PROB) \<and> finite PROB \<and> (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> (subseq as' as)
\<and> (length as' < f PROB)))
"
shows "(\<forall>PROB.
(P PROB) \<and> finite PROB \<longrightarrow> (problem_plan_bound_charles PROB < f PROB))
"
proof -
have 1: "\<forall>PROB x. P PROB \<and> finite PROB \<longrightarrow> x \<in> MPLS_charles PROB \<longrightarrow> x < f PROB"
using assms bound_on_all_plans_bounds_MPLS_charles
by blast
moreover
{
fix PROB
assume P: "P PROB" "finite PROB"
moreover have 2: "\<forall>x. x \<in> MPLS_charles PROB \<longrightarrow> x < f PROB"
using 1 P
by blast
moreover
{
fix x
assume P1: "x \<in> MPLS_charles PROB"
moreover have "x < f PROB"
using P(1, 2) P1 1
by presburger
moreover have "MPLS_charles PROB \<noteq> {}"
using P1
by blast
moreover have "Sup (MPLS_charles PROB) < f PROB"
using calculation(3) 2 bound_child_parent_not_eq_last_diff_paths[of "MPLS_charles PROB" "f PROB"]
by blast
ultimately have "(problem_plan_bound_charles PROB < f PROB)"
unfolding problem_plan_bound_charles_def
by blast
}
moreover have "Sup (MPLS_charles PROB) \<in> MPLS_charles PROB"
using P(2) 2 bound_on_all_plans_bounds_problem_plan_bound_charles_i
by blast
ultimately have "problem_plan_bound_charles PROB < f PROB"
unfolding problem_plan_bound_charles_def
by blast
}
ultimately show ?thesis
by blast
qed
subsection "The Relation between Diameter, Sublist Diameter and Recurrence Diameter Bounds."
text \<open> The goal of this subsection is to verify the relation between diameter, sublist diameter
and recurrence diameter bounds given by HOL4 Theorem 1, i.e.
@{term "\<d>(\<delta>) \<le> \<l>(\<delta>) \<and> \<l>(\<delta>) \<le> \<r>\<d>(\<delta>)"}
where @{term "\<d>(\<delta>)"}, @{term "\<l>(\<delta>)"} and @{term "\<r>\<d>(\<delta>)"} denote the diameter, sublist diameter and recurrence diameter bounds.
[Abdulaziz et al., p.20]
The relevant lemmas are `sublistD\_bounds\_D` and `RD\_bounds\_sublistD` which culminate in
theorem `sublistD\_bounds\_D\_and\_RD\_bounds\_sublistD`. \<close>
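\<comment> \<open>NOTE in this formalization `problem\_plan\_bound\_charles`, `problem\_plan\_bound` and `RD`
play the roles of the diameter, the sublist diameter and the recurrence diameter bound,
respectively (cf. the statements of `sublistD\_bounds\_D` and `RD\_bounds\_sublistD` below).\<close>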
lemma sublistD_bounds_D:
fixes PROB :: "'a problem"
assumes "finite PROB"
shows "problem_plan_bound_charles PROB \<le> problem_plan_bound PROB"
proof -
\<comment> \<open>NOTE obtain the premise needed for MP of 'bound\_on\_all\_plans\_bounds\_problem\_plan\_bound\_charles'.\<close>
{
fix PROB :: "'a problem" and s :: "'a state" and as
assume P: "finite PROB" "s \<in> valid_states PROB" "as \<in> valid_plans PROB"
then have "\<exists>as'.
exec_plan s as = exec_plan s as' \<and> subseq as' as \<and> length as' \<le> problem_plan_bound PROB
"
using problem_plan_bound_works
by blast
then have "\<exists>as'.
exec_plan s as = exec_plan s as' \<and> subseq as' as \<and> length as' < problem_plan_bound PROB + 1
"
by force
}
then have "problem_plan_bound_charles PROB < problem_plan_bound PROB + 1"
using assms bound_on_all_plans_bounds_problem_plan_bound_charles[where f = "\<lambda>PROB. problem_plan_bound PROB + 1"
and P = "\<lambda>_. True"]
by blast
then show ?thesis
by simp
qed
\<comment> \<open>NOTE added lemma (adapted from pred\_setScript.sml:4887, excluding the empty-set case
since `Max {}` is undefined in Isabelle/HOL).\<close>
lemma MAX_SET_ELIM':
fixes P Q
assumes "finite P" "P \<noteq> {}" "(\<forall>x. (\<forall>y. y \<in> P \<longrightarrow> y \<le> x) \<and> x \<in> P \<longrightarrow> R x)"
shows "R (Max P)"
using assms
by force
\<comment> \<open>NOTE added lemma.\<close>
\<comment> \<open>NOTE adapted from pred\_setScript.sml:4895 (premise `finite P` was added).\<close>
lemma MIN_SET_ELIM':
fixes P Q
assumes "finite P" "P \<noteq> {}" "\<forall>x. (\<forall>y. y \<in> P \<longrightarrow> x \<le> y) \<and> x \<in> P \<longrightarrow> Q x"
shows "Q (Min P)"
proof -
let ?x="Min P"
have "Min P \<in> P"
using Min_in[OF assms(1) assms(2)]
by simp
moreover {
fix y
assume P: "y \<in> P"
then have "?x \<le> y"
using Min.coboundedI[OF assms(1)]
by blast
then have "Q ?x" using P assms
by auto
}
ultimately show ?thesis
by blast
qed
\<comment> \<open>NOTE added lemma (refactored from `RD\_bounds\_sublistD`).\<close>
lemma RD_bounds_sublistD_i_a:
fixes Pi :: "'a problem"
assumes "finite Pi"
shows "finite {length p - 1 |p. valid_path Pi p \<and> distinct p}"
proof -
{
let ?ss="{length p - 1 |p. valid_path Pi p \<and> distinct p}"
let ?ss'="{p. valid_path Pi p \<and> distinct p}"
have 1: "?ss = (\<lambda>x. length x - 1) ` ?ss'"
by blast
{
\<comment> \<open>NOTE type of `valid\_states Pi` had to be asserted to match `FINITE\_valid\_states`.\<close>
let ?S="{p. distinct p \<and> set p \<subseteq> (valid_states Pi :: 'a state set)}"
{
from assms have "finite (valid_states Pi :: 'a state set)"
using FINITE_valid_states[of Pi]
by simp
then have "finite ?S"
using FINITE_ALL_DISTINCT_LISTS
by blast
}
moreover {
{
fix x
assume "x \<in> ?ss'"
then have "x \<in> ?S"
proof (induction x)
case (Cons a x)
then have a: "valid_path Pi (a # x)" "distinct (a # x)"
by blast+
moreover {
fix x'
assume P: "x' \<in> set (a # x)"
then have "x' \<in> valid_states Pi"
proof (cases "x")
case Nil
from a(1) Nil
have "a \<in> valid_states Pi"
by simp
moreover from P Nil
have "x' = a"
by force
ultimately show ?thesis
by simp
next
case (Cons a' list)
{
{
from Cons.prems have "valid_path Pi (a # x)"
by simp
then have "a \<in> valid_states Pi" "valid_path Pi (a' # list)"
using Cons
by fastforce+
}
note a = this
moreover {
from Cons.prems have "distinct (a # x)"
by blast
then have "distinct (a' # list)"
using Cons
by simp
}
ultimately
have "(a' # list) \<in> ?ss'"
by blast
then have "(a' # list) \<in> ?S"
using Cons Cons.IH
by argo
}
then show ?thesis
using P a(1) local.Cons set_ConsD
by fastforce
qed
}
ultimately show ?case
by blast
qed simp
}
then have "?ss' \<subseteq> ?S"
by blast
}
ultimately have "finite ?ss'"
using rev_finite_subset
by auto
}
note 2 = this
from 1 2 have "finite ?ss"
using finite_imageI
by auto
}
then show ?thesis
by blast
qed
\<comment> \<open>NOTE added lemma (refactored from `RD\_bounds\_sublistD`).\<close>
lemma RD_bounds_sublistD_i_b:
fixes Pi :: "'a problem"
shows "{length p - 1 |p. valid_path Pi p \<and> distinct p} \<noteq> {}"
proof -
let ?Q="{length p - 1 |p. valid_path Pi p \<and> distinct p}"
let ?Q'="{p. valid_path Pi p \<and> distinct p}"
{
have "valid_path Pi []"
by simp
moreover have "distinct []"
by simp
ultimately have "[] \<in> ?Q'"
by simp
}
note 1 = this
have "?Q = (\<lambda>p. length p - 1) ` ?Q'"
by blast
then have "length [] - 1 \<in> ?Q"
using 1
by (metis (mono_tags, lifting) image_iff list.size(3))
then show ?thesis
by blast
qed
\<comment> \<open>NOTE added lemma (refactored from `RD\_bounds\_sublistD`).\<close>
lemma RD_bounds_sublistD_i_c:
fixes Pi :: "'a problem" and as :: "(('a, bool) fmap \<times> ('a, bool) fmap) list" and x
and s :: "('a, bool) fmap"
assumes "s \<in> valid_states Pi" "as \<in> valid_plans Pi"
"(\<forall>y. y \<in> {length p - 1 |p. valid_path Pi p \<and> distinct p} \<longrightarrow> y \<le> x)"
"x \<in> {length p - 1 |p. valid_path Pi p \<and> distinct p}"
shows "Min (PLS s as) \<le> Max {length p - 1 |p. valid_path Pi p \<and> distinct p}"
proof -
let ?P="(PLS s as)"
let ?Q="{length p - 1 |p. valid_path Pi p \<and> distinct p}"
from assms(4) obtain p where 1:
"x = length p - 1" "valid_path Pi p" "distinct p"
by blast
{
fix p'
assume "valid_path Pi p'" "distinct p'"
then obtain y where "y \<in> ?Q" "y = length p' - 1"
by blast
\<comment> \<open>NOTE we cannot infer @{term "length p' - 1 \<le> length p - 1"} since `length p' = 0` might be true.\<close>
then have a: "length p' - 1 \<le> length p - 1"
using assms(3) 1(1)
by meson
}
note 2 = this
{
from finite_PLS PLS_NEMPTY
have "finite (PLS s as)" "PLS s as \<noteq> {}"
by blast+
moreover {
fix n
assume P: "(\<forall>y. y \<in> PLS s as \<longrightarrow> n \<le> y)" "n \<in> PLS s as"
from P(2) obtain as' where i:
"n = length as'" "exec_plan s as' = exec_plan s as" "subseq as' as"
unfolding PLS_def
by blast
let ?p'="statelist' s as'"
{
have "length as' = length ?p' - 1"
by (simp add: LENGTH_statelist')
\<comment> \<open>MARKER (topologicalPropsScript.sml:195)\<close>
have "1 + (length p - 1) = length p - 1 + 1"
by presburger
\<comment> \<open>MARKER (topologicalPropsScript.sml:200)\<close>
{
from assms(2) i(3) sublist_valid_plan
have "as' \<in> valid_plans Pi"
by blast
then have "valid_path Pi ?p'"
using assms(1) valid_path_statelist'
by auto
}
moreover {
{
assume C: "\<not>distinct ?p'"
\<comment> \<open>NOTE renamed variable `drop` to `drop'` to avoid shadowing the function of the
same name in Isabelle/HOL.\<close>
then obtain rs pfx drop' tail where C_1: "?p' = pfx @ [rs] @ drop' @ [rs] @ tail"
using not_distinct_decomp[OF C]
by fast
let ?pfxn="length pfx"
have C_2: "?p' ! ?pfxn = rs"
by (simp add: C_1)
from LENGTH_statelist'
have C_3: "length as' + 1 = length ?p'"
by metis
then have "?pfxn \<le> length as'"
using C_1
by fastforce
then have C_4: "exec_plan s (take ?pfxn as') = rs"
using C_2 statelist'_TAKE
by blast
let ?prsd = "length (pfx @ [rs] @ drop')"
let ?ap1 = "take ?pfxn as'"
\<comment> \<open>MARKER (topologicalPropsScript.sml:215)\<close>
from C_1
have C_5: "?p' ! ?prsd = rs"
by (metis append_Cons length_append nth_append_length nth_append_length_plus)
from C_1 C_3
have C_6: "?prsd \<le> length as'"
by simp
then have C_7: "exec_plan s (take ?prsd as') = rs"
using C_5 statelist'_TAKE
by auto
let ?ap2="take ?prsd as'"
let ?asfx="drop ?prsd as'"
have C_8: "as' = ?ap2 @ ?asfx"
by force
then have "exec_plan s as' = exec_plan (exec_plan s ?ap2) ?asfx"
using exec_plan_Append
by metis
then have C_9: "exec_plan s as' = exec_plan s (?ap1 @ ?asfx)"
using C_4 C_7 exec_plan_Append
by metis
from C_6
have C_10: "(length ?ap1 = ?pfxn) \<and> (length ?ap2 = ?prsd)"
by fastforce
then have C_11: "length (?ap1 @ ?asfx) < length (?ap2 @ ?asfx)"
by auto
{
from C_10
have "?pfxn + length ?asfx = length (?ap1 @ ?asfx)"
by simp
from C_9 i(2)
have C_12: "exec_plan s (?ap1 @ ?asfx) = exec_plan s as"
by argo
{
{
{
have "prefix ?ap1 ?ap2"
by (metis (no_types) length_append prefix_def take_add)
then have "subseq ?ap1 ?ap2"
using isPREFIX_sublist
by blast
}
moreover have "sublist ?asfx ?asfx"
using sublist_refl
by blast
ultimately have "subseq (?ap1 @ ?asfx) as'"
using C_8 subseq_append
by metis
}
moreover from i(3)
have "subseq as' as"
by simp
ultimately have "subseq (?ap1 @ ?asfx) as"
using sublist_trans
by blast
}
then have "length (?ap1 @ ?asfx) \<in> PLS s as"
unfolding PLS_def
using C_12
by blast
}
then have False
using P(1) i(1) C_10
by auto
}
hence "distinct ?p'"
by auto
}
ultimately have "length ?p' - 1 \<le> length p - 1"
using 2
by blast
}
note ii = this
{
from i(1) have "n + 1 = length ?p'"
using LENGTH_statelist'[symmetric]
by blast
also have "\<dots> \<le> 1 + (length p - 1)"
using ii
by linarith
finally have "n \<le> length p - 1"
by fastforce
}
then have "n \<le> length p - 1"
by blast
}
ultimately have "Min ?P \<le> length p - 1"
using MIN_SET_ELIM'[where P="?P" and Q="\<lambda>x. x \<le> length p - 1"]
by blast
}
note 3 = this
{
have "length p - 1 \<le> Max {length p - 1 |p. valid_path Pi p \<and> distinct p}"
using assms(3, 4) 1(1)
- by (metis (no_types, lifting) Sup_nat_def assms(3) cSup_eq_maximum)
+ by (smt Max.coboundedI bdd_aboveI bdd_above_nat)
moreover
have "Min (PLS s as) \<le> length p - 1"
using 3
by blast
ultimately
have "Min (PLS s as) \<le> Max {length p - 1 |p. valid_path Pi p \<and> distinct p}"
by linarith
}
then show ?thesis
by blast
qed
\<comment> \<open>NOTE added lemma (refactored from `RD\_bounds\_sublistD`).\<close>
lemma RD_bounds_sublistD_i:
fixes Pi :: "'a problem" and x
assumes "finite Pi" "(\<forall>y. y \<in> MPLS Pi \<longrightarrow> y \<le> x)" "x \<in> MPLS Pi"
shows "x \<le> Max {length p - 1 |p. valid_path Pi p \<and> distinct p}"
proof -
{
let ?P="MPLS Pi"
let ?Q="{length p - 1 |p. valid_path Pi p \<and> distinct p}"
from assms(3)
obtain s as where 1:
"s \<in> valid_states Pi" "as \<in> valid_plans Pi" "x = Inf (PLS s as)"
unfolding MPLS_def
by fast
have "x \<le> Max ?Q" proof -
text \<open> Show that `x` is not only the infimum but also the minimum of `PLS s as`.\<close>
{
have "finite (PLS s as)"
using finite_PLS
by auto
moreover
have "PLS s as \<noteq> {}"
using PLS_NEMPTY
by auto
ultimately
have a: "Inf (PLS s as) = Min (PLS s as)"
using cInf_eq_Min[of "PLS s as"]
by blast
from 1(3) a have "x = Min (PLS s as)"
by blast
}
note a = this
{
let ?limit="Min (PLS s as)"
from assms(1)
have a: "finite ?Q"
using RD_bounds_sublistD_i_a
by blast
have b: "?Q \<noteq> {}"
using RD_bounds_sublistD_i_b
by fast
from 1(1, 2)
have c: "\<forall>x. (\<forall>y. y \<in> ?Q \<longrightarrow> y \<le> x) \<and> x \<in> ?Q \<longrightarrow> ?limit \<le> Max ?Q"
using RD_bounds_sublistD_i_c
by blast
have "?limit \<le> Max ?Q"
using MAX_SET_ELIM'[where P="?Q" and R="\<lambda>x. ?limit \<le> Max ?Q", OF a b c]
by blast
}
note b = this
from a b show "x \<le> Max ?Q"
by blast
qed
}
then show ?thesis
using assms
unfolding MPLS_def
by blast
qed
\<comment> \<open>NOTE type of `Pi` had to be fixed for use of `FINITE\_valid\_states`.\<close>
lemma RD_bounds_sublistD:
fixes Pi :: "'a problem"
assumes "finite Pi"
shows "problem_plan_bound Pi \<le> RD Pi"
proof -
let ?P="MPLS Pi"
let ?Q="{length p - 1 |p. valid_path Pi p \<and> distinct p}"
{
from assms
have 1: "finite ?P"
using FINITE_MPLS
by blast
from assms
have 2: "?P \<noteq> {}"
using MPLS_nempty
by blast
from assms
have 3: "\<forall>x. (\<forall>y. y \<in> ?P \<longrightarrow> y \<le> x) \<and> x \<in> ?P \<longrightarrow> x \<le> Max ?Q"
using RD_bounds_sublistD_i
by blast
have "Max ?P \<le> Max ?Q"
using MAX_SET_ELIM'[OF 1 2 3]
by blast
}
then show ?thesis
unfolding problem_plan_bound_def RD_def Sup_nat_def
- by blast
+ using RD_bounds_sublistD_i_b by auto
qed
\<comment> \<open>NOTE type for `PROB` had to be fixed in order to be able to match `sublistD\_bounds\_D`.\<close>
theorem sublistD_bounds_D_and_RD_bounds_sublistD:
fixes PROB :: "'a problem"
assumes "finite PROB"
shows "
problem_plan_bound_charles PROB \<le> problem_plan_bound PROB
\<and> problem_plan_bound PROB \<le> RD PROB
"
using assms sublistD_bounds_D RD_bounds_sublistD
by auto
\<comment> \<open>NOTE type of `PROB` had to be fixed for MP of lemmas.\<close>
lemma empty_problem_bound:
fixes PROB :: "'a problem"
assumes "(prob_dom PROB = {})"
shows "(problem_plan_bound PROB = 0)"
proof -
{
fix PROB' and as :: "(('a, 'b) fmap \<times> ('a, 'b) fmap) list" and s :: "('a, 'b) fmap"
assume
"finite PROB" "prob_dom PROB' = {}" "s \<in> valid_states PROB'" "as \<in> valid_plans PROB'"
then have "exec_plan s [] = exec_plan s as"
using empty_prob_dom_imp_empty_plan_always_good
by blast
then have "(\<exists>as'. exec_plan s as = exec_plan s as' \<and> subseq as' as \<and> length as' < 1)"
by force
}
then show ?thesis
using bound_on_all_plans_bounds_problem_plan_bound_[where P="\<lambda>P. prob_dom P = {}" and f="\<lambda>P. 1", of PROB]
using assms empty_prob_dom_finite
by blast
qed
lemma problem_plan_bound_works':
fixes PROB :: "'a problem" and as s
assumes "finite PROB" "(s \<in> valid_states PROB)" "(as \<in> valid_plans PROB)"
shows "(\<exists>as'.
(exec_plan s as' = exec_plan s as)
\<and> (subseq as' as)
\<and> (length as' \<le> problem_plan_bound PROB)
\<and> (sat_precond_as s as')
)"
proof -
obtain as' where 1:
"exec_plan s as = exec_plan s as'" "subseq as' as" "length as' \<le> problem_plan_bound PROB"
using assms problem_plan_bound_works
by blast
\<comment> \<open>NOTE this step seems to be handled implicitly in the original proof.\<close>
moreover have "rem_condless_act s [] as' \<in> valid_plans PROB"
using assms(3) 1(2) rem_condless_valid_10 sublist_valid_plan
by blast
moreover have "subseq (rem_condless_act s [] as') as'"
using rem_condless_valid_8
by blast
moreover have "length (rem_condless_act s [] as') \<le> length as'"
using rem_condless_valid_3
by blast
moreover have "sat_precond_as s (rem_condless_act s [] as')"
using rem_condless_valid_2
by blast
moreover have "exec_plan s as' = exec_plan s (rem_condless_act s [] as')"
using rem_condless_valid_1
by blast
ultimately show ?thesis
by fastforce
qed
\<comment> \<open>TODO remove? Can be solved directly with 'TopologicalProps.bound\_on\_all\_plans\_bounds\_problem\_plan\_bound\_thesis'.\<close>
lemma problem_plan_bound_UBound:
assumes "(\<forall>as s.
(s \<in> valid_states PROB)
\<and> (as \<in> valid_plans PROB)
\<longrightarrow> (\<exists>as'.
(exec_plan s as = exec_plan s as')
\<and> subseq as' as
\<and> (length as' < f PROB)
)
)" "finite PROB"
shows "(problem_plan_bound PROB < f PROB)"
proof -
let ?P = "\<lambda>Pr. PROB = Pr"
have "?P PROB" by simp
then show ?thesis
using assms bound_on_all_plans_bounds_problem_plan_bound_[where P = ?P]
by force
qed
subsection "Traversal Diameter"
\<comment> \<open>NOTE name shortened.\<close>
definition traversed_states where
"traversed_states s as \<equiv> set (state_list s as)"
lemma finite_traversed_states: "finite (traversed_states s as)"
unfolding traversed_states_def
by simp
lemma traversed_states_nempty: "traversed_states s as \<noteq> {}"
unfolding traversed_states_def
by (induction as) auto
lemma traversed_states_geq_1:
fixes s
shows "1 \<le> card (traversed_states s as)"
proof -
have "card (traversed_states s as) \<noteq> 0"
using traversed_states_nempty finite_traversed_states card_0_eq
by blast
then show "1 \<le> card (traversed_states s as)"
by linarith
qed
lemma init_is_traversed: "s \<in> traversed_states s as"
unfolding traversed_states_def
by (induction as) auto
\<comment> \<open>NOTE name shortened.\<close>
definition td where
"td PROB \<equiv> Sup {
(card (traversed_states (fst p) (snd p))) - 1
| p. (fst p \<in> valid_states PROB) \<and> (snd p \<in> valid_plans PROB)}
"
lemma traversed_states_rem_condless_act: "\<And>s.
traversed_states s (rem_condless_act s [] as) = traversed_states s as
"
apply(induction as)
apply(auto simp add: traversed_states_def rem_condless_act_cons)
subgoal by (simp add: state_succ_pair)
subgoal using init_is_traversed traversed_states_def by blast
subgoal by (simp add: state_succ_pair)
done
\<comment> \<open>NOTE added lemma.\<close>
lemma td_UBound_i:
fixes PROB :: "(('a, 'b) fmap \<times> ('a, 'b) fmap) set"
assumes "finite PROB"
shows "
{
(card (traversed_states (fst p) (snd p))) - 1
| p. (fst p \<in> valid_states PROB) \<and> (snd p \<in> valid_plans PROB)}
\<noteq> {}
"
proof -
let ?S="{p. (fst p \<in> valid_states PROB) \<and> (snd p \<in> valid_plans PROB)}"
obtain s :: "'a state" where "s \<in> valid_states PROB"
using assms valid_states_nempty
by blast
moreover have "[] \<in> valid_plans PROB"
using empty_plan_is_valid
by auto
ultimately have "?S \<noteq> {}"
using assms valid_states_nempty
by auto
then show ?thesis
by blast
qed
lemma td_UBound:
fixes PROB :: "(('a, 'b) fmap \<times> ('a, 'b) fmap) set"
assumes "finite PROB" "(\<forall>s as.
(sat_precond_as s as) \<and> (s \<in> valid_states PROB) \<and> (as \<in> valid_plans PROB)
\<longrightarrow> (card (traversed_states s as) \<le> k)
)"
shows "(td PROB \<le> k - 1)"
proof -
let ?S="{
(card (traversed_states (fst p) (snd p))) - 1
| p. (fst p \<in> valid_states PROB) \<and> (snd p \<in> valid_plans PROB)}
"
{
fix x
assume "x \<in> ?S"
then obtain p where 1:
"x = card (traversed_states (fst p) (snd p)) - 1" "fst p \<in> valid_states PROB"
"snd p \<in> valid_plans PROB"
by blast
let ?s="fst p"
let ?as="snd p"
{
let ?as'="(rem_condless_act ?s [] ?as)"
have 2: "traversed_states ?s ?as = traversed_states ?s ?as'"
using traversed_states_rem_condless_act
by blast
moreover have "sat_precond_as ?s ?as'"
using rem_condless_valid_2
by blast
moreover have "?as' \<in> valid_plans PROB"
using 1(3) rem_condless_valid_10
by blast
ultimately have "card (traversed_states ?s ?as') \<le> k"
using assms(2) 1(2)
by blast
then have "card (traversed_states ?s ?as) \<le> k"
using 2
by argo
}
then have "x \<le> k - 1"
using 1
by linarith
}
moreover have "?S \<noteq> {}"
using assms td_UBound_i
by fast
ultimately show ?thesis
unfolding td_def
using td_UBound_i bound_main_lemma_2[of ?S "k - 1"]
by presburger
qed
end
\ No newline at end of file
diff --git a/thys/Finger-Trees/FingerTree.thy b/thys/Finger-Trees/FingerTree.thy
--- a/thys/Finger-Trees/FingerTree.thy
+++ b/thys/Finger-Trees/FingerTree.thy
@@ -1,2652 +1,2651 @@
section "2-3 Finger Trees"
theory FingerTree
imports Main
begin
text \<open>
We implement and prove correct 2-3 finger trees as described by Ralf Hinze
and Ross Paterson\cite{HiPa06}.
\<close>
text \<open>
This theory is organized as follows:
Section~\ref{sec:datatype} contains the finger-tree datatype, its invariant
and its abstraction function to lists.
Section~\ref{sec:operations} contains the operations
on finger trees and their correctness lemmas.
Section~\ref{sec:hide_invar} contains a finger tree datatype with implicit
invariant, and, finally, Section~\ref{sec:doc} documents
the implemented operations.
\<close>
text_raw \<open>\paragraph{Technical Issues}\<close>
text \<open>
As Isabelle lacks proper support for namespaces, we
simulate namespaces by locales.
The problem is that we define many internal functions that
should not be exposed to the user at all.
Moreover, we define some functions whose names coincide with names
from Isabelle's standard library. These names make perfect sense
in the context of FingerTrees, but they should not be exposed
to anyone using this theory indirectly, as they would hide the
standard library names there.
Our approach puts all functions and lemmas inside the locale
{\em FingerTree\_loc},
and then interprets this locale with the prefix {\em FingerTree}.
This makes all definitions visible outside the locale, with
qualified names. Inside the locale, however, one can use unqualified names.
\<close>
subsection "Datatype definition"
text_raw\<open>\label{sec:datatype}\<close>
locale FingerTreeStruc_loc
text \<open>
Nodes: non-empty 2-3 trees with all elements stored in the leaves, plus a
cached annotation
\<close>
datatype ('e,'a) Node = Tip 'e 'a |
Node2 'a "('e,'a) Node" "('e,'a) Node" |
Node3 'a "('e,'a) Node" "('e,'a) Node" "('e,'a) Node"
text \<open>Digit: one to four ordered Nodes\<close>
datatype ('e,'a) Digit = One "('e,'a) Node" |
Two "('e,'a) Node" "('e,'a) Node" |
Three "('e,'a) Node" "('e,'a) Node" "('e,'a) Node" |
Four "('e,'a) Node" "('e,'a) Node" "('e,'a) Node" "('e,'a) Node"
text \<open>FingerTreeStruc:
The empty tree, a single node or some nodes and a deeper tree\<close>
datatype ('e, 'a) FingerTreeStruc =
Empty |
Single "('e,'a) Node" |
Deep 'a "('e,'a) Digit" "('e,'a) FingerTreeStruc" "('e,'a) Digit"
subsubsection "Invariant"
context FingerTreeStruc_loc
begin
text_raw \<open>\paragraph{Auxiliary functions}\ \\\<close>
text \<open>Read out the cached annotation of a node\<close>
primrec gmn :: "('e,'a::monoid_add) Node \<Rightarrow> 'a" where
"gmn (Tip e a) = a" |
"gmn (Node2 a _ _) = a" |
"gmn (Node3 a _ _ _) = a"
text \<open>The annotation of a digit is computed on the fly\<close>
primrec gmd :: "('e,'a::monoid_add) Digit \<Rightarrow> 'a" where
"gmd (One a) = gmn a" |
"gmd (Two a b) = (gmn a) + (gmn b)"|
"gmd (Three a b c) = (gmn a) + (gmn b) + (gmn c)"|
"gmd (Four a b c d) = (gmn a) + (gmn b) + (gmn c) + (gmn d)"
text \<open>Read out the cached annotation of a finger tree\<close>
primrec gmft :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> 'a" where
"gmft Empty = 0" |
"gmft (Single nd) = gmn nd" |
"gmft (Deep a _ _ _) = a"
text \<open>Depth and cached annotations have to be correct\<close>
fun is_leveln_node :: "nat \<Rightarrow> ('e,'a) Node \<Rightarrow> bool" where
"is_leveln_node 0 (Tip _ _) \<longleftrightarrow> True" |
"is_leveln_node (Suc n) (Node2 _ n1 n2) \<longleftrightarrow>
is_leveln_node n n1 \<and> is_leveln_node n n2" |
"is_leveln_node (Suc n) (Node3 _ n1 n2 n3) \<longleftrightarrow>
is_leveln_node n n1 \<and> is_leveln_node n n2 \<and> is_leveln_node n n3" |
"is_leveln_node _ _ \<longleftrightarrow> False"
primrec is_leveln_digit :: "nat \<Rightarrow> ('e,'a) Digit \<Rightarrow> bool" where
"is_leveln_digit n (One n1) \<longleftrightarrow> is_leveln_node n n1" |
"is_leveln_digit n (Two n1 n2) \<longleftrightarrow> is_leveln_node n n1 \<and>
is_leveln_node n n2" |
"is_leveln_digit n (Three n1 n2 n3) \<longleftrightarrow> is_leveln_node n n1 \<and>
is_leveln_node n n2 \<and> is_leveln_node n n3" |
"is_leveln_digit n (Four n1 n2 n3 n4) \<longleftrightarrow> is_leveln_node n n1 \<and>
is_leveln_node n n2 \<and> is_leveln_node n n3 \<and> is_leveln_node n n4"
primrec is_leveln_ftree :: "nat \<Rightarrow> ('e,'a) FingerTreeStruc \<Rightarrow> bool" where
"is_leveln_ftree n Empty \<longleftrightarrow> True" |
"is_leveln_ftree n (Single nd) \<longleftrightarrow> is_leveln_node n nd" |
"is_leveln_ftree n (Deep _ l t r) \<longleftrightarrow> is_leveln_digit n l \<and>
is_leveln_digit n r \<and> is_leveln_ftree (Suc n) t"
primrec is_measured_node :: "('e,'a::monoid_add) Node \<Rightarrow> bool" where
"is_measured_node (Tip _ _) \<longleftrightarrow> True" |
"is_measured_node (Node2 a n1 n2) \<longleftrightarrow> ((is_measured_node n1) \<and>
(is_measured_node n2)) \<and> (a = (gmn n1) + (gmn n2))" |
"is_measured_node (Node3 a n1 n2 n3) \<longleftrightarrow> ((is_measured_node n1) \<and>
(is_measured_node n2) \<and> (is_measured_node n3)) \<and>
(a = (gmn n1) + (gmn n2) + (gmn n3))"
primrec is_measured_digit :: "('e,'a::monoid_add) Digit \<Rightarrow> bool" where
"is_measured_digit (One a) = is_measured_node a" |
"is_measured_digit (Two a b) =
((is_measured_node a) \<and> (is_measured_node b))"|
"is_measured_digit (Three a b c) =
((is_measured_node a) \<and> (is_measured_node b) \<and> (is_measured_node c))"|
"is_measured_digit (Four a b c d) = ((is_measured_node a) \<and>
(is_measured_node b) \<and> (is_measured_node c) \<and> (is_measured_node d))"
primrec is_measured_ftree :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> bool" where
"is_measured_ftree Empty \<longleftrightarrow> True" |
"is_measured_ftree (Single n1) \<longleftrightarrow> (is_measured_node n1)" |
"is_measured_ftree (Deep a l m r) \<longleftrightarrow> ((is_measured_digit l) \<and>
(is_measured_ftree m) \<and> (is_measured_digit r)) \<and>
(a = ((gmd l) + (gmft m) + (gmd r)))"
text "Structural invariant for finger trees"
definition "ft_invar t == is_leveln_ftree 0 t \<and> is_measured_ftree t"
subsubsection "Abstraction to Lists"
primrec nodeToList :: "('e,'a) Node \<Rightarrow> ('e \<times> 'a) list" where
"nodeToList (Tip e a) = [(e,a)]"|
"nodeToList (Node2 _ a b) = (nodeToList a) @ (nodeToList b)"|
"nodeToList (Node3 _ a b c)
= (nodeToList a) @ (nodeToList b) @ (nodeToList c)"
primrec digitToList :: "('e,'a) Digit \<Rightarrow> ('e \<times> 'a) list" where
"digitToList (One a) = nodeToList a"|
"digitToList (Two a b) = (nodeToList a) @ (nodeToList b)"|
"digitToList (Three a b c)
= (nodeToList a) @ (nodeToList b) @ (nodeToList c)"|
"digitToList (Four a b c d)
= (nodeToList a) @ (nodeToList b) @ (nodeToList c) @ (nodeToList d)"
text "List representation of a finger tree"
primrec toList :: "('e ,'a) FingerTreeStruc \<Rightarrow> ('e \<times> 'a) list" where
"toList Empty = []"|
"toList (Single a) = nodeToList a"|
"toList (Deep _ pr m sf) = (digitToList pr) @ (toList m) @ (digitToList sf)"
lemma nodeToList_empty: "nodeToList nd \<noteq> Nil"
by (induct nd) auto
lemma digitToList_empty: "digitToList d \<noteq> Nil"
by (cases d, auto simp add: nodeToList_empty)
text \<open>Auxiliary lemmas\<close>
lemma gmn_correct:
assumes "is_measured_node nd"
shows "gmn nd = sum_list (map snd (nodeToList nd))"
by (insert assms, induct nd) (auto simp add: add.assoc)
lemma gmd_correct:
assumes "is_measured_digit d"
shows "gmd d = sum_list (map snd (digitToList d))"
by (insert assms, cases d, auto simp add: gmn_correct add.assoc)
lemma gmft_correct: "is_measured_ftree t
\<Longrightarrow> (gmft t) = sum_list (map snd (toList t))"
by (induct t, auto simp add: ft_invar_def gmd_correct gmn_correct add.assoc)
lemma gmft_correct2: "ft_invar t \<Longrightarrow> (gmft t) = sum_list (map snd (toList t))"
by (simp only: ft_invar_def gmft_correct)
subsection \<open>Operations\<close>
text_raw\<open>\label{sec:operations}\<close>
subsubsection \<open>Empty tree\<close>
lemma Empty_correct[simp]:
"toList Empty = []"
"ft_invar Empty"
by (simp_all add: ft_invar_def)
text \<open>Only the empty finger tree represents the empty list\<close>
lemma toList_empty: "toList t = [] \<longleftrightarrow> t = Empty"
by (induct t, auto simp add: nodeToList_empty digitToList_empty)
subsubsection \<open>Annotation\<close>
text "Sum of annotations of all elements of a finger tree"
definition annot :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> 'a"
where "annot t = gmft t"
lemma annot_correct:
"ft_invar t \<Longrightarrow> annot t = sum_list (map snd (toList t))"
using gmft_correct
unfolding annot_def
by (simp add: gmft_correct2)
subsubsection \<open>Appending\<close>
text \<open>Auxiliary functions to fill in the annotations\<close>
definition deep:: "('e,'a::monoid_add) Digit \<Rightarrow> ('e,'a) FingerTreeStruc
\<Rightarrow> ('e,'a) Digit \<Rightarrow> ('e, 'a) FingerTreeStruc" where
"deep pr m sf = Deep ((gmd pr) + (gmft m) + (gmd sf)) pr m sf"
definition node2 where
"node2 nd1 nd2 = Node2 ((gmn nd1)+(gmn nd2)) nd1 nd2"
definition node3 where
"node3 nd1 nd2 nd3 = Node3 ((gmn nd1)+(gmn nd2)+(gmn nd3)) nd1 nd2 nd3"
text "Append a node at the left end"
fun nlcons :: "('e,'a::monoid_add) Node \<Rightarrow> ('e,'a) FingerTreeStruc
\<Rightarrow> ('e,'a) FingerTreeStruc"
where
\<comment> \<open>Recursively we append a node; if the digit is full, we push down a node3\<close>
"nlcons a Empty = Single a" |
"nlcons a (Single b) = deep (One a) Empty (One b)" |
"nlcons a (Deep _ (One b) m sf) = deep (Two a b) m sf" |
"nlcons a (Deep _ (Two b c) m sf) = deep (Three a b c) m sf" |
"nlcons a (Deep _ (Three b c d) m sf) = deep (Four a b c d) m sf" |
"nlcons a (Deep _ (Four b c d e) m sf)
= deep (Two a b) (nlcons (node3 c d e) m) sf"
text "Append a node at the right end"
fun nrcons :: "('e,'a::monoid_add) FingerTreeStruc
\<Rightarrow> ('e,'a) Node \<Rightarrow> ('e,'a) FingerTreeStruc" where
\<comment> \<open>Recursively we append a node; if the digit is full, we push down a node3\<close>
"nrcons Empty a = Single a" |
"nrcons (Single b) a = deep (One b) Empty (One a)" |
"nrcons (Deep _ pr m (One b)) a = deep pr m (Two b a)"|
"nrcons (Deep _ pr m (Two b c)) a = deep pr m (Three b c a)" |
"nrcons (Deep _ pr m (Three b c d)) a = deep pr m (Four b c d a)" |
"nrcons (Deep _ pr m (Four b c d e)) a
= deep pr (nrcons m (node3 b c d)) (Two e a)"
lemma nlcons_invlevel: "\<lbrakk>is_leveln_ftree n t; is_leveln_node n nd\<rbrakk>
\<Longrightarrow> is_leveln_ftree n (nlcons nd t)"
by (induct t arbitrary: n nd rule: nlcons.induct)
(auto simp add: deep_def node3_def)
lemma nlcons_invmeas: "\<lbrakk>is_measured_ftree t; is_measured_node nd\<rbrakk>
\<Longrightarrow> is_measured_ftree (nlcons nd t)"
by (induct t arbitrary: nd rule: nlcons.induct)
(auto simp add: deep_def node3_def)
lemmas nlcons_inv = nlcons_invlevel nlcons_invmeas
lemma nlcons_list: "toList (nlcons a t) = (nodeToList a) @ (toList t)"
apply (induct t arbitrary: a rule: nlcons.induct)
apply (auto simp add: deep_def toList_def node3_def)
done
lemma nrcons_invlevel: "\<lbrakk>is_leveln_ftree n t; is_leveln_node n nd\<rbrakk>
\<Longrightarrow> is_leveln_ftree n (nrcons t nd)"
apply (induct t nd arbitrary: nd n rule:nrcons.induct)
apply(auto simp add: deep_def node3_def)
done
lemma nrcons_invmeas: "\<lbrakk>is_measured_ftree t; is_measured_node nd\<rbrakk>
\<Longrightarrow> is_measured_ftree (nrcons t nd)"
apply (induct t nd arbitrary: nd rule:nrcons.induct)
apply(auto simp add: deep_def node3_def)
done
lemmas nrcons_inv = nrcons_invlevel nrcons_invmeas
lemma nrcons_list: "toList (nrcons t a) = (toList t) @ (nodeToList a)"
apply (induct t a arbitrary: a rule: nrcons.induct)
apply (auto simp add: deep_def toList_def node3_def)
done
text "Append an element at the left end"
definition lcons :: "('e \<times> 'a::monoid_add)
\<Rightarrow> ('e,'a) FingerTreeStruc \<Rightarrow> ('e,'a) FingerTreeStruc" (infixr "\<lhd>" 65) where
"a \<lhd> t = nlcons (Tip (fst a) (snd a)) t"
lemma lcons_correct:
assumes "ft_invar t"
shows "ft_invar (a \<lhd> t)" and "toList (a \<lhd> t) = a # (toList t)"
using assms
unfolding ft_invar_def
by (simp_all add: lcons_def nlcons_list nlcons_invlevel nlcons_invmeas)
lemma lcons_inv:"ft_invar t \<Longrightarrow> ft_invar (a \<lhd> t)"
by (rule lcons_correct)
lemma lcons_list[simp]: "toList (a \<lhd> t) = a # (toList t)"
by (simp add: lcons_def nlcons_list)
text "Append an element at the right end"
definition rcons
:: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e \<times> 'a) \<Rightarrow> ('e,'a) FingerTreeStruc"
(infixl "\<rhd>" 65) where
"t \<rhd> a = nrcons t (Tip (fst a) (snd a))"
lemma rcons_correct:
assumes "ft_invar t"
shows "ft_invar (t \<rhd> a)" and "toList (t \<rhd> a) = (toList t) @ [a]"
using assms
by (auto simp add: nrcons_inv ft_invar_def rcons_def nrcons_list)
lemma rcons_inv:"ft_invar t \<Longrightarrow> ft_invar (t \<rhd> a)"
by (rule rcons_correct)
lemma rcons_list[simp]: "toList (t \<rhd> a) = (toList t) @ [a]"
by(auto simp add: nrcons_list rcons_def)
subsubsection \<open>Convert list to tree\<close>
primrec toTree :: "('e \<times> 'a::monoid_add) list \<Rightarrow> ('e,'a) FingerTreeStruc" where
"toTree [] = Empty"|
"toTree (a#xs) = a \<lhd> (toTree xs)"
lemma toTree_correct[simp]:
"ft_invar (toTree l)"
"toList (toTree l) = l"
apply (induct l)
apply (simp add: ft_invar_def)
apply simp
apply (simp add: toTree_def lcons_list lcons_inv)
apply (simp add: toTree_def lcons_list lcons_inv)
done
text \<open>
Note that this lemma is a completeness statement of our implementation,
as it can be read as:
``All lists of elements have a valid representation as a finger tree.''
\<close>
subsubsection \<open>Detaching leftmost/rightmost element\<close>
primrec digitToTree :: "('e,'a::monoid_add) Digit \<Rightarrow> ('e,'a) FingerTreeStruc"
where
"digitToTree (One a) = Single a"|
"digitToTree (Two a b) = deep (One a) Empty (One b)"|
"digitToTree (Three a b c) = deep (Two a b) Empty (One c)"|
"digitToTree (Four a b c d) = deep (Two a b) Empty (Two c d)"
primrec nodeToDigit :: "('e,'a) Node \<Rightarrow> ('e,'a) Digit" where
"nodeToDigit (Tip e a) = One (Tip e a)"|
"nodeToDigit (Node2 _ a b) = Two a b"|
"nodeToDigit (Node3 _ a b c) = Three a b c"
fun nlistToDigit :: "('e,'a) Node list \<Rightarrow> ('e,'a) Digit" where
"nlistToDigit [a] = One a" |
"nlistToDigit [a,b] = Two a b" |
"nlistToDigit [a,b,c] = Three a b c" |
"nlistToDigit [a,b,c,d] = Four a b c d"
primrec digitToNlist :: "('e,'a) Digit \<Rightarrow> ('e,'a) Node list" where
"digitToNlist (One a) = [a]" |
"digitToNlist (Two a b) = [a,b] " |
"digitToNlist (Three a b c) = [a,b,c]" |
"digitToNlist (Four a b c d) = [a,b,c,d]"
text \<open>Auxiliary function to unwrap a Node element\<close>
primrec n_unwrap:: "('e,'a) Node \<Rightarrow> ('e \<times> 'a)" where
"n_unwrap (Tip e a) = (e,a)"|
"n_unwrap (Node2 _ a b) = undefined"|
"n_unwrap (Node3 _ a b c) = undefined"
type_synonym ('e,'a) ViewnRes = "(('e,'a) Node \<times> ('e,'a) FingerTreeStruc) option"
lemma viewnres_cases:
fixes r :: "('e,'a) ViewnRes"
obtains (Nil) "r=None" |
(Cons) a t where "r=Some (a,t)"
by (cases r) auto
lemma viewnres_split:
"P (case_option f1 (case_prod f2) x) =
((x = None \<longrightarrow> P f1) \<and> (\<forall>a b. x = Some (a,b) \<longrightarrow> P (f2 a b)))"
by (auto split: option.split prod.split)
text \<open>Detach the leftmost node. Return @{const None} on empty finger tree.\<close>
fun viewLn :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e,'a) ViewnRes" where
"viewLn Empty = None"|
"viewLn (Single a) = Some (a, Empty)"|
"viewLn (Deep _ (Two a b) m sf) = Some (a, (deep (One b) m sf))"|
"viewLn (Deep _ (Three a b c) m sf) = Some (a, (deep (Two b c) m sf))"|
"viewLn (Deep _ (Four a b c d) m sf) = Some (a, (deep (Three b c d) m sf))"|
"viewLn (Deep _ (One a) m sf) =
(case viewLn m of
None \<Rightarrow> Some (a, (digitToTree sf)) |
Some (b, m2) \<Rightarrow> Some (a, (deep (nodeToDigit b) m2 sf)))"
text \<open>Detach the rightmost node. Return @{const None} on empty finger tree.\<close>
fun viewRn :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e,'a) ViewnRes" where
"viewRn Empty = None" |
"viewRn (Single a) = Some (a, Empty)" |
"viewRn (Deep _ pr m (Two a b)) = Some (b, (deep pr m (One a)))" |
"viewRn (Deep _ pr m (Three a b c)) = Some (c, (deep pr m (Two a b)))" |
"viewRn (Deep _ pr m (Four a b c d)) = Some (d, (deep pr m (Three a b c)))" |
"viewRn (Deep _ pr m (One a)) =
(case viewRn m of
None \<Rightarrow> Some (a, (digitToTree pr))|
Some (b, m2) \<Rightarrow> Some (a, (deep pr m2 (nodeToDigit b))))"
(* TODO: head and last can also be done in O(1)! *)
lemma
digitToTree_inv: "is_leveln_digit n d \<Longrightarrow> is_leveln_ftree n (digitToTree d)"
"is_measured_digit d \<Longrightarrow> is_measured_ftree (digitToTree d)"
apply (cases d,auto simp add: deep_def)
apply (cases d,auto simp add: deep_def)
done
lemma digitToTree_list: "toList (digitToTree d) = digitToList d"
by (cases d) (auto simp add: deep_def)
lemma nodeToDigit_inv:
"is_leveln_node (Suc n) nd \<Longrightarrow> is_leveln_digit n (nodeToDigit nd) "
"is_measured_node nd \<Longrightarrow> is_measured_digit (nodeToDigit nd)"
by (cases nd, auto) (cases nd, auto)
lemma nodeToDigit_list: "digitToList (nodeToDigit nd) = nodeToList nd"
by (cases nd,auto)
lemma viewLn_empty: "t \<noteq> Empty \<longleftrightarrow> (viewLn t) \<noteq> None"
proof (cases t)
case Empty thus ?thesis by simp
next
case (Single Node) thus ?thesis by simp
next
case (Deep a l x r) thus ?thesis
apply(auto)
apply(case_tac l)
apply(auto)
apply(cases "viewLn x")
apply(auto)
done
qed
lemma viewLn_inv: "\<lbrakk>
is_measured_ftree t; is_leveln_ftree n t; viewLn t = Some (nd, s)
\<rbrakk> \<Longrightarrow> is_measured_ftree s \<and> is_measured_node nd \<and>
is_leveln_ftree n s \<and> is_leveln_node n nd"
apply(induct t arbitrary: n nd s rule: viewLn.induct)
apply(simp add: viewLn_empty)
apply(simp)
apply(auto simp add: deep_def)[1]
apply(auto simp add: deep_def)[1]
apply(auto simp add: deep_def)[1]
proof -
fix ux a m sf n nd s
assume av: "\<And>n nd s.
\<lbrakk>is_measured_ftree m; is_leveln_ftree n m; viewLn m = Some (nd, s)\<rbrakk>
\<Longrightarrow> is_measured_ftree s \<and>
is_measured_node nd \<and> is_leveln_ftree n s \<and> is_leveln_node n nd "
" is_measured_ftree (Deep ux (One a) m sf) "
"is_leveln_ftree n (Deep ux (One a) m sf)"
"viewLn (Deep ux (One a) m sf) = Some (nd, s)"
thus "is_measured_ftree s \<and>
is_measured_node nd \<and> is_leveln_ftree n s \<and> is_leveln_node n nd"
proof (cases "viewLn m" rule: viewnres_cases)
case Nil
with av(4) have v1: "nd = a" "s = digitToTree sf"
by auto
from v1 av(2,3) show "is_measured_ftree s \<and>
is_measured_node nd \<and> is_leveln_ftree n s \<and> is_leveln_node n nd"
apply(auto)
apply(auto simp add: digitToTree_inv)
done
next
case (Cons b m2)
with av(4) have v2: "nd = a" "s = (deep (nodeToDigit b) m2 sf)"
apply (auto simp add: deep_def)
done
note myiv = av(1)[of "Suc n" b m2]
from v2 av(2,3) have "is_measured_ftree m \<and> is_leveln_ftree (Suc n) m"
apply(simp)
done
hence bv: "is_measured_ftree m2 \<and>
is_measured_node b \<and> is_leveln_ftree (Suc n) m2 \<and> is_leveln_node (Suc n) b"
using myiv Cons
apply(simp)
done
with av(2,3) v2 show "is_measured_ftree s \<and>
is_measured_node nd \<and> is_leveln_ftree n s \<and> is_leveln_node n nd"
apply(auto simp add: deep_def nodeToDigit_inv)
done
qed
qed
lemma viewLn_list: " viewLn t = Some (nd, s)
\<Longrightarrow> toList t = (nodeToList nd) @ (toList s)"
apply(induct t arbitrary: nd s rule: viewLn.induct)
apply(simp)
apply(simp)
apply(simp)
apply(simp add: deep_def)
apply(auto simp add: toList_def)[1]
apply(simp)
apply(simp add: deep_def)
apply(auto simp add: toList_def)[1]
apply(simp)
apply(simp add: deep_def)
apply(auto simp add: toList_def)[1]
apply(simp)
subgoal premises prems for a m sf nd s
using prems
proof (cases "viewLn m" rule: viewnres_cases)
case Nil
hence av: "m = Empty" by (metis viewLn_empty)
from av prems
show "nodeToList a @ toList m @ digitToList sf = nodeToList nd @ toList s"
by (auto simp add: digitToTree_list)
next
case (Cons b m2)
with prems have bv: "nd = a" "s = (deep (nodeToDigit b) m2 sf)"
by (auto simp add: deep_def)
with Cons prems
show "nodeToList a @ toList m @ digitToList sf = nodeToList nd @ toList s"
apply(simp)
apply(simp add: deep_def)
apply(simp add: deep_def nodeToDigit_list)
done
qed
done
lemma viewRn_empty: "t \<noteq> Empty \<longleftrightarrow> (viewRn t) \<noteq> None"
proof (cases t)
case Empty thus ?thesis by simp
next
case (Single Node) thus ?thesis by simp
next
case (Deep a l x r) thus ?thesis
apply(auto)
apply(case_tac r)
apply(auto)
apply(cases "viewRn x")
apply(auto)
done
qed
lemma viewRn_inv: "\<lbrakk>
is_measured_ftree t; is_leveln_ftree n t; viewRn t = Some (nd, s)
\<rbrakk> \<Longrightarrow> is_measured_ftree s \<and> is_measured_node nd \<and>
is_leveln_ftree n s \<and> is_leveln_node n nd"
apply(induct t arbitrary: n nd s rule: viewRn.induct)
apply(simp add: viewRn_empty)
apply(simp)
apply(auto simp add: deep_def)[1]
apply(auto simp add: deep_def)[1]
apply(auto simp add: deep_def)[1]
proof -
fix ux a m "pr" n nd s
assume av: "\<And>n nd s.
\<lbrakk>is_measured_ftree m; is_leveln_ftree n m; viewRn m = Some (nd, s)\<rbrakk>
\<Longrightarrow> is_measured_ftree s \<and>
is_measured_node nd \<and> is_leveln_ftree n s \<and> is_leveln_node n nd "
" is_measured_ftree (Deep ux pr m (One a)) "
"is_leveln_ftree n (Deep ux pr m (One a))"
"viewRn (Deep ux pr m (One a)) = Some (nd, s)"
thus "is_measured_ftree s \<and>
is_measured_node nd \<and> is_leveln_ftree n s \<and> is_leveln_node n nd"
proof (cases "viewRn m" rule: viewnres_cases)
case Nil
with av(4) have v1: "nd = a" "s = digitToTree pr"
by auto
from v1 av(2,3) show "is_measured_ftree s \<and>
is_measured_node nd \<and> is_leveln_ftree n s \<and> is_leveln_node n nd"
apply(auto)
apply(auto simp add: digitToTree_inv)
done
next
case (Cons b m2)
with av(4) have v2: "nd = a" "s = (deep pr m2 (nodeToDigit b))"
apply (auto simp add: deep_def)
done
note myiv = av(1)[of "Suc n" b m2]
from v2 av(2,3) have "is_measured_ftree m \<and> is_leveln_ftree (Suc n) m"
apply(simp)
done
hence bv: "is_measured_ftree m2 \<and>
is_measured_node b \<and> is_leveln_ftree (Suc n) m2 \<and> is_leveln_node (Suc n) b"
using myiv Cons
apply(simp)
done
with av(2,3) v2 show "is_measured_ftree s \<and>
is_measured_node nd \<and> is_leveln_ftree n s \<and> is_leveln_node n nd"
apply(auto simp add: deep_def nodeToDigit_inv)
done
qed
qed
lemma viewRn_list: "viewRn t = Some (nd, s)
\<Longrightarrow> toList t = (toList s) @ (nodeToList nd)"
apply(induct t arbitrary: nd s rule: viewRn.induct)
apply(simp)
apply(simp)
apply(simp)
apply(simp add: deep_def)
apply(auto simp add: toList_def)[1]
apply(simp)
apply(simp add: deep_def)
apply(auto simp add: toList_def)[1]
apply(simp)
apply(simp add: deep_def)
apply(auto simp add: toList_def)[1]
apply(simp)
subgoal premises prems for pr m a nd s
proof (cases "viewRn m" rule: viewnres_cases)
case Nil
from Nil have av: "m = Empty" by (metis viewRn_empty)
from av prems
show "digitToList pr @ toList m @ nodeToList a = toList s @ nodeToList nd"
by (auto simp add: digitToTree_list)
next
case (Cons b m2)
with prems have bv: "nd = a" "s = (deep pr m2 (nodeToDigit b))"
apply(auto simp add: deep_def) done
with Cons prems
show "digitToList pr @ toList m @ nodeToList a = toList s @ nodeToList nd"
apply(simp)
apply(simp add: deep_def)
apply(simp add: deep_def nodeToDigit_list)
done
qed
done
-
type_synonym ('e,'a) viewres = "(('e \<times>'a) \<times> ('e,'a) FingerTreeStruc) option"
text \<open>Detach the leftmost element. Return @{const None} on empty finger tree.\<close>
definition viewL :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e,'a) viewres"
where
"viewL t = (case viewLn t of
None \<Rightarrow> None |
(Some (a, t2)) \<Rightarrow> Some ((n_unwrap a), t2))"
lemma viewL_correct:
assumes INV: "ft_invar t"
shows
"(t=Empty \<Longrightarrow> viewL t = None)"
"(t\<noteq>Empty \<Longrightarrow> (\<exists>a s. viewL t = Some (a, s) \<and> ft_invar s
\<and> toList t = a # toList s))"
proof -
assume "t=Empty" thus "viewL t = None" by (simp add: viewL_def)
next
assume NE: "t \<noteq> Empty"
from INV have INV': "is_leveln_ftree 0 t" "is_measured_ftree t"
by (simp_all add: ft_invar_def)
from NE have v1: "viewLn t \<noteq> None" by (auto simp add: viewLn_empty)
then obtain nd s where vn: "viewLn t = Some (nd, s)"
by (cases "viewLn t") (auto)
from this obtain a where v1: "viewL t = Some (a, s)"
by (auto simp add: viewL_def)
from INV' vn have
v2: "is_measured_ftree s \<and> is_leveln_ftree 0 s
\<and> is_leveln_node 0 nd \<and> is_measured_node nd"
"toList t = (nodeToList nd) @ (toList s)"
by (auto simp add: viewLn_inv[of t 0 nd s] viewLn_list[of t])
with v1 vn have v3: "nodeToList nd = [a]"
apply (auto simp add: viewL_def )
apply (induct nd)
- apply auto
+ apply (simp_all (no_asm_use))
done
with v1 v2
show "\<exists>a s. viewL t = Some (a, s) \<and> ft_invar s \<and> toList t = a # toList s"
by (auto simp add: ft_invar_def)
qed
lemma viewL_correct_empty[simp]: "viewL Empty = None"
by (simp add: viewL_def)
lemma viewL_correct_nonEmpty:
assumes "ft_invar t" "t \<noteq> Empty"
obtains a s where
"viewL t = Some (a, s)" "ft_invar s" "toList t = a # toList s"
using assms viewL_correct by blast
text \<open>Detach the rightmost element. Return @{const None} on empty finger tree.\<close>
definition viewR :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e,'a) viewres"
where
"viewR t = (case viewRn t of
None \<Rightarrow> None |
(Some (a, t2)) \<Rightarrow> Some ((n_unwrap a), t2))"
lemma viewR_correct:
assumes INV: "ft_invar t"
shows
"(t = Empty \<Longrightarrow> viewR t = None)"
"(t \<noteq> Empty \<Longrightarrow> (\<exists> a s. viewR t = Some (a, s) \<and> ft_invar s
\<and> toList t = toList s @ [a]))"
proof -
assume "t=Empty" thus "viewR t = None" by (simp add: viewR_def)
next
assume NE: "t \<noteq> Empty"
from INV have INV': "is_leveln_ftree 0 t" "is_measured_ftree t"
unfolding ft_invar_def by simp_all
from NE have v1: "viewRn t \<noteq> None" by (auto simp add: viewRn_empty)
then obtain nd s where vn: "viewRn t = Some (nd, s)"
by (cases "viewRn t") (auto)
from this obtain a where v1: "viewR t = Some (a, s)"
by (auto simp add: viewR_def)
from INV' vn have
v2: "is_measured_ftree s \<and> is_leveln_ftree 0 s
\<and> is_leveln_node 0 nd \<and> is_measured_node nd"
"toList t = (toList s) @ (nodeToList nd)"
by (auto simp add: viewRn_inv[of t 0 nd s] viewRn_list[of t])
with v1 vn have v3: "nodeToList nd = [a]"
apply (auto simp add: viewR_def )
apply (induct nd)
- apply auto
+ apply (simp_all (no_asm_use))
done
with v1 v2
show "\<exists>a s. viewR t = Some (a, s) \<and> ft_invar s \<and> toList t = toList s @ [a]"
unfolding ft_invar_def by auto
qed
lemma viewR_correct_empty[simp]: "viewR Empty = None"
unfolding viewR_def by simp
lemma viewR_correct_nonEmpty:
assumes "ft_invar t" "t \<noteq> Empty"
obtains a s where
"viewR t = Some (a, s)" "ft_invar s \<and> toList t = toList s @ [a]"
using assms viewR_correct by blast
text \<open>Finger trees viewed as a double-ended queue. The head and tail functions
here are only
defined for non-empty queues, while the view functions above are also defined for
empty finger trees.\<close>
text "Check for emptiness"
definition isEmpty :: "('e,'a) FingerTreeStruc \<Rightarrow> bool" where
[code del]: "isEmpty t = (t = Empty)"
lemma isEmpty_correct: "isEmpty t \<longleftrightarrow> toList t = []"
unfolding isEmpty_def by (simp add: toList_empty)
\<comment> \<open>Avoid comparison with @{text "(=)"}, and thus an unnecessary equality-class
parameter on element types in the generated code\<close>
lemma [code]: "isEmpty t = (case t of Empty \<Rightarrow> True | _ \<Rightarrow> False)"
apply (cases t)
apply (auto simp add: isEmpty_def)
done
text "Leftmost element"
definition head :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> 'e \<times> 'a" where
"head t = (case viewL t of (Some (a, _)) \<Rightarrow> a)"
lemma head_correct:
assumes "ft_invar t" "t \<noteq> Empty"
shows "head t = hd (toList t)"
proof -
from assms viewL_correct
obtain a s where
v1:"viewL t = Some (a, s) \<and> ft_invar s \<and> toList t = a # toList s" by blast
hence v2: "head t = a" by (auto simp add: head_def)
from v1 have "hd (toList t) = a" by simp
with v2 show ?thesis by simp
qed
text "All but the leftmost element"
definition tail
:: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e,'a) FingerTreeStruc"
where
"tail t = (case viewL t of (Some (_, m)) \<Rightarrow> m)"
lemma tail_correct:
assumes "ft_invar t" "t \<noteq> Empty"
shows "toList (tail t) = tl (toList t)" and "ft_invar (tail t)"
proof -
from assms viewL_correct
obtain a s where
v1:"viewL t = Some (a, s) \<and> ft_invar s \<and> toList t = a # toList s" by blast
hence v2: "tail t = s" by (auto simp add: tail_def)
from v1 have "tl (toList t) = toList s" by simp
with v1 v2 show
"toList (tail t) = tl (toList t)"
"ft_invar (tail t)"
by simp_all
qed
text "Rightmost element"
definition headR :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> 'e \<times> 'a" where
"headR t = (case viewR t of (Some (a, _)) \<Rightarrow> a)"
lemma headR_correct:
assumes "ft_invar t" "t \<noteq> Empty"
shows "headR t = last (toList t)"
proof -
from assms viewR_correct
obtain a s where
v1:"viewR t = Some (a, s) \<and> ft_invar s \<and> toList t = toList s @ [a]" by blast
hence v2: "headR t = a" by (auto simp add: headR_def)
with v1 show ?thesis by auto
qed
text "All but the rightmost element"
definition tailR
:: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e,'a) FingerTreeStruc"
where
"tailR t = (case viewR t of (Some (_, m)) \<Rightarrow> m)"
lemma tailR_correct:
assumes "ft_invar t" "t \<noteq> Empty"
shows "toList (tailR t) = butlast (toList t)" and "ft_invar (tailR t)"
proof -
from assms viewR_correct
obtain a s where
v1:"viewR t = Some (a, s) \<and> ft_invar s \<and> toList t = toList s @ [a]" by blast
hence v2: "tailR t = s" by (auto simp add: tailR_def)
with v1 show "toList (tailR t) = butlast (toList t)" and "ft_invar (tailR t)"
by auto
qed
subsubsection \<open>Concatenation\<close>
primrec lconsNlist :: "('e,'a::monoid_add) Node list
\<Rightarrow> ('e,'a) FingerTreeStruc \<Rightarrow> ('e,'a) FingerTreeStruc" where
"lconsNlist [] t = t" |
"lconsNlist (x#xs) t = nlcons x (lconsNlist xs t)"
primrec rconsNlist :: "('e,'a::monoid_add) FingerTreeStruc
\<Rightarrow> ('e,'a) Node list \<Rightarrow> ('e,'a) FingerTreeStruc" where
"rconsNlist t [] = t" |
"rconsNlist t (x#xs) = rconsNlist (nrcons t x) xs"
fun nodes :: "('e,'a::monoid_add) Node list \<Rightarrow> ('e,'a) Node list" where
"nodes [a, b] = [node2 a b]" |
"nodes [a, b, c] = [node3 a b c]" |
"nodes [a,b,c,d] = [node2 a b, node2 c d]" |
"nodes (a#b#c#xs) = (node3 a b c) # (nodes xs)"
text \<open>Recursively we concatenate two FingerTreeStrucs while we keep the
inner Nodes in a list\<close>
fun app3 :: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e,'a) Node list
\<Rightarrow> ('e,'a) FingerTreeStruc \<Rightarrow> ('e,'a) FingerTreeStruc" where
"app3 Empty xs t = lconsNlist xs t" |
"app3 t xs Empty = rconsNlist t xs" |
"app3 (Single x) xs t = nlcons x (lconsNlist xs t)" |
"app3 t xs (Single x) = nrcons (rconsNlist t xs) x" |
"app3 (Deep _ pr1 m1 sf1) ts (Deep _ pr2 m2 sf2) =
deep pr1 (app3 m1
(nodes ((digitToNlist sf1) @ ts @ (digitToNlist pr2))) m2) sf2"
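\<comment> \<open>The two digits facing each other (sf1 and pr2), together with the carried node list ts,
are regrouped one level up via nodes\<close>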
lemma lconsNlist_inv:
assumes "is_leveln_ftree n t"
and "is_measured_ftree t"
and "\<forall> x\<in>set xs. (is_leveln_node n x \<and> is_measured_node x)"
shows
"is_leveln_ftree n (lconsNlist xs t) \<and> is_measured_ftree (lconsNlist xs t)"
by (insert assms, induct xs, auto simp add: nlcons_invlevel nlcons_invmeas)
lemma rconsNlist_inv:
assumes "is_leveln_ftree n t"
and "is_measured_ftree t"
and "\<forall> x\<in>set xs. (is_leveln_node n x \<and> is_measured_node x)"
shows
"is_leveln_ftree n (rconsNlist t xs) \<and> is_measured_ftree (rconsNlist t xs)"
by (insert assms, induct xs arbitrary: t,
auto simp add: nrcons_invlevel nrcons_invmeas)
lemma nodes_inv:
assumes "\<forall> x \<in> set ts. is_leveln_node n x \<and> is_measured_node x"
and "length ts \<ge> 2"
shows "\<forall> x \<in> set (nodes ts). is_leveln_node (Suc n) x \<and> is_measured_node x"
proof (insert assms, induct ts rule: nodes.induct)
case (1 a b)
thus ?case by (simp add: node2_def)
next
case (2 a b c)
thus ?case by (simp add: node3_def)
next
case (3 a b c d)
thus ?case by (simp add: node2_def)
next
case (4 a b c v vb vc)
thus ?case by (simp add: node3_def)
next
show "\<lbrakk>\<forall>x\<in>set []. is_leveln_node n x \<and> is_measured_node x; 2 \<le> length []\<rbrakk>
\<Longrightarrow> \<forall>x\<in>set (nodes []). is_leveln_node (Suc n) x \<and> is_measured_node x"
by simp
next
show
"\<And>v. \<lbrakk>\<forall>x\<in>set [v]. is_leveln_node n x \<and> is_measured_node x; 2 \<le> length [v]\<rbrakk>
\<Longrightarrow> \<forall>x\<in>set (nodes [v]). is_leveln_node (Suc n) x \<and> is_measured_node x"
by simp
qed
lemma nodes_inv2:
assumes "is_leveln_digit n sf1"
and "is_measured_digit sf1"
and "is_leveln_digit n pr2"
and "is_measured_digit pr2"
and "\<forall> x \<in> set ts. is_leveln_node n x \<and> is_measured_node x"
shows
"\<forall>x\<in>set (nodes (digitToNlist sf1 @ ts @ digitToNlist pr2)).
is_leveln_node (Suc n) x \<and> is_measured_node x"
proof -
have v1:" \<forall>x\<in>set (digitToNlist sf1 @ ts @ digitToNlist pr2).
is_leveln_node n x \<and> is_measured_node x"
using assms
apply (simp add: digitToNlist_def)
apply (cases sf1)
apply (cases pr2)
apply simp_all
apply (cases pr2)
apply (simp_all)
apply (cases pr2)
apply (simp_all)
apply (cases pr2)
apply (simp_all)
done
have v2: "length (digitToNlist sf1 @ ts @ digitToNlist pr2) \<ge> 2"
apply (cases sf1)
apply (cases pr2)
apply simp_all
done
thus ?thesis
using v1 nodes_inv[of "digitToNlist sf1 @ ts @ digitToNlist pr2"]
by blast
qed
lemma app3_inv:
assumes "is_leveln_ftree n t1"
and "is_leveln_ftree n t2"
and "is_measured_ftree t1"
and "is_measured_ftree t2"
and "\<forall> x\<in>set xs. (is_leveln_node n x \<and> is_measured_node x)"
shows "is_leveln_ftree n (app3 t1 xs t2) \<and> is_measured_ftree (app3 t1 xs t2)"
proof (insert assms, induct t1 xs t2 arbitrary: n rule: app3.induct)
case (1 xs t n)
thus ?case using lconsNlist_inv by simp
next
case "2_1"
thus ?case by (simp add: rconsNlist_inv)
next
case "2_2"
thus ?case by (simp add: lconsNlist_inv rconsNlist_inv)
next
case "3_1"
thus ?case by (simp add: lconsNlist_inv nlcons_invlevel nlcons_invmeas )
next
case "3_2"
thus ?case
by (simp only: app3.simps)
(simp add: lconsNlist_inv nlcons_invlevel nlcons_invmeas)
next
case 4
thus ?case
by (simp only: app3.simps)
(simp add: rconsNlist_inv nrcons_invlevel nrcons_invmeas)
next
case (5 uu pr1 m1 sf1 ts uv pr2 m2 sf2 n)
thus ?case
proof -
have v1: "is_leveln_ftree (Suc n) m1"
and v2: "is_leveln_ftree (Suc n) m2"
using "5.prems" by (simp_all add: is_leveln_ftree_def)
have v3: "is_measured_ftree m1"
and v4: "is_measured_ftree m2"
using "5.prems" by (simp_all add: is_measured_ftree_def)
have v5: "is_leveln_digit n sf1"
"is_measured_digit sf1"
"is_leveln_digit n pr2"
"is_measured_digit pr2"
"\<forall>x\<in>set ts. is_leveln_node n x \<and> is_measured_node x"
using "5.prems"
by (simp_all add: is_leveln_ftree_def is_measured_ftree_def)
note v6 = nodes_inv2[OF v5]
note v7 = "5.hyps"[OF v1 v2 v3 v4 v6]
have v8: "is_leveln_digit n sf2"
"is_measured_digit sf2"
"is_leveln_digit n pr1"
"is_measured_digit pr1"
using "5.prems"
by (simp_all add: is_leveln_ftree_def is_measured_ftree_def)
show ?thesis using v7 v8
by (simp add: is_leveln_ftree_def is_measured_ftree_def deep_def)
qed
qed
primrec nlistToList:: "(('e, 'a) Node) list \<Rightarrow> ('e \<times> 'a) list" where
"nlistToList [] = []"|
"nlistToList (x#xs) = (nodeToList x) @ (nlistToList xs)"
lemma nodes_list: "length xs \<ge> 2 \<Longrightarrow> nlistToList (nodes xs) = nlistToList xs"
by (induct xs rule: nodes.induct) (auto simp add: node2_def node3_def)
lemma nlistToList_app:
"nlistToList (xs@ys) = (nlistToList xs) @ (nlistToList ys)"
by (induct xs arbitrary: ys, simp_all)
lemma nlistListLCons: "toList (lconsNlist xs t) = (nlistToList xs) @ (toList t)"
by (induct xs) (auto simp add: nlcons_list)
lemma nlistListRCons: "toList (rconsNlist t xs) = (toList t) @ (nlistToList xs)"
by (induct xs arbitrary: t) (auto simp add: nrcons_list)
lemma app3_list_lem1:
"nlistToList (nodes (digitToNlist sf1 @ ts @ digitToNlist pr2)) =
digitToList sf1 @ nlistToList ts @ digitToList pr2"
proof -
have len1: "length (digitToNlist sf1 @ ts @ digitToNlist pr2) \<ge> 2"
by (cases sf1,cases pr2,simp_all)
have "(nlistToList (digitToNlist sf1 @ ts @ digitToNlist pr2))
= (digitToList sf1 @ nlistToList ts @ digitToList pr2)"
apply (cases sf1, cases pr2)
apply (simp_all add: nlistToList_app)
apply (cases pr2, auto)
apply (cases pr2, auto)
apply (cases pr2, auto)
done
with nodes_list[OF len1] show ?thesis by simp
qed
lemma app3_list:
"toList (app3 t1 xs t2) = (toList t1) @ (nlistToList xs) @ (toList t2)"
apply (induct t1 xs t2 rule: app3.induct)
apply (simp_all add: nlistListLCons nlistListRCons nlcons_list nrcons_list)
apply (simp add: app3_list_lem1 deep_def)
done
definition app
:: "('e,'a::monoid_add) FingerTreeStruc \<Rightarrow> ('e,'a) FingerTreeStruc
\<Rightarrow> ('e,'a) FingerTreeStruc"
where "app t1 t2 = app3 t1 [] t2"
lemma app_correct:
assumes "ft_invar t1" "ft_invar t2"
shows "toList (app t1 t2) = (toList t1) @ (toList t2)"
and "ft_invar (app t1 t2)"
using assms
by (auto simp add: app3_inv app3_list ft_invar_def app_def)
lemma app_inv: "\<lbrakk>ft_invar t1;ft_invar t2\<rbrakk> \<Longrightarrow> ft_invar (app t1 t2)"
by (auto simp add: app3_inv ft_invar_def app_def)
lemma app_list[simp]: "toList (app t1 t2) = (toList t1) @ (toList t2)"
by (simp add: app3_list app_def)
subsubsection "Splitting"
type_synonym ('e,'a) SplitDigit =
"('e,'a) Node list \<times> ('e,'a) Node \<times> ('e,'a) Node list"
type_synonym ('e,'a) SplitTree =
"('e,'a) FingerTreeStruc \<times> ('e,'a) Node \<times> ('e,'a) FingerTreeStruc"
text \<open>Auxiliary functions to create a correct finger tree
even if the left or right digit is empty\<close>
fun deepL :: "('e,'a::monoid_add) Node list \<Rightarrow> ('e,'a) FingerTreeStruc
\<Rightarrow> ('e,'a) Digit \<Rightarrow> ('e,'a) FingerTreeStruc" where
"deepL [] m sf = (case (viewLn m) of None \<Rightarrow> digitToTree sf |
(Some (a, m2)) \<Rightarrow> deep (nodeToDigit a) m2 sf)" |
"deepL pr m sf = deep (nlistToDigit pr) m sf"
fun deepR :: "('e,'a::monoid_add) Digit \<Rightarrow> ('e,'a) FingerTreeStruc
\<Rightarrow> ('e,'a) Node list \<Rightarrow> ('e,'a) FingerTreeStruc" where
"deepR pr m [] = (case (viewRn m) of None \<Rightarrow> digitToTree pr |
(Some (a, m2)) \<Rightarrow> deep pr m2 (nodeToDigit a))" |
"deepR pr m sf = deep pr m (nlistToDigit sf)"
text \<open>Splitting a list of nodes\<close>
fun splitNlist :: "('a::monoid_add \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> ('e,'a) Node list
\<Rightarrow> ('e,'a) SplitDigit" where
"splitNlist p i [a] = ([],a,[])" |
"splitNlist p i (a#b) =
(let i2 = (i + gmn a) in
(if (p i2)
then ([],a,b)
else
(let (l,x,r) = (splitNlist p i2 b) in ((a#l),x,r))))"
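(* Worked example (a sketch; x, y, z are hypothetical elements, and we use the
   measure equation gmn (Tip e a) = a): splitting where the running sum of
   annotations first exceeds 3, starting from 0:

     splitNlist (\<lambda>s. 3 < s) 0 [Tip x 2, Tip y 2, Tip z 2]
       = ([Tip x 2], Tip y 2, [Tip z 2])

   After the first node the sum is 2 (predicate still false), after the second
   it is 4 (predicate true), so the second node is the split point. *)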
text \<open>Splitting a digit by converting it into a list of nodes\<close>
definition splitDigit :: "('a::monoid_add \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> ('e,'a) Digit
\<Rightarrow> ('e,'a) SplitDigit" where
"splitDigit p i d = splitNlist p i (digitToNlist d)"
text \<open>Creating a finger tree from list of nodes\<close>
definition nlistToTree :: "('e,'a::monoid_add) Node list
\<Rightarrow> ('e,'a) FingerTreeStruc" where
"nlistToTree xs = lconsNlist xs Empty"
text \<open>Recursive splitting into a left and right tree and a center node\<close>
fun nsplitTree :: "('a::monoid_add \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> ('e,'a) FingerTreeStruc
\<Rightarrow> ('e,'a) SplitTree" where
"nsplitTree p i Empty = (Empty, Tip undefined undefined, Empty)"
\<comment> \<open>Making the function total\<close> |
"nsplitTree p i (Single ea) = (Empty,ea,Empty)" |
"nsplitTree p i (Deep _ pr m sf) =
(let
vpr = (i + gmd pr);
vm = (vpr + gmft m)
in
if (p vpr) then
(let (l,x,r) = (splitDigit p i pr) in
(nlistToTree l,x,deepL r m sf))
else (if (p vm) then
(let (ml,xs,mr) = (nsplitTree p vpr m);
(l,x,r) = (splitDigit p (vpr + gmft ml) (nodeToDigit xs)) in
(deepR pr ml l,x,deepL r mr sf))
else
(let (l,x,r) = (splitDigit p vm sf) in
(deepR pr m l,x,nlistToTree r))
))"
lemma nlistToTree_inv:
"\<forall> x \<in> set nl. is_measured_node x \<Longrightarrow> is_measured_ftree (nlistToTree nl)"
"\<forall> x \<in> set nl. is_leveln_node n x \<Longrightarrow> is_leveln_ftree n (nlistToTree nl)"
by (unfold nlistToTree_def, induct nl, auto simp add: nlcons_invmeas)
(induct nl, auto simp add: nlcons_invlevel)
lemma nlistToTree_list: "toList (nlistToTree nl) = nlistToList nl"
by (auto simp add: nlistToTree_def nlistListLCons)
lemma deepL_inv:
assumes "is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
and "is_leveln_digit n sf \<and> is_measured_digit sf"
and "\<forall> x \<in> set pr. (is_measured_node x \<and> is_leveln_node n x) \<and> length pr \<le> 4"
shows "is_leveln_ftree n (deepL pr m sf) \<and> is_measured_ftree (deepL pr m sf)"
apply (insert assms)
apply (induct "pr" m sf rule: deepL.induct)
apply (simp split: viewnres_split)
apply auto[1]
apply (simp_all add: digitToTree_inv deep_def)
proof -
fix m sf Node FingerTreeStruc
assume "is_leveln_ftree (Suc n) m" "is_measured_ftree m"
"is_leveln_digit n sf" "is_measured_digit sf"
"viewLn m = Some (Node, FingerTreeStruc)"
thus "is_leveln_digit n (nodeToDigit Node)
\<and> is_leveln_ftree (Suc n) FingerTreeStruc"
by (simp add: viewLn_inv[of m "Suc n" Node FingerTreeStruc] nodeToDigit_inv)
next
fix m sf Node FingerTreeStruc
assume assms1:
"is_leveln_ftree (Suc n) m" "is_measured_ftree m"
"is_leveln_digit n sf" "is_measured_digit sf"
"viewLn m = Some (Node, FingerTreeStruc)"
thus "is_measured_digit (nodeToDigit Node) \<and> is_measured_ftree FingerTreeStruc"
apply (auto simp only: viewLn_inv[of m "Suc n" Node FingerTreeStruc])
proof -
from assms1 have "is_measured_node Node \<and> is_leveln_node (Suc n) Node"
by (simp add: viewLn_inv[of m "Suc n" Node FingerTreeStruc])
thus "is_measured_digit (nodeToDigit Node)"
by (auto simp add: nodeToDigit_inv)
qed
next
fix v va
assume
"is_measured_node v \<and> is_leveln_node n (v:: ('a,'b) Node) \<and>
length (va::('a, 'b) Node list) \<le> 3 \<and>
(\<forall>x\<in>set va. is_measured_node x \<and> is_leveln_node n x \<and> length va \<le> 3)"
thus "is_leveln_digit n (nlistToDigit (v # va))
\<and> is_measured_digit (nlistToDigit (v # va))"
by(cases "v#va" rule: nlistToDigit.cases,simp_all)
qed
(*corollary deepL_inv':
assumes "is_leveln_ftree (Suc n) m" "is_measured_ftree m"
and "is_leveln_digit n sf" "is_measured_digit sf"
and "\<forall> x \<in> set pr. (is_measured_node x \<and> is_leveln_node n x)" "length pr \<le> 4"
shows "is_leveln_ftree n (deepL pr m sf)" "is_measured_ftree (deepL pr m sf)"
using assms deepL_inv by blast+
*)
lemma nlistToDigit_list:
assumes "1 \<le> length xs \<and> length xs \<le> 4"
shows "digitToList(nlistToDigit xs) = nlistToList xs"
by (insert assms, cases xs rule: nlistToDigit.cases,auto)
lemma deepL_list:
assumes "is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
and "is_leveln_digit n sf \<and> is_measured_digit sf"
and "\<forall> x \<in> set pr. (is_measured_node x \<and> is_leveln_node n x) \<and> length pr \<le> 4"
shows "toList (deepL pr m sf) = nlistToList pr @ toList m @ digitToList sf"
proof (insert assms, induct "pr" m sf rule: deepL.induct)
case (1 m sf)
thus ?case
proof (auto split: viewnres_split simp add: deep_def)
assume "viewLn m = None"
hence "m = Empty" by (metis viewLn_empty)
hence "toList m = []" by simp
thus "toList (digitToTree sf) = toList m @ digitToList sf"
by (simp add:digitToTree_list)
next
fix nd t
assume "viewLn m = Some (nd, t)"
"is_leveln_ftree (Suc n) m" "is_measured_ftree m"
hence "nodeToList nd @ toList t = toList m" by (metis viewLn_list)
thus "digitToList (nodeToDigit nd) @ toList t = toList m"
by (simp add: nodeToDigit_list)
qed
next
case (2 v va m sf)
thus ?case
apply (unfold deepL.simps)
apply (simp add: deep_def)
apply (simp add: nlistToDigit_list)
done
qed
lemma deepR_inv:
assumes "is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
and "is_leveln_digit n pr \<and> is_measured_digit pr"
and "\<forall> x \<in> set sf. (is_measured_node x \<and> is_leveln_node n x) \<and> length sf \<le> 4"
shows "is_leveln_ftree n (deepR pr m sf) \<and> is_measured_ftree (deepR pr m sf)"
apply (insert assms)
apply (induct "pr" m sf rule: deepR.induct)
apply (simp split: viewnres_split)
apply auto[1]
apply (simp_all add: digitToTree_inv deep_def)
proof -
fix m "pr" Node FingerTreeStruc
assume "is_leveln_ftree (Suc n) m" "is_measured_ftree m"
"is_leveln_digit n pr" "is_measured_digit pr"
"viewRn m = Some (Node, FingerTreeStruc)"
thus
"is_leveln_digit n (nodeToDigit Node)
\<and> is_leveln_ftree (Suc n) FingerTreeStruc"
by (simp add: viewRn_inv[of m "Suc n" Node FingerTreeStruc] nodeToDigit_inv)
next
fix m "pr" Node FingerTreeStruc
assume assms1:
"is_leveln_ftree (Suc n) m" "is_measured_ftree m"
"is_leveln_digit n pr" "is_measured_digit pr"
"viewRn m = Some (Node, FingerTreeStruc)"
thus "is_measured_ftree FingerTreeStruc \<and> is_measured_digit (nodeToDigit Node)"
apply (auto simp only: viewRn_inv[of m "Suc n" Node FingerTreeStruc])
proof -
from assms1 have "is_measured_node Node \<and> is_leveln_node (Suc n) Node"
by (simp add: viewRn_inv[of m "Suc n" Node FingerTreeStruc])
thus "is_measured_digit (nodeToDigit Node)"
by (auto simp add: nodeToDigit_inv)
qed
next
fix v va
assume
"is_measured_node v \<and> is_leveln_node n (v:: ('a,'b) Node) \<and>
length (va::('a, 'b) Node list) \<le> 3 \<and>
(\<forall>x\<in>set va. is_measured_node x \<and> is_leveln_node n x \<and> length va \<le> 3)"
thus "is_leveln_digit n (nlistToDigit (v # va)) \<and>
is_measured_digit (nlistToDigit (v # va))"
by(cases "v#va" rule: nlistToDigit.cases,simp_all)
qed
lemma deepR_list:
assumes "is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
and "is_leveln_digit n pr \<and> is_measured_digit pr"
and "\<forall> x \<in> set sf. (is_measured_node x \<and> is_leveln_node n x) \<and> length sf \<le> 4"
shows "toList (deepR pr m sf) = digitToList pr @ toList m @ nlistToList sf"
proof (insert assms, induct "pr" m sf rule: deepR.induct)
case (1 "pr" m)
thus ?case
proof (auto split: viewnres_split simp add: deep_def)
assume "viewRn m = None"
hence "m = Empty" by (metis viewRn_empty)
hence "toList m = []" by simp
thus "toList (digitToTree pr) = digitToList pr @ toList m"
by (simp add:digitToTree_list)
next
fix nd t
assume "viewRn m = Some (nd, t)" "is_leveln_ftree (Suc n) m"
"is_measured_ftree m"
hence "toList t @ nodeToList nd = toList m" by (metis viewRn_list)
thus "toList t @ digitToList (nodeToDigit nd) = toList m"
by (simp add: nodeToDigit_list)
qed
next
case (2 "pr" m v va)
thus ?case
apply (unfold deepR.simps)
apply (simp add: deep_def)
apply (simp add: nlistToDigit_list)
done
qed
primrec gmnl:: "('e, 'a::monoid_add) Node list \<Rightarrow> 'a" where
"gmnl [] = 0"|
"gmnl (x#xs) = gmn x + gmnl xs"
lemma gmnl_correct:
assumes "\<forall> x \<in> set xs. is_measured_node x"
shows "gmnl xs = sum_list (map snd (nlistToList xs))"
by (insert assms, induct xs) (auto simp add: add.assoc gmn_correct)
lemma splitNlist_correct:" \<lbrakk>
\<And>(a::'a) (b::'a). p a \<Longrightarrow> p (a + b);
\<not> p i;
p (i + gmnl (nl ::('e,'a::monoid_add) Node list));
splitNlist p i nl = (l, n, r)
\<rbrakk> \<Longrightarrow>
\<not> p (i + (gmnl l))
\<and>
p (i + (gmnl l) + (gmn n))
\<and>
nl = l @ n # r
"
proof (induct p i nl arbitrary: l n r rule: splitNlist.induct)
case 1 thus ?case by simp
next
case (2 p i a v va l n r) note IV = this
show ?case
proof (cases "p (i + (gmn a))")
case True with IV show ?thesis by simp
next
case False note IV2 = this IV thus ?thesis
proof -
obtain l1 n1 r1 where
v1[simp]: "splitNlist p (i + gmn a) (v # va) = (l1, n1, r1)"
by (cases "splitNlist p (i + gmn a) (v # va)", blast)
note miv = IV2(2)[of "i + gmn a" l1 n1 r1]
have v2:"p (i + gmn a + gmnl (v # va))"
using IV2(5) by (simp add: add.assoc)
note miv2 = miv[OF _ IV2(1) IV2(3) IV2(1) v2 v1]
have v3: "a # l1 = l" "n1 = n" "r1 = r" using IV2 v1 by auto
with miv2 have
v4: "\<not> p (i + gmn a + gmnl l1) \<and>
p (i + gmn a + gmnl l1 + gmn n1) \<and>
v # va = l1 @ n1 # r1"
by auto
with v2 v3 show ?thesis
by (auto simp add: add.assoc)
qed
qed
next
case 3 thus ?case by simp
qed
lemma digitToNlist_inv:
"is_measured_digit d \<Longrightarrow> (\<forall> x \<in> set (digitToNlist d). is_measured_node x)"
"is_leveln_digit n d \<Longrightarrow> (\<forall> x \<in> set (digitToNlist d). is_leveln_node n x)"
by (cases d, auto)(cases d, auto)
lemma gmnl_gmd:
"is_measured_digit d \<Longrightarrow> gmnl (digitToNlist d) = gmd d"
by (cases d, auto simp add: add.assoc)
lemma gmn_gmd:
"is_measured_node nd \<Longrightarrow> gmd (nodeToDigit nd) = gmn nd"
by (auto simp add: nodeToDigit_inv nodeToDigit_list gmn_correct gmd_correct)
lemma splitDigit_inv:
"\<lbrakk>
\<And>(a::'a) (b::'a). p a \<Longrightarrow> p (a + b);
\<not> p i;
is_measured_digit d;
is_leveln_digit n d;
p (i + gmd (d ::('e,'a::monoid_add) Digit));
splitDigit p i d = (l, nd, r)
\<rbrakk> \<Longrightarrow>
\<not> p (i + (gmnl l))
\<and>
p (i + (gmnl l) + (gmn nd))
\<and>
(\<forall> x \<in> set l. (is_measured_node x \<and> is_leveln_node n x))
\<and>
(\<forall> x \<in> set r. (is_measured_node x \<and> is_leveln_node n x))
\<and>
(is_measured_node nd \<and> is_leveln_node n nd )
"
proof -
fix p i d n l nd r
assume assms: "\<And>a b. p a \<Longrightarrow> p (a + b)" "\<not> p i" "is_measured_digit d"
"p (i + gmd d)" "splitDigit p i d = (l, nd, r)"
"is_leveln_digit n d"
from assms(3, 4) have v1: "p (i + gmnl (digitToNlist d))"
by (simp add: gmnl_gmd)
note snc = splitNlist_correct [of p i "digitToNlist d" l nd r]
from assms(5) have v2: "splitNlist p i (digitToNlist d) = (l, nd, r)"
by (simp add: splitDigit_def)
note snc1 = snc[OF assms(1) assms(2) v1 v2]
hence v3: "\<not> p (i + gmnl l) \<and> p (i + gmnl l + gmn nd) \<and>
digitToNlist d = l @ nd # r" by auto
from assms(3,6) have
v4:" \<forall> x \<in> set (digitToNlist d). is_measured_node x"
" \<forall> x \<in> set (digitToNlist d). is_leveln_node n x"
by(auto simp add: digitToNlist_inv)
with v3 have v5: "\<forall> x \<in> set l. (is_measured_node x \<and> is_leveln_node n x)"
"\<forall> x \<in> set r. (is_measured_node x \<and> is_leveln_node n x)"
"is_measured_node nd \<and> is_leveln_node n nd" by auto
with v3 v5 show
"\<not> p (i + gmnl l) \<and> p (i + gmnl l + gmn nd) \<and>
(\<forall>x\<in>set l. is_measured_node x \<and> is_leveln_node n x) \<and>
(\<forall>x\<in>set r. is_measured_node x \<and> is_leveln_node n x) \<and>
is_measured_node nd \<and> is_leveln_node n nd"
by auto
qed
lemma splitDigit_inv':
"\<lbrakk>
splitDigit p i d = (l, nd, r);
is_measured_digit d;
is_leveln_digit n d
\<rbrakk> \<Longrightarrow>
(\<forall> x \<in> set l. (is_measured_node x \<and> is_leveln_node n x))
\<and>
(\<forall> x \<in> set r. (is_measured_node x \<and> is_leveln_node n x))
\<and>
(is_measured_node nd \<and> is_leveln_node n nd )
"
apply (unfold splitDigit_def)
apply (cases d)
apply (auto split: if_split_asm simp add: Let_def)
done
lemma splitDigit_list: "splitDigit p i d = (l,n,r) \<Longrightarrow>
(digitToList d) = (nlistToList l) @ (nodeToList n) @ (nlistToList r)
\<and> length l \<le> 4 \<and> length r \<le> 4"
apply (unfold splitDigit_def)
apply (cases d)
apply (auto split: if_split_asm simp add: Let_def)
done
lemma gmnl_gmft: "\<forall> x \<in> set nl. is_measured_node x \<Longrightarrow>
gmft (nlistToTree nl) = gmnl nl"
by (auto simp add: gmnl_correct[of nl] nlistToTree_list[of nl]
nlistToTree_inv[of nl] gmft_correct[of "nlistToTree nl"])
lemma gmftR_gmnl:
assumes "is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
and "is_leveln_digit n pr \<and> is_measured_digit pr"
and "\<forall> x \<in> set sf. (is_measured_node x \<and> is_leveln_node n x) \<and> length sf \<le> 4"
shows "gmft (deepR pr m sf) = gmd pr + gmft m + gmnl sf"
proof-
from assms have
v1: "toList (deepR pr m sf) = digitToList pr @ toList m @ nlistToList sf"
by (auto simp add: deepR_list)
from assms have
v2: "is_measured_ftree (deepR pr m sf)"
by (auto simp add: deepR_inv)
with v1 have
v3: "gmft (deepR pr m sf) =
sum_list (map snd (digitToList pr @ toList m @ nlistToList sf))"
by (auto simp add: gmft_correct)
have
v4:"gmd pr + gmft m + gmnl sf =
sum_list (map snd (digitToList pr @ toList m @ nlistToList sf))"
by (auto simp add: gmd_correct gmft_correct gmnl_correct assms add.assoc)
with v3 show ?thesis by simp
qed
lemma nsplitTree_invpres: "\<lbrakk>
is_leveln_ftree n (s:: ('e,'a::monoid_add) FingerTreeStruc);
is_measured_ftree s;
\<not> p i;
p (i + (gmft s));
(nsplitTree p i s) = (l, nd, r)\<rbrakk>
\<Longrightarrow>
is_leveln_ftree n l
\<and>
is_measured_ftree l
\<and>
is_leveln_ftree n r
\<and>
is_measured_ftree r
\<and>
is_leveln_node n nd
\<and>
is_measured_node nd
"
proof (induct p i s arbitrary: n l nd r rule: nsplitTree.induct)
case 1
thus ?case by auto
next
case 2 thus ?case by auto
next
case (3 p i uu "pr" m sf n l nd r)
thus ?case
proof (cases "p (i + gmd pr)")
case True with 3 show ?thesis
proof -
obtain l1 x r1 where
l1xr1: "splitDigit p i pr = (l1,x,r1)"
by (cases "splitDigit p i pr", blast)
with True 3 have
v1: "l = nlistToTree l1" "nd = x" "r = deepL r1 m sf" by auto
from l1xr1 have
v2: "digitToList pr = nlistToList l1 @ nodeToList x @ nlistToList r1"
"length l1 \<le> 4" "length r1 \<le> 4"
by (auto simp add: splitDigit_list)
from 3(2,3) have
pr_m_sf_inv: "is_leveln_digit n pr \<and> is_measured_digit pr"
"is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
"is_leveln_digit n sf \<and> is_measured_digit sf" by simp_all
with 3(4,5) pr_m_sf_inv(1) True l1xr1
splitDigit_inv'[of p i "pr" l1 x r1 n] have
l1_x_r1_inv:
"\<forall> x \<in> set l1. (is_measured_node x \<and> is_leveln_node n x)"
"\<forall> x \<in> set r1. (is_measured_node x \<and> is_leveln_node n x)"
"is_measured_node x \<and> is_leveln_node n x"
by auto
from l1_x_r1_inv v1 v2(3) pr_m_sf_inv have
ziel3: "is_leveln_ftree n l \<and> is_measured_ftree l \<and>
is_leveln_ftree n r \<and> is_measured_ftree r \<and>
is_leveln_node n nd \<and> is_measured_node nd"
by (auto simp add: nlistToTree_inv deepL_inv)
thus ?thesis by simp
qed
next
case False note case1 = this with 3 show ?thesis
proof (cases "p (i + gmd pr + gmft m)")
case False with case1 3 show ?thesis
proof -
obtain l1 x r1 where
l1xr1: "splitDigit p (i + gmd pr + gmft m) sf = (l1,x,r1)"
by (cases "splitDigit p (i + gmd pr + gmft m) sf", blast)
with case1 False 3 have
v1: "l = deepR pr m l1" "nd = x" "r = nlistToTree r1" by auto
from l1xr1 have
v2: "digitToList sf = nlistToList l1 @ nodeToList x @ nlistToList r1"
"length l1 \<le> 4" "length r1 \<le> 4"
by (auto simp add: splitDigit_list)
from 3(2,3) have
pr_m_sf_inv: "is_leveln_digit n pr \<and> is_measured_digit pr"
"is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
"is_leveln_digit n sf \<and> is_measured_digit sf" by simp_all
from 3 have
v7: "p (i + gmd pr + gmft m + gmd sf)" by (auto simp add: add.assoc)
with pr_m_sf_inv 3(4) pr_m_sf_inv(3) case1 False l1xr1
splitDigit_inv'[of p "i + gmd pr + gmft m" sf l1 x r1 n]
have l1_x_r1_inv:
"\<forall> x \<in> set l1. (is_measured_node x \<and> is_leveln_node n x)"
"\<forall> x \<in> set r1. (is_measured_node x \<and> is_leveln_node n x)"
"is_measured_node x \<and> is_leveln_node n x"
by auto
from l1_x_r1_inv v1 v2(2) pr_m_sf_inv have
ziel3: "is_leveln_ftree n l \<and> is_measured_ftree l \<and>
is_leveln_ftree n r \<and> is_measured_ftree r \<and>
is_leveln_node n nd \<and> is_measured_node nd"
by (auto simp add: nlistToTree_inv deepR_inv)
from ziel3 show ?thesis by simp
qed
next
case True with case1 3 show ?thesis
proof -
obtain l1 x r1 where
l1_x_r1 :"nsplitTree p (i + gmd pr) m = (l1, x, r1)"
by (cases "nsplitTree p (i + gmd pr) m", blast)
from 3(2,3) have
pr_m_sf_inv: "is_leveln_digit n pr \<and> is_measured_digit pr"
"is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
"is_leveln_digit n sf \<and> is_measured_digit sf" by simp_all
with True case1
"3.hyps"[of "i + gmd pr" "i + gmd pr + gmft m" "Suc n" l1 x r1]
3(6) l1_x_r1
have l1_x_r1_inv:
"is_leveln_ftree (Suc n) l1 \<and> is_measured_ftree l1"
"is_leveln_ftree (Suc n) r1 \<and> is_measured_ftree r1"
"is_leveln_node (Suc n) x \<and> is_measured_node x"
by auto
obtain l2 x2 r2 where l2_x2_r2:
"splitDigit p (i + gmd pr + gmft l1) (nodeToDigit x) = (l2,x2,r2)"
by (cases "splitDigit p (i + gmd pr + gmft l1) (nodeToDigit x)",blast)
from l1_x_r1_inv have
ndx_inv: "is_leveln_digit n (nodeToDigit x) \<and>
is_measured_digit (nodeToDigit x)"
by (auto simp add: nodeToDigit_inv gmn_gmd)
note spdi = splitDigit_inv'[of p "i + gmd pr + gmft l1"
"nodeToDigit x" l2 x2 r2 n]
from ndx_inv l1_x_r1_inv(1) l2_x2_r2 3(4) have
l2_x2_r2_inv:
"\<forall>x\<in>set l2. is_measured_node x \<and> is_leveln_node n x"
"\<forall>x\<in>set r2. is_measured_node x \<and> is_leveln_node n x"
"is_measured_node x2 \<and> is_leveln_node n x2"
by (auto simp add: spdi)
note spdl = splitDigit_list[of p "i + gmd pr + gmft l1"
"nodeToDigit x" l2 x2 r2]
from l2_x2_r2 have
l2_x2_r2_list:
"digitToList (nodeToDigit x) =
nlistToList l2 @ nodeToList x2 @ nlistToList r2"
"length l2 \<le> 4 \<and> length r2 \<le> 4"
by (auto simp add: spdl)
from case1 True 3(6) l1_x_r1 l2_x2_r2 have
l_nd_r:
"l = deepR pr l1 l2"
"nd = x2"
"r = deepL r2 r1 sf"
by auto
note dr1 = deepR_inv[OF l1_x_r1_inv(1) pr_m_sf_inv(1)]
from dr1 l2_x2_r2_inv l2_x2_r2_list(2) l_nd_r have
l_inv: "is_leveln_ftree n l \<and> is_measured_ftree l"
by simp
note dl1 = deepL_inv[OF l1_x_r1_inv(2) pr_m_sf_inv(3)]
from dl1 l2_x2_r2_inv l2_x2_r2_list(2) l_nd_r have
r_inv: "is_leveln_ftree n r \<and> is_measured_ftree r"
by simp
from l2_x2_r2_inv l_nd_r have
nd_inv: "is_leveln_node n nd \<and> is_measured_node nd"
by simp
from l_inv r_inv nd_inv
show ?thesis by simp
qed
qed
qed
qed
lemma nsplitTree_correct: "\<lbrakk>
is_leveln_ftree n (s:: ('e,'a::monoid_add) FingerTreeStruc);
is_measured_ftree s;
\<And>(a::'a) (b::'a). p a \<Longrightarrow> p (a + b);
\<not> p i;
p (i + (gmft s));
(nsplitTree p i s) = (l, nd, r)\<rbrakk>
\<Longrightarrow> (toList s) = (toList l) @ (nodeToList nd) @ (toList r)
\<and>
\<not> p (i + (gmft l))
\<and>
p (i + (gmft l) + (gmn nd))
\<and>
is_leveln_ftree n l
\<and>
is_measured_ftree l
\<and>
is_leveln_ftree n r
\<and>
is_measured_ftree r
\<and>
is_leveln_node n nd
\<and>
is_measured_node nd
"
proof (induct p i s arbitrary: n l nd r rule: nsplitTree.induct)
case 1
thus ?case by auto
next
case 2 thus ?case by auto
next
case (3 p i uu "pr" m sf n l nd r)
thus ?case
proof (cases "p (i + gmd pr)")
case True with 3 show ?thesis
proof -
obtain l1 x r1 where
l1xr1: "splitDigit p i pr = (l1,x,r1)"
by (cases "splitDigit p i pr", blast)
with True 3(7) have
v1: "l = nlistToTree l1" "nd = x" "r = deepL r1 m sf" by auto
from l1xr1 have
v2: "digitToList pr = nlistToList l1 @ nodeToList x @ nlistToList r1"
"length l1 \<le> 4" "length r1 \<le> 4"
by (auto simp add: splitDigit_list)
from 3(2,3) have
pr_m_sf_inv: "is_leveln_digit n pr \<and> is_measured_digit pr"
"is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
"is_leveln_digit n sf \<and> is_measured_digit sf" by simp_all
with 3(4,5) pr_m_sf_inv(1) True l1xr1
splitDigit_inv[of p i "pr" n l1 x r1] have
l1_x_r1_inv:
"\<not> p (i + (gmnl l1))"
"p (i + (gmnl l1) + (gmn x))"
"\<forall> x \<in> set l1. (is_measured_node x \<and> is_leveln_node n x)"
"\<forall> x \<in> set r1. (is_measured_node x \<and> is_leveln_node n x)"
"is_measured_node x \<and> is_leveln_node n x"
by auto
from v2 v1 l1_x_r1_inv(4) pr_m_sf_inv have
ziel1: "toList (Deep uu pr m sf) = toList l @ nodeToList nd @ toList r"
by (auto simp add: nlistToTree_list deepL_list)
from l1_x_r1_inv(3) v1(1) have
v3: "gmft l = gmnl l1" by (simp add: gmnl_gmft)
with l1_x_r1_inv(1,2) v1 have
ziel2: " \<not> p (i + gmft l)"
"p (i + gmft l + gmn nd)"
by simp_all
from l1_x_r1_inv(3,4,5) v1 v2(3) pr_m_sf_inv have
ziel3: "is_leveln_ftree n l \<and> is_measured_ftree l \<and>
is_leveln_ftree n r \<and> is_measured_ftree r \<and>
is_leveln_node n nd \<and> is_measured_node nd"
by (auto simp add: nlistToTree_inv deepL_inv)
from ziel1 ziel2 ziel3 show ?thesis by simp
qed
next
case False note case1 = this with 3 show ?thesis
proof (cases "p (i + gmd pr + gmft m)")
case False with case1 3 show ?thesis
proof -
obtain l1 x r1 where
l1xr1: "splitDigit p (i + gmd pr + gmft m) sf = (l1,x,r1)"
by (cases "splitDigit p (i + gmd pr + gmft m) sf", blast)
with case1 False 3(7) have
v1: "l = deepR pr m l1" "nd = x" "r = nlistToTree r1" by auto
from l1xr1 have
v2: "digitToList sf = nlistToList l1 @ nodeToList x @ nlistToList r1"
"length l1 \<le> 4" "length r1 \<le> 4"
by (auto simp add: splitDigit_list)
from 3(2,3) have
pr_m_sf_inv: "is_leveln_digit n pr \<and> is_measured_digit pr"
"is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
"is_leveln_digit n sf \<and> is_measured_digit sf" by simp_all
from 3(3,6) have
v7: "p (i + gmd pr + gmft m + gmd sf)" by (auto simp add: add.assoc)
with pr_m_sf_inv 3(4) pr_m_sf_inv(3) case1 False l1xr1
splitDigit_inv[of p "i + gmd pr + gmft m" sf n l1 x r1]
have l1_x_r1_inv:
"\<not> p (i + gmd pr + gmft m + gmnl l1)"
"p (i + gmd pr + gmft m + gmnl l1 + gmn x)"
"\<forall> x \<in> set l1. (is_measured_node x \<and> is_leveln_node n x)"
"\<forall> x \<in> set r1. (is_measured_node x \<and> is_leveln_node n x)"
"is_measured_node x \<and> is_leveln_node n x"
by auto
from v2 v1 l1_x_r1_inv(3) pr_m_sf_inv have
ziel1: "toList (Deep uu pr m sf) = toList l @ nodeToList nd @ toList r"
by (auto simp add: nlistToTree_list deepR_list)
from l1_x_r1_inv(4) v1(3) have
v3: "gmft r = gmnl r1" by (simp add: gmnl_gmft)
with l1_x_r1_inv(1,2,3) pr_m_sf_inv v1 v2 have
ziel2: " \<not> p (i + gmft l)"
"p (i + gmft l + gmn nd)"
by (auto simp add: gmftR_gmnl add.assoc)
from l1_x_r1_inv(3,4,5) v1 v2(2) pr_m_sf_inv have
ziel3: "is_leveln_ftree n l \<and> is_measured_ftree l \<and>
is_leveln_ftree n r \<and> is_measured_ftree r \<and>
is_leveln_node n nd \<and> is_measured_node nd"
by (auto simp add: nlistToTree_inv deepR_inv)
from ziel1 ziel2 ziel3 show ?thesis by simp
qed
next
case True with case1 3 show ?thesis
proof -
obtain l1 x r1 where
l1_x_r1 :"nsplitTree p (i + gmd pr) m = (l1, x, r1)"
by (cases "nsplitTree p (i + gmd pr) m") blast
from 3(2,3) have
pr_m_sf_inv: "is_leveln_digit n pr \<and> is_measured_digit pr"
"is_leveln_ftree (Suc n) m \<and> is_measured_ftree m"
"is_leveln_digit n sf \<and> is_measured_digit sf" by simp_all
with True case1
"3.hyps"[of "i + gmd pr" "i + gmd pr + gmft m" "Suc n" l1 x r1]
3(4) l1_x_r1
have l1_x_r1_inv:
"\<not> p (i + gmd pr + gmft l1)"
"p (i + gmd pr + gmft l1 + gmn x)"
"is_leveln_ftree (Suc n) l1 \<and> is_measured_ftree l1"
"is_leveln_ftree (Suc n) r1 \<and> is_measured_ftree r1"
"is_leveln_node (Suc n) x \<and> is_measured_node x"
and l1_x_r1_list:
"toList m = toList l1 @ nodeToList x @ toList r1"
by auto
obtain l2 x2 r2 where l2_x2_r2:
"splitDigit p (i + gmd pr + gmft l1) (nodeToDigit x) = (l2,x2,r2)"
by (cases "splitDigit p (i + gmd pr + gmft l1) (nodeToDigit x)",blast)
from l1_x_r1_inv(2,5) have
ndx_inv: "is_leveln_digit n (nodeToDigit x) \<and>
is_measured_digit (nodeToDigit x)"
"p (i + gmd pr + gmft l1 + gmd (nodeToDigit x))"
by (auto simp add: nodeToDigit_inv gmn_gmd)
note spdi = splitDigit_inv[of p "i + gmd pr + gmft l1"
"nodeToDigit x" n l2 x2 r2]
from ndx_inv l1_x_r1_inv(1) l2_x2_r2 3(4) have
l2_x2_r2_inv:"\<not> p (i + gmd pr + gmft l1 + gmnl l2)"
"p (i + gmd pr + gmft l1 + gmnl l2 + gmn x2)"
"\<forall>x\<in>set l2. is_measured_node x \<and> is_leveln_node n x"
"\<forall>x\<in>set r2. is_measured_node x \<and> is_leveln_node n x"
"is_measured_node x2 \<and> is_leveln_node n x2"
by (auto simp add: spdi)
note spdl = splitDigit_list[of p "i + gmd pr + gmft l1"
"nodeToDigit x" l2 x2 r2]
from l2_x2_r2 have
l2_x2_r2_list:
"digitToList (nodeToDigit x) =
nlistToList l2 @ nodeToList x2 @ nlistToList r2"
"length l2 \<le> 4 \<and> length r2 \<le> 4"
by (auto simp add: spdl)
from case1 True 3(7) l1_x_r1 l2_x2_r2 have
l_nd_r:
"l = deepR pr l1 l2"
"nd = x2"
"r = deepL r2 r1 sf"
by auto
note dr1 = deepR_inv[OF l1_x_r1_inv(3) pr_m_sf_inv(1)]
from dr1 l2_x2_r2_inv(3) l2_x2_r2_list(2) l_nd_r have
l_inv: "is_leveln_ftree n l \<and> is_measured_ftree l"
by simp
note dl1 = deepL_inv[OF l1_x_r1_inv(4) pr_m_sf_inv(3)]
from dl1 l2_x2_r2_inv(4) l2_x2_r2_list(2) l_nd_r have
r_inv: "is_leveln_ftree n r \<and> is_measured_ftree r"
by simp
from l2_x2_r2_inv l_nd_r have
nd_inv: "is_leveln_node n nd \<and> is_measured_node nd"
by simp
from l_nd_r(1,2) l2_x2_r2_inv(1,2,3)
l1_x_r1_inv(3) l2_x2_r2_list(2) pr_m_sf_inv(1)
have split_point:
" \<not> p (i + gmft l)"
"p (i + gmft l + gmn nd)"
by (auto simp add: gmftR_gmnl add.assoc)
from l2_x2_r2_list have x_list:
"nodeToList x = nlistToList l2 @ nodeToList x2 @ nlistToList r2"
by (simp add: nodeToDigit_list)
from l1_x_r1_inv(3) pr_m_sf_inv(1)
l2_x2_r2_inv(3) l2_x2_r2_list(2) l_nd_r(1)
have l_list: "toList l = digitToList pr @ toList l1 @ nlistToList l2"
by (auto simp add: deepR_list)
from l1_x_r1_inv(4) pr_m_sf_inv(3) l2_x2_r2_inv(4)
l2_x2_r2_list(2) l_nd_r(3)
have r_list: "toList r = nlistToList r2 @ toList r1 @ digitToList sf"
by (auto simp add: deepL_list)
from x_list l1_x_r1_list l_list r_list l_nd_r
have "toList (Deep uu pr m sf) = toList l @ nodeToList nd @ toList r"
by auto
with split_point l_inv r_inv nd_inv
show ?thesis by simp
qed
qed
qed
qed
text \<open>
A predicate on the elements of a monoid is called {\em monotone}
iff, whenever it holds for some value $a$, it also holds for all values $a+b$.
\<close>
text \<open>Split a finger tree by a monotone predicate on the annotations, using
a given initial value. Intuitively, the elements are summed up from left to
right, and the split is done when the predicate first holds for the sum.
The predicate must not hold for the initial value of the summation, and must
hold for the sum of all elements.
\<close>
definition splitTree
:: "('a::monoid_add \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> ('e, 'a) FingerTreeStruc
\<Rightarrow> ('e, 'a) FingerTreeStruc \<times> ('e \<times> 'a) \<times> ('e, 'a) FingerTreeStruc"
where
"splitTree p i t = (let (l, x, r) = nsplitTree p i t in (l, (n_unwrap x), r))"
lemma splitTree_invpres:
assumes inv: "ft_invar (s:: ('e,'a::monoid_add) FingerTreeStruc)"
assumes init_ff: "\<not> p i"
assumes sum_tt: "p (i + annot s)"
assumes fmt: "(splitTree p i s) = (l, (e,a), r)"
shows "ft_invar l" and "ft_invar r"
proof -
obtain l1 nd r1 where
l1_nd_r1: "nsplitTree p i s = (l1, nd, r1)"
by (cases "nsplitTree p i s", blast)
with assms have
l0: "l = l1"
"(e,a) = n_unwrap nd"
"r = r1"
by (auto simp add: splitTree_def)
note nsp = nsplitTree_invpres[of 0 s p i l1 nd r1]
from assms have "p (i + gmft s)" by (simp add: ft_invar_def annot_def)
with assms l1_nd_r1 l0 have
v1:
"is_leveln_ftree 0 l \<and> is_measured_ftree l"
"is_leveln_ftree 0 r \<and> is_measured_ftree r"
"is_leveln_node 0 nd \<and> is_measured_node nd"
by (auto simp add: nsp ft_invar_def)
thus "ft_invar l" and "ft_invar r"
by (simp_all add: ft_invar_def annot_def)
qed
lemma splitTree_correct:
assumes inv: "ft_invar (s:: ('e,'a::monoid_add) FingerTreeStruc)"
assumes mono: "\<forall>a b. p a \<longrightarrow> p (a + b)"
assumes init_ff: "\<not> p i"
assumes sum_tt: "p (i + annot s)"
assumes fmt: "(splitTree p i s) = (l, (e,a), r)"
shows "(toList s) = (toList l) @ (e,a) # (toList r)"
and "\<not> p (i + annot l)"
and "p (i + annot l + a)"
and "ft_invar l" and "ft_invar r"
proof -
obtain l1 nd r1 where
l1_nd_r1: "nsplitTree p i s = (l1, nd, r1)"
by (cases "nsplitTree p i s", blast)
with assms have
l0: "l = l1"
"(e,a) = n_unwrap nd"
"r = r1"
by (auto simp add: splitTree_def)
note nsp = nsplitTree_correct[of 0 s p i l1 nd r1]
from assms have "p (i + gmft s)" by (simp add: ft_invar_def annot_def)
with assms l1_nd_r1 l0 have
v1:
"(toList s) = (toList l) @ (nodeToList nd) @ (toList r)"
"\<not> p (i + (gmft l))"
"p (i + (gmft l) + (gmn nd))"
"is_leveln_ftree 0 l \<and> is_measured_ftree l"
"is_leveln_ftree 0 r \<and> is_measured_ftree r"
"is_leveln_node 0 nd \<and> is_measured_node nd"
by (auto simp add: nsp ft_invar_def)
from v1(6) l0(2) have
ndea: "nd = Tip e a"
by (cases nd) auto
hence nd_list_inv: "nodeToList nd = [(e,a)]"
"gmn nd = a" by simp_all
with v1 show "(toList s) = (toList l) @ (e,a) # (toList r)"
and "\<not> p (i + annot l)"
and "p (i + annot l + a)"
and "ft_invar l" and "ft_invar r"
by (simp_all add: ft_invar_def annot_def)
qed
lemma splitTree_correctE:
assumes inv: "ft_invar (s:: ('e,'a::monoid_add) FingerTreeStruc)"
assumes mono: "\<forall>a b. p a \<longrightarrow> p (a + b)"
assumes init_ff: "\<not> p i"
assumes sum_tt: "p (i + annot s)"
obtains l e a r where
"(splitTree p i s) = (l, (e,a), r)" and
"(toList s) = (toList l) @ (e,a) # (toList r)" and
"\<not> p (i + annot l)" and
"p (i + annot l + a)" and
"ft_invar l" and "ft_invar r"
proof -
obtain l e a r where fmt: "(splitTree p i s) = (l, (e,a), r)"
by (cases "(splitTree p i s)") auto
from splitTree_correct[of s p, OF assms fmt] fmt
show ?thesis
by (blast intro: that)
qed
subsubsection \<open>Folding\<close>
fun foldl_node :: "('s \<Rightarrow> 'e \<times> 'a \<Rightarrow> 's) \<Rightarrow> 's \<Rightarrow> ('e,'a) Node \<Rightarrow> 's" where
"foldl_node f \<sigma> (Tip e a) = f \<sigma> (e,a)"|
"foldl_node f \<sigma> (Node2 _ a b) = foldl_node f (foldl_node f \<sigma> a) b"|
"foldl_node f \<sigma> (Node3 _ a b c) =
foldl_node f (foldl_node f (foldl_node f \<sigma> a) b) c"
primrec foldl_digit :: "('s \<Rightarrow> 'e \<times> 'a \<Rightarrow> 's) \<Rightarrow> 's \<Rightarrow> ('e,'a) Digit \<Rightarrow> 's" where
"foldl_digit f \<sigma> (One n1) = foldl_node f \<sigma> n1"|
"foldl_digit f \<sigma> (Two n1 n2) = foldl_node f (foldl_node f \<sigma> n1) n2"|
"foldl_digit f \<sigma> (Three n1 n2 n3) =
foldl_node f (foldl_node f (foldl_node f \<sigma> n1) n2) n3"|
"foldl_digit f \<sigma> (Four n1 n2 n3 n4) =
foldl_node f (foldl_node f (foldl_node f (foldl_node f \<sigma> n1) n2) n3) n4"
primrec foldr_node :: "('e \<times> 'a \<Rightarrow> 's \<Rightarrow> 's) \<Rightarrow> ('e,'a) Node \<Rightarrow> 's \<Rightarrow> 's" where
"foldr_node f (Tip e a) \<sigma> = f (e,a) \<sigma> "|
"foldr_node f (Node2 _ a b) \<sigma> = foldr_node f a (foldr_node f b \<sigma>)"|
"foldr_node f (Node3 _ a b c) \<sigma>
= foldr_node f a (foldr_node f b (foldr_node f c \<sigma>))"
primrec foldr_digit :: "('e \<times> 'a \<Rightarrow> 's \<Rightarrow> 's) \<Rightarrow> ('e,'a) Digit \<Rightarrow> 's \<Rightarrow> 's" where
"foldr_digit f (One n1) \<sigma> = foldr_node f n1 \<sigma>"|
"foldr_digit f (Two n1 n2) \<sigma> = foldr_node f n1 (foldr_node f n2 \<sigma>)"|
"foldr_digit f (Three n1 n2 n3) \<sigma> =
foldr_node f n1 (foldr_node f n2 (foldr_node f n3 \<sigma>))"|
"foldr_digit f (Four n1 n2 n3 n4) \<sigma> =
foldr_node f n1 (foldr_node f n2 (foldr_node f n3 (foldr_node f n4 \<sigma>)))"
lemma foldl_node_correct:
"foldl_node f \<sigma> nd = List.foldl f \<sigma> (nodeToList nd)"
by (induct nd arbitrary: "\<sigma>") (auto simp add: nodeToList_def)
lemma foldl_digit_correct:
"foldl_digit f \<sigma> d = List.foldl f \<sigma> (digitToList d)"
by (induct d arbitrary: "\<sigma>") (auto
simp add: digitToList_def foldl_node_correct)
lemma foldr_node_correct:
"foldr_node f nd \<sigma> = List.foldr f (nodeToList nd) \<sigma>"
by (induct nd arbitrary: "\<sigma>") (auto simp add: nodeToList_def)
lemma foldr_digit_correct:
"foldr_digit f d \<sigma> = List.foldr f (digitToList d) \<sigma>"
by (induct d arbitrary: "\<sigma>") (auto
simp add: digitToList_def foldr_node_correct)
text "Fold from left"
primrec foldl :: "('s \<Rightarrow> 'e \<times> 'a \<Rightarrow> 's) \<Rightarrow> 's \<Rightarrow> ('e,'a) FingerTreeStruc \<Rightarrow> 's"
where
"foldl f \<sigma> Empty = \<sigma>"|
"foldl f \<sigma> (Single nd) = foldl_node f \<sigma> nd"|
"foldl f \<sigma> (Deep _ d1 m d2) =
foldl_digit f (foldl f (foldl_digit f \<sigma> d1) m) d2"
lemma foldl_correct:
"foldl f \<sigma> t = List.foldl f \<sigma> (toList t)"
by (induct t arbitrary: "\<sigma>") (auto
simp add: toList_def foldl_node_correct foldl_digit_correct)
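(* Usage sketch: by foldl_correct, the fold behaves exactly like List.foldl on
   the element list.  For example, with nat annotations,

     foldl (\<lambda>\<sigma> (_, a). \<sigma> + a) (0::nat) t = sum_list (map snd (toList t))

   which equals annot t for trees satisfying the invariant (annot_correct). *)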
text "Fold from right"
primrec foldr :: "('e \<times> 'a \<Rightarrow> 's \<Rightarrow> 's) \<Rightarrow> ('e,'a) FingerTreeStruc \<Rightarrow> 's \<Rightarrow> 's"
where
"foldr f Empty \<sigma> = \<sigma>"|
"foldr f (Single nd) \<sigma> = foldr_node f nd \<sigma>"|
"foldr f (Deep _ d1 m d2) \<sigma>
= foldr_digit f d1 (foldr f m (foldr_digit f d2 \<sigma>))"
lemma foldr_correct:
"foldr f t \<sigma> = List.foldr f (toList t) \<sigma>"
by (induct t arbitrary: "\<sigma>") (auto
simp add: toList_def foldr_node_correct foldr_digit_correct)
subsubsection "Number of elements"
primrec count_node :: "('e, 'a) Node \<Rightarrow> nat" where
"count_node (Tip _ a) = 1" |
"count_node (Node2 _ a b) = count_node a + count_node b" |
"count_node (Node3 _ a b c) = count_node a + count_node b + count_node c"
primrec count_digit :: "('e,'a) Digit \<Rightarrow> nat" where
"count_digit (One a) = count_node a" |
"count_digit (Two a b) = count_node a + count_node b" |
"count_digit (Three a b c) = count_node a + count_node b + count_node c" |
"count_digit (Four a b c d)
= count_node a + count_node b + count_node c + count_node d"
lemma count_node_correct:
"count_node n = length (nodeToList n)"
by (induct n,auto simp add: nodeToList_def count_node_def)
lemma count_digit_correct:
"count_digit d = length (digitToList d)"
by (cases d, auto simp add: digitToList_def count_digit_def count_node_correct)
primrec count :: "('e,'a) FingerTreeStruc \<Rightarrow> nat" where
"count Empty = 0" |
"count (Single a) = count_node a" |
"count (Deep _ pr m sf) = count_digit pr + count m + count_digit sf"
lemma count_correct[simp]:
"count t = length (toList t)"
by (induct t,
auto simp add: toList_def count_def
count_digit_correct count_node_correct)
end
(* Expose finger tree functions as qualified names.
Generate code equations *)
interpretation FingerTreeStruc: FingerTreeStruc_loc .
(* Hide the concrete syntax *)
no_notation FingerTreeStruc.lcons (infixr "\<lhd>" 65)
no_notation FingerTreeStruc.rcons (infixl "\<rhd>" 65)
subsection "Hiding the invariant"
text_raw\<open>\label{sec:hide_invar}\<close>
text \<open>
In this section, we define the datatype of all FingerTrees that fulfill their
invariant, and define the operations on this datatype.
The advantage is that the correctness lemmas no longer contain
explicit invariant predicates, which makes them easier to use.
\<close>
subsubsection "Datatype"
typedef (overloaded) ('e, 'a) FingerTree =
"{t :: ('e, 'a::monoid_add) FingerTreeStruc. FingerTreeStruc.ft_invar t}"
proof -
have "Empty \<in> ?FingerTree" by (simp)
then show ?thesis ..
qed
lemma Rep_FingerTree_invar[simp]: "FingerTreeStruc.ft_invar (Rep_FingerTree t)"
using Rep_FingerTree by simp
lemma [simp]:
"FingerTreeStruc.ft_invar t \<Longrightarrow> Rep_FingerTree (Abs_FingerTree t) = t"
using Abs_FingerTree_inverse by simp
lemma [simp, code abstype]: "Abs_FingerTree (Rep_FingerTree t) = t"
by (rule Rep_FingerTree_inverse)
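(* Code generation pattern: the [code abstype] equation above registers
   Abs_FingerTree/Rep_FingerTree as an abstract datatype.  Each operation
   defined below comes with a [code abstract] equation of the shape
     Rep_FingerTree (f t) = ... (Rep_FingerTree t) ...
   so that generated code only manipulates the underlying FingerTreeStruc
   values and never needs to execute Abs_FingerTree. *)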
typedef (overloaded) ('e,'a) viewres =
"{ r:: (('e \<times> 'a) \<times> ('e,'a::monoid_add) FingerTreeStruc) option .
case r of None \<Rightarrow> True | Some (a,t) \<Rightarrow> FingerTreeStruc.ft_invar t}"
apply (rule_tac x=None in exI)
apply auto
done
lemma [simp, code abstype]: "Abs_viewres (Rep_viewres x) = x"
by (rule Rep_viewres_inverse)
lemma Abs_viewres_inverse_None[simp]:
"Rep_viewres (Abs_viewres None) = None"
by (simp add: Abs_viewres_inverse)
lemma Abs_viewres_inverse_Some:
"FingerTreeStruc.ft_invar t \<Longrightarrow>
Rep_viewres (Abs_viewres (Some (a,t))) = Some (a,t)"
by (auto simp add: Abs_viewres_inverse)
definition [code]: "extract_viewres_isNone r == Rep_viewres r = None"
definition [code]: "extract_viewres_a r ==
case (Rep_viewres r) of Some (a,t) \<Rightarrow> a"
definition "extract_viewres_t r ==
case (Rep_viewres r) of None \<Rightarrow> Abs_FingerTree Empty
| Some (a,t) \<Rightarrow> Abs_FingerTree t"
lemma [code abstract]: "Rep_FingerTree (extract_viewres_t r) =
(case (Rep_viewres r) of None \<Rightarrow> Empty | Some (a,t) \<Rightarrow> t)"
apply (cases r)
apply (auto split: option.split option.split_asm
simp add: extract_viewres_t_def Abs_viewres_inverse_Some)
done
definition "extract_viewres r ==
if extract_viewres_isNone r then None
else Some (extract_viewres_a r, extract_viewres_t r)"
typedef (overloaded) ('e,'a) splitres =
"{ ((l,a,r):: (('e,'a) FingerTreeStruc \<times> ('e \<times> 'a) \<times> ('e,'a::monoid_add) FingerTreeStruc))
| l a r.
FingerTreeStruc.ft_invar l \<and> FingerTreeStruc.ft_invar r}"
apply (rule_tac x="(Empty,undefined,Empty)" in exI)
apply auto
done
lemma [simp, code abstype]: "Abs_splitres (Rep_splitres x) = x"
by (rule Rep_splitres_inverse)
lemma Abs_splitres_inverse:
"FingerTreeStruc.ft_invar r \<Longrightarrow> FingerTreeStruc.ft_invar s \<Longrightarrow>
Rep_splitres (Abs_splitres ((r,a,s))) = (r,a,s)"
by (auto simp add: Abs_splitres_inverse)
definition [code]: "extract_splitres_a r == case (Rep_splitres r) of (l,a,s) \<Rightarrow> a"
definition "extract_splitres_l r == case (Rep_splitres r) of (l,a,r) \<Rightarrow>
Abs_FingerTree l"
lemma [code abstract]: "Rep_FingerTree (extract_splitres_l r) = (case
(Rep_splitres r) of (l,a,r) \<Rightarrow> l)"
apply (cases r)
apply (auto split: option.split option.split_asm
simp add: extract_splitres_l_def Abs_splitres_inverse)
done
definition "extract_splitres_r r == case (Rep_splitres r) of (l,a,r) \<Rightarrow>
Abs_FingerTree r"
lemma [code abstract]: "Rep_FingerTree (extract_splitres_r r) = (case
(Rep_splitres r) of (l,a,r) \<Rightarrow> r)"
apply (cases r)
apply (auto split: option.split option.split_asm
simp add: extract_splitres_r_def Abs_splitres_inverse)
done
definition "extract_splitres r ==
(extract_splitres_l r,
extract_splitres_a r,
extract_splitres_r r)"
subsubsection "Definition of Operations"
locale FingerTree_loc
begin
definition [code]: "toList t == FingerTreeStruc.toList (Rep_FingerTree t)"
definition empty where "empty == Abs_FingerTree FingerTreeStruc.Empty"
lemma [code abstract]: "Rep_FingerTree empty = FingerTreeStruc.Empty"
by (simp add: empty_def)
lemma empty_rep: "t=empty \<longleftrightarrow> Rep_FingerTree t = Empty"
apply (auto simp add: empty_def)
apply (metis Rep_FingerTree_inverse)
done
definition [code]: "annot t == FingerTreeStruc.annot (Rep_FingerTree t)"
definition "toTree t == Abs_FingerTree (FingerTreeStruc.toTree t)"
lemma [code abstract]: "Rep_FingerTree (toTree t) = FingerTreeStruc.toTree t"
by (simp add: toTree_def)
definition "lcons a t ==
Abs_FingerTree (FingerTreeStruc.lcons a (Rep_FingerTree t))"
lemma [code abstract]:
"Rep_FingerTree (lcons a t) = (FingerTreeStruc.lcons a (Rep_FingerTree t))"
by (simp add: lcons_def FingerTreeStruc.lcons_correct)
definition "rcons t a ==
Abs_FingerTree (FingerTreeStruc.rcons (Rep_FingerTree t) a)"
lemma [code abstract]:
"Rep_FingerTree (rcons t a) = (FingerTreeStruc.rcons (Rep_FingerTree t) a)"
by (simp add: rcons_def FingerTreeStruc.rcons_correct)
definition "viewL_aux t ==
Abs_viewres (FingerTreeStruc.viewL (Rep_FingerTree t))"
definition "viewL t == extract_viewres (viewL_aux t)"
lemma [code abstract]:
"Rep_viewres (viewL_aux t) = (FingerTreeStruc.viewL (Rep_FingerTree t))"
apply (cases "(FingerTreeStruc.viewL (Rep_FingerTree t))")
apply (auto simp add: viewL_aux_def )
apply (cases "Rep_FingerTree t = Empty")
apply simp
apply (auto
elim!: FingerTreeStruc.viewL_correct_nonEmpty
[of "Rep_FingerTree t", simplified]
simp add: Abs_viewres_inverse_Some)
done
definition "viewR_aux t ==
Abs_viewres (FingerTreeStruc.viewR (Rep_FingerTree t))"
definition "viewR t == extract_viewres (viewR_aux t)"
lemma [code abstract]:
"Rep_viewres (viewR_aux t) = (FingerTreeStruc.viewR (Rep_FingerTree t))"
apply (cases "(FingerTreeStruc.viewR (Rep_FingerTree t))")
apply (auto simp add: viewR_aux_def )
apply (cases "Rep_FingerTree t = Empty")
apply simp
apply (auto
elim!: FingerTreeStruc.viewR_correct_nonEmpty
[of "Rep_FingerTree t", simplified]
simp add: Abs_viewres_inverse_Some)
done
definition [code]: "isEmpty t == FingerTreeStruc.isEmpty (Rep_FingerTree t)"
definition [code]: "head t = FingerTreeStruc.head (Rep_FingerTree t)"
definition "tail t \<equiv>
if t=empty then
empty
else
Abs_FingerTree (FingerTreeStruc.tail (Rep_FingerTree t))"
\<comment> \<open>Make function total, to allow abstraction\<close>
lemma [code abstract]: "Rep_FingerTree (tail t) =
(if (FingerTreeStruc.isEmpty (Rep_FingerTree t)) then Empty
else FingerTreeStruc.tail (Rep_FingerTree t))"
apply (simp add: tail_def FingerTreeStruc.tail_correct FingerTreeStruc.isEmpty_def empty_rep)
apply (auto simp add: empty_def)
done
definition [code]: "headR t = FingerTreeStruc.headR (Rep_FingerTree t)"
definition "tailR t \<equiv>
if t=empty then
empty
else
Abs_FingerTree (FingerTreeStruc.tailR (Rep_FingerTree t))"
lemma [code abstract]: "Rep_FingerTree (tailR t) =
(if (FingerTreeStruc.isEmpty (Rep_FingerTree t)) then Empty
else FingerTreeStruc.tailR (Rep_FingerTree t))"
apply (simp add: tailR_def FingerTreeStruc.tailR_correct FingerTreeStruc.isEmpty_def empty_rep)
apply (simp add: empty_def)
done
definition "app s t = Abs_FingerTree (
FingerTreeStruc.app (Rep_FingerTree s) (Rep_FingerTree t))"
lemma [code abstract]:
"Rep_FingerTree (app s t) =
FingerTreeStruc.app (Rep_FingerTree s) (Rep_FingerTree t)"
by (simp add: app_def FingerTreeStruc.app_correct)
definition "splitTree_aux p i t == if (\<not>p i \<and> p (i+annot t)) then
Abs_splitres (FingerTreeStruc.splitTree p i (Rep_FingerTree t))
else
Abs_splitres (Empty,undefined,Empty)"
definition "splitTree p i t == extract_splitres (splitTree_aux p i t)"
lemma [code abstract]:
"Rep_splitres (splitTree_aux p i t) = (if (\<not>p i \<and> p (i+annot t)) then
(FingerTreeStruc.splitTree p i (Rep_FingerTree t))
else
(Empty,undefined,Empty))"
using FingerTreeStruc.splitTree_invpres[of "Rep_FingerTree t" p i]
apply (auto simp add: splitTree_aux_def annot_def Abs_splitres_inverse)
apply (cases "FingerTreeStruc.splitTree p i (Rep_FingerTree t)")
apply (force simp add: Abs_FingerTree_inverse Abs_splitres_inverse)
done
definition foldl where
[code]: "foldl f \<sigma> t == FingerTreeStruc.foldl f \<sigma> (Rep_FingerTree t)"
definition foldr where
[code]: "foldr f t \<sigma> == FingerTreeStruc.foldr f (Rep_FingerTree t) \<sigma>"
definition count where
[code]: "count t == FingerTreeStruc.count (Rep_FingerTree t)"
subsubsection "Correctness statements"
lemma empty_correct: "toList t = [] \<longleftrightarrow> t=empty"
apply (unfold toList_def empty_rep)
apply (simp add: FingerTreeStruc.toList_empty)
done
lemma toList_of_empty[simp]: "toList empty = []"
apply (unfold toList_def empty_def)
apply (auto simp add: FingerTreeStruc.toList_empty)
done
lemma annot_correct: "annot t = sum_list (map snd (toList t))"
apply (unfold toList_def annot_def)
apply (simp add: FingerTreeStruc.annot_correct)
done
lemma toTree_correct: "toList (toTree l) = l"
apply (unfold toList_def toTree_def)
apply (simp add: FingerTreeStruc.toTree_correct)
done
lemma lcons_correct: "toList (lcons a t) = a#toList t"
apply (unfold toList_def lcons_def)
apply (simp add: FingerTreeStruc.lcons_correct)
done
lemma rcons_correct: "toList (rcons t a) = toList t@[a]"
apply (unfold toList_def rcons_def)
apply (simp add: FingerTreeStruc.rcons_correct)
done
lemma viewL_correct:
"t = empty \<Longrightarrow> viewL t = None"
"t \<noteq> empty \<Longrightarrow> \<exists>a s. viewL t = Some (a,s) \<and> toList t = a#toList s"
apply (unfold toList_def viewL_def viewL_aux_def
extract_viewres_def extract_viewres_isNone_def
extract_viewres_a_def
extract_viewres_t_def
empty_rep)
apply (simp add: FingerTreeStruc.viewL_correct)
apply (drule FingerTreeStruc.viewL_correct(2)[OF Rep_FingerTree_invar])
apply (auto simp add: Abs_viewres_inverse)
done
lemma viewL_empty[simp]: "viewL empty = None"
using viewL_correct by auto
lemma viewL_nonEmpty:
assumes "t\<noteq>empty"
obtains a s where "viewL t = Some (a,s)" "toList t = a#toList s"
using assms viewL_correct by blast
lemma viewR_correct:
"t = empty \<Longrightarrow> viewR t = None"
"t \<noteq> empty \<Longrightarrow> \<exists>a s. viewR t = Some (a,s) \<and> toList t = toList s@[a]"
apply (unfold toList_def viewR_def viewR_aux_def
extract_viewres_def extract_viewres_isNone_def
extract_viewres_a_def
extract_viewres_t_def
empty_rep)
apply (simp add: FingerTreeStruc.viewR_correct)
apply (drule FingerTreeStruc.viewR_correct(2)[OF Rep_FingerTree_invar])
apply (auto simp add: Abs_viewres_inverse)
done
lemma viewR_empty[simp]: "viewR empty = None"
using viewR_correct by auto
lemma viewR_nonEmpty:
assumes "t\<noteq>empty"
obtains a s where "viewR t = Some (a,s)" "toList t = toList s@[a]"
using assms viewR_correct by blast
lemma isEmpty_correct: "isEmpty t \<longleftrightarrow> t=empty"
apply (unfold toList_def isEmpty_def empty_rep)
apply (simp add: FingerTreeStruc.isEmpty_correct FingerTreeStruc.toList_empty)
done
lemma head_correct: "t\<noteq>empty \<Longrightarrow> head t = hd (toList t)"
apply (unfold toList_def head_def empty_rep)
apply (simp add: FingerTreeStruc.head_correct)
done
lemma tail_correct: "t\<noteq>empty \<Longrightarrow> toList (tail t) = tl (toList t)"
apply (unfold toList_def tail_def empty_rep)
apply (simp add: FingerTreeStruc.tail_correct)
done
lemma headR_correct: "t\<noteq>empty \<Longrightarrow> headR t = last (toList t)"
apply (unfold toList_def headR_def empty_rep)
apply (simp add: FingerTreeStruc.headR_correct)
done
lemma tailR_correct: "t\<noteq>empty \<Longrightarrow> toList (tailR t) = butlast (toList t)"
apply (unfold toList_def tailR_def empty_rep)
apply (simp add: FingerTreeStruc.tailR_correct)
done
lemma app_correct: "toList (app s t) = toList s @ toList t"
apply (unfold toList_def app_def)
apply (simp add: FingerTreeStruc.app_correct)
done
lemma splitTree_correct:
assumes mono: "\<forall>a b. p a \<longrightarrow> p (a + b)"
assumes init_ff: "\<not> p i"
assumes sum_tt: "p (i + annot s)"
assumes fmt: "(splitTree p i s) = (l, (e,a), r)"
shows "(toList s) = (toList l) @ (e,a) # (toList r)"
and "\<not> p (i + annot l)"
and "p (i + annot l + a)"
apply (rule
FingerTreeStruc.splitTree_correctE[
where p=p and s="Rep_FingerTree s",
OF _ mono init_ff sum_tt[unfolded annot_def],
simplified
])
using fmt
apply (unfold toList_def splitTree_aux_def splitTree_def annot_def
extract_splitres_def extract_splitres_l_def
extract_splitres_a_def extract_splitres_r_def) [1]
apply (auto split: if_split_asm prod.split_asm
simp add: init_ff sum_tt[unfolded annot_def] Abs_splitres_inverse) [1]
apply (rule
FingerTreeStruc.splitTree_correctE[
where p=p and s="Rep_FingerTree s",
OF _ mono init_ff sum_tt[unfolded annot_def],
simplified
])
using fmt
apply (unfold toList_def splitTree_aux_def splitTree_def annot_def
extract_splitres_def extract_splitres_l_def
extract_splitres_a_def extract_splitres_r_def) [1]
apply (auto split: if_split_asm prod.split_asm
simp add: init_ff sum_tt[unfolded annot_def] Abs_splitres_inverse) [1]
apply (rule
FingerTreeStruc.splitTree_correctE[
where p=p and s="Rep_FingerTree s",
OF _ mono init_ff sum_tt[unfolded annot_def],
simplified
])
using fmt
apply (unfold toList_def splitTree_aux_def splitTree_def annot_def
extract_splitres_def extract_splitres_l_def
extract_splitres_a_def extract_splitres_r_def) [1]
apply (auto split: if_split_asm prod.split_asm
simp add: init_ff sum_tt[unfolded annot_def] Abs_splitres_inverse) [1]
done
lemma splitTree_correctE:
assumes mono: "\<forall>a b. p a \<longrightarrow> p (a + b)"
assumes init_ff: "\<not> p i"
assumes sum_tt: "p (i + annot s)"
obtains l e a r where
"(splitTree p i s) = (l, (e,a), r)" and
"(toList s) = (toList l) @ (e,a) # (toList r)" and
"\<not> p (i + annot l)" and
"p (i + annot l + a)"
proof -
obtain l e a r where fmt: "(splitTree p i s) = (l, (e,a), r)"
by (cases "(splitTree p i s)") auto
from splitTree_correct[of p, OF assms fmt] fmt
show ?thesis
by (blast intro: that)
qed
lemma foldl_correct: "foldl f \<sigma> t = List.foldl f \<sigma> (toList t)"
apply (unfold toList_def foldl_def)
apply (simp add: FingerTreeStruc.foldl_correct)
done
lemma foldr_correct: "foldr f t \<sigma> = List.foldr f (toList t) \<sigma>"
apply (unfold toList_def foldr_def)
apply (simp add: FingerTreeStruc.foldr_correct)
done
lemma count_correct: "count t = length (toList t)"
apply (unfold toList_def count_def)
apply (simp add: FingerTreeStruc.count_correct)
done
end
interpretation FingerTree: FingerTree_loc .
text_raw\<open>\clearpage\<close>
subsection "Interface Documentation"
text_raw\<open>\label{sec:doc}\<close>
text \<open>
In this section, we list all supported operations on finger trees,
along with a short plaintext documentation and their correctness statements.
\<close>
(*#DOC
fun [no_spec] FingerTree.toList
Convert to list ($O(n)$)
fun FingerTree.empty
The empty finger tree ($O(1)$)
fun FingerTree.annot
Return sum of all annotations ($O(1)$)
fun FingerTree.toTree
Convert list to finger tree ($O(n\log(n))$)
fun FingerTree.lcons
Append element at the left end ($O(\log(n))$, $O(1)$ amortized)
fun FingerTree.rcons
Append element at the right end ($O(\log(n))$, $O(1)$ amortized)
fun FingerTree.viewL
Detach leftmost element ($O(\log(n))$, $O(1)$ amortized)
fun FingerTree.viewR
Detach rightmost element ($O(\log(n))$, $O(1)$ amortized)
fun FingerTree.isEmpty
Check whether tree is empty ($O(1)$)
fun FingerTree.head
Get leftmost element of non-empty tree ($O(\log(n))$)
fun FingerTree.tail
Get all but leftmost element of non-empty tree ($O(\log(n))$)
fun FingerTree.headR
Get rightmost element of non-empty tree ($O(\log(n))$)
fun FingerTree.tailR
Get all but rightmost element of non-empty tree ($O(\log(n))$)
fun FingerTree.app
Concatenate two finger trees ($O(\log(m+n))$)
fun [long_type] FingerTree.splitTree
Split tree by a monotone predicate. ($O(\log(n))$)
A predicate $p$ over the annotations is called monotone iff, for all
annotations $a,b$ with $p(a)$, we also have $p(a+b)$.
Splitting is done by specifying a monotone predicate $p$ that does not hold
for the initial value $i$ of the summation, but holds for $i$ plus the sum
of all annotations. The tree is then split at the position where $p$ starts to
hold for the sum of all elements up to that position.
fun [long_type] FingerTree.foldl
Fold with function from left
fun [long_type] FingerTree.foldr
Fold with function from right
fun FingerTree.count
Return the number of elements
*)
text \<open>
\underline{@{term_type "FingerTree.toList"}}\\
Convert to list ($O(n)$)\\
\underline{@{term_type "FingerTree.empty"}}\\
The empty finger tree ($O(1)$)\\
{\bf Spec} \<open>FingerTree.empty_correct\<close>:
@{thm [display] "FingerTree.empty_correct"}
\underline{@{term_type "FingerTree.annot"}}\\
Return sum of all annotations ($O(1)$)\\
{\bf Spec} \<open>FingerTree.annot_correct\<close>:
@{thm [display] "FingerTree.annot_correct"}
\underline{@{term_type "FingerTree.toTree"}}\\
Convert list to finger tree ($O(n\log(n))$)\\
{\bf Spec} \<open>FingerTree.toTree_correct\<close>:
@{thm [display] "FingerTree.toTree_correct"}
\underline{@{term_type "FingerTree.lcons"}}\\
Append element at the left end ($O(\log(n))$, $O(1)$ amortized)\\
{\bf Spec} \<open>FingerTree.lcons_correct\<close>:
@{thm [display] "FingerTree.lcons_correct"}
\underline{@{term_type "FingerTree.rcons"}}\\
Append element at the right end ($O(\log(n))$, $O(1)$ amortized)\\
{\bf Spec} \<open>FingerTree.rcons_correct\<close>:
@{thm [display] "FingerTree.rcons_correct"}
\underline{@{term_type "FingerTree.viewL"}}\\
Detach leftmost element ($O(\log(n))$, $O(1)$ amortized)\\
{\bf Spec} \<open>FingerTree.viewL_correct\<close>:
@{thm [display] "FingerTree.viewL_correct"}
\underline{@{term_type "FingerTree.viewR"}}\\
Detach rightmost element ($O(\log(n))$, $O(1)$ amortized)\\
{\bf Spec} \<open>FingerTree.viewR_correct\<close>:
@{thm [display] "FingerTree.viewR_correct"}
\underline{@{term_type "FingerTree.isEmpty"}}\\
Check whether tree is empty ($O(1)$)\\
{\bf Spec} \<open>FingerTree.isEmpty_correct\<close>:
@{thm [display] "FingerTree.isEmpty_correct"}
\underline{@{term_type "FingerTree.head"}}\\
Get leftmost element of non-empty tree ($O(\log(n))$)\\
{\bf Spec} \<open>FingerTree.head_correct\<close>:
@{thm [display] "FingerTree.head_correct"}
\underline{@{term_type "FingerTree.tail"}}\\
Get all but leftmost element of non-empty tree ($O(\log(n))$)\\
{\bf Spec} \<open>FingerTree.tail_correct\<close>:
@{thm [display] "FingerTree.tail_correct"}
\underline{@{term_type "FingerTree.headR"}}\\
Get rightmost element of non-empty tree ($O(\log(n))$)\\
{\bf Spec} \<open>FingerTree.headR_correct\<close>:
@{thm [display] "FingerTree.headR_correct"}
\underline{@{term_type "FingerTree.tailR"}}\\
Get all but rightmost element of non-empty tree ($O(\log(n))$)\\
{\bf Spec} \<open>FingerTree.tailR_correct\<close>:
@{thm [display] "FingerTree.tailR_correct"}
\underline{@{term_type "FingerTree.app"}}\\
Concatenate two finger trees ($O(\log(m+n))$)\\
{\bf Spec} \<open>FingerTree.app_correct\<close>:
@{thm [display] "FingerTree.app_correct"}
\underline{@{term "FingerTree.splitTree"}}
@{term_type [display] "FingerTree.splitTree"}
Split tree by a monotone predicate ($O(\log(n))$)
A predicate $p$ over the annotations is called monotone iff, for all
annotations $a,b$ with $p(a)$, we also have $p(a+b)$.
Splitting is done by specifying a monotone predicate $p$ that does not hold
for the initial value $i$ of the summation, but holds for $i$ plus the sum
of all annotations. The tree is then split at the position where $p$ starts to
hold for the sum of the annotations of all elements up to that position.\\
{\bf Spec} \<open>FingerTree.splitTree_correct\<close>:
@{thm [display] "FingerTree.splitTree_correct"}
\underline{@{term "FingerTree.foldl"}}
@{term_type [display] "FingerTree.foldl"}
Fold with function from left\\
{\bf Spec} \<open>FingerTree.foldl_correct\<close>:
@{thm [display] "FingerTree.foldl_correct"}
\underline{@{term "FingerTree.foldr"}}
@{term_type [display] "FingerTree.foldr"}
Fold with function from right\\
{\bf Spec} \<open>FingerTree.foldr_correct\<close>:
@{thm [display] "FingerTree.foldr_correct"}
\underline{@{term_type "FingerTree.count"}}\\
Return the number of elements\\
{\bf Spec} \<open>FingerTree.count_correct\<close>:
@{thm [display] "FingerTree.count_correct"}
\<close>
end
diff --git a/thys/Functional_Ordered_Resolution_Prover/Weighted_FO_Ordered_Resolution_Prover.thy b/thys/Functional_Ordered_Resolution_Prover/Weighted_FO_Ordered_Resolution_Prover.thy
--- a/thys/Functional_Ordered_Resolution_Prover/Weighted_FO_Ordered_Resolution_Prover.thy
+++ b/thys/Functional_Ordered_Resolution_Prover/Weighted_FO_Ordered_Resolution_Prover.thy
@@ -1,836 +1,836 @@
(* Title: A Fair Ordered Resolution Prover for First-Order Clauses with Weights
Author: Anders Schlichtkrull <andschl at dtu.dk>, 2017
Author: Jasmin Blanchette <j.c.blanchette at vu.nl>, 2017
Maintainer: Anders Schlichtkrull <andschl at dtu.dk>
*)
section \<open>A Fair Ordered Resolution Prover for First-Order Clauses with Weights\<close>
text \<open>
The \<open>weighted_RP\<close> prover introduced below operates on finite multisets of clauses and
organizes the multiset of processed clauses as a priority queue to ensure that inferences are
performed in a fair manner, which guarantees completeness.
\<close>
theory Weighted_FO_Ordered_Resolution_Prover
imports Ordered_Resolution_Prover.FO_Ordered_Resolution_Prover
begin
subsection \<open>Prover\<close>
type_synonym 'a wclause = "'a clause \<times> nat"
type_synonym 'a wstate = "'a wclause multiset \<times> 'a wclause multiset \<times> 'a wclause multiset \<times> nat"
fun state_of_wstate :: "'a wstate \<Rightarrow> 'a state" where
"state_of_wstate (N, P, Q, n) =
(set_mset (image_mset fst N), set_mset (image_mset fst P), set_mset (image_mset fst Q))"
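(* A weighted state attaches a timestamp to each clause and carries a global timestamp counter;
   state_of_wstate forgets the timestamps, the multiset multiplicities, and the counter, thereby
   projecting a weighted state onto an ordinary prover state. *)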
locale weighted_FO_resolution_prover =
FO_resolution_prover S subst_atm id_subst comp_subst renamings_apart atm_of_atms mgu less_atm
for
S :: "('a :: wellorder) clause \<Rightarrow> 'a clause" and
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: "'s" and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" and
renamings_apart :: "'a clause list \<Rightarrow> 's list" and
atm_of_atms :: "'a list \<Rightarrow> 'a" and
mgu :: "'a set set \<Rightarrow> 's option" and
less_atm :: "'a \<Rightarrow> 'a \<Rightarrow> bool" +
fixes
weight :: "'a clause \<times> nat \<Rightarrow> nat"
assumes
weight_mono: "i < j \<Longrightarrow> weight (C, i) < weight (C, j)"
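(* The weight of a clause must grow strictly with its timestamp. Together with lemma
   timestamp_le_weight below, this means that a clause stamped with timestamp n has weight at
   least n, so only finitely many clauses can ever take precedence over a clause waiting in the
   priority queue; informally, this is what makes the strategy fair. *)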
begin
abbreviation clss_of_wstate :: "'a wstate \<Rightarrow> 'a clause set" where
"clss_of_wstate St \<equiv> clss_of_state (state_of_wstate St)"
abbreviation N_of_wstate :: "'a wstate \<Rightarrow> 'a clause set" where
"N_of_wstate St \<equiv> N_of_state (state_of_wstate St)"
abbreviation P_of_wstate :: "'a wstate \<Rightarrow> 'a clause set" where
"P_of_wstate St \<equiv> P_of_state (state_of_wstate St)"
abbreviation Q_of_wstate :: "'a wstate \<Rightarrow> 'a clause set" where
"Q_of_wstate St \<equiv> Q_of_state (state_of_wstate St)"
fun wN_of_wstate :: "'a wstate \<Rightarrow> 'a wclause multiset" where
"wN_of_wstate (N, P, Q, n) = N"
fun wP_of_wstate :: "'a wstate \<Rightarrow> 'a wclause multiset" where
"wP_of_wstate (N, P, Q, n) = P"
fun wQ_of_wstate :: "'a wstate \<Rightarrow> 'a wclause multiset" where
"wQ_of_wstate (N, P, Q, n) = Q"
fun n_of_wstate :: "'a wstate \<Rightarrow> nat" where
"n_of_wstate (N, P, Q, n) = n"
lemma of_wstate_split[simp]:
"(wN_of_wstate St, wP_of_wstate St, wQ_of_wstate St, n_of_wstate St) = St"
by (cases St) auto
abbreviation grounding_of_wstate :: "'a wstate \<Rightarrow> 'a clause set" where
"grounding_of_wstate St \<equiv> grounding_of_state (state_of_wstate St)"
abbreviation Liminf_wstate :: "'a wstate llist \<Rightarrow> 'a state" where
"Liminf_wstate Sts \<equiv> Liminf_state (lmap state_of_wstate Sts)"
lemma timestamp_le_weight: "n \<le> weight (C, n)"
by (induct n, simp, metis weight_mono[of k "Suc k" for k] Suc_le_eq le_less le_trans)
inductive weighted_RP :: "'a wstate \<Rightarrow> 'a wstate \<Rightarrow> bool" (infix "\<leadsto>\<^sub>w" 50) where
tautology_deletion: "Neg A \<in># C \<Longrightarrow> Pos A \<in># C \<Longrightarrow> (N + {#(C, i)#}, P, Q, n) \<leadsto>\<^sub>w (N, P, Q, n)"
| forward_subsumption: "D \<in># image_mset fst (P + Q) \<Longrightarrow> subsumes D C \<Longrightarrow>
(N + {#(C, i)#}, P, Q, n) \<leadsto>\<^sub>w (N, P, Q, n)"
| backward_subsumption_P: "D \<in># image_mset fst N \<Longrightarrow> C \<in># image_mset fst P \<Longrightarrow>
strictly_subsumes D C \<Longrightarrow> (N, P, Q, n) \<leadsto>\<^sub>w (N, {#(E, k) \<in># P. E \<noteq> C#}, Q, n)"
| backward_subsumption_Q: "D \<in># image_mset fst N \<Longrightarrow> strictly_subsumes D C \<Longrightarrow>
(N, P, Q + {#(C, i)#}, n) \<leadsto>\<^sub>w (N, P, Q, n)"
| forward_reduction: "D + {#L'#} \<in># image_mset fst (P + Q) \<Longrightarrow> - L = L' \<cdot>l \<sigma> \<Longrightarrow> D \<cdot> \<sigma> \<subseteq># C \<Longrightarrow>
(N + {#(C + {#L#}, i)#}, P, Q, n) \<leadsto>\<^sub>w (N + {#(C, i)#}, P, Q, n)"
| backward_reduction_P: "D + {#L'#} \<in># image_mset fst N \<Longrightarrow> - L = L' \<cdot>l \<sigma> \<Longrightarrow> D \<cdot> \<sigma> \<subseteq># C \<Longrightarrow>
(\<forall>j. (C + {#L#}, j) \<in># P \<longrightarrow> j \<le> i) \<Longrightarrow>
(N, P + {#(C + {#L#}, i)#}, Q, n) \<leadsto>\<^sub>w (N, P + {#(C, i)#}, Q, n)"
| backward_reduction_Q: "D + {#L'#} \<in># image_mset fst N \<Longrightarrow> - L = L' \<cdot>l \<sigma> \<Longrightarrow> D \<cdot> \<sigma> \<subseteq># C \<Longrightarrow>
(N, P, Q + {#(C + {#L#}, i)#}, n) \<leadsto>\<^sub>w (N, P + {#(C, i)#}, Q, n)"
| clause_processing: "(N + {#(C, i)#}, P, Q, n) \<leadsto>\<^sub>w (N, P + {#(C, i)#}, Q, n)"
| inference_computation: "(\<forall>(D, j) \<in># P. weight (C, i) \<le> weight (D, j)) \<Longrightarrow>
N = mset_set ((\<lambda>D. (D, n)) ` concls_of
(inference_system.inferences_between (ord_FO_\<Gamma> S) (set_mset (image_mset fst Q)) C)) \<Longrightarrow>
({#}, P + {#(C, i)#}, Q, n) \<leadsto>\<^sub>w (N, {#(D, j) \<in># P. D \<noteq> C#}, Q + {#(C, i)#}, Suc n)"
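(* These rules refine the corresponding rules of the unweighted RP calculus with timestamps
   (see weighted_RP_imp_RP below). The key rule is inference_computation: it fires only when the
   N component is empty and (C, i) has minimal weight among the clauses in P; it then places the
   conclusions of all inferences between C and the clauses in Q into N, stamped with the current
   counter n, moves C to Q (removing any other timestamped copies of C from P), and increments
   the counter. *)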
lemma weighted_RP_imp_RP: "St \<leadsto>\<^sub>w St' \<Longrightarrow> state_of_wstate St \<leadsto> state_of_wstate St'"
proof (induction rule: weighted_RP.induct)
case (backward_subsumption_P D N C P Q n)
show ?case
by (rule arg_cong2[THEN iffD1, of _ _ _ _ "(\<leadsto>)", OF _ _
RP.backward_subsumption_P[of D "fst ` set_mset N" C "fst ` set_mset P - {C}"
"fst ` set_mset Q"]])
(use backward_subsumption_P in auto)
next
case (inference_computation P C i N n Q)
show ?case
by (rule arg_cong2[THEN iffD1, of _ _ _ _ "(\<leadsto>)", OF _ _
RP.inference_computation[of "fst ` set_mset N" "fst ` set_mset Q" C
"fst ` set_mset P - {C}"]],
use inference_computation(2) finite_ord_FO_resolution_inferences_between in
\<open>auto simp: comp_def image_comp inference_system.inferences_between_def\<close>)
qed (use RP.intros in simp_all)
lemma final_weighted_RP: "\<not> ({#}, {#}, Q, n) \<leadsto>\<^sub>w St"
by (auto elim: weighted_RP.cases)
context
fixes
Sts :: "'a wstate llist"
assumes
full_deriv: "full_chain (\<leadsto>\<^sub>w) Sts" and
empty_P0: "P_of_wstate (lhd Sts) = {}" and
empty_Q0: "Q_of_wstate (lhd Sts) = {}"
begin
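(* Throughout this context, Sts is a full (i.e. non-extendable) derivation of the weighted
   prover whose initial state has empty P and Q components. *)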
lemma finite_Sts0: "finite (clss_of_wstate (lhd Sts))"
by (cases "lhd Sts") auto
lemmas deriv = full_chain_imp_chain[OF full_deriv]
lemmas lhd_lmap_Sts = llist.map_sel(1)[OF chain_not_lnull[OF deriv]]
lemma deriv_RP: "chain (\<leadsto>) (lmap state_of_wstate Sts)"
using deriv weighted_RP_imp_RP by (metis chain_lmap)
lemma finite_Sts0_RP: "finite (clss_of_state (lhd (lmap state_of_wstate Sts)))"
using finite_Sts0 chain_length_pos[OF deriv] by auto
lemma empty_P0_RP: "P_of_state (lhd (lmap state_of_wstate Sts)) = {}"
using empty_P0 chain_length_pos[OF deriv] by auto
lemma empty_Q0_RP: "Q_of_state (lhd (lmap state_of_wstate Sts)) = {}"
using empty_Q0 chain_length_pos[OF deriv] by auto
lemmas Sts_thms = deriv_RP finite_Sts0_RP empty_P0_RP empty_Q0_RP
theorem weighted_RP_model:
"St \<leadsto>\<^sub>w St' \<Longrightarrow> I \<Turnstile>s grounding_of_wstate St' \<longleftrightarrow> I \<Turnstile>s grounding_of_wstate St"
using RP_model Sts_thms weighted_RP_imp_RP by (simp only: comp_def)
abbreviation S_gQ :: "'a clause \<Rightarrow> 'a clause" where
"S_gQ \<equiv> S_Q (lmap state_of_wstate Sts)"
interpretation sq: selection S_gQ
- unfolding S_Q_def[OF deriv_RP] using S_M_selects_subseteq S_M_selects_neg_lits selection_axioms
+ unfolding S_Q_def using S_M_selects_subseteq S_M_selects_neg_lits selection_axioms
by unfold_locales auto
interpretation gd: ground_resolution_with_selection S_gQ
by unfold_locales
interpretation src: standard_redundancy_criterion_reductive gd.ord_\<Gamma>
by unfold_locales
interpretation src: standard_redundancy_criterion_counterex_reducing gd.ord_\<Gamma>
"ground_resolution_with_selection.INTERP S_gQ"
by unfold_locales
lemmas ord_\<Gamma>_saturated_upto_def = src.saturated_upto_def
lemmas ord_\<Gamma>_saturated_upto_complete = src.saturated_upto_complete
lemmas ord_\<Gamma>_contradiction_Rf = src.contradiction_Rf
theorem weighted_RP_sound:
assumes "{#} \<in> clss_of_state (Liminf_wstate Sts)"
shows "\<not> satisfiable (grounding_of_wstate (lhd Sts))"
by (rule RP_sound[OF deriv_RP assms, unfolded lhd_lmap_Sts])
abbreviation RP_filtered_measure :: "('a wclause \<Rightarrow> bool) \<Rightarrow> 'a wstate \<Rightarrow> nat \<times> nat \<times> nat" where
"RP_filtered_measure \<equiv> \<lambda>p (N, P, Q, n).
(sum_mset (image_mset (\<lambda>(C, i). Suc (size C)) {#Di \<in># N + P + Q. p Di#}),
size {#Di \<in># N. p Di#}, size {#Di \<in># P. p Di#})"
abbreviation RP_combined_measure :: "nat \<Rightarrow> 'a wstate \<Rightarrow> nat \<times> (nat \<times> nat \<times> nat) \<times> (nat \<times> nat \<times> nat)" where
"RP_combined_measure \<equiv> \<lambda>w St.
(w + 1 - n_of_wstate St, RP_filtered_measure (\<lambda>(C, i). i \<le> w) St,
RP_filtered_measure (\<lambda>Ci. True) St)"
abbreviation (input) RP_filtered_relation :: "((nat \<times> nat \<times> nat) \<times> (nat \<times> nat \<times> nat)) set" where
"RP_filtered_relation \<equiv> natLess <*lex*> natLess <*lex*> natLess"
abbreviation (input) RP_combined_relation :: "((nat \<times> ((nat \<times> nat \<times> nat) \<times> (nat \<times> nat \<times> nat))) \<times>
(nat \<times> ((nat \<times> nat \<times> nat) \<times> (nat \<times> nat \<times> nat)))) set" where
"RP_combined_relation \<equiv> natLess <*lex*> RP_filtered_relation <*lex*> RP_filtered_relation"
abbreviation "(fst3 :: 'b * 'c * 'd \<Rightarrow> 'b) \<equiv> fst"
abbreviation "(snd3 :: 'b * 'c * 'd \<Rightarrow> 'c) \<equiv> \<lambda>x. fst (snd x)"
abbreviation "(trd3 :: 'b * 'c * 'd \<Rightarrow> 'd) \<equiv> \<lambda>x. snd (snd x)"
lemma
wf_RP_filtered_relation: "wf RP_filtered_relation" and
wf_RP_combined_relation: "wf RP_combined_relation"
unfolding natLess_def using wf_less wf_mult by auto
lemma multiset_sum_of_Suc_f_monotone: "N \<subset># M \<Longrightarrow> (\<Sum>x \<in># N. Suc (f x)) < (\<Sum>x \<in># M. Suc (f x))"
proof (induction N arbitrary: M)
case empty
then obtain y where "y \<in># M"
by force
then have "(\<Sum>x \<in># M. 1) = (\<Sum>x \<in># M - {#y#} + {#y#}. 1)"
by auto
also have "... = (\<Sum>x \<in># M - {#y#}. 1) + (\<Sum>x \<in># {#y#}. 1)"
by (metis image_mset_union sum_mset.union)
also have "... > (0 :: nat)"
by auto
finally have "0 < (\<Sum>x \<in># M. Suc (f x))"
by (fastforce intro: gr_zeroI)
then show ?case
using empty by auto
next
case (add x N)
from this(2) have "(\<Sum>y \<in># N. Suc (f y)) < (\<Sum>y \<in># M - {#x#}. Suc (f y))"
using add(1)[of "M - {#x#}"] by (simp add: insert_union_subset_iff)
moreover have "add_mset x (remove1_mset x M) = M"
by (meson add.prems add_mset_remove_trivial_If mset_subset_insertD)
ultimately show ?case
by (metis (no_types) add.commute add_less_cancel_right sum_mset.insert)
qed
lemma multiset_sum_monotone_f':
assumes "CC \<subset># DD"
shows "(\<Sum>(C, i) \<in># CC. Suc (f C)) < (\<Sum>(C, i) \<in># DD. Suc (f C))"
using multiset_sum_of_Suc_f_monotone[OF assms, of "f \<circ> fst"]
by (metis (mono_tags) comp_apply image_mset_cong2 split_beta)
lemma filter_mset_strict_subset:
assumes "x \<in># M" and "\<not> p x"
shows "{#y \<in># M. p y#} \<subset># M"
proof -
have subseteq: "{#E \<in># M. p E#} \<subseteq># M"
by auto
have "count {#E \<in># M. p E#} x = 0"
using assms by auto
moreover have "0 < count M x"
using assms by auto
ultimately have lt_count: "count {#y \<in># M. p y#} x < count M x"
by auto
then show ?thesis
using subseteq by (metis less_not_refl2 subset_mset.le_neq_trans)
qed
lemma weighted_RP_measure_decreasing_N:
assumes "St \<leadsto>\<^sub>w St'" and "(C, l) \<in># wN_of_wstate St"
shows "(RP_filtered_measure (\<lambda>Ci. True) St', RP_filtered_measure (\<lambda>Ci. True) St)
\<in> RP_filtered_relation"
using assms proof (induction rule: weighted_RP.induct)
case (backward_subsumption_P D N C' P Q n)
then obtain i' where "(C', i') \<in># P"
by auto
then have "{#(E, k) \<in># P. E \<noteq> C'#} \<subset># P"
using filter_mset_strict_subset[of "(C', i')" P "\<lambda>X. \<not>fst X = C'"]
by (metis (mono_tags, lifting) filter_mset_cong fst_conv prod.case_eq_if)
then have "(\<Sum>(C, i) \<in># {#(E, k) \<in># P. E \<noteq> C'#}. Suc (size C)) < (\<Sum>(C, i) \<in># P. Suc (size C))"
using multiset_sum_monotone_f'[of "{#(E, k) \<in># P. E \<noteq> C'#}" P size] by metis
then show ?case
unfolding natLess_def by auto
qed (auto simp: natLess_def)
lemma weighted_RP_measure_decreasing_P:
assumes "St \<leadsto>\<^sub>w St'" and "(C, i) \<in># wP_of_wstate St"
shows "(RP_combined_measure (weight (C, i)) St', RP_combined_measure (weight (C, i)) St)
\<in> RP_combined_relation"
using assms proof (induction rule: weighted_RP.induct)
case (backward_subsumption_P D N C' P Q n)
define St where "St = (N, P, Q, n)"
define P' where "P' = {#(E, k) \<in># P. E \<noteq> C'#}"
define St' where "St' = (N, P', Q, n)"
from backward_subsumption_P obtain i' where "(C', i') \<in># P"
by auto
then have P'_sub_P: "P' \<subset># P"
unfolding P'_def using filter_mset_strict_subset[of "(C', i')" P "\<lambda>Dj. fst Dj \<noteq> C'"]
by (metis (no_types, lifting) filter_mset_cong fst_conv prod.case_eq_if)
have P'_subeq_P_filter:
"{#(Ca, ia) \<in># P'. ia \<le> weight (C, i)#} \<subseteq># {#(Ca, ia) \<in># P. ia \<le> weight (C, i)#}"
using P'_sub_P by (auto intro: multiset_filter_mono)
have "fst3 (RP_combined_measure (weight (C, i)) St')
\<le> fst3 (RP_combined_measure (weight (C, i)) St)"
unfolding St'_def St_def by auto
moreover have "(\<Sum>(C, i) \<in># {#(Ca, ia) \<in># P'. ia \<le> weight (C, i)#}. Suc (size C))
\<le> (\<Sum>x \<in># {#(Ca, ia) \<in># P. ia \<le> weight (C, i)#}. case x of (C, i) \<Rightarrow> Suc (size C))"
using P'_subeq_P_filter by (rule sum_image_mset_mono)
then have "fst3 (snd3 (RP_combined_measure (weight (C, i)) St'))
\<le> fst3 (snd3 (RP_combined_measure (weight (C, i)) St))"
unfolding St'_def St_def by auto
moreover have "snd3 (snd3 (RP_combined_measure (weight (C, i)) St'))
\<le> snd3 (snd3 (RP_combined_measure (weight (C, i)) St))"
unfolding St'_def St_def by auto
moreover from P'_subeq_P_filter have "size {#(Ca, ia) \<in># P'. ia \<le> weight (C, i)#}
\<le> size {#(Ca, ia) \<in># P. ia \<le> weight (C, i)#}"
by (simp add: size_mset_mono)
then have "trd3 (snd3 (RP_combined_measure (weight (C, i)) St'))
\<le> trd3 (snd3 (RP_combined_measure (weight (C, i)) St))"
unfolding St'_def St_def unfolding fst_def snd_def by auto
moreover from P'_sub_P have "(\<Sum>(C, i) \<in># P'. Suc (size C)) < (\<Sum>(C, i) \<in># P. Suc (size C))"
using multiset_sum_monotone_f'[of "{#(E, k) \<in># P. E \<noteq> C'#}" P size] unfolding P'_def by metis
then have "fst3 (trd3 (RP_combined_measure (weight (C, i)) St'))
< fst3 (trd3 (RP_combined_measure (weight (C, i)) St))"
unfolding P'_def St'_def St_def by auto
ultimately show ?case
unfolding natLess_def P'_def St'_def St_def by auto
next
case (inference_computation P C' i' N n Q)
then show ?case
proof (cases "n \<le> weight (C, i)")
case True
then have "weight (C, i) + 1 - n > weight (C, i) + 1 - Suc n"
by auto
then show ?thesis
unfolding natLess_def by auto
next
case n_nle_w: False
define St :: "'a wstate" where "St = ({#}, P + {#(C', i')#}, Q, n)"
define St' :: "'a wstate" where "St' = (N, {#(D, j) \<in># P. D \<noteq> C'#}, Q + {#(C', i')#}, Suc n)"
define concls :: "'a wclause set" where
"concls = (\<lambda>D. (D, n)) ` concls_of (inference_system.inferences_between (ord_FO_\<Gamma> S)
(fst ` set_mset Q) C')"
have fin: "finite concls"
unfolding concls_def using finite_ord_FO_resolution_inferences_between by auto
have "{(D, ia) \<in> concls. ia \<le> weight (C, i)} = {}"
unfolding concls_def using n_nle_w by auto
then have "{#(D, ia) \<in># mset_set concls. ia \<le> weight (C, i)#} = {#}"
using fin filter_mset_empty_if_finite_and_filter_set_empty[of concls] by auto
then have n_low_weight_empty: "{#(D, ia) \<in># N. ia \<le> weight (C, i)#} = {#}"
unfolding inference_computation unfolding concls_def by auto
have "weight (C', i') \<le> weight (C, i)"
using inference_computation by auto
then have i'_le_w_Ci: "i' \<le> weight (C, i)"
using timestamp_le_weight[of i' C'] by auto
have subs: "{#(D, ia) \<in># N + {#(D, j) \<in># P. D \<noteq> C'#} + (Q + {#(C', i')#}). ia \<le> weight (C, i)#}
\<subseteq># {#(D, ia) \<in># {#} + (P + {#(C', i')#}) + Q. ia \<le> weight (C, i)#}"
using n_low_weight_empty by (auto simp: multiset_filter_mono)
have "fst3 (RP_combined_measure (weight (C, i)) St')
\<le> fst3 (RP_combined_measure (weight (C, i)) St)"
unfolding St'_def St_def by auto
moreover have "fst (RP_filtered_measure ((\<lambda>(D, ia). ia \<le> weight (C, i))) St') =
(\<Sum>(C, i) \<in># {#(D, ia) \<in># N + {#(D, j) \<in># P. D \<noteq> C'#} + (Q + {#(C', i')#}).
ia \<le> weight (C, i)#}. Suc (size C))"
unfolding St'_def by auto
also have "... \<le> (\<Sum>(C, i) \<in># {#(D, ia) \<in># {#} + (P + {#(C', i')#}) + Q. ia \<le> weight (C, i)#}.
Suc (size C))"
using subs sum_image_mset_mono by blast
also have "... = fst (RP_filtered_measure (\<lambda>(D, ia). ia \<le> weight (C, i)) St)"
unfolding St_def by auto
finally have "fst3 (snd3 (RP_combined_measure (weight (C, i)) St'))
\<le> fst3 (snd3 (RP_combined_measure (weight (C, i)) St))"
by auto
moreover have "snd3 (snd3 (RP_combined_measure (weight (C, i)) St')) =
snd3 (snd3 (RP_combined_measure (weight (C, i)) St))"
unfolding St_def St'_def using n_low_weight_empty by auto
moreover have "trd3 (snd3 (RP_combined_measure (weight (C, i)) St')) <
trd3 (snd3 (RP_combined_measure (weight (C, i)) St))"
unfolding St_def St'_def using i'_le_w_Ci
by (simp add: le_imp_less_Suc multiset_filter_mono size_mset_mono)
ultimately show ?thesis
unfolding natLess_def St'_def St_def lex_prod_def by force
qed
qed (auto simp: natLess_def)
lemma preserve_min_or_delete_completely:
assumes "St \<leadsto>\<^sub>w St'" "(C, i) \<in># wP_of_wstate St"
"\<forall>k. (C, k) \<in># wP_of_wstate St \<longrightarrow> i \<le> k"
shows "(C, i) \<in># wP_of_wstate St' \<or> (\<forall>j. (C, j) \<notin># wP_of_wstate St')"
using assms proof (induction rule: weighted_RP.induct)
case (backward_reduction_P D L' N L \<sigma> C' P i' Q n)
show ?case
proof (cases "C = C' + {#L#}")
case True_outer: True
then have C_i_in: "(C, i) \<in># P + {#(C, i')#}"
using backward_reduction_P by auto
then have max: "\<And>k. (C, k) \<in># P + {#(C, i')#} \<Longrightarrow> k \<le> i'"
using backward_reduction_P unfolding True_outer[symmetric] by auto
then have "count (P + {#(C, i')#}) (C, i') \<ge> 1"
by auto
moreover
{
assume asm: "count (P + {#(C, i')#}) (C, i') = 1"
then have nin_P: "(C, i') \<notin># P"
using not_in_iff by force
have ?thesis
proof (cases "(C, i) = (C, i')")
case True
then have "i = i'"
by auto
then have "\<forall>j. (C, j) \<in># P + {#(C, i')#} \<longrightarrow> j = i'"
using max backward_reduction_P(6) unfolding True_outer[symmetric] by force
then show ?thesis
using True_outer[symmetric] nin_P by auto
next
case False
then show ?thesis
using C_i_in by auto
qed
}
moreover
{
assume "count (P + {#(C, i')#}) (C, i') > 1"
then have ?thesis
using C_i_in by auto
}
ultimately show ?thesis
by (cases "count (P + {#(C, i')#}) (C, i') = 1") auto
next
case False
then show ?thesis
using backward_reduction_P by auto
qed
qed auto
lemma preserve_min_P:
assumes
"St \<leadsto>\<^sub>w St'" "(C, j) \<in># wP_of_wstate St'" and
"(C, i) \<in># wP_of_wstate St" and
"\<forall>k. (C, k) \<in># wP_of_wstate St \<longrightarrow> i \<le> k"
shows "(C, i) \<in># wP_of_wstate St'"
using assms preserve_min_or_delete_completely by blast
lemma preserve_min_P_Sts:
assumes
"enat (Suc k) < llength Sts" and
"(C, i) \<in># wP_of_wstate (lnth Sts k)" and
"(C, j) \<in># wP_of_wstate (lnth Sts (Suc k))" and
"\<forall>j. (C, j) \<in># wP_of_wstate (lnth Sts k) \<longrightarrow> i \<le> j"
shows "(C, i) \<in># wP_of_wstate (lnth Sts (Suc k))"
using deriv assms chain_lnth_rel preserve_min_P by metis
lemma in_lnth_in_Supremum_ldrop:
assumes "i < llength xs" and "x \<in># (lnth xs i)"
shows "x \<in> Sup_llist (lmap set_mset (ldrop (enat i) xs))"
using assms by (metis (no_types) ldrop_eq_LConsD ldropn_0 llist.simps(13) contra_subsetD
ldrop_enat ldropn_Suc_conv_ldropn lnth_0 lnth_lmap lnth_subset_Sup_llist)
lemma persistent_wclause_in_P_if_persistent_clause_in_P:
assumes "C \<in> Liminf_llist (lmap P_of_state (lmap state_of_wstate Sts))"
shows "\<exists>i. (C, i) \<in> Liminf_llist (lmap (set_mset \<circ> wP_of_wstate) Sts)"
proof -
obtain t_C where t_C_p:
"enat t_C < llength Sts"
"\<And>t. t_C \<le> t \<Longrightarrow> t < llength Sts \<Longrightarrow> C \<in> P_of_state (state_of_wstate (lnth Sts t))"
using assms unfolding Liminf_llist_def by auto
then obtain i where i_p:
"(C, i) \<in># wP_of_wstate (lnth Sts t_C)"
using t_C_p by (cases "lnth Sts t_C") force
have Ci_in_nth_wP: "\<exists>i. (C, i) \<in># wP_of_wstate (lnth Sts (t_C + t))" if "t_C + t < llength Sts"
for t
using that t_C_p(2)[of "t_C + _"] by (cases "lnth Sts (t_C + t)") force
define in_Sup_wP :: "nat \<Rightarrow> bool" where
"in_Sup_wP = (\<lambda>i. (C, i) \<in> Sup_llist (lmap (set_mset \<circ> wP_of_wstate) (ldrop t_C Sts)))"
have "in_Sup_wP i"
using i_p assms(1) in_lnth_in_Supremum_ldrop[of t_C "lmap wP_of_wstate Sts" "(C, i)"] t_C_p
by (simp add: in_Sup_wP_def llist.map_comp)
then obtain j where j_p: "is_least in_Sup_wP j"
unfolding in_Sup_wP_def[symmetric] using least_exists by metis
then have "\<forall>i. (C, i) \<in> Sup_llist (lmap (set_mset \<circ> wP_of_wstate) (ldrop t_C Sts)) \<longrightarrow> j \<le> i"
unfolding is_least_def in_Sup_wP_def using not_less by blast
then have j_smallest:
"\<And>i t. enat (t_C + t) < llength Sts \<Longrightarrow> (C, i) \<in># wP_of_wstate (lnth Sts (t_C + t)) \<Longrightarrow> j \<le> i"
unfolding comp_def
by (smt add.commute ldrop_enat ldrop_eq_LConsD ldrop_ldrop ldropn_Suc_conv_ldropn
plus_enat_simps(1) lnth_ldropn Sup_llist_def UN_I ldrop_lmap llength_lmap lnth_lmap
mem_Collect_eq)
from j_p have "\<exists>t_Cj. t_Cj < llength (ldrop (enat t_C) Sts)
\<and> (C, j) \<in># wP_of_wstate (lnth (ldrop t_C Sts) t_Cj)"
unfolding in_Sup_wP_def Sup_llist_def is_least_def by simp
then obtain t_Cj where j_p:
"(C,j) \<in># wP_of_wstate (lnth Sts (t_C + t_Cj))"
"enat (t_C + t_Cj) < llength Sts"
by (smt add.commute ldrop_enat ldrop_eq_LConsD ldrop_ldrop ldropn_Suc_conv_ldropn
plus_enat_simps(1) lhd_ldropn)
have Ci_stays:
"t_C + t_Cj + t < llength Sts \<Longrightarrow> (C,j) \<in># wP_of_wstate (lnth Sts (t_C + t_Cj + t))" for t
proof (induction t)
case 0
then show ?case
using j_p by (simp add: add.commute)
next
case (Suc t)
have any_Ck_in_wP: "j \<le> k" if "(C, k) \<in># wP_of_wstate (lnth Sts (t_C + t_Cj + t))" for k
using that j_p j_smallest Suc
by (smt Suc_ile_eq add.commute add.left_commute add_Suc less_imp_le plus_enat_simps(1)
the_enat.simps)
from Suc have Cj_in_wP: "(C, j) \<in># wP_of_wstate (lnth Sts (t_C + t_Cj + t))"
by (metis (no_types, hide_lams) Suc_ile_eq add.commute add_Suc_right less_imp_le)
moreover have "C \<in> P_of_state (state_of_wstate (lnth Sts (Suc (t_C + t_Cj + t))))"
using t_C_p(2) Suc.prems by auto
then have "\<exists>k. (C, k) \<in># wP_of_wstate (lnth Sts (Suc (t_C + t_Cj + t)))"
by (smt Suc.prems Ci_in_nth_wP add.commute add.left_commute add_Suc_right enat_ord_code(4))
ultimately have "(C, j) \<in># wP_of_wstate (lnth Sts (Suc (t_C + t_Cj + t)))"
using preserve_min_P_Sts Cj_in_wP any_Ck_in_wP Suc.prems by force
then have "(C, j) \<in># lnth (lmap wP_of_wstate Sts) (Suc (t_C + t_Cj + t))"
using Suc.prems by auto
then show ?case
by (smt Suc.prems add.commute add_Suc_right lnth_lmap)
qed
then have "(\<And>t. t_C + t_Cj \<le> t \<Longrightarrow> t < llength (lmap (set_mset \<circ> wP_of_wstate) Sts) \<Longrightarrow>
(C, j) \<in># wP_of_wstate (lnth Sts t))"
using Ci_stays[of "_ - (t_C + t_Cj)"] by (metis le_add_diff_inverse llength_lmap)
then have "(C, j) \<in> Liminf_llist (lmap (set_mset \<circ> wP_of_wstate) Sts)"
unfolding Liminf_llist_def using j_p by auto
then show "\<exists>i. (C, i) \<in> Liminf_llist (lmap (set_mset \<circ> wP_of_wstate) Sts)"
by auto
qed
lemma lfinite_not_LNil_nth_llast:
assumes "lfinite Sts" and "Sts \<noteq> LNil"
shows "\<exists>i < llength Sts. lnth Sts i = llast Sts \<and> (\<forall>j < llength Sts. j \<le> i)"
using assms proof (induction rule: lfinite.induct)
case (lfinite_LConsI xs x)
then show ?case
proof (cases "xs = LNil")
case True
show ?thesis
using True zero_enat_def by auto
next
case False
then obtain i where
i_p: "enat i < llength xs \<and> lnth xs i = llast xs \<and> (\<forall>j < llength xs. j \<le> enat i)"
using lfinite_LConsI by auto
then have "enat (Suc i) < llength (LCons x xs)"
by (simp add: Suc_ile_eq)
moreover from i_p have "lnth (LCons x xs) (Suc i) = llast (LCons x xs)"
by (metis gr_implies_not_zero llast_LCons llength_lnull lnth_Suc_LCons)
moreover from i_p have "\<forall>j < llength (LCons x xs). j \<le> enat (Suc i)"
by (metis antisym_conv2 eSuc_enat eSuc_ile_mono ileI1 iless_Suc_eq llength_LCons)
ultimately show ?thesis
by auto
qed
qed auto
lemma fair_if_finite:
assumes fin: "lfinite Sts"
shows "fair_state_seq (lmap state_of_wstate Sts)"
proof (rule ccontr)
assume unfair: "\<not> fair_state_seq (lmap state_of_wstate Sts)"
have no_inf_from_last: "\<forall>y. \<not> llast Sts \<leadsto>\<^sub>w y"
using fin full_chain_iff_chain[of "(\<leadsto>\<^sub>w)" Sts] full_deriv by auto
from unfair obtain C where
"C \<in> Liminf_llist (lmap N_of_state (lmap state_of_wstate Sts))
\<union> Liminf_llist (lmap P_of_state (lmap state_of_wstate Sts))"
unfolding fair_state_seq_def Liminf_state_def by auto
then obtain i where i_p:
"enat i < llength Sts"
"\<And>j. i \<le> j \<Longrightarrow> enat j < llength Sts \<Longrightarrow>
C \<in> N_of_state (state_of_wstate (lnth Sts j)) \<union> P_of_state (state_of_wstate (lnth Sts j))"
unfolding Liminf_llist_def by auto
have C_in_llast:
"C \<in> N_of_state (state_of_wstate (llast Sts)) \<union> P_of_state (state_of_wstate (llast Sts))"
proof -
obtain l where
l_p: "enat l < llength Sts \<and> lnth Sts l = llast Sts \<and> (\<forall>j < llength Sts. j \<le> enat l)"
using fin lfinite_not_LNil_nth_llast i_p(1) by fastforce
then have
"C \<in> N_of_state (state_of_wstate (lnth Sts l)) \<union> P_of_state (state_of_wstate (lnth Sts l))"
using i_p(1) i_p(2)[of l] by auto
then show ?thesis
using l_p by auto
qed
define N :: "'a wclause multiset" where "N = wN_of_wstate (llast Sts)"
define P :: "'a wclause multiset" where "P = wP_of_wstate (llast Sts)"
define Q :: "'a wclause multiset" where "Q = wQ_of_wstate (llast Sts)"
define n :: nat where "n = n_of_wstate (llast Sts)"
{
assume "N_of_state (state_of_wstate (llast Sts)) \<noteq> {}"
then obtain D j where "(D, j) \<in># N"
unfolding N_def by (cases "llast Sts") auto
then have "llast Sts \<leadsto>\<^sub>w (N - {#(D, j)#}, P + {#(D, j)#}, Q, n)"
using weighted_RP.clause_processing[of "N - {#(D, j)#}" D j P Q n]
unfolding N_def P_def Q_def n_def by auto
then have "\<exists>St'. llast Sts \<leadsto>\<^sub>w St'"
by auto
}
moreover
{
assume a: "N_of_state (state_of_wstate (llast Sts)) = {}"
then have b: "N = {#}"
unfolding N_def by (cases "llast Sts") auto
from a have "C \<in> P_of_state (state_of_wstate (llast Sts))"
using C_in_llast by auto
then obtain D j where "(D, j) \<in># P"
unfolding P_def by (cases "llast Sts") auto
then have "weight (D, j) \<in> weight ` set_mset P"
by auto
then have "\<exists>w. is_least (\<lambda>w. w \<in> (weight ` set_mset P)) w"
using least_exists by auto
then have "\<exists>D j. (\<forall>(D', j') \<in># P. weight (D, j) \<le> weight (D', j')) \<and> (D, j) \<in># P"
using assms linorder_not_less unfolding is_least_def by (auto 6 0)
then obtain D j where
min: "(\<forall>(D', j') \<in># P. weight (D, j) \<le> weight (D', j'))" and
Dj_in_p: "(D, j) \<in># P"
by auto
from min have min: "(\<forall>(D', j') \<in># P - {#(D, j)#}. weight (D, j) \<le> weight (D', j'))"
using mset_subset_diff_self[OF Dj_in_p] by auto
define N' where
"N' = mset_set ((\<lambda>D'. (D', n)) ` concls_of (inference_system.inferences_between (ord_FO_\<Gamma> S)
(set_mset (image_mset fst Q)) D))"
have "llast Sts \<leadsto>\<^sub>w (N', {#(D', j') \<in># P - {#(D, j)#}. D' \<noteq> D#}, Q + {#(D,j)#}, Suc n)"
using weighted_RP.inference_computation[of "P - {#(D, j)#}" D j N' n Q, OF min N'_def]
of_wstate_split[symmetric, of "llast Sts"] Dj_in_p
unfolding N_def[symmetric] P_def[symmetric] Q_def[symmetric] n_def[symmetric] b by auto
then have "\<exists>St'. llast Sts \<leadsto>\<^sub>w St'"
by auto
}
ultimately have "\<exists>St'. llast Sts \<leadsto>\<^sub>w St'"
by auto
then show False
using no_inf_from_last by metis
qed
lemma N_of_state_state_of_wstate_wN_of_wstate:
assumes "C \<in> N_of_state (state_of_wstate St)"
shows "\<exists>i. (C, i) \<in># wN_of_wstate St"
by (smt N_of_state.elims assms eq_fst_iff fstI fst_conv image_iff of_wstate_split set_image_mset
state_of_wstate.simps)
lemma in_wN_of_wstate_in_N_of_wstate: "(C, i) \<in># wN_of_wstate St \<Longrightarrow> C \<in> N_of_wstate St"
by (metis (mono_guards_query_query) N_of_state.simps fst_conv image_eqI of_wstate_split
set_image_mset state_of_wstate.simps)
lemma in_wP_of_wstate_in_P_of_wstate: "(C, i) \<in># wP_of_wstate St \<Longrightarrow> C \<in> P_of_wstate St"
by (metis (mono_guards_query_query) P_of_state.simps fst_conv image_eqI of_wstate_split
set_image_mset state_of_wstate.simps)
lemma in_wQ_of_wstate_in_Q_of_wstate: "(C, i) \<in># wQ_of_wstate St \<Longrightarrow> C \<in> Q_of_wstate St"
by (metis (mono_guards_query_query) Q_of_state.simps fst_conv image_eqI of_wstate_split
set_image_mset state_of_wstate.simps)
lemma n_of_wstate_weighted_RP_increasing: "St \<leadsto>\<^sub>w St' \<Longrightarrow> n_of_wstate St \<le> n_of_wstate St'"
by (induction rule: weighted_RP.induct) auto
lemma nth_of_wstate_monotonic:
assumes "j < llength Sts" and "i \<le> j"
shows "n_of_wstate (lnth Sts i) \<le> n_of_wstate (lnth Sts j)"
using assms proof (induction "j - i" arbitrary: i)
case (Suc x)
then have "x = j - (i + 1)"
by auto
then have "n_of_wstate (lnth Sts (i + 1)) \<le> n_of_wstate (lnth Sts j)"
using Suc by auto
moreover have "i < j"
using Suc by auto
then have "Suc i < llength Sts"
using Suc by (metis enat_ord_simps(2) le_less_Suc_eq less_le_trans not_le)
then have "lnth Sts i \<leadsto>\<^sub>w lnth Sts (Suc i)"
using deriv chain_lnth_rel[of "(\<leadsto>\<^sub>w)" Sts i] by auto
then have "n_of_wstate (lnth Sts i) \<le> n_of_wstate (lnth Sts (i + 1))"
using n_of_wstate_weighted_RP_increasing[of "lnth Sts i" "lnth Sts (i + 1)"] by auto
ultimately show ?case
by auto
qed auto
lemma infinite_chain_relation_measure:
assumes
measure_decreasing: "\<And>St St'. P St \<Longrightarrow> R St St' \<Longrightarrow> (m St', m St) \<in> mR" and
non_infer_chain: "chain R (ldrop (enat k) Sts)" and
inf: "llength Sts = \<infinity>" and
P: "\<And>i. P (lnth (ldrop (enat k) Sts) i)"
shows "chain (\<lambda>x y. (x, y) \<in> mR)\<inverse>\<inverse> (lmap m (ldrop (enat k) Sts))"
proof (rule lnth_rel_chain)
show "\<not> lnull (lmap m (ldrop (enat k) Sts))"
using assms by auto
next
from inf have ldrop_inf: "llength (ldrop (enat k) Sts) = \<infinity> \<and> \<not> lfinite (ldrop (enat k) Sts)"
using inf by (auto simp: llength_eq_infty_conv_lfinite)
{
fix j :: "nat"
define St where "St = lnth (ldrop (enat k) Sts) j"
define St' where "St' = lnth (ldrop (enat k) Sts) (j + 1)"
have P': "P St \<and> P St'"
unfolding St_def St'_def using P by auto
from ldrop_inf have "R St St'"
unfolding St_def St'_def
using non_infer_chain infinite_chain_lnth_rel[of "ldrop (enat k) Sts" R j] by auto
then have "(m St', m St) \<in> mR"
using measure_decreasing P' by auto
then have "(lnth (lmap m (ldrop (enat k) Sts)) (j + 1), lnth (lmap m (ldrop (enat k) Sts)) j)
\<in> mR"
unfolding St_def St'_def using lnth_lmap
by (smt enat.distinct(1) enat_add_left_cancel enat_ord_simps(4) inf ldrop_lmap llength_lmap
lnth_ldrop plus_enat_simps(3))
}
then show "\<forall>j. enat (j + 1) < llength (lmap m (ldrop (enat k) Sts)) \<longrightarrow>
(\<lambda>x y. (x, y) \<in> mR)\<inverse>\<inverse> (lnth (lmap m (ldrop (enat k) Sts)) j)
(lnth (lmap m (ldrop (enat k) Sts)) (j + 1))"
by blast
qed
theorem weighted_RP_fair: "fair_state_seq (lmap state_of_wstate Sts)"
proof (rule ccontr)
assume asm: "\<not> fair_state_seq (lmap state_of_wstate Sts)"
then have inff: "\<not> lfinite Sts" using fair_if_finite
by auto
then have inf: "llength Sts = \<infinity>"
using llength_eq_infty_conv_lfinite by auto
from asm obtain C where
"C \<in> Liminf_llist (lmap N_of_state (lmap state_of_wstate Sts))
\<union> Liminf_llist (lmap P_of_state (lmap state_of_wstate Sts))"
unfolding fair_state_seq_def Liminf_state_def by auto
then show False
proof
assume "C \<in> Liminf_llist (lmap N_of_state (lmap state_of_wstate Sts))"
then obtain x where "enat x < llength Sts"
"\<forall>xa. x \<le> xa \<and> enat xa < llength Sts \<longrightarrow> C \<in> N_of_state (state_of_wstate (lnth Sts xa))"
unfolding Liminf_llist_def by auto
then have "\<exists>k. \<forall>j. k \<le> j \<longrightarrow> (\<exists>i. (C, i) \<in># wN_of_wstate (lnth Sts j))"
unfolding Liminf_llist_def by (force simp add: inf N_of_state_state_of_wstate_wN_of_wstate)
then obtain k where k_p:
"\<And>j. k \<le> j \<Longrightarrow> \<exists>i. (C, i) \<in># wN_of_wstate (lnth Sts j)"
unfolding Liminf_llist_def
by auto
have chain_drop_Sts: "chain (\<leadsto>\<^sub>w) (ldrop k Sts)"
using deriv inf inff by (simp add: inf_chain_ldropn_chain ldrop_enat)
have in_N_j: "\<And>j. \<exists>i. (C, i) \<in># wN_of_wstate (lnth (ldrop k Sts) j)"
using k_p by (simp add: add.commute inf)
then have "chain (\<lambda>x y. (x, y) \<in> RP_filtered_relation)\<inverse>\<inverse> (lmap (RP_filtered_measure (\<lambda>Ci. True))
(ldrop k Sts))"
using inff inf weighted_RP_measure_decreasing_N chain_drop_Sts
infinite_chain_relation_measure[of "\<lambda>St. \<exists>i. (C, i) \<in># wN_of_wstate St" "(\<leadsto>\<^sub>w)"] by blast
then show False
using wfP_iff_no_infinite_down_chain_llist[of "\<lambda>x y. (x, y) \<in> RP_filtered_relation"]
wf_RP_filtered_relation inff
by (metis (no_types, lifting) inf_llist_lnth ldrop_enat_inf_llist lfinite_inf_llist
lfinite_lmap wfPUNIVI wf_induct_rule)
next
assume asm: "C \<in> Liminf_llist (lmap P_of_state (lmap state_of_wstate Sts))"
from asm obtain i where i_p:
"enat i < llength Sts"
"\<And>j. i \<le> j \<and> enat j < llength Sts \<Longrightarrow> C \<in> P_of_state (state_of_wstate (lnth Sts j))"
unfolding Liminf_llist_def by auto
then obtain i where "(C, i) \<in> Liminf_llist (lmap (set_mset \<circ> wP_of_wstate) Sts)"
using persistent_wclause_in_P_if_persistent_clause_in_P[of C] using asm inf by auto
then have "\<exists>l. \<forall>k \<ge> l. (C, i) \<in> (set_mset \<circ> wP_of_wstate) (lnth Sts k)"
unfolding Liminf_llist_def using inff inf by auto
then obtain k where k_p:
"(\<forall>k'\<ge>k. (C, i) \<in> (set_mset \<circ> wP_of_wstate) (lnth Sts k'))"
by blast
have Ci_in: "\<forall>k'. (C, i) \<in> (set_mset \<circ> wP_of_wstate) (lnth (ldrop k Sts) k')"
using k_p lnth_ldrop[of k _ Sts] inf inff by force
then have Ci_inn: "\<forall>k'. (C, i) \<in># (wP_of_wstate) (lnth (ldrop k Sts) k')"
by auto
have "chain (\<leadsto>\<^sub>w) (ldrop k Sts)"
using deriv inf_chain_ldropn_chain inf inff by (simp add: inf_chain_ldropn_chain ldrop_enat)
then have "chain (\<lambda>x y. (x, y) \<in> RP_combined_relation)\<inverse>\<inverse>
(lmap (RP_combined_measure (weight (C, i))) (ldrop k Sts))"
using inff inf Ci_in weighted_RP_measure_decreasing_P
infinite_chain_relation_measure[of "\<lambda>St. (C, i) \<in># wP_of_wstate St" "(\<leadsto>\<^sub>w)"
"RP_combined_measure (weight (C, i))" ]
by auto
then show False
using wfP_iff_no_infinite_down_chain_llist[of "\<lambda>x y. (x, y) \<in> RP_combined_relation"]
wf_RP_combined_relation inff
by (smt inf_llist_lnth ldrop_enat_inf_llist lfinite_inf_llist lfinite_lmap wfPUNIVI
wf_induct_rule)
qed
qed
corollary weighted_RP_saturated: "src.saturated_upto (Liminf_llist (lmap grounding_of_wstate Sts))"
using RP_saturated_if_fair[OF deriv_RP weighted_RP_fair empty_Q0_RP, unfolded llist.map_comp]
by simp
corollary weighted_RP_complete:
"\<not> satisfiable (grounding_of_wstate (lhd Sts)) \<Longrightarrow> {#} \<in> Q_of_state (Liminf_wstate Sts)"
using RP_complete_if_fair[OF deriv_RP weighted_RP_fair empty_Q0_RP, simplified lhd_lmap_Sts] .
end
end
locale weighted_FO_resolution_prover_with_size_timestamp_factors =
FO_resolution_prover S subst_atm id_subst comp_subst renamings_apart atm_of_atms mgu less_atm
for
S :: "('a :: wellorder) clause \<Rightarrow> 'a clause" and
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: "'s" and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" and
renamings_apart :: "'a literal multiset list \<Rightarrow> 's list" and
atm_of_atms :: "'a list \<Rightarrow> 'a" and
mgu :: "'a set set \<Rightarrow> 's option" and
less_atm :: "'a \<Rightarrow> 'a \<Rightarrow> bool" +
fixes
size_atm :: "'a \<Rightarrow> nat" and
size_factor :: nat and
timestamp_factor :: nat
assumes
timestamp_factor_pos: "timestamp_factor > 0"
begin
fun weight :: "'a wclause \<Rightarrow> nat" where
"weight (C, i) = size_factor * size_multiset (size_literal size_atm) C + timestamp_factor * i"
lemma weight_mono: "i < j \<Longrightarrow> weight (C, i) < weight (C, j)"
using timestamp_factor_pos by simp
declare weight.simps [simp del]
sublocale wrp: weighted_FO_resolution_prover _ _ _ _ _ _ _ _ weight
by unfold_locales (rule weight_mono)
notation wrp.weighted_RP (infix "\<leadsto>\<^sub>w" 50)
end
end
diff --git a/thys/Hybrid_Multi_Lane_Spatial_Logic/NatInt.thy b/thys/Hybrid_Multi_Lane_Spatial_Logic/NatInt.thy
--- a/thys/Hybrid_Multi_Lane_Spatial_Logic/NatInt.thy
+++ b/thys/Hybrid_Multi_Lane_Spatial_Logic/NatInt.thy
@@ -1,1012 +1,1007 @@
(* Title: NatInt.thy
Author: Sven Linker, University of Liverpool
Intervals based on natural numbers. Defines a
bottom element (empty set), infimum (set intersection),
partial order (subset relation), cardinality (set cardinality).
The union of intervals i and j can only be created if they are consecutive, i.e.,
max i + 1 = min j (or vice versa). To express consecutiveness, we employ the
predicate consec.
The theory also contains a "chopping" predicate N_Chop(i,j,k): i can be divided into
the consecutive intervals j and k.
*)
section \<open>Discrete Intervals based on Natural Numbers\<close>
text\<open>We define a type of intervals based on the natural numbers. To that
end, we employ standard operators of Isabelle, but in addition prove some
structural properties of the intervals. In particular, we show that this type
constitutes a meet-semilattice with a bottom element and equality.
Furthermore, we show that this semilattice allows for a constrained join, i.e.,
the union of two intervals is defined if either one of them is empty or they are
consecutive. Finally, we define the notion of \emph{chopping} an interval into
two consecutive subintervals.\<close>
theory NatInt
imports Main
begin
text \<open>A discrete interval is a set of consecutive natural numbers, or the empty
set.\<close>
typedef nat_int = "{S . (\<exists> (m::nat) n . {m..n }=S) }"
by auto
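(* Every representing set has the form {m..n}; since {m..n} = {} whenever n < m, the empty set
   is of this form as well, so the type also contains an empty interval. *)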
setup_lifting type_definition_nat_int
subsection \<open>Basic properties of discrete intervals.\<close>
locale nat_int
interpretation nat_int_class?: nat_int .
context nat_int
begin
lemma un_consec_seq: "(m::nat)\<le> n \<and> n+1 \<le> l \<longrightarrow> {m..n} \<union> {n+1..l} = {m..l}"
by auto
lemma int_conseq_seq: " {(m::nat)..n} \<inter> {n+1..l} = {}"
by auto
lemma empty_type: "{} \<in> { S . \<exists> (m:: nat) n . {m..n}=S}"
by auto
lemma inter_result: "\<forall>x \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }.
\<forall>y \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }.
x \<inter> y \<in>{S . (\<exists> (m::nat) n . {m..n }=S)}"
using Int_atLeastAtMost by blast
lemma union_result: "\<forall>x \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }.
\<forall>y \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }.
x \<noteq> {} \<and> y \<noteq> {} \<and> Max x +1 = Min y
\<longrightarrow> x \<union> y \<in>{S . (\<exists> (m::nat) n . {m..n }=S) }"
proof (rule ballI)+
fix x y
assume "x\<in> {S . (\<exists> (m::nat) n . {m..n }=S) }"
and "y\<in> {S . (\<exists> (m::nat) n . {m..n }=S) }"
then have x_def:"(\<exists>m n. {m..n} = x) "
and y_def:"(\<exists>m n. {m..n} = y) " by blast+
show " x \<noteq> {} \<and> y \<noteq> {} \<and> Max x+1 = Min y
\<longrightarrow> x \<union> y \<in> {S. (\<exists>m n. {m..n} = S) }"
proof
assume pre:"x \<noteq> {} \<and> y \<noteq> {} \<and> Max x + 1 = Min y"
then have x_int:"\<exists>m n. m \<le> n \<and> {m..n} = x"
and y_int:"(\<exists>m n. m \<le> n \<and> {m..n} = y)"
using x_def y_def by force+
{
fix ya yb xa xb
- assume y_prop:"ya \<le> yb \<and> {ya..yb} = y"
- assume x_prop:"xa \<le> xb \<and> {xa..xb} = x"
- from x_prop have upper_x:"Max x = xb"
- by (metis Sup_nat_def cSup_atLeastAtMost)
- from y_prop have lower_y:"Min y = ya"
- by (metis Inf_fin.coboundedI Inf_fin_Min Min_in add.right_neutral finite_atLeastAtMost
- le_add1 ord_class.atLeastAtMost_iff order_class.antisym pre)
+ assume y_prop:"ya \<le> yb \<and> {ya..yb} = y" and x_prop:"xa \<le> xb \<and> {xa..xb} = x"
+ then have upper_x:"Max x = xb" and lower_y: "Min y = ya"
+ by (auto simp: Max_eq_iff Min_eq_iff)
from upper_x and lower_y and pre have upper_eq_lower: "xb+1 = ya"
by blast
hence "y= {xb+1 .. yb}" using y_prop by blast
hence "x \<union> y = {xa..yb}"
using un_consec_seq upper_eq_lower x_prop y_prop by blast
then have " x \<union> y \<in> {S.(\<exists>m n. {m..n} = S) }"
by auto
}
then show "x \<union> y \<in> {S.(\<exists>m n. {m..n} = S)}"
using x_int y_int by blast
qed
qed
lemma union_empty_result1: "\<forall>i \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }.
i \<union> {} \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }"
by blast
lemma union_empty_result2: "\<forall>i \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }.
{} \<union> i \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }"
by blast
lemma finite:" \<forall>i \<in> {S . (\<exists> (m::nat) n . {m..n }=S) } . (finite i)"
by blast
lemma not_empty_means_seq:"\<forall>i \<in> {S . (\<exists> (m::nat) n . {m..n }=S) } . i \<noteq> {}
\<longrightarrow> ( \<exists>m n . m \<le> n \<and> {m..n} = i)"
using atLeastatMost_empty_iff
by force
end
text \<open>The empty set is the bottom element of the type. The infimum/meet of
the semilattice is set intersection. The order is given by the subset relation.
\<close>
instantiation nat_int :: bot
begin
lift_definition bot_nat_int :: "nat_int" is Set.empty by force
instance by standard
end
instantiation nat_int :: inf
begin
lift_definition inf_nat_int ::"nat_int \<Rightarrow> nat_int \<Rightarrow> nat_int" is Set.inter by force
instance
proof qed
end
instantiation nat_int :: "order_bot"
begin
lift_definition less_eq_nat_int :: "nat_int \<Rightarrow> nat_int \<Rightarrow> bool" is Set.subset_eq .
lift_definition less_nat_int :: "nat_int \<Rightarrow> nat_int \<Rightarrow> bool" is Set.subset .
instance
proof
fix i j k ::nat_int
show "(i < j) = (i \<le> j \<and> \<not> j \<le> i)"
by (simp add: less_eq_nat_int.rep_eq less_le_not_le less_nat_int.rep_eq)
show "i \<le> i" by (simp add:less_eq_nat_int.rep_eq)
show " i \<le> j \<Longrightarrow> j \<le> k \<Longrightarrow> i \<le> k" by (simp add:less_eq_nat_int.rep_eq)
show "i \<le> j \<Longrightarrow> j \<le> i \<Longrightarrow> i = j"
by (simp add: Rep_nat_int_inject less_eq_nat_int.rep_eq )
show "bot \<le> i" using less_eq_nat_int.rep_eq
using bot_nat_int.rep_eq by blast
qed
end
instantiation nat_int :: "semilattice_inf"
begin
instance
proof
fix i j k :: nat_int
show "i \<le> j \<Longrightarrow> i \<le> k \<Longrightarrow> i \<le> inf j k"
by (simp add: inf_nat_int.rep_eq less_eq_nat_int.rep_eq)
show " inf i j \<le> i"
by (simp add: inf_nat_int.rep_eq less_eq_nat_int.rep_eq)
show "inf i j \<le> j"
by (simp add: inf_nat_int.rep_eq less_eq_nat_int.rep_eq)
qed
end
instantiation nat_int:: "equal"
begin
definition equal_nat_int :: "nat_int \<Rightarrow> nat_int \<Rightarrow> bool"
where "equal_nat_int i j \<equiv> i \<le> j \<and> j \<le> i"
instance
proof
fix i j :: nat_int
show " equal_class.equal i j = (i = j)" using equal_nat_int_def by auto
qed
end
context nat_int
begin
abbreviation subseteq :: "nat_int \<Rightarrow> nat_int\<Rightarrow> bool" (infix "\<sqsubseteq>" 30)
where "i \<sqsubseteq> j == i \<le> j "
abbreviation empty :: "nat_int" ("\<emptyset>")
where "\<emptyset> \<equiv> bot"
notation inf (infix "\<sqinter>" 70)
text \<open>The union of two intervals is only defined if it is also
a discrete interval.\<close>
definition union :: "nat_int \<Rightarrow> nat_int \<Rightarrow> nat_int" (infix "\<squnion>" 65)
where "i \<squnion> j = Abs_nat_int (Rep_nat_int i \<union> Rep_nat_int j)"
text \<open>Non-empty intervals contain a minimal and a maximal element.
Two non-empty intervals \(i\) and \(j\) are
consecutive if the minimum of \(j\) is the successor of the
maximum of \(i\).
Furthermore, the interval \(i\) can be chopped into the intervals \(j\)
and \(k\) if the union of \(j\) and \(k\) equals \(i\) and, whenever \(j\)
and \(k\) are both non-empty, they are consecutive. Finally, we define
the cardinality of discrete intervals by lifting the cardinality of
sets.
\<close>
definition maximum :: "nat_int \<Rightarrow> nat"
where maximum_def: "i \<noteq> \<emptyset> \<Longrightarrow> maximum (i) = Max (Rep_nat_int i)"
definition minimum :: "nat_int \<Rightarrow> nat"
where minimum_def: "i \<noteq> \<emptyset> \<Longrightarrow> minimum(i) = Min (Rep_nat_int i)"
definition consec:: "nat_int\<Rightarrow>nat_int \<Rightarrow> bool"
where "consec i j \<equiv> (i\<noteq>\<emptyset> \<and> j \<noteq> \<emptyset> \<and> (maximum(i)+1 = minimum j))"
definition N_Chop :: "nat_int \<Rightarrow> nat_int \<Rightarrow> nat_int \<Rightarrow> bool" ("N'_Chop'(_,_,_')" 51)
where nchop_def :
"N_Chop(i,j,k) \<equiv> (i = j \<squnion> k \<and> (j = \<emptyset> \<or> k = \<emptyset> \<or> consec j k))"
lift_definition card' ::"nat_int \<Rightarrow> nat" ( "|_|" 70) is card .
text\<open>For convenience, we also lift the membership relation and its negation
to discrete intervals.\<close>
lift_definition el::"nat \<Rightarrow> nat_int \<Rightarrow> bool" (infix "\<^bold>\<in>" 50) is "Set.member" .
lift_definition not_in ::"nat \<Rightarrow> nat_int \<Rightarrow> bool" (infix "\<^bold>\<notin>" 40) is Set.not_member .
end
lemmas[simp] = nat_int.el.rep_eq nat_int.not_in.rep_eq nat_int.card'.rep_eq
context nat_int
begin
lemma in_not_in_iff1 :"n \<^bold>\<in> i \<longleftrightarrow> \<not> n\<^bold>\<notin> i" by simp
lemma in_not_in_iff2: "n\<^bold>\<notin> i \<longleftrightarrow> \<not> n \<^bold>\<in> i" by simp
lemma rep_non_empty_means_seq:"i \<noteq>\<emptyset>
\<longrightarrow> (\<exists>m n. m \<le> n \<and> ({m..n} =( Rep_nat_int i)))"
by (metis Rep_nat_int Rep_nat_int_inject bot_nat_int.rep_eq nat_int.not_empty_means_seq)
lemma non_empty_max: "i \<noteq> \<emptyset> \<longrightarrow> (\<exists>m . maximum(i) = m)"
by auto
lemma non_empty_min: "i \<noteq> \<emptyset> \<longrightarrow> (\<exists>m . minimum(i) = m)"
by auto
lemma minimum_in: "i \<noteq> \<emptyset> \<longrightarrow> minimum i \<^bold>\<in> i"
by (metis Min_in atLeastatMost_empty_iff2 finite_atLeastAtMost minimum_def
el.rep_eq rep_non_empty_means_seq)
lemma maximum_in: "i \<noteq> \<emptyset> \<longrightarrow> maximum i \<^bold>\<in> i"
by (metis Max_in atLeastatMost_empty_iff2 finite_atLeastAtMost maximum_def
el.rep_eq rep_non_empty_means_seq)
lemma non_empty_elem_in:"i \<noteq> \<emptyset> \<longleftrightarrow> (\<exists>n. n \<^bold>\<in> i)"
proof
assume assm:"i \<noteq> \<emptyset>"
show "\<exists>n . n \<^bold>\<in> i"
by (metis assm Rep_nat_int_inverse all_not_in_conv el.rep_eq bot_nat_int_def)
next
assume assm:"\<exists>n. n \<^bold>\<in> i"
show "i \<noteq> \<emptyset>"
using Abs_nat_int_inverse assm el.rep_eq bot_nat_int_def by fastforce
qed
lemma leq_nat_non_empty:"(m::nat) \<le> n \<longrightarrow> Abs_nat_int{m..n} \<noteq> \<emptyset>"
proof
assume assm:"m \<le>n"
then have non_empty:"{m..n} \<noteq> {} "
using atLeastatMost_empty_iff by blast
with assm have "{m..n} \<in> {S . (\<exists> (m::nat) n . {m..n }=S) }" by blast
then show "Abs_nat_int {m..n} \<noteq> \<emptyset>"
using Abs_nat_int_inject empty_type non_empty bot_nat_int_def
by (simp add: bot_nat_int.abs_eq)
qed
lemma leq_max_sup:"(m::nat) \<le> n \<longrightarrow> Max {m..n} = n"
- by (metis Sup_nat_def cSup_atLeastAtMost)
+ by (auto simp: Max_eq_iff)
lemma leq_min_inf: "(m::nat) \<le> n \<longrightarrow> Min {m..n} = m"
- by (meson Min_in Min_le antisym atLeastAtMost_iff atLeastatMost_empty_iff
- eq_imp_le finite_atLeastAtMost)
+ by (auto simp: Min_eq_iff)
lemma leq_max_sup':"(m::nat) \<le> n \<longrightarrow> maximum(Abs_nat_int{m..n}) = n"
proof
assume assm:"m \<le> n"
hence in_type:"{m..n} \<in> {S . (\<exists> (m::nat) n . m \<le> n \<and> {m..n }=S) \<or> S={} }" by blast
from assm have "Abs_nat_int{m..n} \<noteq> \<emptyset>" using leq_nat_non_empty by blast
hence max:"maximum(Abs_nat_int{m..n}) = Max (Rep_nat_int (Abs_nat_int {m..n}))"
using maximum_def by blast
from in_type have " (Rep_nat_int (Abs_nat_int {m..n})) = {m..n}"
using Abs_nat_int_inverse by blast
hence "Max (Rep_nat_int (Abs_nat_int{m..n})) = Max {m..n}" by simp
with max have simp_max:"maximum(Abs_nat_int{m..n}) = Max {m..n}" by simp
from assm have "Max {m..n} = n" using leq_max_sup by blast
with simp_max show "maximum(Abs_nat_int{m..n}) = n" by simp
qed
lemma leq_min_inf':"(m::nat) \<le> n \<longrightarrow> minimum(Abs_nat_int{m..n}) = m"
proof
assume assm:"m \<le> n"
hence in_type:"{m..n} \<in> {S . (\<exists> (m::nat) n . m \<le> n \<and> {m..n }=S) \<or> S={} }" by blast
from assm have "Abs_nat_int{m..n} \<noteq> \<emptyset>" using leq_nat_non_empty by blast
hence min:"minimum(Abs_nat_int{m..n}) = Min(Rep_nat_int (Abs_nat_int {m..n}))"
using minimum_def by blast
from in_type have " (Rep_nat_int (Abs_nat_int {m..n})) = {m..n}"
using Abs_nat_int_inverse by blast
hence "Min (Rep_nat_int (Abs_nat_int{m..n})) = Min {m..n}" by simp
with min have simp_min:"minimum(Abs_nat_int{m..n}) = Min {m..n}" by simp
from assm have "Min {m..n} = m" using leq_min_inf by blast
with simp_min show "minimum(Abs_nat_int{m..n}) = m" by simp
qed
lemma in_refl:"(n::nat) \<^bold>\<in> Abs_nat_int {n}"
proof -
have "(n::nat) \<le> n" by simp
hence "{n} \<in> {S . (\<exists> (m::nat) n . m \<le> n \<and> {m..n }=S) \<or> S={} }" by auto
then show "n \<^bold>\<in> Abs_nat_int {n}"
by (simp add: Abs_nat_int_inverse el_def)
qed
lemma in_singleton:" m \<^bold>\<in> Abs_nat_int{n} \<longrightarrow> m = n"
proof
assume assm:" m \<^bold>\<in> Abs_nat_int{n}"
have "(n::nat) \<le> n" by simp
hence "{n} \<in> {S . (\<exists> (m::nat) n . m \<le> n \<and> {m..n }=S) \<or> S={} }" by auto
with assm show "m=n" by (simp add: Abs_nat_int_inverse el_def)
qed
subsection \<open>Algebraic properties of intersection and union.\<close>
lemma inter_empty1:"(i::nat_int) \<sqinter> \<emptyset> = \<emptyset>"
using Rep_nat_int_inject inf_nat_int.rep_eq bot_nat_int.abs_eq bot_nat_int.rep_eq
by fastforce
lemma inter_empty2:"\<emptyset> \<sqinter> (i::nat_int) = \<emptyset>"
by (metis inf_commute nat_int.inter_empty1)
lemma un_empty_absorb1:"i \<squnion> \<emptyset> = i"
using Abs_nat_int_inverse Rep_nat_int_inverse union_def empty_type bot_nat_int.rep_eq
by auto
lemma un_empty_absorb2:"\<emptyset> \<squnion> i = i"
using Abs_nat_int_inverse Rep_nat_int_inverse union_def empty_type bot_nat_int.rep_eq
by auto
text \<open>Most properties of the union of two intervals depend on them being consecutive,
which ensures that their union exists.\<close>
lemma consec_un:"consec i j \<and> n \<notin> Rep_nat_int(i) \<union> Rep_nat_int j
\<longrightarrow> n \<^bold>\<notin> (i \<squnion> j)"
proof
assume assm:"consec i j \<and> n \<notin> Rep_nat_int i \<union> Rep_nat_int j"
thus "n \<^bold>\<notin> (i \<squnion> j)"
proof -
have f1: "Abs_nat_int (Rep_nat_int (i \<squnion> j))
= Abs_nat_int (Rep_nat_int i \<union> Rep_nat_int j)"
using Rep_nat_int_inverse union_def by presburger
have "i \<noteq> \<emptyset> \<and> j \<noteq> \<emptyset> \<and> maximum i + 1 = minimum j"
using assm consec_def by auto
then have "\<exists>n na. {n..na} = Rep_nat_int i \<union> Rep_nat_int j"
by (metis (no_types) leq_max_sup leq_min_inf maximum_def minimum_def
rep_non_empty_means_seq un_consec_seq)
then show ?thesis
using f1 Abs_nat_int_inject Rep_nat_int not_in.rep_eq assm by auto
qed
qed
lemma un_subset1: "consec i j \<longrightarrow> i \<sqsubseteq> i \<squnion> j"
proof
assume "consec i j"
then have assm:"i \<noteq> \<emptyset> \<and> j \<noteq> \<emptyset> \<and> maximum i+1 = minimum j"
using consec_def by blast
have "Rep_nat_int i \<union> Rep_nat_int j = {minimum i.. maximum j}"
by (metis assm nat_int.leq_max_sup nat_int.leq_min_inf nat_int.maximum_def
nat_int.minimum_def nat_int.rep_non_empty_means_seq nat_int.un_consec_seq)
then show "i \<sqsubseteq> i \<squnion> j" using Abs_nat_int_inverse Rep_nat_int
by (metis (mono_tags, lifting) Un_upper1 less_eq_nat_int.rep_eq mem_Collect_eq
nat_int.union_def)
qed
lemma un_subset2: "consec i j \<longrightarrow> j \<sqsubseteq> i \<squnion> j"
proof
assume "consec i j"
then have assm:"i \<noteq> \<emptyset> \<and> j \<noteq> \<emptyset> \<and> maximum i+1 = minimum j"
using consec_def by blast
have "Rep_nat_int i \<union> Rep_nat_int j = {minimum i.. maximum j}"
by (metis assm nat_int.leq_max_sup nat_int.leq_min_inf nat_int.maximum_def
nat_int.minimum_def nat_int.rep_non_empty_means_seq nat_int.un_consec_seq)
then show "j \<sqsubseteq> i \<squnion> j" using Abs_nat_int_inverse Rep_nat_int
by (metis (mono_tags, lifting) Un_upper2 less_eq_nat_int.rep_eq mem_Collect_eq
nat_int.union_def)
qed
lemma inter_distr1:"consec j k \<longrightarrow> i \<sqinter> (j \<squnion> k) = (i \<sqinter> j) \<squnion> (i \<sqinter> k)"
unfolding consec_def
proof
assume assm:"j \<noteq> \<emptyset> \<and> k \<noteq> \<emptyset> \<and> maximum j +1 = minimum k"
then show " i \<sqinter> (j \<squnion> k) = (i \<sqinter> j) \<squnion> (i \<sqinter> k)"
proof -
have f1: "\<forall>n. n = \<emptyset> \<or> maximum n = Max (Rep_nat_int n)"
using nat_int.maximum_def by auto
have f2: "Rep_nat_int j \<noteq> {}"
using assm nat_int.maximum_in by auto
have f3: "maximum j = Max (Rep_nat_int j)"
using f1 by (meson assm)
have "maximum k \<^bold>\<in> k"
using assm nat_int.maximum_in by blast
then have "Rep_nat_int k \<noteq> {}"
by fastforce
then have "Rep_nat_int (j \<squnion> k) = Rep_nat_int j \<union> Rep_nat_int k"
using f3 f2 Abs_nat_int_inverse Rep_nat_int assm nat_int.minimum_def
nat_int.union_def union_result
by auto
then show ?thesis
by (metis Rep_nat_int_inverse inf_nat_int.rep_eq inf_sup_distrib1 nat_int.union_def)
qed
qed
lemma inter_distr2:"consec j k \<longrightarrow> (j \<squnion> k) \<sqinter> i = (j \<sqinter> i) \<squnion> (k \<sqinter> i)"
by (simp add: inter_distr1 inf_commute)
lemma consec_un_not_elem1:"consec i j \<and> n\<^bold>\<notin> i \<squnion> j \<longrightarrow> n \<^bold>\<notin> i"
using un_subset1 less_eq_nat_int.rep_eq not_in.rep_eq by blast
lemma consec_un_not_elem2:"consec i j \<and> n\<^bold>\<notin> i \<squnion> j \<longrightarrow> n \<^bold>\<notin> j"
using un_subset2 less_eq_nat_int.rep_eq not_in.rep_eq by blast
lemma consec_un_element1:"consec i j \<and> n \<^bold>\<in> i \<longrightarrow> n \<^bold>\<in> i \<squnion> j"
using less_eq_nat_int.rep_eq nat_int.el.rep_eq nat_int.un_subset1 by blast
lemma consec_un_element2:"consec i j \<and> n \<^bold>\<in> j \<longrightarrow> n \<^bold>\<in> i \<squnion> j"
using less_eq_nat_int.rep_eq nat_int.el.rep_eq nat_int.un_subset2 by blast
lemma consec_lesser:" consec i j \<longrightarrow> (\<forall>n m. (n \<^bold>\<in> i \<and> m \<^bold>\<in> j \<longrightarrow> n < m))"
proof (rule allI|rule impI)+
assume "consec i j"
fix n and m
assume assump:"n \<^bold>\<in> i \<and> m \<^bold>\<in> j "
then have max:"n \<le> maximum i"
by (metis \<open>consec i j\<close> atLeastAtMost_iff leq_max_sup maximum_def consec_def
el.rep_eq rep_non_empty_means_seq)
from assump have min: "m \<ge> minimum j"
by (metis Min_le \<open>consec i j\<close> finite_atLeastAtMost minimum_def consec_def
el.rep_eq rep_non_empty_means_seq)
from min and max show less:"n < m"
using One_nat_def Suc_le_lessD \<open>consec i j\<close> add.right_neutral add_Suc_right
dual_order.trans leD leI consec_def by auto
qed
lemma consec_in_exclusive1:"consec i j \<and> n \<^bold>\<in> i \<longrightarrow> n \<^bold>\<notin> j"
using nat_int.consec_lesser nat_int.in_not_in_iff2 by blast
lemma consec_in_exclusive2:"consec i j \<and> n \<^bold>\<in> j \<longrightarrow> n \<^bold>\<notin> i"
using consec_in_exclusive1 el.rep_eq not_in.rep_eq by blast
lemma consec_un_max:"consec i j \<longrightarrow> maximum j = maximum (i \<squnion> j)"
proof
assume assm:"consec i j"
then have "(\<forall>n m. (n \<^bold>\<in> i \<and> m \<^bold>\<in> j \<longrightarrow> n < m))"
using nat_int.consec_lesser by blast
then have "\<forall>n . (n \<^bold>\<in> i \<longrightarrow> n < maximum j)"
using assm local.consec_def nat_int.maximum_in by auto
then have "\<forall>n. (n \<^bold>\<in> i \<squnion> j \<longrightarrow> n \<le> maximum j)"
by (metis (no_types, lifting) Rep_nat_int Rep_nat_int_inverse Un_iff assm atLeastAtMost_iff
bot_nat_int.rep_eq less_imp_le_nat local.consec_def local.not_empty_means_seq
nat_int.consec_un nat_int.el.rep_eq nat_int.in_not_in_iff1 nat_int.leq_max_sup')
then show "maximum j = maximum (i \<squnion> j)"
by (metis Rep_nat_int_inverse assm atLeastAtMost_iff bot.extremum_uniqueI
le_antisym local.consec_def nat_int.consec_un_element2 nat_int.el.rep_eq
nat_int.leq_max_sup' nat_int.maximum_in nat_int.un_subset2 rep_non_empty_means_seq)
qed
lemma consec_un_min:"consec i j \<longrightarrow> minimum i = minimum (i \<squnion> j)"
proof
assume assm:"consec i j"
then have "(\<forall>n m. (n \<^bold>\<in> i \<and> m \<^bold>\<in> j \<longrightarrow> n < m))"
using nat_int.consec_lesser by blast
then have "\<forall>n . (n \<^bold>\<in> j \<longrightarrow> n > minimum i)"
using assm local.consec_def nat_int.minimum_in by auto
then have 1:"\<forall>n. (n \<^bold>\<in> i \<squnion> j \<longrightarrow> n \<ge> minimum i)"
using Rep_nat_int Rep_nat_int_inverse Un_iff assm atLeastAtMost_iff bot_nat_int.rep_eq
less_imp_le_nat local.consec_def local.not_empty_means_seq nat_int.consec_un
nat_int.el.rep_eq nat_int.in_not_in_iff1
by (metis (no_types, lifting) leq_min_inf local.minimum_def)
from assm have "i \<squnion> j \<noteq> \<emptyset>"
by (metis bot.extremum_uniqueI nat_int.consec_def nat_int.un_subset2)
then show "minimum i = minimum (i \<squnion> j)"
by (metis "1" antisym assm atLeastAtMost_iff leq_min_inf nat_int.consec_def
nat_int.consec_un_element1 nat_int.el.rep_eq nat_int.minimum_def nat_int.minimum_in
rep_non_empty_means_seq)
qed
lemma consec_un_defined:
"consec i j \<longrightarrow> (Rep_nat_int (i \<squnion> j) \<in> {S . (\<exists> (m::nat) n . {m..n }=S) })"
using Rep_nat_int by auto
lemma consec_un_min_max:
"consec i j \<longrightarrow> Rep_nat_int(i \<squnion> j) = {minimum i .. maximum j}"
proof
assume assm:"consec i j"
then have 1:"minimum j = maximum i +1"
by (simp add: nat_int.consec_def)
have i:"Rep_nat_int i = {minimum i..maximum i}"
by (metis Rep_nat_int_inverse assm nat_int.consec_def nat_int.leq_max_sup' nat_int.leq_min_inf'
rep_non_empty_means_seq)
have j:"Rep_nat_int j = {minimum j..maximum j}"
by (metis Rep_nat_int_inverse assm nat_int.consec_def nat_int.leq_max_sup' nat_int.leq_min_inf'
rep_non_empty_means_seq)
show "Rep_nat_int(i \<squnion> j) = {minimum i .. maximum j}"
by (metis Rep_nat_int_inverse antisym assm bot.extremum i nat_int.consec_un_max
nat_int.consec_un_min nat_int.leq_max_sup' nat_int.leq_min_inf' nat_int.un_subset1
rep_non_empty_means_seq)
qed
lemma consec_un_equality:
"(consec i j \<and> k \<noteq> \<emptyset>)
\<longrightarrow>( minimum (i \<squnion> j) = minimum (k) \<and> maximum (i \<squnion> j) = maximum (k))
\<longrightarrow> i \<squnion> j = k"
proof (rule impI)+
assume cons:"consec i j \<and> k \<noteq> \<emptyset>"
assume endpoints:" minimum(i \<squnion> j) = minimum(k) \<and> maximum(i \<squnion> j) = maximum(k)"
have "Rep_nat_int( i \<squnion> j) = {minimum(i \<squnion> j)..maximum(i \<squnion> j)}"
by (metis cons leq_max_sup leq_min_inf local.consec_def nat_int.consec_un_element2
nat_int.maximum_def nat_int.minimum_def nat_int.non_empty_elem_in rep_non_empty_means_seq)
then have 1:"Rep_nat_int( i \<squnion> j) = {minimum(k) .. maximum(k)}"
using endpoints by simp
have "Rep_nat_int( k) = {minimum(k) .. maximum(k)}"
by (metis cons leq_max_sup leq_min_inf nat_int.maximum_def nat_int.minimum_def
rep_non_empty_means_seq)
then show " i \<squnion> j = k" using 1
by (metis Rep_nat_int_inverse)
qed
lemma consec_trans_lesser:
"consec i j \<and> consec j k \<longrightarrow> (\<forall>n m. (n \<^bold>\<in> i \<and> m \<^bold>\<in> k \<longrightarrow> n < m))"
proof (rule allI|rule impI)+
assume cons:" consec i j \<and> consec j k"
fix n and m
assume assump:"n \<^bold>\<in> i \<and> m \<^bold>\<in> k "
have "\<forall>k . k \<^bold>\<in> j \<longrightarrow> k < m" using consec_lesser assump cons by blast
hence m_greater:"maximum j < m" using cons maximum_in consec_def by blast
then show "n < m"
by (metis assump cons consec_def dual_order.strict_trans nat_int.consec_lesser
nat_int.maximum_in)
qed
lemma consec_inter_empty:"consec i j \<Longrightarrow> i \<sqinter> j = \<emptyset>"
proof -
assume "consec i j"
then have "i \<noteq> bot \<and> j \<noteq> bot \<and> maximum i + 1 = minimum j"
using consec_def by force
then show ?thesis
by (metis (no_types) Rep_nat_int_inverse bot_nat_int_def inf_nat_int.rep_eq int_conseq_seq
nat_int.leq_max_sup nat_int.leq_min_inf nat_int.maximum_def nat_int.minimum_def
nat_int.rep_non_empty_means_seq)
qed
lemma consec_intermediate1:"consec j k \<and> consec i (j \<squnion> k) \<longrightarrow> consec i j "
proof
assume assm:"consec j k \<and> consec i (j \<squnion> k)"
hence min_max_yz:"maximum j +1=minimum k" using consec_def by blast
hence min_max_xyz:"maximum i +1 = minimum (j \<squnion> k)"
using consec_def assm by blast
have min_y_yz:"minimum j = minimum (j \<squnion> k)"
by (simp add: assm nat_int.consec_un_min)
hence min_max_xy:"maximum i+1 = minimum j"
using min_max_xyz by simp
thus consec_x_y:"consec i j" using assm consec_def by auto
qed
lemma consec_intermediate2:"consec i j \<and> consec (i \<squnion> j) k \<longrightarrow> consec j k "
proof
assume assm:"consec i j \<and> consec (i \<squnion> j) k"
hence min_max_yz:"maximum i +1=minimum j" using consec_def by blast
hence min_max_xyz:"maximum (i \<squnion> j) +1 = minimum ( k)"
using consec_def assm by blast
have min_y_yz:"maximum j = maximum (i \<squnion> j)"
using assm nat_int.consec_un_max by blast
then have min_max_xy:"maximum j+1 = minimum k"
using min_max_xyz by simp
thus consec_x_y:"consec j k" using assm consec_def by auto
qed
lemma un_assoc: "consec i j \<and> consec j k \<longrightarrow> (i \<squnion> j) \<squnion> k = i \<squnion> (j \<squnion> k)"
proof
assume assm:"consec i j \<and> consec j k"
from assm have 3:"maximum (i \<squnion> j) = maximum j"
using nat_int.consec_un_max by auto
from assm have 4:"minimum (j \<squnion> k) = minimum (j)"
using nat_int.consec_un_min by auto
have "i \<squnion> j = Abs_nat_int{minimum i .. maximum j}"
by (metis Rep_nat_int_inverse assm nat_int.consec_un_min_max)
then have 5:"(i \<squnion> j) \<squnion> k = Abs_nat_int{minimum i .. maximum k}"
by (metis (no_types, hide_lams) "3" Rep_nat_int_inverse antisym assm bot.extremum
nat_int.consec_def nat_int.consec_un_min nat_int.consec_un_min_max nat_int.un_subset1)
have "j \<squnion> k = Abs_nat_int{minimum j .. maximum k}"
by (metis Rep_nat_int_inverse assm nat_int.consec_un_min_max)
then have 6:"i \<squnion> (j \<squnion> k) = Abs_nat_int{minimum i .. maximum k}"
by (metis (no_types, hide_lams) "4" Rep_nat_int_inverse antisym assm bot.extremum
nat_int.consec_def nat_int.consec_un_max nat_int.consec_un_min_max nat_int.un_subset2)
from 5 and 6 show " (i \<squnion> j) \<squnion> k = i \<squnion> (j \<squnion> k)" by simp
qed
lemma consec_assoc1:"consec j k \<and> consec i (j \<squnion> k) \<longrightarrow> consec (i \<squnion> j) k "
proof
assume assm:"consec j k \<and> consec i (j \<squnion> k)"
hence min_max_yz:"maximum j +1=minimum k" using consec_def by blast
hence min_max_xyz:"maximum i +1 = minimum (j \<squnion> k)"
using consec_def assm by blast
have min_y_yz:"minimum j = minimum (j \<squnion> k)"
by (simp add: assm nat_int.consec_un_min)
hence min_max_xy:"maximum i+1 = minimum j" using min_max_xyz by simp
hence consec_x_y:"consec i j" using assm consec_def by auto
hence max_y_xy:"maximum j = maximum (i \<squnion> j)" using consec_lesser assm
by (simp add: nat_int.consec_un_max)
have none_empty:"i \<noteq> \<emptyset> \<and> j \<noteq> \<emptyset> \<and> k \<noteq> \<emptyset>" using assm by (simp add: consec_def)
hence un_non_empty:"i\<squnion>j \<noteq> \<emptyset>"
using bot.extremum_uniqueI consec_x_y nat_int.un_subset2 by force
have max:"maximum (i\<squnion>j) +1 = minimum k"
using min_max_yz max_y_xy by auto
thus "consec (i \<squnion> j) k" using max un_non_empty none_empty consec_def by blast
qed
lemma consec_assoc2:"consec i j \<and> consec (i\<squnion> j) k \<longrightarrow> consec i (j\<squnion> k) "
proof
assume assm:"consec i j \<and> consec (i\<squnion> j) k"
hence consec_y_z:"consec j k" using assm consec_def consec_intermediate2
by blast
hence max_y_xy:"maximum j = maximum (i \<squnion> j)"
by (simp add: assm nat_int.consec_un_max)
have min_y_yz:"minimum j = minimum (j \<squnion> k)"
by (simp add: consec_y_z nat_int.consec_un_min)
have none_empty:"i \<noteq> \<emptyset> \<and> j \<noteq> \<emptyset> \<and> k \<noteq> \<emptyset>" using assm by (simp add: consec_def)
then have un_non_empty:"j\<squnion>k \<noteq> \<emptyset>"
by (metis bot_nat_int.rep_eq Rep_nat_int_inject consec_y_z less_eq_nat_int.rep_eq
un_subset1 subset_empty)
have max:"maximum (i) +1 = minimum (j\<squnion> k)"
using assm min_y_yz consec_def by auto
thus "consec i ( j \<squnion> k)" using max un_non_empty none_empty consec_def by blast
qed
lemma consec_assoc_mult:
"(i2=\<emptyset>\<or> consec i1 i2 ) \<and> (i3 =\<emptyset> \<or> consec i3 i4) \<and> (consec (i1 \<squnion> i2) (i3 \<squnion> i4))
\<longrightarrow> (i1 \<squnion> i2) \<squnion> (i3 \<squnion> i4) = (i1 \<squnion> (i2 \<squnion> i3)) \<squnion> i4"
proof
assume assm:"(i2=\<emptyset>\<or> consec i1 i2 ) \<and> (i3 =\<emptyset> \<or> consec i3 i4)
\<and> (consec (i1 \<squnion> i2) (i3 \<squnion> i4))"
hence "(i2=\<emptyset>\<or> consec i1 i2 )" by simp
thus " (i1 \<squnion> i2) \<squnion> (i3 \<squnion> i4) = (i1 \<squnion> (i2 \<squnion> i3)) \<squnion> i4"
proof
assume empty2:"i2 = \<emptyset>"
hence only_l1:"(i1 \<squnion> i2) = i1" using un_empty_absorb1 by simp
from assm have "(i3 =\<emptyset> \<or> consec i3 i4)" by simp
thus " (i1 \<squnion> i2) \<squnion> (i3 \<squnion> i4) = (i1 \<squnion> (i2 \<squnion> i3)) \<squnion> i4"
by (metis Rep_nat_int_inverse assm bot_nat_int.rep_eq empty2 local.union_def
nat_int.consec_intermediate1 nat_int.un_assoc only_l1 sup_bot.left_neutral)
next
assume consec12:" consec i1 i2"
from assm have "(i3 =\<emptyset> \<or> consec i3 i4)" by simp
thus " (i1 \<squnion> i2) \<squnion> (i3 \<squnion> i4) = (i1 \<squnion> (i2 \<squnion> i3)) \<squnion> i4"
proof
assume empty3:"i3 = \<emptyset>"
hence only_l4:"(i3 \<squnion> i4) = i4 " using un_empty_absorb2 by simp
have "(i1 \<squnion> (i2 \<squnion> i3)) = i1 \<squnion> i2" using empty3 by (simp add: un_empty_absorb1)
thus ?thesis by (simp add: only_l4)
next
assume consec34:" consec i3 i4"
have consec12_3:"consec (i1 \<squnion> i2) i3"
using assm consec34 consec_intermediate1 by blast
show ?thesis
by (metis consec12 consec12_3 consec34 consec_intermediate2 un_assoc)
qed
qed
qed
lemma card_subset_le: "i \<sqsubseteq> i' \<longrightarrow> |i| \<le> |i'|"
by (metis bot_nat_int.rep_eq card_mono finite.intros(1) finite_atLeastAtMost
less_eq_nat_int.rep_eq local.card'.rep_eq rep_non_empty_means_seq)
lemma card_subset_less:"(i::nat_int) < i' \<longrightarrow> |i|<|i'|"
by (metis bot_nat_int.rep_eq finite.intros(1) finite_atLeastAtMost less_nat_int.rep_eq
local.card'.rep_eq psubset_card_mono rep_non_empty_means_seq)
lemma card_empty_zero:"|\<emptyset>| = 0"
using Abs_nat_int_inverse empty_type card'.rep_eq bot_nat_int.rep_eq by auto
lemma card_non_empty_geq_one:"i \<noteq> \<emptyset> \<longleftrightarrow> |i| \<ge> 1"
proof
assume "i \<noteq> \<emptyset>"
hence "Rep_nat_int i \<noteq> {}" by (metis Rep_nat_int_inverse bot_nat_int.rep_eq)
hence "card (Rep_nat_int i) > 0"
by (metis \<open>i \<noteq> \<emptyset>\<close> card_0_eq finite_atLeastAtMost gr0I rep_non_empty_means_seq)
thus "|i| \<ge> 1" by (simp add: card'_def)
next
assume "|i| \<ge> 1" thus "i \<noteq>\<emptyset>"
using card_empty_zero by auto
qed
lemma card_min_max:"i \<noteq> \<emptyset> \<longrightarrow> |i| = (maximum i - minimum i) + 1"
proof
assume assm:"i \<noteq> \<emptyset>"
then have "Rep_nat_int i = {minimum i .. maximum i}"
by (metis leq_max_sup leq_min_inf nat_int.maximum_def nat_int.minimum_def
rep_non_empty_means_seq)
then have "card (Rep_nat_int i) = maximum i - minimum i + 1"
using Rep_nat_int_inject assm bot_nat_int.rep_eq by fastforce
then show " |i| = (maximum i - minimum i) + 1" by simp
qed
lemma card_un_add: " consec i j \<longrightarrow> |i \<squnion> j| = |i| + |j|"
proof
assume assm:"consec i j"
then have 0:"i \<sqinter> j = \<emptyset>"
using nat_int.consec_inter_empty by auto
then have "(Rep_nat_int i) \<inter> (Rep_nat_int j) = {}"
by (metis bot_nat_int.rep_eq inf_nat_int.rep_eq)
then have 1:
"card((Rep_nat_int i)\<union>(Rep_nat_int j))=card(Rep_nat_int i)+card(Rep_nat_int j)"
by (metis Int_iff add.commute add.left_neutral assm card.infinite card_Un_disjoint
emptyE le_add1 le_antisym local.consec_def nat_int.card'.rep_eq
nat_int.card_min_max nat_int.el.rep_eq nat_int.maximum_in nat_int.minimum_in)
then show "|i \<squnion> j| = |i| + |j|"
proof -
have f1: "i \<noteq> \<emptyset> \<and> j \<noteq> \<emptyset> \<and> maximum i + 1 = minimum j"
using assm nat_int.consec_def by blast
then have f2: "Rep_nat_int i \<noteq> {}"
using Rep_nat_int_inject bot_nat_int.rep_eq by auto
have "Rep_nat_int j \<noteq> {}"
using f1 Rep_nat_int_inject bot_nat_int.rep_eq by auto
then show ?thesis
using f2 f1 Abs_nat_int_inverse Rep_nat_int 1 local.union_result
nat_int.union_def nat_int_class.maximum_def nat_int_class.minimum_def
by force
qed
qed
lemma singleton:"|i| = 1 \<longrightarrow> (\<exists>n. Rep_nat_int i = {n})"
using card_1_singletonE card'.rep_eq by fastforce
lemma singleton2:" (\<exists>n. Rep_nat_int i = {n}) \<longrightarrow> |i| = 1"
using card_1_singletonE card'.rep_eq by fastforce
lemma card_seq:"
\<forall>i .|i| = x \<longrightarrow> (Rep_nat_int i = {} \<or> (\<exists>n. Rep_nat_int i = {n..n+(x-1)}))"
proof (induct x)
show IB:
"\<forall>i. |i| = 0 \<longrightarrow> (Rep_nat_int i = {} \<or> (\<exists>n. Rep_nat_int i = {n..n+(0-1)}))"
by (metis card_non_empty_geq_one bot_nat_int.rep_eq not_one_le_zero)
fix x
assume IH:
"\<forall>i. |i| = x \<longrightarrow> Rep_nat_int i = {} \<or> (\<exists>n. Rep_nat_int i = {n..n+(x-1)})"
show " \<forall>i. |i| = Suc x \<longrightarrow>
Rep_nat_int i = {} \<or> (\<exists>n. Rep_nat_int i = {n.. n + (Suc x - 1)})"
proof (rule allI|rule impI)+
fix i
assume assm_IS:"|i| = Suc x"
show " Rep_nat_int i = {} \<or> (\<exists>n. Rep_nat_int i = {n.. n + (Suc x -1)})"
proof (cases "x = 0")
assume "x=0"
hence "|i| = 1"
using assm_IS by auto
then have "\<exists>n'. Rep_nat_int i = {n'}"
using nat_int.singleton by blast
hence "\<exists>n'. Rep_nat_int i = {n'.. n' + (Suc x) -1}"
by (simp add: \<open>x = 0\<close>)
thus "Rep_nat_int i = {} \<or> (\<exists>n. Rep_nat_int i = {n.. n + (Suc x - 1)})"
by simp
next
assume x_neq_0:"x \<noteq>0 "
hence x_ge_0:"x > 0" using gr0I by blast
from assm_IS have i_is_seq:"\<exists>n m. n \<le> m \<and> Rep_nat_int i = {n..m}"
by (metis One_nat_def Suc_le_mono card_non_empty_geq_one le0 rep_non_empty_means_seq)
obtain n and m where seq_def:" n \<le> m \<and> Rep_nat_int i = {n..m}"
using i_is_seq by auto
have n_le_m:"n < m"
proof (rule ccontr)
assume "\<not>n < m"
hence "n = m" by (simp add: less_le seq_def)
hence "Rep_nat_int i = {n}" by (simp add: seq_def)
hence "x = 0" using assm_IS card'.rep_eq by auto
thus False by (simp add: x_neq_0)
qed
hence "n \<le> (m-1)" by simp
obtain i' where i_def:"i' = Abs_nat_int {n..m-1}" by blast
then have card_i':"|i'| = x"
using assm_IS leq_nat_non_empty n_le_m
nat_int_class.card_min_max nat_int_class.leq_max_sup' nat_int_class.leq_min_inf'
seq_def by auto
hence "Rep_nat_int i' = {} \<or> (\<exists>n. Rep_nat_int i' = {n.. n + (x - 1)})"
using IH by auto
hence " (\<exists>n. Rep_nat_int i' = {n.. n + (x - 1)})" using x_neq_0
using card.empty card_i' card'.rep_eq by auto
hence "m-1 = n + x -1" using assm_IS card'.rep_eq seq_def by auto
hence "m = n +x" using n_le_m x_ge_0 by linarith
hence "( Rep_nat_int i = {n.. n + (Suc x -1) })" using seq_def by (simp )
hence "\<exists>n. (Rep_nat_int i = {n.. n + (Suc x -1) })" ..
then show "Rep_nat_int i = {} \<or> (\<exists>n. Rep_nat_int i ={n.. n + (Suc x-1)})"
by blast
qed
qed
qed
lemma rep_single: "Rep_nat_int (Abs_nat_int {m..m}) = {m}"
by (simp add: Abs_nat_int_inverse)
lemma chop_empty_right: "\<forall>i. N_Chop(i,i,\<emptyset>)"
using bot_nat_int.abs_eq nat_int.inter_empty1 nat_int.nchop_def nat_int.un_empty_absorb1
by auto
lemma chop_empty_left: "\<forall>i. N_Chop(i, \<emptyset>, i)"
using bot_nat_int.abs_eq nat_int.inter_empty2 nat_int.nchop_def nat_int.un_empty_absorb2
by auto
lemma chop_empty : "N_Chop(\<emptyset>,\<emptyset>,\<emptyset>)"
by (simp add: chop_empty_left)
lemma chop_always_possible:"\<forall>i.\<exists> j k. N_Chop(i,j,k)"
by (metis chop_empty_right)
lemma chop_add1: "N_Chop(i,j,k) \<longrightarrow> |i| = |j| + |k|"
using card_empty_zero card_un_add un_empty_absorb1 un_empty_absorb2 nchop_def by auto
lemma chop_add2:"|i| = x+y \<longrightarrow> (\<exists> j k. N_Chop(i,j,k) \<and> |j|=x \<and> |k|=y)"
proof
assume assm:"|i| = x+y"
show "(\<exists> j k. N_Chop(i,j,k) \<and> |j|=x \<and> |k|=y)"
proof (cases "x+y = 0")
assume "x+y =0"
then show "\<exists> j k. N_Chop(i,j,k) \<and> |j|=x \<and> |k|=y"
using assm chop_empty_left nat_int.chop_add1 by fastforce
next
assume "x+y \<noteq> 0"
show "\<exists> j k. N_Chop(i,j,k) \<and> |j|=x \<and> |k|=y"
proof (cases "x = 0")
assume x_eq_0:"x=0"
then show "\<exists> j k. N_Chop(i,j,k) \<and> |j|=x \<and> |k|=y"
using assm nat_int.card_empty_zero nat_int.chop_empty_left by auto
next
assume x_neq_0:"x \<noteq>0"
show "\<exists> j k. N_Chop(i,j,k) \<and> |j|=x \<and> |k|=y"
proof (cases "y = 0")
assume y_eq_0:"y=0"
then show "\<exists> j k. N_Chop(i,j,k) \<and> |j|=x \<and> |k|=y"
using assm nat_int.card_empty_zero nat_int.chop_empty_right by auto
next
assume y_neq_0:"y \<noteq> 0"
have rep_i:"\<exists>n. Rep_nat_int i = {n..n + (x+y)-1}"
using assm card'.rep_eq card_seq x_neq_0 by fastforce
obtain n where n_def:"Rep_nat_int i = {n..n + (x+y) -1}"
using rep_i by auto
have n_le:"n \<le> n+(x-1)" by simp
have x_le:"n+(x) \<le> n + (x+y)-1" using y_neq_0 by linarith
obtain j where j_def:" j = Abs_nat_int {n..n+(x-1)}" by blast
from n_le have j_in_type:
"{n..n+(x-1)} \<in> {S . (\<exists> (m::nat) n . m \<le> n \<and> {m..n }=S) \<or> S={}}"
by blast
obtain k where k_def:" k =Abs_nat_int {n+x..n+(x+y)-1}" by blast
from x_le have k_in_type:
"{n+x..n+(x+y)-1} \<in> {S.(\<exists> (m::nat) n . m \<le> n \<and> {m..n }=S) \<or> S={}}"
by blast
have consec: "consec j k"
by (metis j_def k_def One_nat_def Suc_leI add.assoc diff_add n_le consec_def
leq_max_sup' leq_min_inf' leq_nat_non_empty neq0_conv x_le x_neq_0)
have union:"i = j \<squnion> k"
by (metis Rep_nat_int_inverse consec j_def k_def n_def n_le nat_int.consec_un_min_max
nat_int.leq_max_sup' nat_int.leq_min_inf' x_le)
have disj:"j \<sqinter> k = \<emptyset>" using consec by (simp add: consec_inter_empty)
have chop:"N_Chop(i,j,k)" using consec union disj nchop_def by simp
have card_j:"|j| = x"
using Abs_nat_int_inverse j_def n_le card'.rep_eq x_neq_0 by auto
have card_k:"|k| = y"
using Abs_nat_int_inverse k_def x_le card'.rep_eq x_neq_0 y_neq_0 by auto
have "N_Chop(i,j,k) \<and> |j| = x \<and> |k| = y" using chop card_j card_k by blast
then show "\<exists> j k. N_Chop(i,j,k) \<and> |j|=x \<and> |k|=y" by blast
qed
qed
qed
qed
lemma chop_single:"(N_Chop(i,j,k) \<and> |i| = 1) \<longrightarrow> ( |j| =0 \<or> |k|=0)"
using chop_add1 by force
lemma chop_leq_max:"N_Chop(i,j,k) \<and> consec j k \<longrightarrow>
(\<forall>n . n \<in> Rep_nat_int i \<and> n \<le> maximum j \<longrightarrow> n \<in> Rep_nat_int j)"
by (metis Un_iff le_antisym less_imp_le_nat nat_int.consec_def nat_int.consec_lesser
nat_int.consec_un nat_int.el.rep_eq nat_int.maximum_in nat_int.nchop_def
nat_int.not_in.rep_eq)
lemma chop_geq_min:"N_Chop(i,j,k) \<and> consec j k \<longrightarrow>
(\<forall>n . n \<in> Rep_nat_int i \<and> minimum k \<le> n \<longrightarrow> n \<in> Rep_nat_int k)"
by (metis atLeastAtMost_iff bot_nat_int.rep_eq equals0D leq_max_sup leq_min_inf
nat_int.consec_def nat_int.consec_un_max nat_int.maximum_def nat_int.minimum_def
nat_int.nchop_def rep_non_empty_means_seq)
lemma chop_min:"N_Chop(i,j,k) \<and> consec j k \<longrightarrow> minimum i = minimum j"
using nat_int.consec_un_min nat_int.nchop_def by auto
lemma chop_max:"N_Chop(i,j,k) \<and> consec j k \<longrightarrow> maximum i = maximum k"
using nat_int.consec_un_max nat_int.nchop_def by auto
lemma chop_assoc1:
"N_Chop(i,i1,i2) \<and> N_Chop(i2,i3,i4)
\<longrightarrow> (N_Chop(i, i1 \<squnion> i3, i4) \<and> N_Chop(i1 \<squnion> i3, i1, i3))"
proof
assume assm:"N_Chop(i,i1,i2) \<and> N_Chop(i2,i3,i4)"
then have chop_def:"(i = i1 \<squnion> i2 \<and>
(i1 = \<emptyset> \<or> i2 = \<emptyset> \<or> ( consec i1 i2)))"
using nchop_def by blast
hence "(i1 = \<emptyset> \<or> i2 = \<emptyset> \<or> ( consec i1 i2))" by simp
then show "N_Chop(i, i1 \<squnion> i3, i4) \<and> N_Chop(i1 \<squnion> i3, i1, i3)"
proof
assume empty:"i1 = \<emptyset>"
then show "N_Chop(i,i1 \<squnion> i3, i4) \<and> N_Chop(i1 \<squnion> i3, i1, i3)"
by (simp add: assm chop_def nat_int.chop_empty_left nat_int.un_empty_absorb2)
next
assume "i2 = \<emptyset> \<or> ( consec i1 i2)"
then show "N_Chop(i, i1 \<squnion> i3, i4)\<and> N_Chop(i1 \<squnion> i3, i1, i3)"
proof
assume empty:"i2 = \<emptyset>"
then show "N_Chop(i, i1 \<squnion> i3, i4)\<and> N_Chop(i1 \<squnion> i3, i1, i3)"
by (metis assm bot.extremum_uniqueI nat_int.chop_empty_right nat_int.nchop_def
nat_int.un_empty_absorb2 nat_int.un_subset1)
next
assume " consec i1 i2"
then have consec_i1_i2:"i1 \<noteq>\<emptyset> \<and> i2 \<noteq>\<emptyset> \<and> maximum i1 +1 = minimum i2"
using consec_def by blast
from assm have "i3 = \<emptyset> \<or> i4 = \<emptyset> \<or> consec i3 i4"
using nchop_def by blast
then show "N_Chop(i, i1 \<squnion> i3, i4)\<and> N_Chop(i1 \<squnion> i3, i1, i3)"
proof
assume i3_empty:"i3 = \<emptyset>"
then show "N_Chop(i, i1 \<squnion> i3, i4)\<and> N_Chop(i1 \<squnion> i3, i1, i3)"
using assm nat_int.chop_empty_right nat_int.nchop_def nat_int.un_empty_absorb2
by auto
next
assume "i4 = \<emptyset> \<or> consec i3 i4"
then show "N_Chop(i, i1 \<squnion> i3, i4)\<and> N_Chop(i1 \<squnion> i3, i1, i3)"
proof
assume i4_empty:"i4 = \<emptyset>"
then show "N_Chop(i, i1 \<squnion> i3, i4)\<and> N_Chop(i1 \<squnion> i3, i1, i3)"
using assm nat_int.chop_empty_right nat_int.nchop_def by auto
next
assume consec_i3_i4:"consec i3 i4"
then show "N_Chop(i, i1 \<squnion> i3, i4)\<and> N_Chop(i1 \<squnion> i3, i1, i3)"
by (metis \<open>consec i1 i2\<close> assm nat_int.consec_assoc1 nat_int.consec_intermediate1
nat_int.nchop_def nat_int.un_assoc)
qed
qed
qed
qed
qed
lemma chop_assoc2:
"N_Chop(i,i1,i2) \<and> N_Chop(i1,i3,i4)
\<longrightarrow> N_Chop(i, i3, i4 \<squnion> i2) \<and> N_Chop(i4 \<squnion> i2, i4,i2)"
proof
assume assm: "N_Chop(i,i1,i2) \<and> N_Chop(i1,i3,i4)"
hence "(i1 = \<emptyset> \<or> i2 = \<emptyset> \<or> ( consec i1 i2))"
using nchop_def by blast
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
proof
assume i1_empty:"i1 = \<emptyset>"
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
by (metis assm nat_int.chop_empty_left nat_int.consec_un_not_elem1 nat_int.in_not_in_iff1
nat_int.nchop_def nat_int.non_empty_elem_in nat_int.un_empty_absorb1)
next
assume "i2 = \<emptyset> \<or> consec i1 i2"
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
proof
assume i2_empty:"i2=\<emptyset>"
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
using assm nat_int.chop_empty_right nat_int.nchop_def by auto
next
assume consec_i1_i2:"consec i1 i2"
from assm have "(i3 = \<emptyset> \<or> i4 = \<emptyset> \<or> ( consec i3 i4))"
by (simp add: nchop_def)
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
proof
assume i3_empty:"i3=\<emptyset>"
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
using assm nat_int.chop_empty_left nat_int.nchop_def by auto
next
assume " i4 = \<emptyset> \<or> ( consec i3 i4)"
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
proof
assume i4_empty:"i4=\<emptyset>"
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
using assm nat_int.nchop_def nat_int.un_empty_absorb1 nat_int.un_empty_absorb2
by auto
next
assume consec_i3_i4:"consec i3 i4"
then show "N_Chop(i, i3, i4 \<squnion> i2)\<and> N_Chop(i4 \<squnion> i2, i4,i2)"
by (metis assm consec_i1_i2 nat_int.consec_assoc2 nat_int.consec_intermediate2
nat_int.nchop_def nat_int.un_assoc)
qed
qed
qed
qed
qed
lemma chop_subset1:"N_Chop(i,j,k) \<longrightarrow> j \<sqsubseteq> i"
using nat_int.chop_empty_right nat_int.nchop_def nat_int.un_subset1 by auto
lemma chop_subset2:"N_Chop(i,j,k) \<longrightarrow> k \<sqsubseteq> i"
using nat_int.chop_empty_left nat_int.nchop_def nat_int.un_subset2 by auto
end
end
diff --git a/thys/IP_Addresses/WordInterval.thy b/thys/IP_Addresses/WordInterval.thy
--- a/thys/IP_Addresses/WordInterval.thy
+++ b/thys/IP_Addresses/WordInterval.thy
@@ -1,776 +1,775 @@
(* Title: WordInterval.thy
Authors: Julius Michaelis, Cornelius Diekmann
*)
theory WordInterval
imports Main
"Word_Lib.Word_Lemmas"
begin
section\<open>WordInterval: Executable datatype for Machine Word Sets\<close>
text\<open>Stores ranges of machine words as intervals. This has proven quite efficient for
IP addresses.\<close>
(*NOTE: All algorithms here use a straightforward implementation. There is a lot of room for
improving the computational complexity, for example by making the WordInterval a balanced,
sorted tree.*)
subsection\<open>Syntax\<close>
context
notes [[typedef_overloaded]]
begin
datatype ('a::len0) wordinterval = WordInterval
"('a::len0) word" \<comment> \<open>start (inclusive)\<close>
"('a::len0) word" \<comment> \<open>end (inclusive)\<close>
| RangeUnion "'a wordinterval" "'a wordinterval"
end
subsection\<open>Semantics\<close>
fun wordinterval_to_set :: "'a::len0 wordinterval \<Rightarrow> ('a::len0 word) set"
where
"wordinterval_to_set (WordInterval start end) =
{start .. end}" |
"wordinterval_to_set (RangeUnion r1 r2) =
wordinterval_to_set r1 \<union> wordinterval_to_set r2"
(*Note: The runtime of all the operations could be improved, for example by keeping the tree sorted
and balanced.*)
subsection\<open>Basic operations\<close>
text\<open>\<open>\<in>\<close>\<close>
fun wordinterval_element :: "'a::len0 word \<Rightarrow> 'a::len0 wordinterval \<Rightarrow> bool" where
"wordinterval_element el (WordInterval s e) \<longleftrightarrow> s \<le> el \<and> el \<le> e" |
"wordinterval_element el (RangeUnion r1 r2) \<longleftrightarrow>
wordinterval_element el r1 \<or> wordinterval_element el r2"
lemma wordinterval_element_set_eq[simp]:
"wordinterval_element el rg = (el \<in> wordinterval_to_set rg)"
by(induction rg rule: wordinterval_element.induct) simp_all
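(* Sanity-check example (not part of the original theory; it assumes the code-generation
   setup already exercised by the eval examples further below): membership in a
   RangeUnion is decided by checking each constituent interval. *)
lemma "wordinterval_element (9::16 word) (RangeUnion (WordInterval 1 3) (WordInterval 8 10))"
  by eval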
definition wordinterval_union
:: "'a::len0 wordinterval \<Rightarrow> 'a::len0 wordinterval \<Rightarrow> 'a::len0 wordinterval" where
"wordinterval_union r1 r2 = RangeUnion r1 r2"
lemma wordinterval_union_set_eq[simp]:
"wordinterval_to_set (wordinterval_union r1 r2) = wordinterval_to_set r1 \<union> wordinterval_to_set r2"
unfolding wordinterval_union_def by simp
fun wordinterval_empty :: "'a::len0 wordinterval \<Rightarrow> bool" where
"wordinterval_empty (WordInterval s e) \<longleftrightarrow> e < s" |
"wordinterval_empty (RangeUnion r1 r2) \<longleftrightarrow> wordinterval_empty r1 \<and> wordinterval_empty r2"
lemma wordinterval_empty_set_eq[simp]: "wordinterval_empty r \<longleftrightarrow> wordinterval_to_set r = {}"
by(induction r) auto
definition Empty_WordInterval :: "'a::len wordinterval" where
"Empty_WordInterval \<equiv> WordInterval 1 0"
lemma wordinterval_empty_Empty_WordInterval: "wordinterval_empty Empty_WordInterval"
by(simp add: Empty_WordInterval_def)
lemma Empty_WordInterval_set_eq[simp]: "wordinterval_to_set Empty_WordInterval = {}"
by(simp add: Empty_WordInterval_def)
subsection\<open>WordInterval and Lists\<close>
text\<open>A list of \<open>(start, end)\<close> tuples.\<close>
text\<open>wordinterval to list\<close>
fun wi2l :: "'a::len0 wordinterval \<Rightarrow> ('a::len0 word \<times> 'a::len0 word) list" where
"wi2l (RangeUnion r1 r2) = wi2l r1 @ wi2l r2" |
"wi2l (WordInterval s e) = (if e < s then [] else [(s,e)])"
text\<open>list to wordinterval\<close>
fun l2wi :: "('a::len word \<times> 'a word) list \<Rightarrow> 'a wordinterval" where
"l2wi [] = Empty_WordInterval" |
"l2wi [(s,e)] = (WordInterval s e)" |
"l2wi ((s,e)#rs) = (RangeUnion (WordInterval s e) (l2wi rs))"
lemma l2wi_append: "wordinterval_to_set (l2wi (l1@l2)) =
wordinterval_to_set (l2wi l1) \<union> wordinterval_to_set (l2wi l2)"
proof(induction l1 arbitrary: l2 rule:l2wi.induct)
case 1 thus ?case by simp
next
case (2 s e l2) thus ?case by (cases l2) simp_all
next
case 3 thus ?case by force
qed
lemma l2wi_wi2l[simp]: "wordinterval_to_set (l2wi (wi2l r)) = wordinterval_to_set r"
by(induction r) (simp_all add: l2wi_append)
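(* Sanity-check example (not part of the original theory; relies on the same eval setup
   as the examples below): converting a list of intervals to a wordinterval and back
   yields the original list when it is already free of empty intervals. *)
lemma "wi2l (l2wi [((1::16 word), 3), (8, 10)]) = [(1, 3), (8, 10)]" by eval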
lemma l2wi: "wordinterval_to_set (l2wi l) = (\<Union> (i,j) \<in> set l. {i .. j})"
by(induction l rule: l2wi.induct, simp_all)
lemma wi2l: "(\<Union>(i,j)\<in>set (wi2l r). {i .. j}) = wordinterval_to_set r"
by(induction r rule: wi2l.induct, simp_all)
lemma l2wi_remdups[simp]: "wordinterval_to_set (l2wi (remdups ls)) = wordinterval_to_set (l2wi ls)"
by(simp add: l2wi)
lemma wi2l_empty[simp]: "wi2l Empty_WordInterval = []"
unfolding Empty_WordInterval_def
by simp
subsection\<open>Optimizing and minimizing @{typ "('a::len) wordinterval"}s\<close>
text\<open>Removing empty intervals\<close>
context
begin
fun wordinterval_optimize_empty :: "'a::len0 wordinterval \<Rightarrow> 'a wordinterval" where
"wordinterval_optimize_empty (RangeUnion r1 r2) = (let r1o = wordinterval_optimize_empty r1;
r2o = wordinterval_optimize_empty r2
in if
wordinterval_empty r1o
then
r2o
else if
wordinterval_empty r2o
then
r1o
else
RangeUnion r1o r2o)" |
"wordinterval_optimize_empty r = r"
lemma wordinterval_optimize_empty_set_eq[simp]:
"wordinterval_to_set (wordinterval_optimize_empty r) = wordinterval_to_set r"
by(induction r) (simp_all add: Let_def)
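(* Small example (not part of the original theory; uses the eval setup from the examples
   below): an empty leaf interval is dropped by the optimization. *)
lemma "wordinterval_optimize_empty
         (RangeUnion (WordInterval (1::16 word) 3) (WordInterval 1 0)) = WordInterval 1 3"
  by eval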
lemma wordinterval_optimize_empty_double:
"wordinterval_optimize_empty (wordinterval_optimize_empty r) = wordinterval_optimize_empty r"
by(induction r) (simp_all add: Let_def)
private fun wordinterval_empty_shallow :: "'a::len0 wordinterval \<Rightarrow> bool" where
"wordinterval_empty_shallow (WordInterval s e) \<longleftrightarrow> e < s" |
"wordinterval_empty_shallow (RangeUnion _ _) \<longleftrightarrow> False"
private lemma helper_optimize_shallow:
"wordinterval_empty_shallow (wordinterval_optimize_empty r) =
wordinterval_empty (wordinterval_optimize_empty r)"
by(induction r) fastforce+
private fun wordinterval_optimize_empty2 where
"wordinterval_optimize_empty2 (RangeUnion r1 r2) = (let r1o = wordinterval_optimize_empty r1;
r2o = wordinterval_optimize_empty r2
in if
wordinterval_empty_shallow r1o
then
r2o
else if
wordinterval_empty_shallow r2o
then
r1o
else
RangeUnion r1o r2o)" |
"wordinterval_optimize_empty2 r = r"
lemma wordinterval_optimize_empty_code[code_unfold]:
"wordinterval_optimize_empty = wordinterval_optimize_empty2"
by (subst fun_eq_iff, clarify, rename_tac r, induct_tac r)
(unfold wordinterval_optimize_empty.simps wordinterval_optimize_empty2.simps
Let_def helper_optimize_shallow, simp_all)
end
text\<open>Merging overlapping intervals\<close>
context
begin
private definition disjoint :: "'a set \<Rightarrow> 'a set \<Rightarrow> bool" where
"disjoint A B \<equiv> A \<inter> B = {}"
private primrec interval_of :: "('a::len0) word \<times> 'a word \<Rightarrow> 'a word set" where
"interval_of (s,e) = {s .. e}"
declare interval_of.simps[simp del]
private definition disjoint_intervals
:: "(('a::len0) word \<times> ('a::len0) word) \<Rightarrow> ('a word \<times> 'a word) \<Rightarrow> bool"
where
"disjoint_intervals A B \<equiv> disjoint (interval_of A) (interval_of B)"
private definition not_disjoint_intervals
:: "(('a::len0) word \<times> ('a::len0) word) \<Rightarrow> ('a word \<times> 'a word) \<Rightarrow> bool"
where
"not_disjoint_intervals A B \<equiv> \<not> disjoint (interval_of A) (interval_of B)"
private lemma [code]:
"not_disjoint_intervals A B =
(case A of (s,e) \<Rightarrow> case B of (s',e') \<Rightarrow> s \<le> e' \<and> s' \<le> e \<and> s \<le> e \<and> s' \<le> e')"
apply(cases A, cases B)
apply(simp add: not_disjoint_intervals_def interval_of.simps disjoint_def)
done
private lemma [code]:
"disjoint_intervals A B =
(case A of (s,e) \<Rightarrow> case B of (s',e') \<Rightarrow> s > e' \<or> s' > e \<or> s > e \<or> s' > e')"
apply(cases A, cases B)
apply(simp add: disjoint_intervals_def interval_of.simps disjoint_def)
by fastforce
text\<open>BEGIN merging overlapping intervals\<close>
(*result has no empty intervals and all are disjoint.
merging things such as [1,7] [8,10] would still be possible*)
private fun merge_overlap
:: "(('a::len0) word \<times> ('a::len0) word) \<Rightarrow> ('a word \<times> 'a word) list \<Rightarrow> ('a word \<times> 'a word) list"
where
"merge_overlap s [] = [s]" |
"merge_overlap (s,e) ((s',e')#ss) = (
if not_disjoint_intervals (s,e) (s',e')
then (min s s', max e e')#ss
else (s',e')#merge_overlap (s,e) ss)"
private lemma not_disjoint_union:
fixes s :: "('a::len0) word"
shows "\<not> disjoint {s..e} {s'..e'} \<Longrightarrow> {s..e} \<union> {s'..e'} = {min s s' .. max e e'}"
by(auto simp add: disjoint_def min_def max_def)
private lemma disjoint_subset: "disjoint A B \<Longrightarrow> A \<subseteq> B \<union> C \<Longrightarrow> A \<subseteq> C"
unfolding disjoint_def
by blast
private lemma merge_overlap_helper1: "interval_of A \<subseteq> (\<Union>s \<in> set ss. interval_of s) \<Longrightarrow>
(\<Union>s \<in> set (merge_overlap A ss). interval_of s) = (\<Union>s \<in> set ss. interval_of s)"
apply(induction ss)
apply(simp; fail)
apply(rename_tac x xs)
apply(cases A, rename_tac a b)
apply(case_tac x)
apply(simp add: not_disjoint_intervals_def interval_of.simps)
apply(intro impI conjI)
apply(drule not_disjoint_union)
apply blast
apply(drule_tac C="(\<Union>x\<in>set xs. interval_of x)" in disjoint_subset)
apply(simp_all)
done
private lemma merge_overlap_helper2: "\<exists>s'\<in>set ss. \<not> disjoint (interval_of A) (interval_of s') \<Longrightarrow>
interval_of A \<union> (\<Union>s \<in> set ss. interval_of s) = (\<Union>s \<in> set (merge_overlap A ss). interval_of s)"
apply(induction ss)
apply(simp; fail)
apply(rename_tac x xs)
apply(cases A, rename_tac a b)
apply(case_tac x)
apply(simp add: not_disjoint_intervals_def interval_of.simps)
apply(intro impI conjI)
apply(drule not_disjoint_union)
apply blast
apply(simp)
by blast
private lemma merge_overlap_length:
"\<exists>s' \<in> set ss. \<not> disjoint (interval_of A) (interval_of s') \<Longrightarrow>
length (merge_overlap A ss) = length ss"
apply(induction ss)
apply(simp)
apply(rename_tac x xs)
apply(cases A, rename_tac a b)
apply(case_tac x)
apply(simp add: not_disjoint_intervals_def interval_of.simps)
done
lemma "merge_overlap (1:: 16 word,2) [(1, 7)] = [(1, 7)]" by eval
lemma "merge_overlap (1:: 16 word,2) [(2, 7)] = [(1, 7)]" by eval
lemma "merge_overlap (1:: 16 word,2) [(3, 7)] = [(3, 7), (1,2)]" by eval
private function listwordinterval_compress
:: "(('a::len0) word \<times> ('a::len0) word) list \<Rightarrow> ('a word \<times> 'a word) list" where
"listwordinterval_compress [] = []" |
"listwordinterval_compress (s#ss) = (
if \<forall>s' \<in> set ss. disjoint_intervals s s'
then s#listwordinterval_compress ss
else listwordinterval_compress (merge_overlap s ss))"
by(pat_completeness, auto)
private termination listwordinterval_compress
apply (relation "measure length")
apply(rule wf_measure)
apply(simp)
using disjoint_intervals_def merge_overlap_length by fastforce
private lemma listwordinterval_compress:
"(\<Union>s \<in> set (listwordinterval_compress ss). interval_of s) = (\<Union>s \<in> set ss. interval_of s)"
apply(induction ss rule: listwordinterval_compress.induct)
apply(simp)
apply(simp)
apply(intro impI)
apply(simp add: disjoint_intervals_def)
apply(drule merge_overlap_helper2)
apply(simp)
done
lemma "listwordinterval_compress [(1::32 word,3), (8,10), (2,5), (3,7)] = [(8, 10), (1, 7)]"
by eval
private lemma A_in_listwordinterval_compress: "A \<in> set (listwordinterval_compress ss) \<Longrightarrow>
interval_of A \<subseteq> (\<Union>s \<in> set ss. interval_of s)"
using listwordinterval_compress by blast
private lemma listwordinterval_compress_disjoint:
"A \<in> set (listwordinterval_compress ss) \<Longrightarrow> B \<in> set (listwordinterval_compress ss) \<Longrightarrow>
A \<noteq> B \<Longrightarrow> disjoint (interval_of A) (interval_of B)"
apply(induction ss arbitrary: rule: listwordinterval_compress.induct)
apply(simp)
apply(simp split: if_split_asm)
apply(elim disjE)
apply(simp_all)
apply(simp_all add: disjoint_intervals_def disjoint_def)
- apply(thin_tac [!] "False \<Longrightarrow> _ \<Longrightarrow> _ \<Longrightarrow> _")
apply(blast dest: A_in_listwordinterval_compress)+
done
text\<open>END merging overlapping intervals\<close>
text\<open>BEGIN merging adjacent intervals\<close>
private fun merge_adjacent
:: "(('a::len) word \<times> ('a::len) word) \<Rightarrow> ('a word \<times> 'a word) list \<Rightarrow> ('a word \<times> 'a word) list"
where
"merge_adjacent s [] = [s]" |
"merge_adjacent (s,e) ((s',e')#ss) = (
if s \<le>e \<and> s' \<le> e' \<and> word_next e = s'
then (s, e')#ss
else if s \<le>e \<and> s' \<le> e' \<and> word_next e' = s
then (s', e)#ss
else (s',e')#merge_adjacent (s,e) ss)"
private lemma merge_adjacent_helper:
"interval_of A \<union> (\<Union>s \<in> set ss. interval_of s) = (\<Union>s \<in> set (merge_adjacent A ss). interval_of s)"
apply(induction ss)
apply(simp; fail)
apply(rename_tac x xs)
apply(cases A, rename_tac a b)
apply(case_tac x)
apply(simp add: interval_of.simps)
apply(intro impI conjI)
apply (metis Un_assoc word_adjacent_union)
apply(elim conjE)
apply(drule(2) word_adjacent_union)
subgoal by (blast)
subgoal by (metis word_adjacent_union Un_assoc)
by blast
private lemma merge_adjacent_length:
"\<exists>(s', e')\<in>set ss. s \<le> e \<and> s' \<le> e' \<and> (word_next e = s' \<or> word_next e' = s)
\<Longrightarrow> length (merge_adjacent (s,e) ss) = length ss"
apply(induction ss)
apply(simp)
apply(rename_tac x xs)
apply(case_tac x)
apply(simp add: )
by blast
private function listwordinterval_adjacent
:: "(('a::len) word \<times> ('a::len) word) list \<Rightarrow> ('a word \<times> 'a word) list" where
"listwordinterval_adjacent [] = []" |
"listwordinterval_adjacent ((s,e)#ss) = (
if \<forall>(s',e') \<in> set ss. \<not> (s \<le>e \<and> s' \<le> e' \<and> (word_next e = s' \<or> word_next e' = s))
then (s,e)#listwordinterval_adjacent ss
else listwordinterval_adjacent (merge_adjacent (s,e) ss))"
by(pat_completeness, auto)
private termination listwordinterval_adjacent
apply (relation "measure length")
apply(rule wf_measure)
apply(simp)
apply(simp)
using merge_adjacent_length by fastforce
private lemma listwordinterval_adjacent:
"(\<Union>s \<in> set (listwordinterval_adjacent ss). interval_of s) = (\<Union>s \<in> set ss. interval_of s)"
apply(induction ss rule: listwordinterval_adjacent.induct)
apply(simp)
apply(simp add: merge_adjacent_helper)
done
lemma "listwordinterval_adjacent [(1::16 word, 3), (5, 10), (10,10), (4,4)] = [(10, 10), (1, 10)]"
by eval
text\<open>END merging adjacent intervals\<close>
definition wordinterval_compress :: "('a::len) wordinterval \<Rightarrow> 'a wordinterval" where
"wordinterval_compress r \<equiv>
l2wi (remdups (listwordinterval_adjacent (listwordinterval_compress
(wi2l (wordinterval_optimize_empty r)))))"
text\<open>Correctness: Compression preserves semantics\<close>
lemma wordinterval_compress:
"wordinterval_to_set (wordinterval_compress r) = wordinterval_to_set r"
unfolding wordinterval_compress_def
proof -
have interval_of': "interval_of s = (case s of (s,e) \<Rightarrow> {s .. e})" for s
by (cases s) (simp add: interval_of.simps)
have "wordinterval_to_set (l2wi (remdups (listwordinterval_adjacent
(listwordinterval_compress (wi2l (wordinterval_optimize_empty r)))))) =
(\<Union>x\<in>set (listwordinterval_adjacent (listwordinterval_compress
(wi2l (wordinterval_optimize_empty r)))). interval_of x)"
by (force simp: interval_of' l2wi)
also have "\<dots> = (\<Union>s\<in>set (wi2l (wordinterval_optimize_empty r)). interval_of s)"
by(simp add: listwordinterval_compress listwordinterval_adjacent)
also have "\<dots> = (\<Union>(i, j)\<in>set (wi2l (wordinterval_optimize_empty r)). {i..j})"
by(simp add: interval_of')
also have "\<dots> = wordinterval_to_set r" by(simp add: wi2l)
finally show "wordinterval_to_set
(l2wi (remdups (listwordinterval_adjacent (listwordinterval_compress
(wi2l (wordinterval_optimize_empty r))))))
= wordinterval_to_set r" .
qed
end
text\<open>Example\<close>
lemma "(wi2l \<circ> (wordinterval_compress :: 32 wordinterval \<Rightarrow> 32 wordinterval) \<circ> l2wi)
[(70, 80001), (0,0), (150, 8000), (1,3), (42,41), (3,7), (56, 200), (8,10)] =
[(56, 80001), (0, 10)]" by eval
lemma "wordinterval_compress (RangeUnion (RangeUnion (WordInterval (1::32 word) 5)
(WordInterval 8 10)) (WordInterval 3 7)) =
WordInterval 1 10" by eval
subsection\<open>Further operations\<close>
text\<open>\<open>\<Union>\<close>\<close>
definition wordinterval_Union :: "('a::len) wordinterval list \<Rightarrow> 'a wordinterval" where
"wordinterval_Union ws = wordinterval_compress (foldr wordinterval_union ws Empty_WordInterval)"
lemma wordinterval_Union:
"wordinterval_to_set (wordinterval_Union ws) = (\<Union> w \<in> (set ws). wordinterval_to_set w)"
by(induction ws) (simp_all add: wordinterval_compress wordinterval_Union_def)
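(* Small example (not part of the original theory; relies on the eval setup used
   elsewhere in this file): the adjacent intervals [1,3] and [4,5] are merged by the
   compression step into the single interval [1,5]. *)
lemma "wi2l (wordinterval_Union [WordInterval (1::32 word) 3, WordInterval 4 5]) = [(1, 5)]"
  by eval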
context
begin
private fun wordinterval_setminus'
:: "'a::len wordinterval \<Rightarrow> 'a wordinterval \<Rightarrow> 'a wordinterval" where
"wordinterval_setminus' (WordInterval s e) (WordInterval ms me) = (
if s > e \<or> ms > me then WordInterval s e else
if me \<ge> e
then
WordInterval (if ms = 0 then 1 else s) (min e (word_prev ms))
else if ms \<le> s
then
WordInterval (max s (word_next me)) (if me = max_word then 0 else e)
else
RangeUnion (WordInterval (if ms = 0 then 1 else s) (word_prev ms))
(WordInterval (word_next me) (if me = max_word then 0 else e))
)" |
"wordinterval_setminus' (RangeUnion r1 r2) t =
RangeUnion (wordinterval_setminus' r1 t) (wordinterval_setminus' r2 t)"|
"wordinterval_setminus' t (RangeUnion r1 r2) =
wordinterval_setminus' (wordinterval_setminus' t r1) r2"
private lemma wordinterval_setminus'_rr_set_eq:
"wordinterval_to_set(wordinterval_setminus' (WordInterval s e) (WordInterval ms me)) =
wordinterval_to_set (WordInterval s e) - wordinterval_to_set (WordInterval ms me)"
apply(simp only: wordinterval_setminus'.simps)
apply(case_tac "e < s")
apply simp
apply(case_tac "me < ms")
apply simp
apply(case_tac [!] "e \<le> me")
apply(case_tac [!] "ms = 0")
apply(case_tac [!] "ms \<le> s")
apply(case_tac [!] "me = max_word")
apply(simp_all add: word_prev_def word_next_def min_def max_def)
apply(safe)
apply(auto)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
apply(uint_arith)
done
private lemma wordinterval_setminus'_set_eq:
"wordinterval_to_set (wordinterval_setminus' r1 r2) =
wordinterval_to_set r1 - wordinterval_to_set r2"
apply(induction rule: wordinterval_setminus'.induct)
using wordinterval_setminus'_rr_set_eq apply blast
apply auto
done
lemma wordinterval_setminus'_empty_struct:
"wordinterval_empty r2 \<Longrightarrow> wordinterval_setminus' r1 r2 = r1"
by(induction r1 r2 rule: wordinterval_setminus'.induct) auto
definition wordinterval_setminus
:: "'a::len wordinterval \<Rightarrow> 'a::len wordinterval \<Rightarrow> 'a::len wordinterval" where
"wordinterval_setminus r1 r2 = wordinterval_compress (wordinterval_setminus' r1 r2)"
lemma wordinterval_setminus_set_eq[simp]: "wordinterval_to_set (wordinterval_setminus r1 r2) =
wordinterval_to_set r1 - wordinterval_to_set r2"
by(simp add: wordinterval_setminus_def wordinterval_compress wordinterval_setminus'_set_eq)
end
definition wordinterval_UNIV :: "'a::len wordinterval" where
"wordinterval_UNIV \<equiv> WordInterval 0 max_word"
lemma wordinterval_UNIV_set_eq[simp]: "wordinterval_to_set wordinterval_UNIV = UNIV"
unfolding wordinterval_UNIV_def
using max_word_max by fastforce
fun wordinterval_invert :: "'a::len wordinterval \<Rightarrow> 'a::len wordinterval" where
"wordinterval_invert r = wordinterval_setminus wordinterval_UNIV r"
lemma wordinterval_invert_set_eq[simp]:
"wordinterval_to_set (wordinterval_invert r) = UNIV - wordinterval_to_set r" by(auto)
lemma wordinterval_invert_UNIV_empty:
"wordinterval_empty (wordinterval_invert wordinterval_UNIV)" by simp
lemma wi2l_univ[simp]: "wi2l wordinterval_UNIV = [(0, max_word)]"
unfolding wordinterval_UNIV_def
by simp
text\<open>\<open>\<inter>\<close>\<close>
context
begin
private lemma "{(s::nat) .. e} \<inter> {s' .. e'} = {} \<longleftrightarrow> s > e' \<or> s' > e \<or> s > e \<or> s' > e'"
by simp linarith
private fun wordinterval_intersection'
:: "'a::len wordinterval \<Rightarrow> 'a::len wordinterval \<Rightarrow> 'a::len wordinterval" where
"wordinterval_intersection' (WordInterval s e) (WordInterval s' e') = (
if s > e \<or> s' > e' \<or> s > e' \<or> s' > e \<or> s > e \<or> s' > e'
then
Empty_WordInterval
else
WordInterval (max s s') (min e e')
)" |
"wordinterval_intersection' (RangeUnion r1 r2) t =
RangeUnion (wordinterval_intersection' r1 t) (wordinterval_intersection' r2 t)"|
"wordinterval_intersection' t (RangeUnion r1 r2) =
RangeUnion (wordinterval_intersection' t r1) (wordinterval_intersection' t r2)"
private lemma wordinterval_intersection'_set_eq:
"wordinterval_to_set (wordinterval_intersection' r1 r2) =
wordinterval_to_set r1 \<inter> wordinterval_to_set r2"
by(induction r1 r2 rule: wordinterval_intersection'.induct) (auto)
lemma "wordinterval_intersection'
(RangeUnion (RangeUnion (WordInterval (1::32 word) 3) (WordInterval 8 10))
(WordInterval 1 3)) (WordInterval 1 3) =
RangeUnion (RangeUnion (WordInterval 1 3) (WordInterval 1 0)) (WordInterval 1 3)" by eval
definition wordinterval_intersection
:: "'a::len wordinterval \<Rightarrow> 'a::len wordinterval \<Rightarrow> 'a::len wordinterval" where
"wordinterval_intersection r1 r2 \<equiv> wordinterval_compress (wordinterval_intersection' r1 r2)"
lemma wordinterval_intersection_set_eq[simp]:
"wordinterval_to_set (wordinterval_intersection r1 r2) =
wordinterval_to_set r1 \<inter> wordinterval_to_set r2"
by(simp add: wordinterval_intersection_def
wordinterval_compress wordinterval_intersection'_set_eq)
lemma "wordinterval_intersection
(RangeUnion (RangeUnion (WordInterval (1::32 word) 3) (WordInterval 8 10))
(WordInterval 1 3)) (WordInterval 1 3) =
WordInterval 1 3" by eval
end
definition wordinterval_subset :: "'a::len wordinterval \<Rightarrow> 'a::len wordinterval \<Rightarrow> bool" where
"wordinterval_subset r1 r2 \<equiv> wordinterval_empty (wordinterval_setminus r1 r2)"
lemma wordinterval_subset_set_eq[simp]:
"wordinterval_subset r1 r2 = (wordinterval_to_set r1 \<subseteq> wordinterval_to_set r2)"
unfolding wordinterval_subset_def by simp
definition wordinterval_eq :: "'a::len wordinterval \<Rightarrow> 'a::len wordinterval \<Rightarrow> bool" where
"wordinterval_eq r1 r2 = (wordinterval_subset r1 r2 \<and> wordinterval_subset r2 r1)"
lemma wordinterval_eq_set_eq:
"wordinterval_eq r1 r2 \<longleftrightarrow> wordinterval_to_set r1 = wordinterval_to_set r2"
unfolding wordinterval_eq_def by auto
thm iffD1[OF wordinterval_eq_set_eq]
(*declare iffD1[OF wordinterval_eq_set_eq, simp]*)
lemma wordinterval_eq_comm: "wordinterval_eq r1 r2 \<longleftrightarrow> wordinterval_eq r2 r1"
unfolding wordinterval_eq_def by fast
lemma wordinterval_to_set_alt: "wordinterval_to_set r = {x. wordinterval_element x r}"
unfolding wordinterval_element_set_eq by blast
lemma wordinterval_un_empty:
"wordinterval_empty r1 \<Longrightarrow> wordinterval_eq (wordinterval_union r1 r2) r2"
by(subst wordinterval_eq_set_eq, simp)
lemma wordinterval_un_emty_b:
"wordinterval_empty r2 \<Longrightarrow> wordinterval_eq (wordinterval_union r1 r2) r1"
by(subst wordinterval_eq_set_eq, simp)
lemma wordinterval_Diff_triv:
"wordinterval_empty (wordinterval_intersection a b) \<Longrightarrow> wordinterval_eq (wordinterval_setminus a b) a"
unfolding wordinterval_eq_set_eq
by simp blast
text\<open>A size measure for the datatype; it does not correspond to the cardinality of the corresponding set.\<close>
fun wordinterval_size :: "('a::len) wordinterval \<Rightarrow> nat" where
"wordinterval_size (RangeUnion a b) = wordinterval_size a + wordinterval_size b" |
"wordinterval_size (WordInterval s e) = (if s \<le> e then 1 else 0)"
lemma wordinterval_size_length: "wordinterval_size r = length (wi2l r)"
by(induction r) (auto)
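(* Example (not part of the original theory): the size counts the non-empty leaf
   intervals of the datatype, here two, regardless of how many words they contain. *)
lemma "wordinterval_size (RangeUnion (WordInterval (1::16 word) 5) (WordInterval 8 10)) = 2"
  by eval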
lemma Ex_wordinterval_nonempty: "\<exists>x::('a::len wordinterval). y \<in> wordinterval_to_set x"
proof show "y \<in> wordinterval_to_set wordinterval_UNIV" by simp qed
lemma wordinterval_eq_reflp:
"reflp wordinterval_eq"
apply(rule reflpI)
by(simp only: wordinterval_eq_set_eq)
lemma wordintervalt_eq_symp:
"symp wordinterval_eq"
apply(rule sympI)
by(simp add: wordinterval_eq_comm)
lemma wordinterval_eq_transp:
"transp wordinterval_eq"
apply(rule transpI)
by(simp only: wordinterval_eq_set_eq)
lemma wordinterval_eq_equivp:
"equivp wordinterval_eq"
by (auto intro: equivpI wordinterval_eq_reflp wordintervalt_eq_symp wordinterval_eq_transp)
text\<open>The smallest element in the interval\<close>
definition is_lowest_element :: "'a::ord \<Rightarrow> 'a set \<Rightarrow> bool" where
"is_lowest_element x S = (x \<in> S \<and> (\<forall>y\<in>S. y \<le> x \<longrightarrow> y = x))"
lemma
fixes x :: "'a :: complete_lattice"
assumes "x \<in> S"
shows " x = Inf S \<Longrightarrow> is_lowest_element x S"
using assms apply(simp add: is_lowest_element_def)
by (simp add: Inf_lower eq_iff)
lemma
fixes x :: "'a :: linorder"
assumes "finite S" and "x \<in> S"
shows "is_lowest_element x S \<longleftrightarrow> x = Min S"
apply(rule)
subgoal
apply(simp add: is_lowest_element_def)
apply(subst Min_eqI[symmetric])
using assms by(auto)
by (metis Min.coboundedI assms(1) assms(2) dual_order.antisym is_lowest_element_def)
text\<open>Smallest element in the interval\<close>
fun wordinterval_lowest_element :: "'a::len0 wordinterval \<Rightarrow> 'a word option" where
"wordinterval_lowest_element (WordInterval s e) = (if s \<le> e then Some s else None)" |
"wordinterval_lowest_element (RangeUnion A B) =
(case (wordinterval_lowest_element A, wordinterval_lowest_element B) of
(Some a, Some b) \<Rightarrow> Some (if a < b then a else b) |
(None, Some b) \<Rightarrow> Some b |
(Some a, None) \<Rightarrow> Some a |
(None, None) \<Rightarrow> None)"
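(* Example (not part of the original theory; uses the eval setup from this file): the
   lowest element of a RangeUnion is the smaller of the lowest elements of its parts. *)
lemma "wordinterval_lowest_element
         (RangeUnion (WordInterval (5::16 word) 7) (WordInterval 2 3)) = Some 2"
  by eval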
lemma wordinterval_lowest_none_empty: "wordinterval_lowest_element r = None \<longleftrightarrow> wordinterval_empty r"
proof(induction r)
case WordInterval thus ?case by simp
next
case RangeUnion thus ?case by fastforce
qed
lemma wordinterval_lowest_element_correct_A:
"wordinterval_lowest_element r = Some x \<Longrightarrow> is_lowest_element x (wordinterval_to_set r)"
unfolding is_lowest_element_def
apply(induction r arbitrary: x rule: wordinterval_lowest_element.induct)
apply(rename_tac rs re x, case_tac "rs \<le> re", auto)[1]
apply(subst(asm) wordinterval_lowest_element.simps(2))
apply(rename_tac A B x)
apply(case_tac "wordinterval_lowest_element B")
apply(case_tac[!] "wordinterval_lowest_element A")
apply(simp_all add: wordinterval_lowest_none_empty)[3]
apply fastforce
done
lemma wordinterval_lowest_element_set_eq: assumes "\<not> wordinterval_empty r"
shows "(wordinterval_lowest_element r = Some x) = (is_lowest_element x (wordinterval_to_set r))"
(*unfolding is_lowest_element_def*)
proof(rule iffI)
assume "wordinterval_lowest_element r = Some x"
thus "is_lowest_element x (wordinterval_to_set r)"
using wordinterval_lowest_element_correct_A wordinterval_lowest_none_empty by simp
next
assume "is_lowest_element x (wordinterval_to_set r)"
with assms show "(wordinterval_lowest_element r = Some x)"
proof(induction r arbitrary: x rule: wordinterval_lowest_element.induct)
case 1 thus ?case by(simp add: is_lowest_element_def)
next
case (2 A B x)
have is_lowest_RangeUnion: "is_lowest_element x (wordinterval_to_set A \<union> wordinterval_to_set B) \<Longrightarrow>
is_lowest_element x (wordinterval_to_set A) \<or> is_lowest_element x (wordinterval_to_set B)"
by(simp add: is_lowest_element_def)
(*why \<And> A B?*)
have wordinterval_lowest_element_RangeUnion:
"\<And>a b A B. wordinterval_lowest_element A = Some a \<Longrightarrow>
wordinterval_lowest_element B = Some b \<Longrightarrow>
wordinterval_lowest_element (RangeUnion A B) = Some (min a b)"
by(auto dest!: wordinterval_lowest_element_correct_A simp add: is_lowest_element_def min_def)
from 2 show ?case
apply(case_tac "wordinterval_lowest_element B")
apply(case_tac[!] "wordinterval_lowest_element A")
apply(auto simp add: is_lowest_element_def)[3]
apply(subgoal_tac "\<not> wordinterval_empty A \<and> \<not> wordinterval_empty B")
prefer 2
using arg_cong[where f = Not, OF wordinterval_lowest_none_empty] apply blast
apply(drule(1) wordinterval_lowest_element_RangeUnion)
apply(simp split: option.split_asm add: min_def)
apply(drule is_lowest_RangeUnion)
apply(elim disjE)
apply(simp add: is_lowest_element_def)
apply(clarsimp simp add: wordinterval_lowest_none_empty)
apply(simp add: is_lowest_element_def)
apply(clarsimp simp add: wordinterval_lowest_none_empty)
using wordinterval_lowest_element_correct_A[simplified is_lowest_element_def]
by (metis Un_iff not_le)
qed
qed
text\<open>Cardinality approximation for @{typ "('a::len) wordinterval"}s\<close>
context
begin
lemma card_atLeastAtMost_word: fixes s::"('a::len) word" shows "card {s..e} = Suc (unat e) - (unat s)"
apply(cases "s > e")
apply(simp)
apply(subst(asm) Word.word_less_nat_alt)
apply simp
apply(subst Word_Lemmas.upto_enum_set_conv2[symmetric])
apply(subst List.card_set)
apply(simp add: remdups_enum_upto)
done
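(*A hedged sanity check of the equation above, not part of the original entry:
  on 8-bit words, card {2..5} = 4 = Suc (unat 5) - unat 2; and if s > e the
  interval {s..e} is empty while Suc (unat e) - unat s truncates to 0 in the
  naturals, so both sides agree in the degenerate case as well.*)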
fun wordinterval_card :: "('a::len) wordinterval \<Rightarrow> nat" where
"wordinterval_card (WordInterval s e) = Suc (unat e) - (unat s)" |
"wordinterval_card (RangeUnion a b) = wordinterval_card a + wordinterval_card b"
lemma wordinterval_card: "wordinterval_card r \<ge> card (wordinterval_to_set r)"
proof(induction r)
case WordInterval thus ?case by (simp add: card_atLeastAtMost_word)
next
case (RangeUnion r1 r2)
have "card (wordinterval_to_set r1 \<union> wordinterval_to_set r2) \<le>
card (wordinterval_to_set r1) + card (wordinterval_to_set r2)"
using Finite_Set.card_Un_le by blast
with RangeUnion show ?case by(simp)
qed
text\<open>With @{thm wordinterval_compress} it should be possible to get the exact cardinality\<close>
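(*A hedged sketch of how an exact count might be obtained, assuming that
  wordinterval_compress yields a representation whose constituent intervals are
  pairwise disjoint (illustration only, not part of the original entry):
    definition "wordinterval_card_exact r \<equiv> wordinterval_card (wordinterval_compress r)"
  One would then aim to prove
    wordinterval_card_exact r = card (wordinterval_to_set r).*)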
end
end
diff --git a/thys/Integration/MonConv.thy b/thys/Integration/MonConv.thy
--- a/thys/Integration/MonConv.thy
+++ b/thys/Integration/MonConv.thy
@@ -1,279 +1,279 @@
subsection \<open>Monotone Convergence \label{sec:monconv}\<close>
theory MonConv
imports Complex_Main
begin
text \<open>A sensible requirement for an integral operator is that it be
``well-behaved'' with respect to limit functions. To become just a
little more
precise, it is expected that the limit operator may be interchanged
with the integral operator under conditions that are as weak as
possible. To this
end, the notion of monotone convergence is introduced and later
applied in the definition of the integral.
In fact, we distinguish three types of monotone convergence here:
There are converging sequences of real numbers, real functions and
sets. Monotone convergence could even be defined more generally for
any type in the axiomatic type class\footnote{For the concept of axiomatic type
classes, see \cite{Nipkow93,wenzelax}} \<open>ord\<close> of ordered
types like this.
@{prop "mon_conv u f \<equiv> (\<forall>n. u n \<le> u (Suc n)) \<and> Sup (range u) = f"}
However, this employs the general concept of a least upper bound.
For the special types we have in mind, the more specific
limit --- respectively, union --- operators are available, together with many theorems
about their properties. For the type of real- (or rather ordered-) valued functions,
the less-or-equal relation is defined pointwise.
@{thm le_fun_def [no_vars]}
\<close>
(*monotone convergence*)
text \<open>Now the foundations are laid for the definition of monotone
convergence. To express the similarity of the different types of
convergence, a single overloaded operator is used.\<close>
consts
mon_conv:: "(nat \<Rightarrow> 'a) \<Rightarrow> 'a::ord \<Rightarrow> bool" ("_\<up>_" [60,61] 60)
overloading
mon_conv_real \<equiv> "mon_conv :: _ \<Rightarrow> real \<Rightarrow> bool"
mon_conv_real_fun \<equiv> "mon_conv :: _ \<Rightarrow> ('a \<Rightarrow> real) \<Rightarrow> bool"
mon_conv_set \<equiv> "mon_conv :: _ \<Rightarrow> 'a set \<Rightarrow> bool"
begin
definition "x\<up>(y::real) \<equiv> (\<forall>n. x n \<le> x (Suc n)) \<and> x \<longlonglongrightarrow> y"
definition "u\<up>(f::'a \<Rightarrow> real) \<equiv> (\<forall>n. u n \<le> u (Suc n)) \<and> (\<forall>w. (\<lambda>n. u n w) \<longlonglongrightarrow> f w)"
definition "A\<up>(B::'a set) \<equiv> (\<forall>n. A n \<le> A (Suc n)) \<and> B = (\<Union>n. A n)"
end
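(*A hedged illustration of the set instance, not part of the original theory:
  the chain (\<lambda>n. {0..n::nat}) is monotone since {0..n} \<subseteq> {0..Suc n}, and its
  union (\<Union>n. {0..n}) is UNIV, so by the definition above one expects
  (\<lambda>n. {0..n::nat}) \<up> (UNIV::nat set) to hold.*)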
theorem realfun_mon_conv_iff: "(u\<up>f) = (\<forall>w. (\<lambda>n. u n w)\<up>((f w)::real))"
by (auto simp add: mon_conv_real_def mon_conv_real_fun_def le_fun_def)
text \<open>The long arrow signifies convergence of real sequences as
defined in the theory \<open>SEQ\<close> \cite{Fleuriot:2000:MNR}. Monotone convergence
for real functions is simply pointwise monotone convergence.
Quite a few properties of these definitions will be necessary later,
and they are listed now, giving only a few select proofs.\<close>
(*This theorem, too, could be proved just the same for any ord
Type!*)
lemma assumes mon_conv: "x\<up>(y::real)"
shows mon_conv_mon: "(x i) \<le> (x (m+i))"
(*<*)proof (induct m)
case 0
show ?case by simp
next
case (Suc n)
also
from mon_conv have "x (n+i) \<le> x (Suc n+i)"
by (simp add: mon_conv_real_def)
finally show ?case .
qed(*>*)
lemma limseq_shift_iff: "(\<lambda>m. x (m+i)) \<longlonglongrightarrow> y = x \<longlonglongrightarrow> y"
(*<*)proof (induct i)
case 0 show ?case by simp
next
case (Suc n)
also have "(\<lambda>m. x (m + n)) \<longlonglongrightarrow> y = (\<lambda>m. x (Suc m + n)) \<longlonglongrightarrow> y"
- by (rule LIMSEQ_Suc_iff[THEN sym])
+ by (rule filterlim_sequentially_Suc[THEN sym])
also have "\<dots> = (\<lambda>m. x (m + Suc n)) \<longlonglongrightarrow> y"
by simp
finally show ?case .
qed(*>*)
(*This, too, could be established in general*)
theorem assumes mon_conv: "x\<up>(y::real)"
shows real_mon_conv_le: "x i \<le> y"
proof -
from mon_conv have "(\<lambda>m. x (m+i)) \<longlonglongrightarrow> y"
by (simp add: mon_conv_real_def limseq_shift_iff)
also from mon_conv have "\<forall>m\<ge>0. x i \<le> x (m+i)" by (simp add: mon_conv_mon)
ultimately show ?thesis by (rule LIMSEQ_le_const[OF _ exI[where x=0]])
qed
theorem assumes mon_conv: "x\<up>(y::('a \<Rightarrow> real))"
shows realfun_mon_conv_le: "x i \<le> y"
proof -
{fix w
from mon_conv have "(\<lambda>i. x i w)\<up>(y w)"
by (simp add: realfun_mon_conv_iff)
hence "x i w \<le> y w"
by (rule real_mon_conv_le)
}
thus ?thesis by (simp add: le_fun_def)
qed
lemma assumes mon_conv: "x\<up>(y::real)"
and less: "z < y"
shows real_mon_conv_outgrow: "\<exists>n. \<forall>m. n \<le> m \<longrightarrow> z < x m"
proof -
from less have less': "0 < y-z"
by simp
have "\<exists>n.\<forall>m. n \<le> m \<longrightarrow> \<bar>x m - y\<bar> < y - z"
proof -
from mon_conv have aux: "\<And>r. r > 0 \<Longrightarrow> \<exists>n. \<forall>m. n \<le> m \<longrightarrow> \<bar>x m - y\<bar> < r"
unfolding mon_conv_real_def lim_sequentially dist_real_def by auto
with less' show "\<exists>n. \<forall>m. n \<le> m \<longrightarrow> \<bar>x m - y\<bar> < y - z" by auto
qed
also
{ fix m
from mon_conv have "x m \<le> y"
by (rule real_mon_conv_le)
hence "\<bar>x m - y\<bar> = y - x m"
by arith
also assume "\<bar>x m - y\<bar> < y - z"
ultimately have "z < x m"
by arith
}
ultimately show ?thesis
by blast
qed
theorem real_mon_conv_times:
assumes xy: "x\<up>(y::real)" and nn: "0\<le>z"
shows "(\<lambda>m. z*x m)\<up>(z*y)"
(*<*)proof -
from assms have "\<And>n. z*x n \<le> z*x (Suc n)"
by (simp add: mon_conv_real_def mult_left_mono)
also from xy have "(\<lambda>m. z*x m)\<longlonglongrightarrow>(z*y)"
by (simp add: mon_conv_real_def tendsto_const tendsto_mult)
ultimately show ?thesis by (simp add: mon_conv_real_def)
qed(*>*)
theorem realfun_mon_conv_times:
assumes xy: "x\<up>(y::'a\<Rightarrow>real)" and nn: "0\<le>z"
shows "(\<lambda>m w. z*x m w)\<up>(\<lambda>w. z*y w)"
(*<*)proof -
from assms have "\<And>w. (\<lambda>m. z*x m w)\<up>(z*y w)"
by (simp add: realfun_mon_conv_iff real_mon_conv_times)
thus ?thesis by (auto simp add: realfun_mon_conv_iff)
qed(*>*)
theorem real_mon_conv_add:
assumes xy: "x\<up>(y::real)" and ab: "a\<up>(b::real)"
shows "(\<lambda>m. x m + a m)\<up>(y + b)"
(*<*)proof -
{ fix n
from assms have "x n \<le> x (Suc n)" and "a n \<le> a (Suc n)"
by (simp_all add: mon_conv_real_def)
hence "x n + a n \<le> x (Suc n) + a (Suc n)"
by simp
}
also from assms have "(\<lambda>m. x m + a m)\<longlonglongrightarrow>(y + b)" by (simp add: mon_conv_real_def tendsto_add)
ultimately show ?thesis by (simp add: mon_conv_real_def)
qed(*>*)
theorem realfun_mon_conv_add:
assumes xy: "x\<up>(y::'a\<Rightarrow>real)" and ab: "a\<up>(b::'a \<Rightarrow> real)"
shows "(\<lambda>m w. x m w + a m w)\<up>(\<lambda>w. y w + b w)"
(*<*)proof -
from assms have "\<And>w. (\<lambda>m. x m w + a m w)\<up>(y w + b w)"
by (simp add: realfun_mon_conv_iff real_mon_conv_add)
thus ?thesis by (auto simp add: realfun_mon_conv_iff)
qed(*>*)
theorem real_mon_conv_bound:
assumes mon: "\<And>n. c n \<le> c (Suc n)"
and bound: "\<And>n. c n \<le> (x::real)"
shows "\<exists>l. c\<up>l \<and> l\<le>x"
proof -
from incseq_convergent[of c x] mon bound
obtain l where "c \<longlonglongrightarrow> l" "\<forall>i. c i \<le> l"
by (auto simp: incseq_Suc_iff)
moreover \<comment> \<open>This is like $\isacommand{also}$ but lacks the transitivity step.\<close>
with bound have "l \<le> x"
by (intro LIMSEQ_le_const2) auto
ultimately show ?thesis
by (auto simp: mon_conv_real_def mon)
qed
theorem real_mon_conv_dom:
assumes xy: "x\<up>(y::real)" and mon: "\<And>n. c n \<le> c (Suc n)"
and dom: "c \<le> x"
shows "\<exists>l. c\<up>l \<and> l\<le>y"
proof -
from dom have "\<And>n. c n \<le> x n" by (simp add: le_fun_def)
also from xy have "\<And>n. x n \<le> y" by (simp add: real_mon_conv_le)
also note mon
ultimately show ?thesis by (simp add: real_mon_conv_bound)
qed
text\<open>\newpage\<close>
theorem realfun_mon_conv_bound:
assumes mon: "\<And>n. c n \<le> c (Suc n)"
and bound: "\<And>n. c n \<le> (x::'a \<Rightarrow> real)"
shows "\<exists>l. c\<up>l \<and> l\<le>x"
(*<*)proof
define r where "r t = (SOME l. (\<lambda>n. c n t)\<up>l \<and> l\<le>x t)" for t
{ fix t
from mon have m2: "\<And>n. c n t \<le> c (Suc n) t" by (simp add: le_fun_def)
also
from bound have "\<And>n. c n t \<le> x t" by (simp add: le_fun_def)
ultimately have "\<exists>l. (\<lambda>n. c n t)\<up>l \<and> l\<le>x t" (is "\<exists>l. ?P l")
by (rule real_mon_conv_bound)
hence "?P (SOME l. ?P l)" by (rule someI_ex)
hence "(\<lambda>n. c n t)\<up>r t \<and> r t\<le>x t" by (simp add: r_def)
}
thus "c\<up>r \<and> r \<le> x" by (simp add: realfun_mon_conv_iff le_fun_def)
qed (*>*)
text \<open>This brings the theory to an end. Notice how the definition of the limit of a
real sequence is visible in the proof to \<open>real_mon_conv_outgrow\<close>, a lemma that will be used for a
monotonicity proof of the integral of simple functions later on.\<close>(*<*)
(*Another set construction. Needed in ImportPredSet, but Set is shadowed beyond
reconstruction there.
Before making disjoint, we first need an ascending series of sets*)
primrec mk_mon::"(nat \<Rightarrow> 'a set) \<Rightarrow> nat \<Rightarrow> 'a set"
where
"mk_mon A 0 = A 0"
| "mk_mon A (Suc n) = A (Suc n) \<union> mk_mon A n"
lemma "mk_mon A \<up> (\<Union>i. A i)"
proof (unfold mon_conv_set_def)
{ fix n
have "mk_mon A n \<subseteq> mk_mon A (Suc n)"
by auto
}
also
have "(\<Union>i. mk_mon A i) = (\<Union>i. A i)"
proof
{ fix i x
assume "x \<in> mk_mon A i"
hence "\<exists>j. x \<in> A j"
by (induct i) auto
hence "x \<in> (\<Union>i. A i)"
by simp
}
thus "(\<Union>i. mk_mon A i) \<subseteq> (\<Union>i. A i)"
by auto
{ fix i
have "A i \<subseteq> mk_mon A i"
by (induct i) auto
}
thus "(\<Union>i. A i) \<subseteq> (\<Union>i. mk_mon A i)"
by auto
qed
ultimately show "(\<forall>n. mk_mon A n \<subseteq> mk_mon A (Suc n)) \<and> \<Union>(A ` UNIV) = (\<Union>n. mk_mon A n)"
by simp
qed(*>*)
end
diff --git a/thys/Iptables_Semantics/Semantics_Embeddings.thy b/thys/Iptables_Semantics/Semantics_Embeddings.thy
--- a/thys/Iptables_Semantics/Semantics_Embeddings.thy
+++ b/thys/Iptables_Semantics/Semantics_Embeddings.thy
@@ -1,348 +1,347 @@
theory Semantics_Embeddings
imports "Simple_Firewall/SimpleFw_Compliance" Matching_Embeddings Semantics "Semantics_Ternary/Semantics_Ternary"
begin
section\<open>Semantics Embedding\<close>
subsection\<open>Tactic @{const in_doubt_allow}\<close>
lemma iptables_bigstep_undecided_to_undecided_in_doubt_allow_approx:
assumes agree: "matcher_agree_on_exact_matches \<gamma> \<beta>"
and good: "good_ruleset rs" and semantics: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Undecided"
shows "(\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Undecided \<or> (\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow"
proof -
from semantics good show ?thesis
proof(induction rs Undecided Undecided rule: iptables_bigstep_induct)
case Skip thus ?case by(auto intro: approximating_bigstep.skip)
next
case Log thus ?case by(auto intro: approximating_bigstep.empty approximating_bigstep.log approximating_bigstep.nomatch)
next
case (Nomatch m a)
with not_exact_match_in_doubt_allow_approx_match[OF agree] have
"a \<noteq> Log \<Longrightarrow> a \<noteq> Empty \<Longrightarrow> a = Accept \<and> Matching_Ternary.matches (\<beta>, in_doubt_allow) m a p \<or> \<not> Matching_Ternary.matches (\<beta>, in_doubt_allow) m a p"
by(simp add: good_ruleset_alt) blast
thus ?case
by(cases a) (auto intro: approximating_bigstep.empty approximating_bigstep.log approximating_bigstep.accept approximating_bigstep.nomatch)
next
case (Seq rs rs1 rs2 t)
from Seq have "good_ruleset rs1" and "good_ruleset rs2" by(simp_all add: good_ruleset_append)
also from Seq iptables_bigstep_to_undecided have "t = Undecided" by simp
ultimately show ?case using Seq by(fastforce intro: approximating_bigstep.decision Semantics_Ternary.seq')
qed(simp_all add: good_ruleset_def)
qed
lemma FinalAllow_approximating_in_doubt_allow:
assumes agree: "matcher_agree_on_exact_matches \<gamma> \<beta>"
and good: "good_ruleset rs" and semantics: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow"
shows "(\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow"
proof -
from semantics good show ?thesis
proof(induction rs Undecided "Decision FinalAllow" rule: iptables_bigstep_induct)
case Allow thus ?case
by (auto intro: agree approximating_bigstep.accept in_doubt_allow_allows_Accept)
next
case (Seq rs rs1 rs2 t)
from Seq have good1: "good_ruleset rs1" and good2: "good_ruleset rs2" by(simp_all add: good_ruleset_append)
show ?case
proof(cases t)
case Decision with Seq good1 good2 show "(\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow"
by (auto intro: approximating_bigstep.decision approximating_bigstep.seq dest: Semantics.decisionD)
next
case Undecided
with iptables_bigstep_undecided_to_undecided_in_doubt_allow_approx[OF agree good1] Seq have
"(\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs1, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Undecided \<or> (\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs1, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow" by simp
with Undecided Seq good1 good2 show "(\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow"
by (auto intro: approximating_bigstep.seq Semantics_Ternary.seq' approximating_bigstep.decision)
qed
next
case Call_result thus ?case by(simp add: good_ruleset_alt)
qed
qed
corollary FinalAllows_subseteq_in_doubt_allow: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow> good_ruleset rs \<Longrightarrow>
{p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow} \<subseteq> {p. (\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow}"
using FinalAllow_approximating_in_doubt_allow by (metis (lifting, full_types) Collect_mono)
(*referenced by name in paper*)
corollary new_packets_to_simple_firewall_overapproximation:
defines "preprocess rs \<equiv> upper_closure (optimize_matches abstract_for_simple_firewall (upper_closure (packet_assume_new rs)))"
and "newpkt p \<equiv> match_tcp_flags ipt_tcp_syn (p_tcp_flags p) \<and> p_tag_ctstate p = CT_New"
fixes p :: "('i::len, 'pkt_ext) tagged_packet_scheme"
assumes "matcher_agree_on_exact_matches \<gamma> common_matcher" and "simple_ruleset rs"
shows "{p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow \<and> newpkt p} \<subseteq> {p. simple_fw (to_simple_firewall (preprocess rs)) p = Decision FinalAllow \<and> newpkt p}"
proof -
from assms(3) have "{p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow \<and> newpkt p} \<subseteq>
{p. (common_matcher, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow \<and> newpkt p}"
apply(drule_tac rs=rs and \<Gamma>=\<Gamma> in FinalAllows_subseteq_in_doubt_allow)
using simple_imp_good_ruleset assms(4) apply blast
by blast
thus ?thesis unfolding newpkt_def preprocess_def using transform_simple_fw_upper(2)[OF assms(4)] by blast
qed
lemma approximating_bigstep_undecided_to_undecided_in_doubt_allow_approx: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow>
good_ruleset rs \<Longrightarrow>
(\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Undecided \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Undecided \<or> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalDeny"
apply(rotate_tac 2)
apply(induction rs Undecided Undecided rule: approximating_bigstep_induct)
apply(simp_all)
apply (metis iptables_bigstep.skip)
apply (metis iptables_bigstep.empty iptables_bigstep.log iptables_bigstep.nomatch)
apply(simp split: ternaryvalue.split_asm add: matches_case_ternaryvalue_tuple)
apply (metis in_doubt_allow_allows_Accept iptables_bigstep.nomatch matches_casesE ternaryvalue.distinct(1) ternaryvalue.distinct(5))
apply(case_tac a)
apply(simp_all)
apply (metis iptables_bigstep.drop iptables_bigstep.nomatch)
apply (metis iptables_bigstep.log iptables_bigstep.nomatch)
apply (metis iptables_bigstep.nomatch iptables_bigstep.reject)
apply(simp add: good_ruleset_alt)
apply(simp add: good_ruleset_alt)
apply(simp add: good_ruleset_alt)
apply (metis iptables_bigstep.empty iptables_bigstep.nomatch)
apply(simp add: good_ruleset_alt)
apply(simp add: good_ruleset_append,clarify)
by (metis approximating_bigstep_to_undecided iptables_bigstep.decision iptables_bigstep.seq)
lemma FinalDeny_approximating_in_doubt_allow: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow>
good_ruleset rs \<Longrightarrow>
(\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalDeny \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalDeny"
apply(rotate_tac 2)
apply(induction rs Undecided "Decision FinalDeny" rule: approximating_bigstep_induct)
apply(simp_all)
apply (metis action.distinct(1) action.distinct(5) deny not_exact_match_in_doubt_allow_approx_match)
apply(simp add: good_ruleset_append, clarify)
apply(case_tac t)
apply(simp)
apply(drule(2) approximating_bigstep_undecided_to_undecided_in_doubt_allow_approx[where \<Gamma>=\<Gamma>])
apply(erule disjE)
apply (metis iptables_bigstep.seq)
apply (metis iptables_bigstep.decision iptables_bigstep.seq)
by (metis Decision_approximating_bigstep_fun approximating_semantics_imp_fun iptables_bigstep.decision iptables_bigstep.seq)
corollary FinalDenys_subseteq_in_doubt_allow: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow> good_ruleset rs \<Longrightarrow>
{p. (\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalDeny} \<subseteq> {p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalDeny}"
using FinalDeny_approximating_in_doubt_allow by (metis (lifting, full_types) Collect_mono)
text\<open>
If our approximating firewall (the executable version) concludes that we deny a packet,
the exact semantics agrees that this packet is definitely denied!
\<close>
corollary "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow> good_ruleset rs \<Longrightarrow>
approximating_bigstep_fun (\<beta>, in_doubt_allow) p rs Undecided = (Decision FinalDeny) \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalDeny"
apply(frule(1) FinalDeny_approximating_in_doubt_allow[where p=p and \<Gamma>=\<Gamma>])
apply(rule approximating_fun_imp_semantics)
apply (metis good_imp_wf_ruleset)
apply(simp_all)
done
subsection\<open>Tactic @{const in_doubt_deny}\<close>
lemma iptables_bigstep_undecided_to_undecided_in_doubt_deny_approx: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow>
good_ruleset rs \<Longrightarrow>
\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Undecided \<Longrightarrow>
(\<beta>, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Undecided \<or> (\<beta>, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalDeny"
apply(rotate_tac 2)
apply(induction rs Undecided Undecided rule: iptables_bigstep_induct)
apply(simp_all)
apply (metis approximating_bigstep.skip)
apply (metis approximating_bigstep.empty approximating_bigstep.log approximating_bigstep.nomatch)
apply(case_tac "a = Log")
apply (metis approximating_bigstep.log approximating_bigstep.nomatch)
apply(case_tac "a = Empty")
apply (metis approximating_bigstep.empty approximating_bigstep.nomatch)
apply(drule_tac a=a in not_exact_match_in_doubt_deny_approx_match)
apply(simp_all)
apply(simp add: good_ruleset_alt)
apply fast
apply (metis approximating_bigstep.drop approximating_bigstep.nomatch approximating_bigstep.reject)
apply(frule iptables_bigstep_to_undecided)
apply(simp)
apply(simp add: good_ruleset_append)
apply (metis (hide_lams, no_types) approximating_bigstep.decision Semantics_Ternary.seq')
apply(simp add: good_ruleset_def)
apply(simp add: good_ruleset_def)
done
lemma FinalDeny_approximating_in_doubt_deny: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow>
good_ruleset rs \<Longrightarrow>
\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalDeny \<Longrightarrow> (\<beta>, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalDeny"
apply(rotate_tac 2)
apply(induction rs Undecided "Decision FinalDeny" rule: iptables_bigstep_induct)
apply(simp_all)
apply (metis approximating_bigstep.drop approximating_bigstep.reject in_doubt_deny_denies_DropReject)
apply(case_tac t)
apply(simp_all)
prefer 2
apply(simp add: good_ruleset_append)
apply (metis approximating_bigstep.decision approximating_bigstep.seq Semantics.decisionD state.inject)
- apply(thin_tac "False \<Longrightarrow> _ \<Longrightarrow> _")
apply(simp add: good_ruleset_append, clarify)
apply(drule(2) iptables_bigstep_undecided_to_undecided_in_doubt_deny_approx)
apply(erule disjE)
apply (metis approximating_bigstep.seq)
apply (metis approximating_bigstep.decision Semantics_Ternary.seq')
apply(simp add: good_ruleset_alt)
done
lemma approximating_bigstep_undecided_to_undecided_in_doubt_deny_approx: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow>
good_ruleset rs \<Longrightarrow>
(\<beta>, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Undecided \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Undecided \<or> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow"
apply(rotate_tac 2)
apply(induction rs Undecided Undecided rule: approximating_bigstep_induct)
apply(simp_all)
apply (metis iptables_bigstep.skip)
apply (metis iptables_bigstep.empty iptables_bigstep.log iptables_bigstep.nomatch)
apply(simp split: ternaryvalue.split_asm add: matches_case_ternaryvalue_tuple)
apply (metis in_doubt_allow_allows_Accept iptables_bigstep.nomatch matches_casesE ternaryvalue.distinct(1) ternaryvalue.distinct(5))
apply(case_tac a)
apply(simp_all)
apply (metis iptables_bigstep.accept iptables_bigstep.nomatch)
apply (metis iptables_bigstep.log iptables_bigstep.nomatch)
apply(simp add: good_ruleset_alt)
apply(simp add: good_ruleset_alt)
apply(simp add: good_ruleset_alt)
apply (metis iptables_bigstep.empty iptables_bigstep.nomatch)
apply(simp add: good_ruleset_alt)
apply(simp add: good_ruleset_append,clarify)
by (metis approximating_bigstep_to_undecided iptables_bigstep.decision iptables_bigstep.seq)
lemma FinalAllow_approximating_in_doubt_deny: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow>
good_ruleset rs \<Longrightarrow>
(\<beta>, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow"
apply(rotate_tac 2)
apply(induction rs Undecided "Decision FinalAllow" rule: approximating_bigstep_induct)
apply(simp_all)
apply (metis action.distinct(1) action.distinct(5) iptables_bigstep.accept not_exact_match_in_doubt_deny_approx_match)
apply(simp add: good_ruleset_append, clarify)
apply(case_tac t)
apply(simp)
apply(drule(2) approximating_bigstep_undecided_to_undecided_in_doubt_deny_approx[where \<Gamma>=\<Gamma>])
apply(erule disjE)
apply (metis iptables_bigstep.seq)
apply (metis iptables_bigstep.decision iptables_bigstep.seq)
by (metis Decision_approximating_bigstep_fun approximating_semantics_imp_fun iptables_bigstep.decision iptables_bigstep.seq)
corollary FinalAllows_subseteq_in_doubt_deny: "matcher_agree_on_exact_matches \<gamma> \<beta> \<Longrightarrow> good_ruleset rs \<Longrightarrow>
{p. (\<beta>, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow} \<subseteq> {p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow}"
using FinalAllow_approximating_in_doubt_deny by (metis (lifting, full_types) Collect_mono)
corollary new_packets_to_simple_firewall_underapproximation:
defines "preprocess rs \<equiv> lower_closure (optimize_matches abstract_for_simple_firewall (lower_closure (packet_assume_new rs)))"
and "newpkt p \<equiv> match_tcp_flags ipt_tcp_syn (p_tcp_flags p) \<and> p_tag_ctstate p = CT_New"
fixes p :: "('i::len, 'pkt_ext) tagged_packet_scheme"
assumes "matcher_agree_on_exact_matches \<gamma> common_matcher" and "simple_ruleset rs"
shows "{p. simple_fw (to_simple_firewall (preprocess rs)) p = Decision FinalAllow \<and> newpkt p} \<subseteq> {p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow \<and> newpkt p}"
proof -
from assms(3) have "{p. (common_matcher, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow \<and> newpkt p} \<subseteq>
{p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow \<and> newpkt p}"
apply(drule_tac rs=rs and \<Gamma>=\<Gamma> in FinalAllows_subseteq_in_doubt_deny)
using simple_imp_good_ruleset assms(4) apply blast
by blast
thus ?thesis unfolding newpkt_def preprocess_def using transform_simple_fw_lower(2)[OF assms(4)] by blast
qed
subsection\<open>Approximating Closures\<close>
theorem FinalAllowClosure:
assumes "matcher_agree_on_exact_matches \<gamma> \<beta>" and "good_ruleset rs"
shows "{p. (\<beta>, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow} \<subseteq> {p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow}"
and "{p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalAllow} \<subseteq> {p. (\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalAllow}"
apply (metis FinalAllows_subseteq_in_doubt_deny assms)
by (metis FinalAllows_subseteq_in_doubt_allow assms)
theorem FinalDenyClosure:
assumes "matcher_agree_on_exact_matches \<gamma> \<beta>" and "good_ruleset rs"
shows "{p. (\<beta>, in_doubt_allow),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalDeny} \<subseteq> {p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalDeny}"
and "{p. \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> Decision FinalDeny} \<subseteq> {p. (\<beta>, in_doubt_deny),p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>\<alpha> Decision FinalDeny}"
apply (metis FinalDenys_subseteq_in_doubt_allow assms)
by (metis FinalDeny_approximating_in_doubt_deny assms mem_Collect_eq subsetI)
subsection\<open>Exact Embedding\<close>
lemma LukassLemma: assumes agree: "matcher_agree_on_exact_matches \<gamma> \<beta>"
and noUnknown: "(\<forall> r \<in> set rs. ternary_ternary_eval (map_match_tac \<beta> p (get_match r)) \<noteq> TernaryUnknown)"
and good: "good_ruleset rs"
shows "(\<beta>,\<alpha>),p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow>\<^sub>\<alpha> t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> t"
proof -
{ fix t \<comment> \<open>if we show it for arbitrary @{term t}, we can reuse this fact for the other direction.\<close>
assume a: "(\<beta>,\<alpha>),p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow>\<^sub>\<alpha> t"
from a good agree noUnknown have "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> t"
proof(induction rs s t rule: approximating_bigstep_induct)
qed(auto intro: approximating_bigstep.intros iptables_bigstep.intros dest: iptables_bigstepD dest: matches_comply_exact simp: good_ruleset_append)
} note 1=this
{
assume a: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> t"
obtain x where "approximating_bigstep_fun (\<beta>,\<alpha>) p rs s = x" by simp
with approximating_fun_imp_semantics[OF good_imp_wf_ruleset[OF good]] have x: "(\<beta>,\<alpha>),p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow>\<^sub>\<alpha> x" by fast
with 1 have "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> x" by simp
with a iptables_bigstep_deterministic have "x = t" by metis
hence "(\<beta>,\<alpha>),p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow>\<^sub>\<alpha> t" using x by blast
} note 2=this
from 1 2 show ?thesis by blast
qed
text\<open>
For rulesets without @{term Call}s, the approximating ternary semantics can perfectly simulate the Boolean semantics.
\<close>
theorem \<beta>\<^sub>m\<^sub>a\<^sub>g\<^sub>i\<^sub>c_approximating_bigstep_iff_iptables_bigstep:
assumes "\<forall>r \<in> set rs. \<forall>c. get_action r \<noteq> Call c"
shows "((\<beta>\<^sub>m\<^sub>a\<^sub>g\<^sub>i\<^sub>c \<gamma>),\<alpha>),p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow>\<^sub>\<alpha> t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> t"
apply(rule iffI)
apply(induction rs s t rule: approximating_bigstep_induct)
apply(auto intro: iptables_bigstep.intros simp: \<beta>\<^sub>m\<^sub>a\<^sub>g\<^sub>i\<^sub>c_matching)[7]
apply(insert assms)
apply(induction rs s t rule: iptables_bigstep_induct)
apply(auto intro: approximating_bigstep.intros simp: \<beta>\<^sub>m\<^sub>a\<^sub>g\<^sub>i\<^sub>c_matching)
done
corollary \<beta>\<^sub>m\<^sub>a\<^sub>g\<^sub>i\<^sub>c_approximating_bigstep_fun_iff_iptables_bigstep:
assumes "good_ruleset rs"
shows "approximating_bigstep_fun (\<beta>\<^sub>m\<^sub>a\<^sub>g\<^sub>i\<^sub>c \<gamma>,\<alpha>) p rs s = t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> t"
apply(subst approximating_semantics_iff_fun_good_ruleset[symmetric])
using assms apply simp
apply(subst \<beta>\<^sub>m\<^sub>a\<^sub>g\<^sub>i\<^sub>c_approximating_bigstep_iff_iptables_bigstep[where \<Gamma>=\<Gamma>])
using assms apply (simp add: good_ruleset_def)
by simp
text\<open>The function @{const optimize_primitive_univ} was only applied to the ternary semantics.
It is, in fact, also correct for the Boolean semantics, assuming the @{const common_matcher}.\<close>
lemma Semantics_optimize_primitive_univ_common_matcher:
assumes "matcher_agree_on_exact_matches \<gamma> common_matcher"
shows "Semantics.matches \<gamma> (optimize_primitive_univ m) p = Semantics.matches \<gamma> m p"
proof -
have "(max_word::16 word) = 65535" by(simp add: max_word_def)
hence port_range: "\<And>s e port. s = 0 \<and> e = 0xFFFF \<longrightarrow> (port::16 word) \<le> 0xFFFF" by simp
from assms show ?thesis
apply(induction m rule: optimize_primitive_univ.induct)
apply(auto elim!: matcher_agree_on_exact_matches_gammaE
simp add: port_range match_ifaceAny ipset_from_cidr_0 ctstate_is_UNIV)
done
qed
end
diff --git a/thys/Isabelle_Meta_Model/toy_example/document_generated/Design_generated_generated.thy b/thys/Isabelle_Meta_Model/toy_example/document_generated/Design_generated_generated.thy
--- a/thys/Isabelle_Meta_Model/toy_example/document_generated/Design_generated_generated.thy
+++ b/thys/Isabelle_Meta_Model/toy_example/document_generated/Design_generated_generated.thy
@@ -1,221 +1,225 @@
theory Design_generated_generated imports "../Toy_Library" "../Toy_Library_Static" begin
(* 1 ************************************ 0 + 1 *)
text \<open>
For certain concepts like classes and class-types, only a generic
definition for their resulting semantics can be given. Generic means
that there is a function outside HOL that ``compiles'' a concrete,
closed-world class diagram into a ``theory'' of this data model,
consisting of a bunch of definitions for classes, accessors, methods,
casts, and tests for actual types, as well as proofs for the
fundamental properties of these operations in this concrete data
model. \<close>
(* 2 ************************************ 1 + 1 *)
text \<open>
Our data universe consists, in the concrete class diagram, just of node's,
and implicitly of the class object. Each class implies the existence of a class
type defined for the corresponding object representations as follows: \<close>
(* 3 ************************************ 2 + 10 *)
datatype ty\<E>\<X>\<T>\<^sub>A\<^sub>t\<^sub>o\<^sub>m = mk\<E>\<X>\<T>\<^sub>A\<^sub>t\<^sub>o\<^sub>m "oid" "oid list option" "int option" "bool option" "nat option" "unit option"
datatype ty\<^sub>A\<^sub>t\<^sub>o\<^sub>m = mk\<^sub>A\<^sub>t\<^sub>o\<^sub>m "ty\<E>\<X>\<T>\<^sub>A\<^sub>t\<^sub>o\<^sub>m" "int option"
datatype ty\<E>\<X>\<T>\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e = mk\<E>\<X>\<T>\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e_\<^sub>A\<^sub>t\<^sub>o\<^sub>m "ty\<^sub>A\<^sub>t\<^sub>o\<^sub>m"
| mk\<E>\<X>\<T>\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e "oid" "oid list option" "int option" "bool option" "nat option" "unit option"
datatype ty\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e = mk\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e "ty\<E>\<X>\<T>\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e"
datatype ty\<E>\<X>\<T>\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n = mk\<E>\<X>\<T>\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n_\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e "ty\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e"
| mk\<E>\<X>\<T>\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n_\<^sub>A\<^sub>t\<^sub>o\<^sub>m "ty\<^sub>A\<^sub>t\<^sub>o\<^sub>m"
| mk\<E>\<X>\<T>\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n "oid" "nat option" "unit option"
datatype ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n = mk\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n "ty\<E>\<X>\<T>\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "oid list option" "int option" "bool option"
datatype ty\<E>\<X>\<T>\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y = mk\<E>\<X>\<T>\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y_\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
| mk\<E>\<X>\<T>\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y_\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e "ty\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e"
| mk\<E>\<X>\<T>\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y_\<^sub>A\<^sub>t\<^sub>o\<^sub>m "ty\<^sub>A\<^sub>t\<^sub>o\<^sub>m"
| mk\<E>\<X>\<T>\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y "oid"
datatype ty\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y = mk\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y "ty\<E>\<X>\<T>\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y" "nat option" "unit option"
datatype ty\<E>\<X>\<T>\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y = mk\<E>\<X>\<T>\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y_\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y "ty\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y"
| mk\<E>\<X>\<T>\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y_\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
| mk\<E>\<X>\<T>\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y_\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e "ty\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e"
| mk\<E>\<X>\<T>\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y_\<^sub>A\<^sub>t\<^sub>o\<^sub>m "ty\<^sub>A\<^sub>t\<^sub>o\<^sub>m"
| mk\<E>\<X>\<T>\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y "oid"
datatype ty\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y = mk\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y "ty\<E>\<X>\<T>\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y"
(* 4 ************************************ 12 + 1 *)
text \<open>
Now, we construct a concrete ``universe of ToyAny types'' by injection into a
sum type containing the class types. This type of ToyAny will be used as the instance
for all respective type-variables. \<close>
(* 5 ************************************ 13 + 1 *)
datatype \<AA> = in\<^sub>A\<^sub>t\<^sub>o\<^sub>m "ty\<^sub>A\<^sub>t\<^sub>o\<^sub>m"
| in\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e "ty\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e"
| in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
| in\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y "ty\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y"
| in\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y "ty\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y"
(* 6 ************************************ 14 + 1 *)
text \<open>
Having fixed the object universe, we can introduce type synonyms that exactly correspond
to Toy types. Again, we exploit that our representation of Toy is a ``shallow embedding'' with a
one-to-one correspondence of Toy-types to types of the meta-language HOL. \<close>
(* 7 ************************************ 15 + 5 *)
type_synonym Atom = "\<langle>\<langle>ty\<^sub>A\<^sub>t\<^sub>o\<^sub>m\<rangle>\<^sub>\<bottom>\<rangle>\<^sub>\<bottom>"
type_synonym Molecule = "\<langle>\<langle>ty\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e\<rangle>\<^sub>\<bottom>\<rangle>\<^sub>\<bottom>"
type_synonym Person = "\<langle>\<langle>ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rangle>\<^sub>\<bottom>\<rangle>\<^sub>\<bottom>"
type_synonym Galaxy = "\<langle>\<langle>ty\<^sub>G\<^sub>a\<^sub>l\<^sub>a\<^sub>x\<^sub>y\<rangle>\<^sub>\<bottom>\<rangle>\<^sub>\<bottom>"
type_synonym ToyAny = "\<langle>\<langle>ty\<^sub>T\<^sub>o\<^sub>y\<^sub>A\<^sub>n\<^sub>y\<rangle>\<^sub>\<bottom>\<rangle>\<^sub>\<bottom>"
(* 8 ************************************ 20 + 3 *)
definition "oid\<^sub>A\<^sub>t\<^sub>o\<^sub>m_0___boss = 0"
definition "oid\<^sub>M\<^sub>o\<^sub>l\<^sub>e\<^sub>c\<^sub>u\<^sub>l\<^sub>e_0___boss = 0"
definition "oid\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n_0___boss = 0"
(* 9 ************************************ 23 + 2 *)
definition "switch\<^sub>2_01 = (\<lambda> [x0 , x1] \<Rightarrow> (x0 , x1))"
definition "switch\<^sub>2_10 = (\<lambda> [x0 , x1] \<Rightarrow> (x1 , x0))"
(* 10 ************************************ 25 + 3 *)
definition "oid1 = 1"
definition "oid2 = 2"
definition "inst_assoc1 = (\<lambda>oid_class to_from oid. ((case (deref_assocs_list ((to_from::oid list list \<Rightarrow> oid list \<times> oid list)) ((oid::oid)) ((drop ((((map_of_list ([(oid\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n_0___boss , (List.map ((\<lambda>(x , y). [x , y]) o switch\<^sub>2_01) ([[[oid1] , [oid2]]])))]))) ((oid_class::oid))))))) of Nil \<Rightarrow> None
| l \<Rightarrow> (Some (l)))::oid list option))"
(* 11 ************************************ 28 + 0 *)
(* 12 ************************************ 28 + 2 *)
definition "oid3 = 3"
definition "inst_assoc3 = (\<lambda>oid_class to_from oid. ((case (deref_assocs_list ((to_from::oid list list \<Rightarrow> oid list \<times> oid list)) ((oid::oid)) ((drop ((((map_of_list ([]))) ((oid_class::oid))))))) of Nil \<Rightarrow> None
| l \<Rightarrow> (Some (l)))::oid list option))"
(* 13 ************************************ 30 + 0 *)
(* 14 ************************************ 30 + 5 *)
definition "oid4 = 4"
definition "oid5 = 5"
definition "oid6 = 6"
definition "oid7 = 7"
definition "inst_assoc4 = (\<lambda>oid_class to_from oid. ((case (deref_assocs_list ((to_from::oid list list \<Rightarrow> oid list \<times> oid list)) ((oid::oid)) ((drop ((((map_of_list ([(oid\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n_0___boss , (List.map ((\<lambda>(x , y). [x , y]) o switch\<^sub>2_01) ([[[oid7] , [oid6]] , [[oid6] , [oid1]] , [[oid4] , [oid5]]])))]))) ((oid_class::oid))))))) of Nil \<Rightarrow> None
| l \<Rightarrow> (Some (l)))::oid list option))"
(* 15 ************************************ 35 + 0 *)
(* 16 ************************************ 35 + 1 *)
locale state_\<sigma>\<^sub>1 =
fixes "oid4" :: "nat"
fixes "oid5" :: "nat"
fixes "oid6" :: "nat"
fixes "oid1" :: "nat"
fixes "oid7" :: "nat"
fixes "oid2" :: "nat"
assumes distinct_oid: "(distinct ([oid4 , oid5 , oid6 , oid1 , oid7 , oid2]))"
fixes "\<sigma>\<^sub>1_object0\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "\<sigma>\<^sub>1_object0" :: "\<cdot>Person"
assumes \<sigma>\<^sub>1_object0_def: "\<sigma>\<^sub>1_object0 = (\<lambda>_. \<lfloor>\<lfloor>\<sigma>\<^sub>1_object0\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "\<sigma>\<^sub>1_object1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "\<sigma>\<^sub>1_object1" :: "\<cdot>Person"
assumes \<sigma>\<^sub>1_object1_def: "\<sigma>\<^sub>1_object1 = (\<lambda>_. \<lfloor>\<lfloor>\<sigma>\<^sub>1_object1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "\<sigma>\<^sub>1_object2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "\<sigma>\<^sub>1_object2" :: "\<cdot>Person"
assumes \<sigma>\<^sub>1_object2_def: "\<sigma>\<^sub>1_object2 = (\<lambda>_. \<lfloor>\<lfloor>\<sigma>\<^sub>1_object2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1" :: "\<cdot>Person"
assumes X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1_def: "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1 = (\<lambda>_. \<lfloor>\<lfloor>X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "\<sigma>\<^sub>1_object4\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "\<sigma>\<^sub>1_object4" :: "\<cdot>Person"
assumes \<sigma>\<^sub>1_object4_def: "\<sigma>\<^sub>1_object4 = (\<lambda>_. \<lfloor>\<lfloor>\<sigma>\<^sub>1_object4\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2" :: "\<cdot>Person"
assumes X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2_def: "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2 = (\<lambda>_. \<lfloor>\<lfloor>X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
begin
definition "\<sigma>\<^sub>1 = (state.make ((Map.empty (oid4 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (\<sigma>\<^sub>1_object0\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid5 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (\<sigma>\<^sub>1_object1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid6 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (\<sigma>\<^sub>1_object2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid1 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid7 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (\<sigma>\<^sub>1_object4\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid2 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))))) ((map_of_list ([(oid\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n_0___boss , (List.map ((\<lambda>(x , y). [x , y]) o switch\<^sub>2_01) ([[[oid4] , [oid2]] , [[oid6] , [oid4]] , [[oid1] , [oid6]] , [[oid7] , [oid3]]])))]))))"
lemma perm_\<sigma>\<^sub>1 : "\<sigma>\<^sub>1 = (state.make ((Map.empty (oid2 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid7 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (\<sigma>\<^sub>1_object4\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid1 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid6 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (\<sigma>\<^sub>1_object2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid5 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (\<sigma>\<^sub>1_object1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid4 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (\<sigma>\<^sub>1_object0\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))))) ((assocs (\<sigma>\<^sub>1))))"
apply(simp add: \<sigma>\<^sub>1_def)
apply(subst (1) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (2) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (1) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (3) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (2) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (1) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (4) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (3) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (2) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (1) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (5) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (4) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (3) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (2) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (1) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
by(simp)
end
(* 17 ************************************ 36 + 1 *)
locale state_\<sigma>\<^sub>1' =
fixes "oid1" :: "nat"
fixes "oid2" :: "nat"
fixes "oid3" :: "nat"
assumes distinct_oid: "(distinct ([oid1 , oid2 , oid3]))"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1" :: "\<cdot>Person"
assumes X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1_def: "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1 = (\<lambda>_. \<lfloor>\<lfloor>X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2" :: "\<cdot>Person"
assumes X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2_def: "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2 = (\<lambda>_. \<lfloor>\<lfloor>X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3" :: "\<cdot>Person"
assumes X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3_def: "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3 = (\<lambda>_. \<lfloor>\<lfloor>X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
begin
definition "\<sigma>\<^sub>1' = (state.make ((Map.empty (oid1 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid2 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid3 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))))) ((map_of_list ([(oid\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n_0___boss , (List.map ((\<lambda>(x , y). [x , y]) o switch\<^sub>2_01) ([[[oid1] , [oid2]]])))]))))"
lemma perm_\<sigma>\<^sub>1' : "\<sigma>\<^sub>1' = (state.make ((Map.empty (oid3 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid2 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))) (oid1 \<mapsto> (in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n))))) ((assocs (\<sigma>\<^sub>1'))))"
apply(simp add: \<sigma>\<^sub>1'_def)
apply(subst (1) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (2) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
apply(subst (1) fun_upd_twist, metis distinct_oid distinct_length_2_or_more)
by(simp)
end
(* 18 ************************************ 37 + 1 *)
locale pre_post_\<sigma>\<^sub>1_\<sigma>\<^sub>1' =
fixes "oid1" :: "nat"
fixes "oid2" :: "nat"
fixes "oid3" :: "nat"
fixes "oid4" :: "nat"
fixes "oid5" :: "nat"
fixes "oid6" :: "nat"
fixes "oid7" :: "nat"
assumes distinct_oid: "(distinct ([oid1 , oid2 , oid3 , oid4 , oid5 , oid6 , oid7]))"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1" :: "\<cdot>Person"
assumes X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1_def: "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1 = (\<lambda>_. \<lfloor>\<lfloor>X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2" :: "\<cdot>Person"
assumes X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2_def: "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2 = (\<lambda>_. \<lfloor>\<lfloor>X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3" :: "\<cdot>Person"
assumes X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3_def: "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3 = (\<lambda>_. \<lfloor>\<lfloor>X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "\<sigma>\<^sub>1_object0\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "\<sigma>\<^sub>1_object0" :: "\<cdot>Person"
assumes \<sigma>\<^sub>1_object0_def: "\<sigma>\<^sub>1_object0 = (\<lambda>_. \<lfloor>\<lfloor>\<sigma>\<^sub>1_object0\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "\<sigma>\<^sub>1_object1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "\<sigma>\<^sub>1_object1" :: "\<cdot>Person"
assumes \<sigma>\<^sub>1_object1_def: "\<sigma>\<^sub>1_object1 = (\<lambda>_. \<lfloor>\<lfloor>\<sigma>\<^sub>1_object1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "\<sigma>\<^sub>1_object2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "\<sigma>\<^sub>1_object2" :: "\<cdot>Person"
assumes \<sigma>\<^sub>1_object2_def: "\<sigma>\<^sub>1_object2 = (\<lambda>_. \<lfloor>\<lfloor>\<sigma>\<^sub>1_object2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
fixes "\<sigma>\<^sub>1_object4\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" :: "ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n"
fixes "\<sigma>\<^sub>1_object4" :: "\<cdot>Person"
assumes \<sigma>\<^sub>1_object4_def: "\<sigma>\<^sub>1_object4 = (\<lambda>_. \<lfloor>\<lfloor>\<sigma>\<^sub>1_object4\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n\<rfloor>\<rfloor>)"
assumes \<sigma>\<^sub>1: "(state_\<sigma>\<^sub>1 (oid4) (oid5) (oid6) (oid1) (oid7) (oid2) (\<sigma>\<^sub>1_object0\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (\<sigma>\<^sub>1_object0) (\<sigma>\<^sub>1_object1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (\<sigma>\<^sub>1_object1) (\<sigma>\<^sub>1_object2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (\<sigma>\<^sub>1_object2) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1) (\<sigma>\<^sub>1_object4\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (\<sigma>\<^sub>1_object4) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2))"
assumes \<sigma>\<^sub>1': "(state_\<sigma>\<^sub>1' (oid1) (oid2) (oid3) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n) (X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3))"
begin
interpretation state_\<sigma>\<^sub>1: state_\<sigma>\<^sub>1 "oid4" "oid5" "oid6" "oid1" "oid7" "oid2" "\<sigma>\<^sub>1_object0\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "\<sigma>\<^sub>1_object0" "\<sigma>\<^sub>1_object1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "\<sigma>\<^sub>1_object1" "\<sigma>\<^sub>1_object2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "\<sigma>\<^sub>1_object2" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1" "\<sigma>\<^sub>1_object4\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "\<sigma>\<^sub>1_object4" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2"
by(rule \<sigma>\<^sub>1)
interpretation state_\<sigma>\<^sub>1': state_\<sigma>\<^sub>1' "oid1" "oid2" "oid3" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n1" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n2" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n" "X\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n3"
by(rule \<sigma>\<^sub>1')
+
+definition "heap_\<sigma>\<^sub>1 = (heap (state_\<sigma>\<^sub>1.\<sigma>\<^sub>1))"
+
+definition "heap_\<sigma>\<^sub>1' = (heap (state_\<sigma>\<^sub>1'.\<sigma>\<^sub>1'))"
end
end
diff --git a/thys/Isabelle_Meta_Model/toy_example/embedding/Core.thy b/thys/Isabelle_Meta_Model/toy_example/embedding/Core.thy
--- a/thys/Isabelle_Meta_Model/toy_example/embedding/Core.thy
+++ b/thys/Isabelle_Meta_Model/toy_example/embedding/Core.thy
@@ -1,257 +1,258 @@
(******************************************************************************
* Citadelle
*
* Copyright (c) 2011-2018 Université Paris-Saclay, Univ. Paris-Sud, France
* 2013-2017 IRT SystemX, France
* 2011-2015 Achim D. Brucker, Germany
* 2016-2018 The University of Sheffield, UK
* 2016-2017 Nanyang Technological University, Singapore
* 2017-2018 Virginia Tech, USA
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
*
* * Neither the name of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
section\<open>General Environment for the Translation: Conclusion\<close>
theory Core
imports "core/Floor1_infra"
"core/Floor1_access"
"core/Floor1_examp"
"core/Floor2_examp"
"core/Floor1_ctxt"
begin
subsection\<open>Preliminaries\<close>
datatype ('a, 'b) embedding = Embed_theories "('a \<Rightarrow> 'b \<Rightarrow> all_meta list \<times> 'b) list"
| Embed_locale "'a \<Rightarrow> 'b \<Rightarrow> semi__locale \<times> 'b"
"('a \<Rightarrow> 'b \<Rightarrow> semi__theory list \<times> 'b) list"
type_synonym 'a embedding' = "('a, compiler_env_config) embedding" \<comment> \<open>polymorphism weakening needed by \<^theory_text>\<open>code_reflect\<close>\<close>
definition "L_fold f =
(\<lambda> Embed_theories l \<Rightarrow> List.fold f l
| Embed_locale loc_data l \<Rightarrow>
f (\<lambda>a b.
let (loc_data, b) = loc_data a b
; (l, b) = List.fold (\<lambda>f0. \<lambda>(l, b) \<Rightarrow> let (x, b) = f0 a b in (x # l, b)) l ([], b) in
([META_semi__theories (Theories_locale loc_data (rev l))], b)))"
subsection\<open>Assembling Translations\<close>
definition "txt f = start_map'''' O.text o (\<lambda>_ design_analysis. [Text (f design_analysis)])"
definition "txt' s = txt (\<lambda>_. s)"
definition "txt'' = txt' o S.flatten"
definition thy_class ::
\<comment> \<open>polymorphism weakening needed by \<^theory_text>\<open>code_reflect\<close>\<close>
"_ embedding'" where \<open>thy_class =
Embed_theories
[ txt'' [ \<open>
   For certain concepts like classes and class-types, only a generic
definition of their resulting semantics can be given. Generic means
that there is a function outside HOL that ``compiles'' a concrete,
closed-world class diagram into a ``theory'' of this data model,
consisting of a bunch of definitions for classes, accessors, methods,
casts, and tests for actual types, as well as proofs for the
fundamental properties of these operations in this concrete data
model. \<close> ]
, txt'' [ \<open>
   In the concrete class diagram, our data universe consists just of nodes,
and implicitly of the class object. Each class implies the existence of a class
type defined for the corresponding object representations as follows: \<close> ]
, print_infra_datatype_class
, txt'' [ \<open>
   Now, we construct a concrete ``universe of ToyAny types'' by injection into a
sum type containing the class types. This type of ToyAny will be used as the instance
for all respective type variables. \<close> ]
, print_infra_datatype_universe
, txt'' [ \<open>
   Having fixed the object universe, we can introduce type synonyms that exactly correspond
to Toy types. Again, we exploit that our representation of Toy is a ``shallow embedding'' with a
one-to-one correspondence of Toy types to types of the meta-language HOL. \<close> ]
, print_infra_type_synonym_class_higher
, print_access_oid_uniq
, print_access_choose ]\<close>
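(* Illustration only (an assumption about the shape of the generated output, not something this
   theory defines): for a single class Person in the input class diagram, the printing functions
   scheduled above would emit, roughly, a datatype of object representations together with its
   injection into the universe of all objects, e.g.

     datatype ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n = mk\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n oid
     datatype \<AA> = in\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n ty\<^sub>P\<^sub>e\<^sub>r\<^sub>s\<^sub>o\<^sub>n

   The exact output is determined by print_infra_datatype_class and print_infra_datatype_universe
   imported from Floor1_infra. *)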
definition "thy_enum_flat = Embed_theories []"
definition "thy_enum = Embed_theories []"
definition "thy_class_synonym = Embed_theories []"
definition "thy_class_flat = Embed_theories []"
definition "thy_association = Embed_theories []"
definition "thy_instance = Embed_theories
[ print_examp_instance_defassoc
, print_examp_instance ]"
definition "thy_def_base_l = Embed_theories []"
definition "thy_def_state = (\<lambda> Floor1 \<Rightarrow> Embed_theories
[ Floor1_examp.print_examp_def_st1 ]
| Floor2 \<Rightarrow> Embed_locale
Floor2_examp.print_examp_def_st_locale
[ Floor2_examp.print_examp_def_st2
, Floor2_examp.print_examp_def_st_perm ])"
definition "thy_def_pre_post = (\<lambda> Floor1 \<Rightarrow> Embed_theories
[ Floor1_examp.print_pre_post ]
| Floor2 \<Rightarrow> Embed_locale
Floor2_examp.print_pre_post_locale
- [ Floor2_examp.print_pre_post_interp ])"
+ [ Floor2_examp.print_pre_post_interp
+ , Floor2_examp.print_pre_post_def_state' ])"
definition "thy_ctxt = (\<lambda> Floor1 \<Rightarrow> Embed_theories
[ Floor1_ctxt.print_ctxt ]
| Floor2 \<Rightarrow> Embed_theories
[])"
definition "thy_flush_all = Embed_theories []"
(* NOTE: typechecking functions can be put at the end; however, checking already defined constants can be done earlier *)
subsection\<open>Combinators Folding the Compiling Environment\<close>
definition "compiler_env_config_empty output_disable_thy output_header_thy oid_start design_analysis sorry_dirty =
compiler_env_config.make
output_disable_thy
output_header_thy
oid_start
(0, 0)
design_analysis
None [] [] [] False False ([], []) []
sorry_dirty"
definition "compiler_env_config_reset_no_env env =
compiler_env_config_empty
(D_output_disable_thy env)
(D_output_header_thy env)
(oidReinitAll (D_toy_oid_start env))
(D_toy_semantics env)
(D_output_sorry_dirty env)
\<lparr> D_input_meta := D_input_meta env \<rparr>"
definition "compiler_env_config_reset_all env =
(let env = compiler_env_config_reset_no_env env in
( env \<lparr> D_input_meta := [] \<rparr>
, let (l_class, l_env) = find_class_ass env in
L.flatten
[ l_class
, List.filter (\<lambda> META_flush_all _ \<Rightarrow> False | _ \<Rightarrow> True) l_env
, [META_flush_all ToyFlushAll] ] ))"
definition "compiler_env_config_update f env =
\<comment> \<open>WARNING The semantics of the meta-embedded language is not intended to be reset here (like \<open>oid_start\<close>), only syntactic configurations of the compiler (path, etc...)\<close>
f env
\<lparr> D_output_disable_thy := D_output_disable_thy env
, D_output_header_thy := D_output_header_thy env
, D_toy_semantics := D_toy_semantics env
, D_output_sorry_dirty := D_output_sorry_dirty env \<rparr>"
definition "fold_thy0 meta thy_object0 f =
L_fold (\<lambda>x (acc1, acc2).
let (sorry, dirty) = D_output_sorry_dirty acc1
; (l, acc1) = x meta acc1 in
(f (if sorry = Some Gen_sorry | sorry = None & dirty then
L.map (map_semi__theory (map_lemma (\<lambda> Lemma n spec _ _ \<Rightarrow> Lemma n spec [] C.sorry
| Lemma_assumes n spec1 spec2 _ _ \<Rightarrow> Lemma_assumes n spec1 spec2 [] C.sorry))) l
else
l) acc1 acc2)) thy_object0"
definition "comp_env_input_class_rm f_fold f env_accu =
(let (env, accu) = f_fold f env_accu in
(env \<lparr> D_input_class := None \<rparr>, accu))"
definition "comp_env_save ast f_fold f env_accu =
(let (env, accu) = f_fold f env_accu in
(env \<lparr> D_input_meta := ast # D_input_meta env \<rparr>, accu))"
definition "comp_env_input_class_mk f_try f_accu_reset f_fold f =
(\<lambda> (env, accu).
f_fold f
(case D_input_class env of Some _ \<Rightarrow> (env, accu) | None \<Rightarrow>
let (l_class, l_env) = find_class_ass env
; (l_enum, l_env) = partition (\<lambda>META_enum _ \<Rightarrow> True | _ \<Rightarrow> False) l_env in
(f_try (\<lambda> () \<Rightarrow>
let D_input_meta0 = D_input_meta env
; (env, accu) =
let meta = class_unflat (arrange_ass True (D_toy_semantics env \<noteq> Gen_default) l_class (L.map (\<lambda> META_enum e \<Rightarrow> e) l_enum))
; (env, accu) = List.fold (\<lambda> ast. comp_env_save ast (case ast of META_enum meta \<Rightarrow> fold_thy0 meta thy_enum) f)
l_enum
(let env = compiler_env_config_reset_no_env env in
(env \<lparr> D_input_meta := List.filter (\<lambda> META_enum _ \<Rightarrow> False | _ \<Rightarrow> True) (D_input_meta env) \<rparr>, f_accu_reset env accu))
; (env, accu) = fold_thy0 meta thy_class f (env, accu) in
(env \<lparr> D_input_class := Some meta \<rparr>, accu)
; (env, accu) =
List.fold
(\<lambda>ast. comp_env_save ast (case ast of
META_instance meta \<Rightarrow> fold_thy0 meta thy_instance
| META_def_base_l meta \<Rightarrow> fold_thy0 meta thy_def_base_l
| META_def_state floor meta \<Rightarrow> fold_thy0 meta (thy_def_state floor)
| META_def_pre_post floor meta \<Rightarrow> fold_thy0 meta (thy_def_pre_post floor)
| META_ctxt floor meta \<Rightarrow> fold_thy0 meta (thy_ctxt floor)
| META_flush_all meta \<Rightarrow> fold_thy0 meta thy_flush_all)
f)
l_env
(env \<lparr> D_input_meta := L.flatten [l_class, l_enum] \<rparr>, accu) in
(env \<lparr> D_input_meta := D_input_meta0 \<rparr>, accu)))))"
definition "comp_env_input_class_bind l f =
List.fold (\<lambda>x. x f) l"
definition "fold_thy' f_try f_accu_reset f =
(let comp_env_input_class_mk = comp_env_input_class_mk f_try f_accu_reset in
List.fold (\<lambda> ast.
comp_env_save ast (case ast of
META_enum meta \<Rightarrow> comp_env_input_class_rm (fold_thy0 meta thy_enum_flat)
| META_class_raw Floor1 meta \<Rightarrow> comp_env_input_class_rm (fold_thy0 meta thy_class_flat)
| META_association meta \<Rightarrow> comp_env_input_class_rm (fold_thy0 meta thy_association)
| META_ass_class Floor1 (ToyAssClass meta_ass meta_class) \<Rightarrow>
comp_env_input_class_rm (comp_env_input_class_bind [ fold_thy0 meta_ass thy_association
, fold_thy0 meta_class thy_class_flat ])
| META_class_synonym meta \<Rightarrow> comp_env_input_class_rm (fold_thy0 meta thy_class_synonym)
| META_instance meta \<Rightarrow> comp_env_input_class_mk (fold_thy0 meta thy_instance)
| META_def_base_l meta \<Rightarrow> fold_thy0 meta thy_def_base_l
| META_def_state floor meta \<Rightarrow> comp_env_input_class_mk (fold_thy0 meta (thy_def_state floor))
| META_def_pre_post floor meta \<Rightarrow> fold_thy0 meta (thy_def_pre_post floor)
| META_ctxt floor meta \<Rightarrow> comp_env_input_class_mk (fold_thy0 meta (thy_ctxt floor))
| META_flush_all meta \<Rightarrow> comp_env_input_class_mk (fold_thy0 meta thy_flush_all)) f))"
definition "fold_thy_shallow f_try f_accu_reset x =
fold_thy'
f_try
f_accu_reset
(\<lambda>l acc1.
map_prod (\<lambda> env. env \<lparr> D_input_meta := D_input_meta acc1 \<rparr>) id
o List.fold x l
o Pair acc1)"
definition "fold_thy_deep obj env =
(case fold_thy'
(\<lambda>f. f ())
(\<lambda>env _. D_output_position env)
(\<lambda>l acc1 (i, cpt). (acc1, (Succ i, natural_of_nat (List.length l) + cpt)))
obj
(env, D_output_position env) of
(env, output_position) \<Rightarrow> env \<lparr> D_output_position := output_position \<rparr>)"
end
diff --git a/thys/Isabelle_Meta_Model/toy_example/embedding/Generator_dynamic_sequential.thy b/thys/Isabelle_Meta_Model/toy_example/embedding/Generator_dynamic_sequential.thy
--- a/thys/Isabelle_Meta_Model/toy_example/embedding/Generator_dynamic_sequential.thy
+++ b/thys/Isabelle_Meta_Model/toy_example/embedding/Generator_dynamic_sequential.thy
@@ -1,1713 +1,1717 @@
(******************************************************************************
* Citadelle
*
* Copyright (c) 2011-2018 Université Paris-Saclay, Univ. Paris-Sud, France
* 2013-2017 IRT SystemX, France
* 2011-2015 Achim D. Brucker, Germany
* 2016-2018 The University of Sheffield, UK
* 2016-2017 Nanyang Technological University, Singapore
* 2017-2018 Virginia Tech, USA
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
*
* * Neither the name of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
section\<open>Dynamic Meta Embedding with Reflection\<close>
theory Generator_dynamic_sequential
imports Printer
"../../isabelle_home/src/HOL/Isabelle_Main2"
(*<*)
keywords (* Toy language *)
"Between"
"Attributes" "Operations" "Constraints"
"Role"
"Ordered" "Subsets" "Union" "Redefines" "Derived" "Qualifier"
"Existential" "Inv" "Pre" "Post"
"self"
"Nonunique" "Sequence_"
(* Isabelle syntax *)
"output_directory"
"THEORY" "IMPORTS" "SECTION" "SORRY" "no_dirty"
"deep" "shallow" "syntax_print" "skip_export"
"generation_semantics"
"flush_all"
(* Isabelle semantics (parameterizing the semantics of Toy language) *)
"design" "analysis" "oid_start"
and (* Toy language *)
"Enum"
"Abstract_class" "Class"
"Association" "Composition" "Aggregation"
"Abstract_associationclass" "Associationclass"
"Context"
"End" "Instance" "BaseType" "State" "PrePost"
(* Isabelle syntax *)
"generation_syntax"
:: thy_decl
(*>*)
begin
text\<open>In the ``dynamic'' solution, the export is handled automatically inside Isabelle/jEdit.
Inputs are provided using the syntax of the Toy Language, and for the output
we basically have two options:
\begin{itemize}
\item The first is to generate an Isabelle file for inspection or debugging.
The generated file can be loaded interactively in Isabelle/jEdit, or saved to disk.
This mode is called the ``deep exportation'' mode, or ``deep'' mode for short.
The aim is to automate as much as possible the process one performs manually in
@{file \<open>Generator_static.thy\<close>}.
\item On the other hand, it is also possible to execute the generated file
directly from memory in Isabelle/jEdit.
This mode corresponds to the ``shallow reflection'' mode, or ``shallow'' mode for short.
\end{itemize}
In both modes, reflection is necessary since the main part used by both
was defined on the Isabelle side.
As a consequence, experiments in ``deep'' and ``shallow'' mode are performed
without leaving the editing session, i.e. in the same session as the one in which the meta-compiler is running.\<close>
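text\<open>For orientation, here is a sketch of how a client theory could later select these modes
(the theory name, import paths, and output directory below are purely illustrative assumptions,
not part of this theory; the parser for this syntax is defined in the \<open>Generation_mode\<close>
structure further below):

  generation_syntax [ deep (generation_semantics [ design ])
                           (THEORY Toy_generated)
                           (IMPORTS ["Toy_Library"] "Generator_dynamic_sequential")
                           SECTION
                           [ in SML module_name M ]
                           (output_directory "./doc")
                    , shallow (generation_semantics [ design ]) ]

The ``deep'' entry produces a stand-alone theory file, while the ``shallow'' entry reflects the
same declarations directly into the running session.\<close>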
apply_code_printing_reflect \<open>
val stdout_file = Unsynchronized.ref ""
\<close> text\<open>This variable is not used in this theory (only in @{file \<open>Generator_static.thy\<close>}),
but it is needed for the reflected SML code to typecheck correctly.\<close>
code_reflect' open META
functions (* executing the compiler as monadic combinators for deep and shallow *)
fold_thy_deep fold_thy_shallow
(* printing the HOL AST to (shallow Isabelle) string *)
write_file
(* manipulating the compiling environment *)
compiler_env_config_reset_all
compiler_env_config_update
oidInit
D_output_header_thy_update
map2_ctxt_term
check_export_code
(* printing the TOY AST to (deep Isabelle) string *)
isabelle_apply isabelle_of_compiler_env_config
subsection\<open>Interface Between the Reflected and the Native\<close>
ML\<open>
val To_string0 = META.meta_of_logic
\<close>
ML\<open>
structure From = struct
val string = META.SS_base o META.ST
val binding = string o Binding.name_of
(*fun term ctxt s = string (XML.content_of (YXML.parse_body (Syntax.string_of_term ctxt s)))*)
val internal_oid = META.Oid o Code_Numeral.natural_of_integer
val option = Option.map
val list = List.map
fun pair f1 f2 (x, y) = (f1 x, f2 y)
fun pair3 f1 f2 f3 (x, y, z) = (f1 x, f2 y, f3 z)
structure Pure = struct
val indexname = pair string Code_Numeral.natural_of_integer
val class = string
val sort = list class
fun typ e = (fn
Type (s, l) => (META.Type o pair string (list typ)) (s, l)
| TFree (s, s0) => (META.TFree o pair string sort) (s, s0)
| TVar (i, s0) => (META.TVar o pair indexname sort) (i, s0)
) e
fun term e = (fn
Const (s, t) => (META.Const o pair string typ) (s, t)
| Free (s, t) => (META.Free o pair string typ) (s, t)
| Var (i, t) => (META.Var o pair indexname typ) (i, t)
| Bound i => (META.Bound o Code_Numeral.natural_of_integer) i
| Abs (s, ty, t) => (META.Abs o pair3 string typ term) (s, ty, t)
| op $ (term1, term2) => (META.App o pair term term) (term1, term2)
) e
end
fun toy_ctxt_term thy expr =
META.T_pure (Pure.term (Syntax.read_term (Proof_Context.init_global thy) expr))
end
\<close>
ML\<open>fun List_mapi f = META.mapi (f o Code_Numeral.integer_of_natural)\<close>
ML\<open>
structure Ty' = struct
fun check l_oid l =
let val Mp = META.map_prod
val Me = String.explode
val Mi = String.implode
val Ml = map in
META.check_export_code
(writeln o Mi)
(warning o Mi)
(fn s => writeln (Markup.markup (Markup.bad ()) (Mi s)))
(error o To_string0)
(Ml (Mp I Me) l_oid)
((META.SS_base o META.ST) l)
end
end
\<close>
subsection\<open>Binding of the Reflected API to the Native API\<close>
ML\<open>
structure META_overload = struct
val of_semi__typ = META.of_semi_typ To_string0
val of_semi__term = META.of_semi_terma To_string0
val of_semi__term' = META.of_semi_term To_string0
val fold = fold
end
\<close>
ML\<open>
structure Bind_Isabelle = struct
fun To_binding s = Binding.make (s, Position.none)
val To_sbinding = To_binding o To_string0
fun semi__method_simp g f = Method.Basic (fn ctxt => SIMPLE_METHOD (g (asm_full_simp_tac (f ctxt))))
val semi__method_simp_one = semi__method_simp (fn f => f 1)
val semi__method_simp_all = semi__method_simp (CHANGED_PROP o PARALLEL_ALLGOALS)
datatype semi__thm' = Thms_single' of thm
| Thms_mult' of thm list
fun semi__thm_attribute ctxt = let open META open META_overload val S = fn Thms_single' t => t
val M = fn Thms_mult' t => t in
fn Thm_thm s => Thms_single' (Proof_Context.get_thm ctxt (To_string0 s))
| Thm_thms s => Thms_mult' (Proof_Context.get_thms ctxt (To_string0 s))
| Thm_THEN (e1, e2) =>
(case (semi__thm_attribute ctxt e1, semi__thm_attribute ctxt e2) of
(Thms_single' e1, Thms_single' e2) => Thms_single' (e1 RSN (1, e2))
| (Thms_mult' e1, Thms_mult' e2) => Thms_mult' (e1 RLN (1, e2)))
| Thm_simplified (e1, e2) =>
Thms_single' (asm_full_simplify (clear_simpset ctxt addsimps [S (semi__thm_attribute ctxt e2)])
(S (semi__thm_attribute ctxt e1)))
| Thm_OF (e1, e2) =>
Thms_single' ([S (semi__thm_attribute ctxt e2)] MRS (S (semi__thm_attribute ctxt e1)))
| Thm_where (nth, l) =>
Thms_single' (Rule_Insts.where_rule
ctxt
(List.map (fn (var, expr) =>
(((To_string0 var, 0), Position.none), of_semi__term expr)) l)
[]
(S (semi__thm_attribute ctxt nth)))
| Thm_symmetric e1 =>
let val e2 = S (semi__thm_attribute ctxt (Thm_thm (From.string "sym"))) in
case semi__thm_attribute ctxt e1 of
Thms_single' e1 => Thms_single' (e1 RSN (1, e2))
| Thms_mult' e1 => Thms_mult' (e1 RLN (1, [e2]))
end
| Thm_of (nth, l) =>
Thms_single' (Rule_Insts.of_rule
ctxt
(List.map (SOME o of_semi__term) l, [])
[]
(S (semi__thm_attribute ctxt nth)))
end
fun semi__thm_attribute_single ctxt s = case (semi__thm_attribute ctxt s) of Thms_single' t => t
fun semi__thm_mult ctxt =
let fun f thy = case (semi__thm_attribute ctxt thy) of Thms_mult' t => t
| Thms_single' t => [t] in
fn META.Thms_single thy => f thy
| META.Thms_mult thy => f thy
end
fun semi__thm_mult_l ctxt l = List.concat (map (semi__thm_mult ctxt) l)
fun semi__method_simp_only l ctxt = clear_simpset ctxt addsimps (semi__thm_mult_l ctxt l)
fun semi__method_simp_add_del_split (l_add, l_del, l_split) ctxt =
fold Splitter.add_split (semi__thm_mult_l ctxt l_split)
(ctxt addsimps (semi__thm_mult_l ctxt l_add)
delsimps (semi__thm_mult_l ctxt l_del))
fun semi__method expr = let open META open Method open META_overload in case expr of
Method_rule o_s => Basic (fn ctxt =>
METHOD (HEADGOAL o Classical.rule_tac
ctxt
(case o_s of NONE => []
| SOME s => [semi__thm_attribute_single ctxt s])))
| Method_drule s => Basic (fn ctxt => drule ctxt 0 [semi__thm_attribute_single ctxt s])
| Method_erule s => Basic (fn ctxt => erule ctxt 0 [semi__thm_attribute_single ctxt s])
| Method_elim s => Basic (fn ctxt => elim ctxt [semi__thm_attribute_single ctxt s])
| Method_intro l => Basic (fn ctxt => intro ctxt (map (semi__thm_attribute_single ctxt) l))
| Method_subst (asm, l, s) => Basic (fn ctxt =>
SIMPLE_METHOD' ((if asm then EqSubst.eqsubst_asm_tac else EqSubst.eqsubst_tac)
ctxt
(map (fn s => case Int.fromString (To_string0 s) of
SOME i => i) l)
[semi__thm_attribute_single ctxt s]))
| Method_insert l => Basic (fn ctxt => insert (semi__thm_mult_l ctxt l))
| Method_plus t => Combinator ( no_combinator_info
, Repeat1
, [Combinator (no_combinator_info, Then, List.map semi__method t)])
| Method_option t => Combinator ( no_combinator_info
, Try
, [Combinator (no_combinator_info, Then, List.map semi__method t)])
| Method_or t => Combinator (no_combinator_info, Orelse, List.map semi__method t)
| Method_one (Method_simp_only l) => semi__method_simp_one (semi__method_simp_only l)
| Method_one (Method_simp_add_del_split l) => semi__method_simp_one (semi__method_simp_add_del_split l)
| Method_all (Method_simp_only l) => semi__method_simp_all (semi__method_simp_only l)
| Method_all (Method_simp_add_del_split l) => semi__method_simp_all (semi__method_simp_add_del_split l)
| Method_auto_simp_add_split (l_simp, l_split) =>
Basic (fn ctxt => SIMPLE_METHOD (auto_tac (fold (fn (f, l) => fold f l)
[(Simplifier.add_simp, semi__thm_mult_l ctxt l_simp)
,(Splitter.add_split, List.map (Proof_Context.get_thm ctxt o To_string0) l_split)]
ctxt)))
| Method_rename_tac l => Basic (K (SIMPLE_METHOD' (Tactic.rename_tac (List.map To_string0 l))))
| Method_case_tac e =>
Basic (fn ctxt => SIMPLE_METHOD' (Induct_Tacs.case_tac ctxt (of_semi__term e) [] NONE))
| Method_blast n =>
Basic (case n of NONE => SIMPLE_METHOD' o blast_tac
| SOME lim => fn ctxt => SIMPLE_METHOD' (depth_tac ctxt (Code_Numeral.integer_of_natural lim)))
| Method_clarify => Basic (fn ctxt => (SIMPLE_METHOD' (fn i => CHANGED_PROP (clarify_tac ctxt i))))
| Method_metis (l_opt, l) =>
Basic (fn ctxt => (METHOD oo Metis_Tactic.metis_method)
( (if l_opt = [] then NONE else SOME (map To_string0 l_opt), NONE)
, map (semi__thm_attribute_single ctxt) l)
ctxt)
end
fun then_tactic l = let open Method in
(Combinator (no_combinator_info, Then, map semi__method l), (Position.none, Position.none))
end
fun local_terminal_proof o_by = let open META in case o_by of
Command_done => Proof.local_done_proof
| Command_sorry => Proof.local_skip_proof true
| Command_by l_apply => Proof.local_terminal_proof (then_tactic l_apply, NONE)
end
fun global_terminal_proof o_by = let open META in case o_by of
Command_done => Proof.global_done_proof
| Command_sorry => Proof.global_skip_proof true
| Command_by l_apply => Proof.global_terminal_proof (then_tactic l_apply, NONE)
end
fun proof_show_gen f (thes, thes_when) st = st
|> Proof.proof
(SOME ( Method.Source [Token.make_string ("-", Position.none)]
, (Position.none, Position.none)))
|> Seq.the_result ""
|> f
|> Proof.show_cmd
(thes_when = [])
NONE
(K I)
[]
(if thes_when = [] then [] else [(Binding.empty_atts, map (fn t => (t, [])) thes_when)])
[(Binding.empty_atts, [(thes, [])])]
true
|> snd
val semi__command_state = let open META_overload in
fn META.Command_apply_end l => (fn st => st |> Proof.apply_end (then_tactic l)
|> Seq.the_result "")
end
val semi__command_proof = let open META_overload
val thesis = "?thesis"
fun proof_show f = proof_show_gen f (thesis, []) in
fn META.Command_apply l => (fn st => st |> Proof.apply (then_tactic l)
|> Seq.the_result "")
| META.Command_using l => (fn st =>
let val ctxt = Proof.context_of st in
Proof.using [map (fn s => ([ s], [])) (semi__thm_mult_l ctxt l)] st
end)
| META.Command_unfolding l => (fn st =>
let val ctxt = Proof.context_of st in
Proof.unfolding [map (fn s => ([s], [])) (semi__thm_mult_l ctxt l)] st
end)
| META.Command_let (e1, e2) =>
proof_show (Proof.let_bind_cmd [([of_semi__term e1], of_semi__term e2)])
| META.Command_have (n, b, e, e_pr) => proof_show (fn st => st
|> Proof.have_cmd true NONE (K I) [] []
[( (To_sbinding n, if b then [[Token.make_string ("simp", Position.none)]] else [])
, [(of_semi__term e, [])])]
true
|> snd
|> local_terminal_proof e_pr)
| META.Command_fix_let (l, l_let, o_exp, _) =>
proof_show_gen ( fold (fn (e1, e2) =>
Proof.let_bind_cmd [([of_semi__term e1], of_semi__term e2)])
l_let
o Proof.fix_cmd (List.map (fn i => (To_sbinding i, NONE, NoSyn)) l))
( case o_exp of NONE => thesis | SOME (l_spec, _) =>
(String.concatWith (" \<Longrightarrow> ")
(List.map of_semi__term l_spec))
, case o_exp of NONE => [] | SOME (_, l_when) => List.map of_semi__term l_when)
end
fun semi__theory in_theory in_local = let open META open META_overload in (*let val f = *)fn
Theory_datatype (Datatype (n, l)) => in_local
(BNF_FP_Def_Sugar.co_datatype_cmd
BNF_Util.Least_FP
BNF_LFP.construct_lfp
(Ctr_Sugar.default_ctr_options_cmd,
[( ( ( (([], To_sbinding n), NoSyn)
, List.map (fn (n, l) => ( ( (To_binding "", To_sbinding n)
, List.map (fn s => (To_binding "", of_semi__typ s)) l)
, NoSyn)) l)
, (To_binding "", To_binding "", To_binding ""))
, [])]))
| Theory_type_synonym (Type_synonym (n, v, l)) => in_theory
(fn thy =>
let val s_bind = To_sbinding n in
(snd o Typedecl.abbrev_global
(s_bind, map To_string0 v, NoSyn)
(Isabelle_Typedecl.abbrev_cmd0 (SOME s_bind) thy (of_semi__typ l))) thy
end)
| Theory_type_notation (Type_notation (n, e)) => in_local
(Specification.type_notation_cmd true ("", true) [(To_string0 n, Mixfix (Input.string (To_string0 e), [], 1000, Position.no_range))])
| Theory_instantiation (Instantiation (n, n_def, expr)) => in_theory
(fn thy =>
let val name = To_string0 n
val tycos =
[ let val Term.Type (s, _) = (Isabelle_Typedecl.abbrev_cmd0 NONE thy name) in s end ] in
thy
|> Class.instantiation (tycos, [], Syntax.read_sort (Proof_Context.init_global thy) "object")
|> fold_map (fn _ => fn thy =>
let val ((_, (_, ty)), thy) = Specification.definition_cmd
NONE [] []
((To_binding (To_string0 n_def ^ "_" ^ name ^ "_def"), [])
, of_semi__term expr) false thy in
(ty, thy)
end) tycos
|-> Class.prove_instantiation_exit_result (map o Morphism.thm) (fn ctxt => fn thms =>
Class.intro_classes_tac ctxt [] THEN ALLGOALS (Proof_Context.fact_tac ctxt thms))
|-> K I
end)
| Theory_overloading (Overloading (n_c, e_c, n, e)) => in_theory
(fn thy => thy
|> Overloading.overloading_cmd [(To_string0 n_c, of_semi__term e_c, true)]
|> snd o Specification.definition_cmd NONE [] [] ((To_sbinding n, []), of_semi__term e) false
|> Local_Theory.exit_global)
| Theory_consts (Consts (n, ty, symb)) => in_theory
(Sign.add_consts_cmd [( To_sbinding n
, of_semi__typ ty
, Mixfix (Input.string ("(_) " ^ To_string0 symb), [], 1000, Position.no_range))])
| Theory_definition def => in_local
let val (def, e) = case def of
Definition e => (NONE, e)
| Definition_where1 (name, (abbrev, prio), e) =>
(SOME ( To_sbinding name
, NONE
, Mixfix (Input.string ("(1" ^ of_semi__term abbrev ^ ")"), [], Code_Numeral.integer_of_natural prio, Position.no_range)), e)
| Definition_where2 (name, abbrev, e) =>
(SOME ( To_sbinding name
, NONE
, Mixfix (Input.string ("(" ^ of_semi__term abbrev ^ ")"), [], 1000, Position.no_range)), e) in
(snd o Specification.definition_cmd def [] [] (Binding.empty_atts, of_semi__term e) false)
end
| Theory_lemmas (Lemmas_simp_thm (simp, s, l)) => in_local
(fn lthy => (snd o Specification.theorems Thm.theoremK
[((To_sbinding s, List.map (fn s => Attrib.check_src lthy [Token.make_string (s, Position.none)])
(if simp then ["simp", "code_unfold"] else [])),
List.map (fn x => ([semi__thm_attribute_single lthy x], [])) l)]
[]
false) lthy)
| Theory_lemmas (Lemmas_simp_thms (s, l)) => in_local
(fn lthy => (snd o Specification.theorems Thm.theoremK
[((To_sbinding s, List.map (fn s => Attrib.check_src lthy [Token.make_string (s, Position.none)])
["simp", "code_unfold"]),
List.map (fn x => (Proof_Context.get_thms lthy (To_string0 x), [])) l)]
[]
false) lthy)
| Theory_lemma (Lemma (n, l_spec, l_apply, o_by)) => in_local
(fn lthy =>
Specification.theorem_cmd true Thm.theoremK NONE (K I)
Binding.empty_atts [] [] (Element.Shows [((To_sbinding n, [])
,[((String.concatWith (" \<Longrightarrow> ")
(List.map of_semi__term l_spec)), [])])])
false lthy
|> fold (semi__command_proof o META.Command_apply) l_apply
|> global_terminal_proof o_by)
| Theory_lemma (Lemma_assumes (n, l_spec, concl, l_apply, o_by)) => in_local
(fn lthy => lthy
|> Specification.theorem_cmd true Thm.theoremK NONE (K I)
(To_sbinding n, [])
[]
(List.map (fn (n, (b, e)) =>
Element.Assumes [( ( To_sbinding n
, if b then [[Token.make_string ("simp", Position.none)]] else [])
, [(of_semi__term e, [])])])
l_spec)
(Element.Shows [(Binding.empty_atts, [(of_semi__term concl, [])])])
false
|> fold semi__command_proof l_apply
|> (case map_filter (fn META.Command_let _ => SOME []
| META.Command_have _ => SOME []
| META.Command_fix_let (_, _, _, l) => SOME l
| _ => NONE)
(rev l_apply) of
[] => global_terminal_proof o_by
| _ :: l => let val arg = (NONE, true) in fn st => st
|> local_terminal_proof o_by
|> fold (fn l => fold semi__command_state l o Proof.local_qed arg) l
|> Proof.global_qed arg end))
| Theory_axiomatization (Axiomatization (n, e)) => in_theory
(#2 o Specification.axiomatization_cmd [] [] [] [((To_sbinding n, []), of_semi__term e)])
| Theory_section _ => in_theory I
| Theory_text _ => in_theory I
| Theory_ML (SML ml) =>
in_theory (Code_printing.reflect_ml (Input.source false (of_semi__term' ml)
(Position.none, Position.none)))
| Theory_setup (Setup ml) =>
in_theory (Isar_Cmd.setup (Input.source false (of_semi__term' ml)
(Position.none, Position.none)))
| Theory_thm (Thm thm) => in_local
(fn lthy =>
let val () =
writeln
(Pretty.string_of
(Proof_Context.pretty_fact lthy ("", List.map (semi__thm_attribute_single lthy) thm))) in
lthy
end)
| Theory_interpretation (Interpretation (n, loc_n, loc_param, o_by)) => in_local
(fn lthy => lthy
|> Interpretation.interpretation_cmd ( [ ( (To_string0 loc_n, Position.none)
, ( (To_string0 n, true)
, (if loc_param = [] then
Expression.Named []
else
Expression.Positional (map (SOME o of_semi__term)
loc_param), [])))]
, [])
|> global_terminal_proof o_by)
(*in fn t => fn thy => f t thy handle ERROR s => (warning s; thy)
end*)
end
end
structure Bind_META = struct open Bind_Isabelle
fun all_meta aux ret = let open META open META_overload in fn
META_semi_theories thy =>
ret o (case thy of
Theories_one thy => semi__theory I Named_Target.theory_map thy
| Theories_locale (data, l) => fn thy => thy
|> ( Expression.add_locale_cmd
(To_sbinding (META.holThyLocale_name data))
Binding.empty
([], [])
(List.concat
(map
(fn (fixes, assumes) => List.concat
[ map (fn (e,ty) => Element.Fixes [( To_binding (of_semi__term e)
, SOME (of_semi__typ ty)
, NoSyn)]) fixes
, case assumes of NONE => []
| SOME (n, e) => [Element.Assumes [( (To_sbinding n, [])
, [(of_semi__term e, [])])]]])
(META.holThyLocale_header data)))
#> snd)
|> fold (fold (semi__theory Local_Theory.background_theory
- (fn f => fn lthy => lthy
- |> Local_Theory.new_group
- |> f
- |> Local_Theory.reset_group))) l
+ (fn f =>
+ \<comment> \<open>Note: This function is not equivalent to \<^ML>\<open>Local_Theory.subtarget\<close>.\<close>
+ Local_Theory.new_group
+ #> f
+ #> Local_Theory.reset_group
+ #> (fn lthy =>
+ #1 (Named_Target.switch NONE (Context.Proof lthy)) lthy
+ |> Context.the_proof)))) l
|> Local_Theory.exit_global)
| META_boot_generation_syntax _ => ret o I
| META_boot_setup_env _ => ret o I
| META_all_meta_embedding meta => fn thy =>
aux
(map2_ctxt_term
(fn T_pure x => T_pure x
| e =>
let fun aux e = case e of
T_to_be_parsed (s, _) => SOME let val t = Syntax.read_term (Proof_Context.init_global thy)
(To_string0 s) in
(t, Term.add_frees t [])
end
| T_lambda (a, e) =>
Option.map
(fn (e, l_free) =>
let val a = To_string0 a
val (t, l_free) = case List.partition (fn (x, _) => x = a) l_free of
([], l_free) => (Term.TFree ("'a", ["HOL.type"]), l_free)
| ([(_, t)], l_free) => (t, l_free) in
(lambda (Term.Free (a, t)) e, l_free)
end)
(aux e)
| _ => NONE in
case aux e of
NONE => error "nested pure expression not expected"
| SOME (e, _) => META.T_pure (From.Pure.term e)
end) meta) thy
end
end
\<close>
(*<*)
subsection\<open>Directives of Compilation for Target Languages\<close>
ML\<open>
structure Deep0 = struct
fun apply_hs_code_identifiers ml_module thy =
let fun mod_hs (fic, ml_module) = Code_Symbol.Module (fic, [("Haskell", SOME ml_module)]) in
fold (Code_Target.set_identifiers o mod_hs)
(map (fn x => (Context.theory_name x, ml_module))
(* list of .hs files that will be merged together in "ml_module" *)
( thy
:: (* we over-approximate the set of compiler files *)
Context.ancestors_of thy)) thy end
val default_key = ""
structure Export_code_env = struct
structure Isabelle = struct
val function = "write_file"
val argument_main = "main"
end
structure Haskell = struct
val function = "Function"
val argument = "Argument"
val main = "Main"
structure Filename = struct
fun hs_function ext = function ^ "." ^ ext
fun hs_argument ext = argument ^ "." ^ ext
fun hs_main ext = main ^ "." ^ ext
end
end
structure OCaml = struct
val make = "write"
structure Filename = struct
fun function ext = "function." ^ ext
fun argument ext = "argument." ^ ext
fun main_fic ext = "main." ^ ext
fun makefile ext = make ^ "." ^ ext
end
end
structure Scala = struct
structure Filename = struct
fun function ext = "Function." ^ ext
fun argument ext = "Argument." ^ ext
end
end
structure SML = struct
val main = "Run"
structure Filename = struct
fun function ext = "Function." ^ ext
fun argument ext = "Argument." ^ ext
fun stdout ext = "Stdout." ^ ext
fun main_fic ext = main ^ "." ^ ext
end
end
datatype file_input = File
| Directory
end
fun compile l cmd =
let val (l, rc) = fold (fn cmd => (fn (l, 0) =>
let val {out, err, rc, ...} = Bash.process cmd in
((out, err) :: l, rc) end
| x => x)) l ([], 0)
val l = rev l in
if rc = 0 then
(l, Isabelle_System.bash_output cmd)
else
let val () = fold (fn (out, err) => K (warning err; writeln out)) l () in
error "Compilation failed"
end
end
val check =
fold (fn (cmd, msg) => fn () =>
let val (out, rc) = Isabelle_System.bash_output cmd in
if rc = 0 then
()
else
( writeln out
; error msg)
end)
val compiler = let open Export_code_env in
[ let val ml_ext = "hs" in
( "Haskell", ml_ext, Directory, Haskell.Filename.hs_function
, check [("ghc --version", "ghc is not installed (required for compiling a Haskell project)")]
, (fn mk_fic => fn ml_module => fn mk_free => fn thy =>
File.write (mk_fic ("Main." ^ ml_ext))
(String.concatWith "; " [ "import qualified Unsafe.Coerce"
, "import qualified " ^ Haskell.function
, "import qualified " ^ Haskell.argument
, "main :: IO ()"
, "main = " ^ Haskell.function ^ "." ^ Isabelle.function ^
" (Unsafe.Coerce.unsafeCoerce " ^ Haskell.argument ^ "." ^
mk_free (Proof_Context.init_global thy)
Isabelle.argument_main ([]: (string * string) list) ^
")"]))
, fn tmp_export_code => fn tmp_file =>
compile [ "mv " ^ tmp_file ^ "/" ^ Haskell.Filename.hs_argument ml_ext ^ " " ^
Path.implode tmp_export_code
, "cd " ^ Path.implode tmp_export_code ^
" && ghc -outputdir _build " ^ Haskell.Filename.hs_main ml_ext ]
(Path.implode (Path.append tmp_export_code (Path.make [Haskell.main]))))
end
, let val ml_ext = "ml" in
( "OCaml", ml_ext, File, OCaml.Filename.function
, check [("ocp-build -version", "ocp-build is not installed (required for compiling an OCaml project)")
,("ocamlopt -version", "ocamlopt is not installed (required for compiling an OCaml project)")]
, fn mk_fic => fn ml_module => fn mk_free => fn thy =>
let val () =
File.write
(mk_fic (OCaml.Filename.makefile "ocp"))
(String.concat
[ "comp += \"-g\" link += \"-g\" "
, "begin generated = true begin library \"nums\" end end "
, "begin program \"", OCaml.make, "\" sort = true files = [ \"", OCaml.Filename.function ml_ext
, "\" \"", OCaml.Filename.argument ml_ext
, "\" \"", OCaml.Filename.main_fic ml_ext
, "\" ]"
, "requires = [\"nums\"] "
, "end" ]) in
File.write (mk_fic (OCaml.Filename.main_fic ml_ext))
("let _ = Function." ^ ml_module ^ "." ^ Isabelle.function ^
" (Obj.magic (Argument." ^ ml_module ^ "." ^
mk_free (Proof_Context.init_global thy)
Isabelle.argument_main
([]: (string * string) list) ^ "))")
end
, fn tmp_export_code => fn tmp_file =>
compile
[ "mv " ^ tmp_file ^ " " ^
Path.implode (Path.append tmp_export_code (Path.make [OCaml.Filename.argument ml_ext]))
, "cd " ^ Path.implode tmp_export_code ^
" && ocp-build -init -scan -no-bytecode 2>&1" ]
(Path.implode (Path.append tmp_export_code (Path.make [ "_obuild"
, OCaml.make
, OCaml.make ^ ".asm"]))))
end
, let val ml_ext = "scala"
val ml_module = Unsynchronized.ref ("", "") in
( "Scala", ml_ext, File, Scala.Filename.function
, check [("scala -e 0", "scala is not installed (required for compiling a Scala project)")]
, (fn _ => fn ml_mod => fn mk_free => fn thy =>
ml_module := (ml_mod, mk_free (Proof_Context.init_global thy)
Isabelle.argument_main
([]: (string * string) list)))
, fn tmp_export_code => fn tmp_file =>
let val l = File.read_lines (Path.explode tmp_file)
val (ml_module, ml_main) = Unsynchronized.! ml_module
val () =
File.write_list
(Path.append tmp_export_code (Path.make [Scala.Filename.argument ml_ext]))
(List.map
(fn s => s ^ "\n")
("object " ^ ml_module ^ " { def main (__ : Array [String]) = " ^
ml_module ^ "." ^ Isabelle.function ^ " (" ^ ml_module ^ "." ^ ml_main ^ ")"
:: l @ ["}"])) in
compile []
("scala -nowarn " ^ Path.implode (Path.append tmp_export_code
(Path.make [Scala.Filename.argument ml_ext])))
end)
end
, let val ml_ext_thy = "thy"
val ml_ext_ml = "ML" in
( "SML", ml_ext_ml, File, SML.Filename.function
, check [ let val isa = "isabelle" in
( Path.implode (Path.expand (Path.append (Path.variable "ISABELLE_HOME") (Path.make ["bin", isa]))) ^ " version"
, isa ^ " is not installed (required for compiling a SML project)")
end ]
, fn mk_fic => fn ml_module => fn mk_free => fn thy =>
let val esc_star = "*"
fun ml l =
List.concat
[ [ "ML{" ^ esc_star ]
, map (fn s => s ^ ";") l
, [ esc_star ^ "}"] ]
val () =
let val fic = mk_fic (SML.Filename.function ml_ext_ml) in
(* replace ("\\" ^ "<") by ("\\\060") in 'fic' *)
File.write_list fic
(map (fn s =>
(if s = "" then
""
else
String.concatWith "\\"
(map (fn s =>
let val l = String.size s in
if l > 0 andalso String.sub (s,0) = #"<" then
"\\060" ^ String.substring (s, 1, String.size s - 1)
else
s end)
(String.fields (fn c => c = #"\\") s))) ^ "\n")
(File.read_lines fic))
end in
File.write_list (mk_fic (SML.Filename.main_fic ml_ext_thy))
(map (fn s => s ^ "\n") (List.concat
[ [ "theory " ^ SML.main
, "imports Main"
, "begin"
, "declare [[ML_print_depth = 500]]"
(* any large number so that @{make_string} displays the whole expression *) ]
, ml [ "val stdout_file = Unsynchronized.ref (File.read (Path.make [\"" ^
SML.Filename.stdout ml_ext_ml ^ "\"]))"
, "use \"" ^ SML.Filename.argument ml_ext_ml ^ "\"" ]
, ml let val arg = "argument" in
[ "val " ^ arg ^ " = XML.content_of (YXML.parse_body (@{make_string} (" ^
ml_module ^ "." ^
mk_free (Proof_Context.init_global thy)
Isabelle.argument_main
([]: (string * string) list) ^ ")))"
, "use \"" ^ SML.Filename.function ml_ext_ml ^ "\""
, "ML_Context.eval_source (ML_Compiler.verbose false ML_Compiler.flags) (Input.source false (\"let open " ^
ml_module ^ " in " ^ Isabelle.function ^ " (\" ^ " ^ arg ^
" ^ \") end\") (Position.none, Position.none) )" ]
end
, [ "end" ]]))
end
, fn tmp_export_code => fn tmp_file =>
let
val stdout_file = Isabelle_System.create_tmp_path "stdout_file" "thy"
val () = File.write (Path.append tmp_export_code (Path.make [SML.Filename.stdout ml_ext_ml]))
(Path.implode (Path.expand stdout_file))
val (l, (_, exit_st)) =
compile
[ "mv " ^ tmp_file ^ " " ^ Path.implode (Path.append tmp_export_code
(Path.make [SML.Filename.argument ml_ext_ml]))
, "cd " ^ Path.implode tmp_export_code ^
" && echo 'use_thy \"" ^ SML.main ^ "\";' | " ^
Path.implode (Path.expand (Path.append (Path.variable "ISABELLE_HOME") (Path.make ["bin", "isabelle"]))) ^
" console" ]
"true"
val stdout =
case try File.read stdout_file of
SOME s => let val () = File.rm stdout_file in s end
| NONE => "" in
(l, (stdout, if List.exists (fn (err, _) =>
List.exists (fn "*** Error" => true | _ => false)
(String.tokens (fn #"\n" => true | _ => false) err)) l then
let val () = fold (fn (out, err) => K (warning err; writeln out)) l () in
1
end
else exit_st))
end)
end ]
end
structure Find = struct
fun ext ml_compiler =
case List.find (fn (ml_compiler0, _, _, _, _, _, _) => ml_compiler0 = ml_compiler) compiler of
SOME (_, ext, _, _, _, _, _) => ext
fun export_mode ml_compiler =
case List.find (fn (ml_compiler0, _, _, _, _, _, _) => ml_compiler0 = ml_compiler) compiler of
SOME (_, _, mode, _, _, _, _) => mode
fun function ml_compiler =
case List.find (fn (ml_compiler0, _, _, _, _, _, _) => ml_compiler0 = ml_compiler) compiler of
SOME (_, _, _, f, _, _, _) => f
fun check_compil ml_compiler =
case List.find (fn (ml_compiler0, _, _, _, _, _, _) => ml_compiler0 = ml_compiler) compiler of
SOME (_, _, _, _, build, _, _) => build
fun init ml_compiler =
case List.find (fn (ml_compiler0, _, _, _, _, _, _) => ml_compiler0 = ml_compiler) compiler of
SOME (_, _, _, _, _, build, _) => build
fun build ml_compiler =
case List.find (fn (ml_compiler0, _, _, _, _, _, _) => ml_compiler0 = ml_compiler) compiler of
SOME (_, _, _, _, _, _, build) => build
end
end
\<close>
ML\<open>
structure Deep = struct
fun absolute_path filename thy =
Path.implode (Path.append (Resources.master_directory thy) (Path.explode filename))
fun export_code_tmp_file seris g =
fold
(fn ((ml_compiler, ml_module), export_arg) => fn f => fn g =>
f (fn accu =>
let val tmp_name = Context.theory_name @{theory} in
(if Deep0.Find.export_mode ml_compiler = Deep0.Export_code_env.Directory then
Isabelle_System.with_tmp_dir tmp_name
else
Isabelle_System.with_tmp_file tmp_name (Deep0.Find.ext ml_compiler))
(fn filename =>
g (((((ml_compiler, ml_module), Path.implode filename), export_arg) :: accu)))
end))
seris
(fn f => f [])
(g o rev)
fun mk_path_export_code tmp_export_code ml_compiler i =
Path.append tmp_export_code (Path.make [ml_compiler ^ Int.toString i])
fun export_code_cmd' seris tmp_export_code f_err filename_thy raw_cs thy =
export_code_tmp_file seris
(fn seris =>
let val mem_scala = List.exists (fn ((("Scala", _), _), _) => true | _ => false) seris
val thy' (* FIXME unused *) = Isabelle_Code_Target.export_code_cmd
false
(if mem_scala then Deep0.Export_code_env.Isabelle.function :: raw_cs else raw_cs)
((map o apfst o apsnd) SOME seris)
(let val v = Deep0.apply_hs_code_identifiers Deep0.Export_code_env.Haskell.argument thy in
if mem_scala then Code_printing.apply_code_printing v else v end) in
List_mapi
(fn i => fn seri => case seri of (((ml_compiler, _), filename), _) =>
let val (l, (out, err)) =
Deep0.Find.build
ml_compiler
(mk_path_export_code tmp_export_code ml_compiler i)
filename
val _ = f_err seri err in
(l, out)
end) seris
end)
fun mk_term ctxt s =
fst (Scan.pass (Context.Proof ctxt) Args.term (Token.explode0 (Thy_Header.get_keywords' ctxt) s))
fun mk_free ctxt s l =
let val t_s = mk_term ctxt s in
if Term.is_Free t_s then s else
let val l = (s, "") :: l in
mk_free ctxt (fst (hd (Term.variant_frees t_s l))) l
end
end
val list_all_eq = fn x0 :: xs =>
List.all (fn x1 => x0 = x1) xs
end
\<close>
subsection\<open>Saving the History of Meta Commands\<close>
ML\<open>
fun p_gen f g = f "[" "]" g
(*|| f "{" "}" g*)
|| f "(" ")" g
fun paren f = p_gen (fn s1 => fn s2 => fn f => Parse.$$$ s1 |-- f --| Parse.$$$ s2) f
fun parse_l f = Parse.$$$ "[" |-- Parse.!!! (Parse.list f --| Parse.$$$ "]")
fun parse_l' f = Parse.$$$ "[" |-- Parse.list f --| Parse.$$$ "]"
fun parse_l1' f = Parse.$$$ "[" |-- Parse.list1 f --| Parse.$$$ "]"
fun annot_ty f = Parse.$$$ "(" |-- f --| Parse.$$$ "::" -- Parse.binding --| Parse.$$$ ")"
\<close>
ML\<open>
structure Generation_mode = struct
datatype internal_deep = Internal_deep of
(string * (string list (* imports *) * string (* import optional (bootstrap) *))) option
* ((bstring (* compiler *) * bstring (* main module *) ) * Token.T list) list (* seri_args *)
* bstring option (* filename_thy *)
* Path.T (* tmp dir export_code *)
* bool (* true: skip preview of code exportation *)
datatype 'a generation_mode = Gen_deep of unit META.compiler_env_config_ext
* internal_deep
| Gen_shallow of unit META.compiler_env_config_ext
* 'a (* theory init *)
| Gen_syntax_print of int option
structure Data_gen = Theory_Data
(type T = theory generation_mode list Symtab.table
val empty = Symtab.empty
val extend = I
val merge = Symtab.merge (K true))
val code_expr_argsP = Scan.optional (@{keyword "("} |-- Parse.args --| @{keyword ")"}) []
val parse_scheme =
@{keyword "design"} >> K META.Gen_only_design || @{keyword "analysis"} >> K META.Gen_only_analysis
val parse_sorry_mode =
Scan.optional ( @{keyword "SORRY"} >> K (SOME META.Gen_sorry)
|| @{keyword "no_dirty"} >> K (SOME META.Gen_no_dirty)) NONE
val parse_deep =
Scan.optional (@{keyword "skip_export"} >> K true) false
-- Scan.optional (((Parse.$$$ "(" -- @{keyword "THEORY"}) |-- Parse.name -- ((Parse.$$$ ")"
-- Parse.$$$ "(" -- @{keyword "IMPORTS"}) |-- parse_l' Parse.name -- Parse.name)
--| Parse.$$$ ")") >> SOME) NONE
-- Scan.optional (@{keyword "SECTION"} >> K true) false
-- parse_sorry_mode
-- (* code_expr_inP *) parse_l1' (@{keyword "in"} |-- (Parse.name
-- Scan.optional (@{keyword "module_name"} |-- Parse.name) ""
-- code_expr_argsP))
-- Scan.optional
((Parse.$$$ "(" -- @{keyword "output_directory"}) |-- Parse.name --| Parse.$$$ ")" >> SOME)
NONE
val parse_semantics =
let val z = 0 in
Scan.optional
(paren (@{keyword "generation_semantics"}
|-- paren (parse_scheme
-- Scan.optional ((Parse.$$$ "," -- @{keyword "oid_start"}) |-- Parse.nat)
z)))
(META.Gen_default, z)
end
val mode =
let fun mk_env output_disable_thy output_header_thy oid_start design_analysis sorry_mode dirty =
META.compiler_env_config_empty
output_disable_thy
(From.option (From.pair From.string (From.pair (From.list From.string) From.string))
output_header_thy)
(META.oidInit (From.internal_oid oid_start))
design_analysis
(sorry_mode, dirty) in
@{keyword "deep"} |-- parse_semantics -- parse_deep >>
(fn ( (design_analysis, oid_start)
, ( ((((skip_exportation, output_header_thy), output_disable_thy), sorry_mode), seri_args)
, filename_thy)) =>
fn ctxt =>
Gen_deep ( mk_env (not output_disable_thy)
output_header_thy
oid_start
design_analysis
sorry_mode
(Config.get ctxt quick_and_dirty)
, Internal_deep ( output_header_thy
, seri_args
, filename_thy
, Isabelle_System.create_tmp_path "deep_export_code" ""
, skip_exportation)))
|| @{keyword "shallow"} |-- parse_semantics -- parse_sorry_mode >>
(fn ((design_analysis, oid_start), sorry_mode) =>
fn ctxt =>
Gen_shallow ( mk_env true
NONE
oid_start
design_analysis
sorry_mode
(Config.get ctxt quick_and_dirty)
, ()))
|| (@{keyword "syntax_print"} |-- Scan.optional (Parse.number >> SOME) NONE) >>
(fn n => K (Gen_syntax_print (case n of NONE => NONE | SOME n => Int.fromString n)))
end
fun f_command l_mode =
Toplevel.theory (fn thy =>
let val (l_mode, thy) = META.mapM
(fn Gen_shallow (env, ()) => let val thy0 = thy in
fn thy => (Gen_shallow (env, thy0), thy) end
| Gen_syntax_print n => (fn thy => (Gen_syntax_print n, thy))
| Gen_deep (env, Internal_deep ( output_header_thy
, seri_args
, filename_thy
, tmp_export_code
, skip_exportation)) => fn thy =>
let val _ =
warning ("After closing Isabelle/jEdit, we may still need to remove this directory (by hand): " ^
Path.implode (Path.expand tmp_export_code))
val seri_args' = List_mapi (fn i => fn ((ml_compiler, ml_module), export_arg) =>
let val tmp_export_code = Deep.mk_path_export_code tmp_export_code ml_compiler i
fun mk_fic s = Path.append tmp_export_code (Path.make [s])
val () = Deep0.Find.check_compil ml_compiler ()
val () = Isabelle_System.mkdirs tmp_export_code in
((( (ml_compiler, ml_module)
, Path.implode (if Deep0.Find.export_mode ml_compiler = Deep0.Export_code_env.Directory then
tmp_export_code
else
mk_fic (Deep0.Find.function ml_compiler (Deep0.Find.ext ml_compiler))))
, export_arg), mk_fic)
end) seri_args
val thy' (* FIXME unused *) = Isabelle_Code_Target.export_code_cmd
(List.exists (fn (((("SML", _), _), _), _) => true | _ => false) seri_args')
[Deep0.Export_code_env.Isabelle.function]
(List.map ((apfst o apsnd) SOME o fst) seri_args')
(Code_printing.apply_code_printing
(Deep0.apply_hs_code_identifiers Deep0.Export_code_env.Haskell.function thy))
val () = fold (fn ((((ml_compiler, ml_module), _), _), mk_fic) => fn _ =>
Deep0.Find.init ml_compiler mk_fic ml_module Deep.mk_free thy) seri_args' () in
(Gen_deep (env, Internal_deep ( output_header_thy
, seri_args
, filename_thy
, tmp_export_code
, skip_exportation)), thy) end)
let val ctxt = Proof_Context.init_global thy in
map (fn f => f ctxt) l_mode
end
thy in
Data_gen.map (Symtab.map_default (Deep0.default_key, l_mode) (fn _ => l_mode)) thy
end)
fun update_compiler_config f =
Data_gen.map
(Symtab.map_entry
Deep0.default_key
(fn l_mode =>
map (fn Gen_deep (env, d) => Gen_deep (META.compiler_env_config_update f env, d)
| Gen_shallow (env, thy) => Gen_shallow (META.compiler_env_config_update f env, thy)
| Gen_syntax_print n => Gen_syntax_print n) l_mode))
end
\<close>
subsection\<open>Factoring All Meta Commands Together\<close>
setup\<open>ML_Antiquotation.inline @{binding mk_string} (Scan.succeed
"(fn ctxt => fn x => ML_Pretty.string_of_polyml (ML_system_pretty (x, FixedInt.fromInt (Config.get ctxt (ML_Print_Depth.print_depth)))))")
\<close>
ML\<open>
fun exec_deep (env, output_header_thy, seri_args, filename_thy, tmp_export_code, l_obj) thy0 =
let open Generation_mode in
let val of_arg = META.isabelle_of_compiler_env_config META.isabelle_apply I in
let fun def s = Named_Target.theory_map (snd o Specification.definition_cmd NONE [] [] (Binding.empty_atts, s) false) in
let val name_main = Deep.mk_free (Proof_Context.init_global thy0)
Deep0.Export_code_env.Isabelle.argument_main [] in
thy0
|> def (String.concatWith " "
( "(" (* polymorphism weakening needed by export_code *)
^ name_main ^ " :: (_ \<times> abr_string option) compiler_env_config_scheme)"
:: "="
:: To_string0
(of_arg (META.compiler_env_config_more_map
(fn () => (l_obj, From.option
From.string
(Option.map (fn filename_thy =>
Deep.absolute_path filename_thy thy0)
filename_thy)))
env))
:: []))
|> Deep.export_code_cmd' seri_args
tmp_export_code
(fn (((_, _), msg), _) => fn err => if err <> 0 then error msg else ())
filename_thy
[name_main]
|> (fn l =>
let val (l_warn, l) = (map fst l, map snd l) in
if Deep.list_all_eq l then
(List.concat l_warn, hd l)
else
error "There is an extracted language which does not produce a similar Isabelle content as the others"
end)
|> (fn (l_warn, s) =>
let val () = writeln
(case (output_header_thy, filename_thy) of
(SOME _, SOME _) => s
| _ => String.concat (map ( (fn s => s ^ "\n")
o Active.sendback_markup_command
o trim_line)
(String.tokens (fn c => c = #"\t") s))) in
fold (fn (out, err) => K ( writeln (Markup.markup Markup.keyword2 err)
; case trim_line out of
"" => ()
| out => writeln (Markup.markup Markup.keyword1 out)))
l_warn
() end)
end end end end
fun outer_syntax_command0 mk_string cmd_spec cmd_descr parser get_all_meta_embed =
let open Generation_mode in
Outer_Syntax.command cmd_spec cmd_descr
(parser >> (fn name =>
Toplevel.theory (fn thy =>
let val (env, thy) =
META.mapM
let val get_all_meta_embed = get_all_meta_embed name in
fn Gen_syntax_print n =>
(fn thy =>
let val _ = writeln
(mk_string
(Proof_Context.init_global
(case n of NONE => thy
| SOME n => Config.put_global ML_Print_Depth.print_depth n thy))
name) in
(Gen_syntax_print n, thy)
end)
| Gen_deep (env, Internal_deep ( output_header_thy
, seri_args
, filename_thy
, tmp_export_code
, skip_exportation)) =>
(fn thy0 =>
let val l_obj = get_all_meta_embed thy0 in
thy0 |> (if skip_exportation then
K ()
else
exec_deep ( META.d_output_header_thy_update (fn _ => NONE) env
, output_header_thy
, seri_args
, NONE
, tmp_export_code
, l_obj))
|> K (Gen_deep ( META.fold_thy_deep l_obj env
, Internal_deep ( output_header_thy
, seri_args
, filename_thy
, tmp_export_code
, skip_exportation)), thy0)
end)
| Gen_shallow (env, thy0) => fn thy =>
let fun aux (env, thy) x =
META.fold_thy_shallow
(fn f => f () handle ERROR e =>
( warning "Shallow Backtracking: (true) Isabelle declarations occurring among the META-simulated ones are ignored (if any)"
(* TODO: automatically determine whether there are such Isabelle declarations,
in order to raise a specific error message earlier *)
; error e))
(fn _ => fn _ => thy0)
(fn l => fn (env, thy) =>
Bind_META.all_meta (fn x => fn thy => aux (env, thy) [x]) (pair env) l thy)
x
(env, thy)
val (env, thy) = aux (env, thy) (get_all_meta_embed thy) in
(Gen_shallow (env, thy0), thy)
end
end
(case Symtab.lookup (Data_gen.get thy) Deep0.default_key of SOME l => l
| _ => [Gen_syntax_print NONE])
thy
in
Data_gen.map (Symtab.update (Deep0.default_key, env)) thy end)))
end
fun outer_syntax_command mk_string cmd_spec cmd_descr parser get_all_meta_embed =
outer_syntax_command0 mk_string cmd_spec cmd_descr parser (fn a => fn thy => [get_all_meta_embed a thy])
\<close>
subsection\<open>Parameterizing the Semantics of Embedded Languages\<close>
ML\<open>
val () = let open Generation_mode in
Outer_Syntax.command @{command_keyword generation_syntax} "set the generating list"
(( mode >> (fn x => SOME [x])
|| parse_l' mode >> SOME
|| @{keyword "deep"} -- @{keyword "flush_all"} >> K NONE) >>
(fn SOME x => f_command x
| NONE =>
Toplevel.theory (fn thy =>
let val l = case Symtab.lookup (Data_gen.get thy) Deep0.default_key of SOME l => l | _ => []
val l = List.concat (List.map (fn Gen_deep x => [x] | _ => []) l)
val _ = case l of [] => warning "Nothing to perform." | _ => ()
val thy =
fold
(fn (env, Internal_deep (output_header_thy, seri_args, filename_thy, tmp_export_code, _)) => fn thy0 =>
thy0 |> let val (env, l_exec) = META.compiler_env_config_reset_all env in
exec_deep (env, output_header_thy, seri_args, filename_thy, tmp_export_code, l_exec) end
|> K thy0)
l
thy
in
thy end)))
end
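(* Note on the accepted concrete syntax, read off the parser above (descriptive comment
   added for orientation, not part of the original sources): the command takes either a
   single generation mode, a list of modes (via parse_l'), or the two keywords
   "deep flush_all"; the last form re-runs the deep export for every registered
   Gen_deep mode and leaves the stored generation modes unchanged. *)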
\<close>
subsection\<open>Common Parser for Toy\<close>
ML\<open>
structure TOY_parse = struct
datatype ('a, 'b) use_context = TOY_context_invariant of 'a
| TOY_context_pre_post of 'b
fun optional f = Scan.optional (f >> SOME) NONE
val colon = Parse.$$$ ":"
fun repeat2 scan = scan ::: Scan.repeat1 scan
fun xml_unescape s = (XML.content_of (YXML.parse_body s), Position.none)
|> Symbol_Pos.explode |> Symbol_Pos.implode |> From.string
fun outer_syntax_command2 mk_string cmd_spec cmd_descr parser v_true v_false get_all_meta_embed =
outer_syntax_command mk_string cmd_spec cmd_descr
(optional (paren @{keyword "shallow"}) -- parser)
(fn (is_shallow, use) => fn thy =>
get_all_meta_embed
(if is_shallow = NONE then
( fn s =>
META.T_to_be_parsed ( From.string s
, xml_unescape s)
, v_true)
else
(From.toy_ctxt_term thy, v_false))
use)
(* *)
val ident_dot_dot = let val f = Parse.sym_ident >> (fn "\<bullet>" => "\<bullet>" | _ => Scan.fail "Syntax error") in
f -- f end
val ident_star = Parse.sym_ident (* "*" *)
(* *)
val unlimited_natural = ident_star >> (fn "*" => META.Mult_star
| "\<infinity>" => META.Mult_infinity
| _ => Scan.fail "Syntax error")
|| Parse.number >> (fn s => META.Mult_nat
(case Int.fromString s of
SOME i => Code_Numeral.natural_of_integer i
| NONE => Scan.fail "Syntax error"))
val term_base =
Parse.number >> (META.ToyDefInteger o From.string)
|| Parse.float_number >> (META.ToyDefReal o (From.pair From.string From.string o
(fn s => case String.tokens (fn #"." => true
| _ => false) s of [l1,l2] => (l1,l2)
| _ => Scan.fail "Syntax error")))
|| Parse.string >> (META.ToyDefString o From.string)
val multiplicity = parse_l' (unlimited_natural -- optional (ident_dot_dot |-- unlimited_natural))
fun toy_term x =
( term_base >> META.ShallB_term
|| Parse.binding >> (META.ShallB_str o From.binding)
|| @{keyword "self"} |-- Parse.nat >> (fn n => META.ShallB_self (From.internal_oid n))
|| paren (Parse.list toy_term) >> (* untyped, corresponds to Set, Sequence or Pair *)
(* WARNING for Set: we are describing a finite set *)
META.ShallB_list) x
val name_object = optional (Parse.list1 Parse.binding --| colon) -- Parse.binding
val type_object_weak =
let val name_object = Parse.binding >> (fn s => (NONE, s)) in
name_object -- Scan.repeat (Parse.$$$ "<" |-- Parse.list1 name_object) >>
let val f = fn (_, s) => META.ToyTyCore_pre (From.binding s) in
fn (s, l) => META.ToyTyObj (f s, map (map f) l)
end
end
val type_object = name_object -- Scan.repeat (Parse.$$$ "<" |-- Parse.list1 name_object) >>
let val f = fn (_, s) => META.ToyTyCore_pre (From.binding s) in
fn (s, l) => META.ToyTyObj (f s, map (map f) l)
end
val category =
multiplicity
-- optional (@{keyword "Role"} |-- Parse.binding)
-- Scan.repeat ( @{keyword "Ordered"} >> K META.Ordered0
|| @{keyword "Subsets"} |-- Parse.binding >> K META.Subsets0
|| @{keyword "Union"} >> K META.Union0
|| @{keyword "Redefines"} |-- Parse.binding >> K META.Redefines0
|| @{keyword "Derived"} -- Parse.$$$ "=" |-- Parse.term >> K META.Derived0
|| @{keyword "Qualifier"} |-- Parse.term >> K META.Qualifier0
|| @{keyword "Nonunique"} >> K META.Nonunique0
|| @{keyword "Sequence_"} >> K META.Sequence) >>
(fn ((l_mult, role), l) =>
META.Toy_multiplicity_ext (l_mult, From.option From.binding role, l, ()))
val type_base = Parse.reserved "Void" >> K META.ToyTy_base_void
|| Parse.reserved "Boolean" >> K META.ToyTy_base_boolean
|| Parse.reserved "Integer" >> K META.ToyTy_base_integer
|| Parse.reserved "UnlimitedNatural" >> K META.ToyTy_base_unlimitednatural
|| Parse.reserved "Real" >> K META.ToyTy_base_real
|| Parse.reserved "String" >> K META.ToyTy_base_string
fun use_type_gen type_object v =
((* collection *)
Parse.reserved "Set" |-- use_type >>
(fn l => META.ToyTy_collection (META.Toy_multiplicity_ext ([], NONE, [META.Set], ()), l))
|| Parse.reserved "Sequence" |-- use_type >>
(fn l => META.ToyTy_collection (META.Toy_multiplicity_ext ([], NONE, [META.Sequence], ()), l))
|| category -- use_type >> META.ToyTy_collection
(* pair *)
|| Parse.reserved "Pair" |--
( use_type -- use_type
|| Parse.$$$ "(" |-- use_type --| Parse.$$$ "," -- use_type --| Parse.$$$ ")") >> META.ToyTy_pair
(* base *)
|| type_base
(* raw HOL *)
|| Parse.sym_ident (* "\<acute>" *) |-- Parse.typ --| Parse.sym_ident (* "\<acute>" *) >>
(META.ToyTy_raw o xml_unescape)
(* object type *)
|| type_object >> META.ToyTy_object
|| ((Parse.$$$ "(" |-- Parse.list ( (Parse.binding --| colon >> (From.option From.binding o SOME))
-- ( Parse.$$$ "(" |-- use_type --| Parse.$$$ ")"
|| use_type_gen type_object_weak) >> META.ToyTy_binding
) --| Parse.$$$ ")"
>> (fn ty_arg => case rev ty_arg of
[] => META.ToyTy_base_void
| ty_arg => fold (fn x => fn acc => META.ToyTy_pair (x, acc))
(tl ty_arg)
(hd ty_arg)))
-- optional (colon |-- use_type))
>> (fn (ty_arg, ty_out) => case ty_out of NONE => ty_arg
| SOME ty_out => META.ToyTy_arrow (ty_arg, ty_out))
|| (Parse.$$$ "(" |-- use_type --| Parse.$$$ ")" >> (fn s => META.ToyTy_binding (NONE, s)))) v
and use_type x = use_type_gen type_object x
val use_prop =
(optional (optional (Parse.binding >> From.binding) --| Parse.$$$ ":") >> (fn NONE => NONE
| SOME x => x))
-- Parse.term --| optional (Parse.$$$ ";") >> (fn (n, e) => fn from_expr =>
META.ToyProp_ctxt (n, from_expr e))
(* *)
val association_end =
type_object
-- category
--| optional (Parse.$$$ ";")
val association = optional @{keyword "Between"} |-- Scan.optional (repeat2 association_end) []
val invariant =
optional @{keyword "Constraints"}
|-- Scan.optional (@{keyword "Existential"} >> K true) false
--| @{keyword "Inv"}
-- use_prop
structure Outer_syntax_Association = struct
fun make ass_ty l = META.Toy_association_ext (ass_ty, META.ToyAssRel l, ())
end
(* *)
val context =
Scan.repeat
(( optional (@{keyword "Operations"} || Parse.$$$ "::")
|-- Parse.binding
-- use_type
--| optional (Parse.$$$ "=" |-- Parse.term || Parse.term)
-- Scan.repeat
( (@{keyword "Pre"} || @{keyword "Post"})
-- use_prop >> TOY_context_pre_post
|| invariant >> TOY_context_invariant)
--| optional (Parse.$$$ ";")) >>
(fn ((name_fun, ty), expr) => fn from_expr =>
META.Ctxt_pp
(META.Toy_ctxt_pre_post_ext
( From.binding name_fun
, ty
, From.list (fn TOY_context_pre_post (pp, expr) =>
META.T_pp (if pp = "Pre" then
META.ToyCtxtPre
else
META.ToyCtxtPost, expr from_expr)
| TOY_context_invariant (b, expr) =>
META.T_invariant (META.T_inv (b, expr from_expr))) expr
, ())))
||
invariant >> (fn (b, expr) => fn from_expr => META.Ctxt_inv (META.T_inv (b, expr from_expr))))
val class =
optional @{keyword "Attributes"}
|-- Scan.repeat (Parse.binding --| colon -- use_type
--| optional (Parse.$$$ ";"))
-- context
datatype use_classDefinition = TOY_class | TOY_class_abstract
datatype ('a, 'b) use_classDefinition_content = TOY_class_content of 'a | TOY_class_synonym of 'b
structure Outer_syntax_Class = struct
fun make from_expr abstract ty_object attribute oper =
META.Toy_class_raw_ext
( ty_object
, From.list (From.pair From.binding I) attribute
, From.list (fn f => f from_expr) oper
, abstract
, ())
end
(* *)
val term_object = parse_l ( optional ( Parse.$$$ "("
|-- Parse.binding
--| Parse.$$$ ","
-- Parse.binding
--| Parse.$$$ ")"
--| (Parse.sym_ident >> (fn "|=" => Scan.succeed
| _ => Scan.fail "")))
-- Parse.binding
-- ( Parse.$$$ "="
|-- toy_term))
val list_attr' = term_object >> (fn res => (res, [] : binding list))
fun object_cast e =
( annot_ty term_object
-- Scan.repeat ( (Parse.sym_ident >> (fn "->" => Scan.succeed
| "\<leadsto>" => Scan.succeed
| "\<rightarrow>" => Scan.succeed
| _ => Scan.fail ""))
|-- ( Parse.reserved "toyAsType"
|-- Parse.$$$ "("
|-- Parse.binding
--| Parse.$$$ ")"
|| Parse.binding)) >> (fn ((res, x), l) => (res, rev (x :: l)))) e
val object_cast' = object_cast >> (fn (res, l) => (res, rev l))
fun get_toyinst l _ =
META.ToyInstance (map (fn ((name,typ), (l_attr, is_cast)) =>
let val f = map (fn ((pre_post, attr), data) =>
( From.option (From.pair From.binding From.binding) pre_post
, ( From.binding attr
, data)))
val l_attr =
fold
(fn b => fn acc => META.ToyAttrCast (From.binding b, acc, []))
is_cast
(META.ToyAttrNoCast (f l_attr)) in
META.Toy_instance_single_ext
(From.option From.binding name, From.option From.binding typ, l_attr, ()) end) l)
val parse_instance = (Parse.binding >> SOME)
-- optional (@{keyword "::"} |-- Parse.binding) --| @{keyword "="}
-- (list_attr' || object_cast')
(* *)
datatype state_content =
ST_l_attr of (((binding * binding) option * binding) * META.toy_data_shallow) list * binding list
| ST_binding of binding
val state_parse = parse_l' ( object_cast >> ST_l_attr
|| Parse.binding >> ST_binding)
fun mk_state thy =
map (fn ST_l_attr l =>
META.ToyDefCoreAdd
(case get_toyinst (map (fn (l_i, l_ty) =>
((NONE, SOME (hd l_ty)), (l_i, rev (tl l_ty)))) [l])
thy of
META.ToyInstance [x] => x)
| ST_binding b => META.ToyDefCoreBinding (From.binding b))
(* *)
datatype state_pp_content = ST_PP_l_attr of state_content list
| ST_PP_binding of binding
val state_pp_parse = state_parse >> ST_PP_l_attr
|| Parse.binding >> ST_PP_binding
fun mk_pp_state thy = fn ST_PP_l_attr l => META.ToyDefPPCoreAdd (mk_state thy l)
| ST_PP_binding s => META.ToyDefPPCoreBinding (From.binding s)
end
\<close>
subsection\<open>Setup of Meta Commands for Toy: Enum\<close>
ML\<open>
val () =
outer_syntax_command @{mk_string} @{command_keyword Enum} ""
(Parse.binding -- parse_l1' Parse.binding)
(fn (n1, n2) =>
K (META.META_enum (META.ToyEnum (From.binding n1, From.list From.binding n2))))
\<close>
subsection\<open>Setup of Meta Commands for Toy: (abstract) Class\<close>
ML\<open>
local
open TOY_parse
fun mk_classDefinition abstract cmd_spec =
outer_syntax_command2 @{mk_string} cmd_spec "Class generation"
( Parse.binding --| Parse.$$$ "=" -- TOY_parse.type_base >> TOY_class_synonym
|| type_object
-- class >> TOY_class_content)
(curry META.META_class_raw META.Floor1)
(curry META.META_class_raw META.Floor2)
(fn (from_expr, META_class_raw) =>
fn TOY_class_content (ty_object, (attribute, oper)) =>
META_class_raw (Outer_syntax_Class.make
from_expr
(abstract = TOY_class_abstract)
ty_object
attribute
oper)
| TOY_class_synonym (n1, n2) =>
META.META_class_synonym (META.ToyClassSynonym (From.binding n1, n2)))
in
val () = mk_classDefinition TOY_class @{command_keyword Class}
val () = mk_classDefinition TOY_class_abstract @{command_keyword Abstract_class}
end
\<close>
subsection\<open>Setup of Meta Commands for Toy: Association, Composition, Aggregation\<close>
ML\<open>
local
open TOY_parse
fun mk_associationDefinition ass_ty cmd_spec =
outer_syntax_command @{mk_string} cmd_spec ""
( repeat2 association_end
|| optional Parse.binding
|-- association)
(fn l => fn _ =>
META.META_association (Outer_syntax_Association.make ass_ty l))
in
val () = mk_associationDefinition META.ToyAssTy_association @{command_keyword Association}
val () = mk_associationDefinition META.ToyAssTy_composition @{command_keyword Composition}
val () = mk_associationDefinition META.ToyAssTy_aggregation @{command_keyword Aggregation}
end
\<close>
subsection\<open>Setup of Meta Commands for Toy: (abstract) Associationclass\<close>
ML\<open>
local
open TOY_parse
datatype use_associationClassDefinition = TOY_associationclass | TOY_associationclass_abstract
fun mk_associationClassDefinition abstract cmd_spec =
outer_syntax_command2 @{mk_string} cmd_spec ""
( type_object
-- association
-- class
-- optional (Parse.reserved "aggregation" || Parse.reserved "composition"))
(curry META.META_ass_class META.Floor1)
(curry META.META_ass_class META.Floor2)
(fn (from_expr, META_ass_class) =>
fn (((ty_object, l_ass), (attribute, oper)), assty) =>
META_ass_class
(META.ToyAssClass
( Outer_syntax_Association.make
(case assty of SOME "aggregation" => META.ToyAssTy_aggregation
| SOME "composition" => META.ToyAssTy_composition
| _ => META.ToyAssTy_association)
l_ass
, Outer_syntax_Class.make
from_expr
(abstract = TOY_associationclass_abstract)
ty_object
attribute
oper)))
in
val () = mk_associationClassDefinition TOY_associationclass @{command_keyword Associationclass}
val () = mk_associationClassDefinition TOY_associationclass_abstract @{command_keyword Abstract_associationclass}
end
\<close>
subsection\<open>Setup of Meta Commands for Toy: Context\<close>
ML\<open>
local
open TOY_parse
in
val () =
outer_syntax_command2 @{mk_string} @{command_keyword Context} ""
(optional (Parse.list1 Parse.binding --| colon)
-- Parse.binding
-- context)
(curry META.META_ctxt META.Floor1)
(curry META.META_ctxt META.Floor2)
(fn (from_expr, META_ctxt) =>
(fn ((l_param, name), l) =>
META_ctxt
(META.Toy_ctxt_ext
( case l_param of NONE => [] | SOME l => From.list From.binding l
, META.ToyTyObj (META.ToyTyCore_pre (From.binding name), [])
, From.list (fn f => f from_expr) l
, ()))))
end
\<close>
subsection\<open>Setup of Meta Commands for Toy: End\<close>
ML\<open>
val () =
outer_syntax_command0 @{mk_string} @{command_keyword End} "Class generation"
(Scan.optional ( Parse.$$$ "[" -- Parse.reserved "forced" -- Parse.$$$ "]" >> K true
|| Parse.$$$ "!" >> K true) false)
(fn b => fn _ =>
if b then
[META.META_flush_all META.ToyFlushAll]
else
[])
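(* Accepted concrete syntax, read off the parser above (descriptive comment added for
   orientation, not part of the original sources): "End", "End [ forced ]" or "End !";
   the two forced variants emit a META_flush_all command, the plain form emits nothing. *)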
\<close>
subsection\<open>Setup of Meta Commands for Toy: BaseType, Instance, State\<close>
ML\<open>
val () =
outer_syntax_command @{mk_string} @{command_keyword BaseType} ""
(parse_l' TOY_parse.term_base)
(K o META.META_def_base_l o META.ToyDefBase)
local
open TOY_parse
in
val () =
outer_syntax_command @{mk_string} @{command_keyword Instance} ""
(Scan.optional (parse_instance -- Scan.repeat (optional @{keyword "and"} |-- parse_instance) >>
(fn (x, xs) => x :: xs)) [])
(META.META_instance oo get_toyinst)
val () =
outer_syntax_command @{mk_string} @{command_keyword State} ""
(TOY_parse.optional (paren @{keyword "shallow"}) -- Parse.binding --| @{keyword "="}
-- state_parse)
(fn ((is_shallow, name), l) => fn thy =>
META.META_def_state
( if is_shallow = NONE then META.Floor1 else META.Floor2
, META.ToyDefSt (From.binding name, mk_state thy l)))
end
\<close>
subsection\<open>Setup of Meta Commands for Toy: PrePost\<close>
ML\<open>
local
open TOY_parse
in
val () =
outer_syntax_command @{mk_string} @{command_keyword PrePost} ""
(TOY_parse.optional (paren @{keyword "shallow"})
-- TOY_parse.optional (Parse.binding --| @{keyword "="})
-- state_pp_parse
-- TOY_parse.optional state_pp_parse)
(fn (((is_shallow, n), s_pre), s_post) => fn thy =>
META.META_def_pre_post
( if is_shallow = NONE then META.Floor1 else META.Floor2
, META.ToyDefPP ( From.option From.binding n
, mk_pp_state thy s_pre
, From.option (mk_pp_state thy) s_post)))
end
(*val _ = print_depth 100*)
\<close>
(*>*)
end
diff --git a/thys/Isabelle_Meta_Model/toy_example/embedding/core/Floor2_examp.thy b/thys/Isabelle_Meta_Model/toy_example/embedding/core/Floor2_examp.thy
--- a/thys/Isabelle_Meta_Model/toy_example/embedding/core/Floor2_examp.thy
+++ b/thys/Isabelle_Meta_Model/toy_example/embedding/core/Floor2_examp.thy
@@ -1,184 +1,196 @@
(******************************************************************************
* HOL-TOY
*
* Copyright (c) 2011-2018 Université Paris-Saclay, Univ. Paris-Sud, France
* 2013-2017 IRT SystemX, France
* 2011-2015 Achim D. Brucker, Germany
* 2016-2018 The University of Sheffield, UK
* 2016-2017 Nanyang Technological University, Singapore
* 2017-2018 Virginia Tech, USA
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
*
* * Neither the name of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
section\<open>Main Translation for: Example (Floor 2)\<close>
theory Floor2_examp
imports Floor1_examp
begin
definition "print_examp_def_st_locale_distinct = \<open>distinct_oid\<close>"
definition "print_examp_def_st_locale_metis = M.metis (L.map T.thm [print_examp_def_st_locale_distinct, \<open>distinct_length_2_or_more\<close>])"
definition "print_examp_def_st_locale_aux f_toyi l =
(let b = \<lambda>s. Term_basic [s] in
map_prod
id
L.flatten
(L.split
(L.map
(\<lambda> name.
let (toyi, cpt) = f_toyi name
; n = inst_name toyi
; ty = inst_ty toyi
; f = \<lambda>s. s @@ String.isub ty
; name_pers = print_examp_instance_name f n in
( Term_oid var_oid_uniq (oidGetInh cpt)
, [ ( [(b name_pers, Typ_base (f datatype_name))], None)
, ( [(b n, Typ_base (wrap_toyty ty))]
, Some (hol_definition n, Term_rewrite (b n) \<open>=\<close> (Term_lambda wildcard (Term_some (Term_some (b name_pers)))))) ]))
l)))"
definition "print_examp_def_st_locale_make f_name f_toyi f_spec l =
(let (oid, l_fix_assum) = print_examp_def_st_locale_aux f_toyi l
; ty_n = \<open>nat\<close> in
\<lparr> HolThyLocale_name = f_name
, HolThyLocale_header = L.flatten
[ [ ( L.map (\<lambda>x. (x, Typ_base ty_n)) oid
, Some ( print_examp_def_st_locale_distinct
, Term_app \<open>distinct\<close> [let e = Term_list oid in
if oid = [] then Term_annot' e (ty_n @@ \<open> list\<close>) else e])) ]
, l_fix_assum
, f_spec ] \<rparr>)"
definition "print_examp_def_st_locale_name n = \<open>state_\<close> @@ n"
definition "print_examp_def_st_locale = (\<lambda> ToyDefSt n l \<Rightarrow> \<lambda>env.
(\<lambda>d. (d, env))
(print_examp_def_st_locale_make
(\<open>state_\<close> @@ n)
(\<lambda> ToyDefCoreBinding name \<Rightarrow> case String.assoc name (D_input_instance env) of Some n \<Rightarrow> n)
[]
l))"
definition "print_examp_def_st_mapsto_gen f =
L.map
(\<lambda>(cpt, ocore).
let b = \<lambda>s. Term_basic [s]
; (toyi, exp) = case ocore of
ToyDefCoreBinding (name, toyi) \<Rightarrow>
(toyi, Some (b (print_examp_instance_name (\<lambda>s. s @@ String.isub (inst_ty toyi)) name))) in
f (cpt, ocore) toyi exp)"
definition "print_examp_def_st_mapsto l = list_bind id id
(print_examp_def_st_mapsto_gen
(\<lambda>(cpt, _) toyi. map_option (\<lambda>exp.
Term_binop (Term_oid var_oid_uniq (oidGetInh cpt)) \<open>\<mapsto>\<close> (Term_app (datatype_in @@ String.isub (inst_ty toyi)) [exp])))
l)"
definition "print_examp_def_st2 = (\<lambda> ToyDefSt name l \<Rightarrow> \<lambda>env.
(\<lambda>(l, l_st). (L.map O'.definition l, env \<lparr> D_input_state := (String.to_String\<^sub>b\<^sub>a\<^sub>s\<^sub>e name, l_st) # D_input_state env \<rparr>))
(let b = \<lambda>s. Term_basic [s]
; l = L.map (\<lambda> ToyDefCoreBinding name \<Rightarrow> map_option (Pair name) (String.assoc name (D_input_instance env))) l
; (rbt, (map_self, map_username)) =
(init_map_class
(env \<lparr> D_toy_oid_start := oidReinitInh (D_toy_oid_start env) \<rparr>)
(L.map (\<lambda> Some (_, toyi, _) \<Rightarrow> toyi | None \<Rightarrow> toy_instance_single_empty) l)
:: (_ \<Rightarrow> _ \<times> _ \<times> (_ \<Rightarrow> ((_ \<Rightarrow> nat \<Rightarrow> _ \<Rightarrow> _) \<Rightarrow> _
\<Rightarrow> (toy_ty_class option \<times> (toy_ty \<times> toy_data_shallow) option) list) option)) \<times> _ \<times> _)
; (l_st, l_assoc) = L.mapM (\<lambda> o_n l_assoc.
case o_n of
Some (name, toyi, cpt) \<Rightarrow> ([(cpt, ToyDefCoreBinding (name, toyi))], (toyi, cpt) # l_assoc)
| None \<Rightarrow> ([], l_assoc)) l []
; l_st = L.unique oidGetInh (L.flatten l_st) in
( [ Definition (Term_rewrite (b name) \<open>=\<close> (Term_app \<open>state.make\<close>
( Term_app \<open>Map.empty\<close> (case print_examp_def_st_mapsto l_st of None \<Rightarrow> [] | Some l \<Rightarrow> l)
# [ print_examp_def_st_assoc (snd o rbt) map_self map_username l_assoc ]))) ]
, l_st)))"
definition "print_examp_def_st_perm_name name = S.flatten [\<open>perm_\<close>, name]"
definition "print_examp_def_st_perm = (\<lambda> _ env.
(\<lambda> l. (L.map O'.lemma l, env))
(let (name, l_st) = map_prod String\<^sub>b\<^sub>a\<^sub>s\<^sub>e.to_String id (hd (D_input_state env))
; expr_app = print_examp_def_st_mapsto (rev l_st)
; b = \<lambda>s. Term_basic [s]
; d = hol_definition
; (l_app, l_last) =
case l_st of [] \<Rightarrow> ([], C.by [M.simp_add [d name]])
| [_] \<Rightarrow> ([], C.by [M.simp_add [d name]])
| _ \<Rightarrow>
( [ M.simp_add [d name]]
# L.flatten (L.map (\<lambda>i_max. L.map (\<lambda>i. [M.subst_l (L.map String.nat_to_digit10 [i_max - i]) (T.thm \<open>fun_upd_twist\<close>), print_examp_def_st_locale_metis]) (List.upt 0 i_max)) (List.upt 1 (List.length l_st)))
, C.by [M.simp]) in
case expr_app of None \<Rightarrow> [] | Some expr_app \<Rightarrow>
[ Lemma
(print_examp_def_st_perm_name name)
[Term_rewrite (b name) \<open>=\<close> (Term_app \<open>state.make\<close>
(Term_app \<open>Map.empty\<close> expr_app # [Term_app var_assocs [b name]]))]
l_app
l_last ]))"
definition "merge_unique_gen f l = List.fold (List.fold (\<lambda>x. case f x of Some (x, v) \<Rightarrow> RBT.insert x v | None \<Rightarrow> id)) l RBT.empty"
definition "merge_unique f l = RBT.entries (merge_unique_gen f l)"
definition "merge_unique' = L.map snd o merge_unique (\<lambda> (a, b). ((\<lambda>x. Some (x, (a, b))) o oidGetInh) a)"
definition "get_state f = (\<lambda> ToyDefPP _ s_pre s_post \<Rightarrow> \<lambda> env.
let get_state = let l_st = D_input_state env in \<lambda>ToyDefPPCoreBinding s \<Rightarrow> (s, case String.assoc s l_st of None \<Rightarrow> [] | Some l \<Rightarrow> l)
; (s_pre, l_pre) = get_state s_pre
; (s_post, l_post) = case s_post of None \<Rightarrow> (s_pre, l_pre) | Some s_post \<Rightarrow> get_state s_post in
f (s_pre, l_pre)
(s_post, l_post)
((s_pre, l_pre) # (if s_pre \<triangleq> s_post then
[]
else
[ (s_post, l_post) ]))
env)"
definition "print_pre_post_locale_aux f_toyi l =
(let (oid, l_fix_assum) = print_examp_def_st_locale_aux f_toyi l in
L.flatten [oid, L.flatten (L.map (L.map fst o fst) l_fix_assum) ])"
definition "print_pre_post_locale = get_state (\<lambda> (s_pre, l_pre) (s_post, l_post) l_pre_post. Pair
(let f_toyi = \<lambda>(cpt, ToyDefCoreBinding (_, toyi)) \<Rightarrow> (toyi, cpt) in
print_examp_def_st_locale_make
(\<open>pre_post_\<close> @@ s_pre @@ \<open>_\<close> @@ s_post)
f_toyi
(L.map (\<lambda>(s, l). ([], Some (s, Term_app
(print_examp_def_st_locale_name s)
(print_pre_post_locale_aux f_toyi l))))
l_pre_post)
(merge_unique' [l_pre, l_post])))"
definition "print_pre_post_interp = get_state (\<lambda> _ _.
Pair o L.map O'.interpretation o L.map
(\<lambda>(s, l).
let n = print_examp_def_st_locale_name s in
Interpretation n n (print_pre_post_locale_aux (\<lambda>(cpt, ToyDefCoreBinding (_, toyi)) \<Rightarrow> (toyi, cpt)) l) (C.by [M.rule (T.thm s)])))"
+definition "print_pre_post_def_state' = get_state (\<lambda> pre post _.
+ (Pair o L.map O'.definition)
+ (L.map
+ (let a = \<lambda>f x. Term_app f [x]
+ ; b = \<lambda>s. Term_basic [s]
+ ; heap = \<open>heap\<close> in
+ (\<lambda>(s, _).
+ Definition (Term_rewrite (b (heap @@ \<open>_\<close> @@ s))
+ \<open>=\<close>
+ (a heap (b (print_examp_def_st_locale_name s @@ \<open>.\<close> @@ s))))))
+ [ pre, post ]))"
+
end
diff --git a/thys/Lucas_Theorem/Lucas_Theorem.thy b/thys/Lucas_Theorem/Lucas_Theorem.thy
new file mode 100644
--- /dev/null
+++ b/thys/Lucas_Theorem/Lucas_Theorem.thy
@@ -0,0 +1,373 @@
+(*
+ Title: Lucas_Theorem.thy
+ Author: Chelsea Edmonds, University of Cambridge
+*)
+
+theory Lucas_Theorem
+ imports Main "HOL-Computational_Algebra.Computational_Algebra"
+begin
+
+notation fps_nth (infixl "$" 75)
+
+section \<open>Extensions on Formal Power Series (FPS) Library\<close>
+
+text \<open>This section presents a few extensions on the Formal Power Series (FPS) library, described in \cite{Chaieb2011} \<close>
+
+subsection \<open>FPS Equivalence Relation \<close>
+
+text \<open> This proof requires reasoning about the equivalence of coefficients modulo some prime number.
+This section defines an equivalence relation on FPS using the pattern described by Paulson
+in \cite{paulsonDefiningFunctionsEquivalence2006}, as well as some basic lemmas showing that
+the equivalence is preserved by common operations. \<close>
+
+definition "fpsmodrel p \<equiv> { (f, g). \<forall> n. (f $ n) mod p = (g $ n) mod p }"
+
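+text \<open>As a concrete illustration (not needed by the later proofs): over the integers and
+with $p = 2$, the series $1 + 3X$ and $1 + X + 2X^2$ are related by @{term fpsmodrel},
+since corresponding coefficients agree modulo $2$, whereas $1 + 3X$ and $1 + 2X$ are not,
+because their coefficients of $X$ differ modulo $2$.\<close>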
+lemma fpsrel_iff [simp]: "(f, g) \<in> fpsmodrel p \<longleftrightarrow> (\<forall>n. (f $ n) mod p = (g $ n) mod p)"
+ by (simp add: fpsmodrel_def)
+
+lemma fps_equiv: "equiv UNIV (fpsmodrel p)"
+proof (rule equivI)
+ show "refl (fpsmodrel p)" by (simp add: refl_on_def fpsmodrel_def)
+ show "sym (fpsmodrel p)" by (simp add: sym_def fpsmodrel_def)
+ show "trans (fpsmodrel p)" by (intro transI) (simp add: fpsmodrel_def)
+qed
+
+text \<open> Equivalence relation over multiplication \<close>
+
+lemma fps_mult_equiv_coeff:
+ fixes f g :: "('a :: {euclidean_ring_cancel}) fps"
+ assumes "(f, g) \<in> fpsmodrel p"
+ shows "(f*h)$n mod p = (g*h)$n mod p"
+proof -
+ have "((f*h) $ n) mod p =(\<Sum>i=0..n. (f$i mod p * h$(n - i) mod p) mod p) mod p"
+ using mod_sum_eq mod_mult_left_eq
+ by (simp add: fps_mult_nth mod_sum_eq mod_mult_left_eq)
+ also have "... = (\<Sum>i=0..n. (g$i mod p * h$(n - i) mod p) mod p) mod p"
+ using assms by auto
+ also have "... = ((g*h) $ n) mod p"
+ by (simp add: mod_mult_left_eq mod_sum_eq fps_mult_nth)
+ thus ?thesis by (simp add: calculation)
+qed
+
+lemma fps_mult_equiv:
+ fixes f g :: "('a :: {euclidean_ring_cancel}) fps"
+ assumes "(f, g) \<in> fpsmodrel p"
+ shows "(f*h, g*h) \<in> fpsmodrel p"
+ using fpsmodrel_def fps_mult_equiv_coeff assms by blast
+
+
+text \<open> Equivalence relation over power operator \<close>
+lemma fps_power_equiv:
+ fixes f g :: "('a :: {euclidean_ring_cancel}) fps"
+ fixes x :: nat
+ assumes "(f, g) \<in> fpsmodrel p"
+ shows "(f^x, g^x) \<in> fpsmodrel p"
+ using assms
+proof (induct x)
+ case 0
+ thus ?case by (simp add: fpsmodrel_def)
+next
+ case (Suc x)
+ then have hyp: " \<forall>n. f^x $ n mod p = g ^x $ n mod p"
+ using fpsrel_iff by blast
+ thus ?case
+ proof -
+ have fact: "\<forall>n h. (g * h) $ n mod p = (f * h) $ n mod p"
+ by (metis assms fps_mult_equiv_coeff)
+ have "\<forall>n h. (g ^ x * h) $ n mod p = (f ^ x * h) $ n mod p"
+ by (simp add: fps_mult_equiv_coeff hyp)
+ then have "\<forall>n h. (h * g ^ x) $ n mod p = (h * f ^ x) $ n mod p"
+ by (simp add: mult.commute)
+ thus ?thesis
+ using fact by force
+ qed
+qed
+
+subsection \<open>Binomial Coefficients \<close>
+
+text \<open>The @{term "fps_binomial"} definition in the formal power series uses the @{term "n gchoose k"} operator. It's
+defined as being of type @{typ "'a :: field_char_0 fps"}, however the equivalence relation requires a type @{typ 'a}
+that supports the modulo operator.
+The proof of the binomial theorem based on FPS coefficients below uses the choose operator and does
+not put bounds on the type of @{term "fps_X"}.\<close>
+
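+text \<open>For instance, the coefficients of $(1 + X)^4$ are $1, 4, 6, 4, 1, 0, 0, \dots$, that is,
+exactly $\binom{4}{k}$ for each $k$; the following lemma states this for arbitrary exponents
+(a worked instance added for illustration).\<close>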
+lemma binomial_coeffs_induct:
+ fixes n k :: nat
+ shows "(1 + fps_X)^n $ k = of_nat(n choose k)"
+proof (induct n arbitrary: k)
+ case 0
+ thus ?case
+ by (metis binomial_eq_0_iff binomial_n_0 fps_nth_of_nat not_gr_zero of_nat_0 of_nat_1 power_0)
+next
+ case h: (Suc n)
+ fix k
+ have start: "(1 + fps_X)^(n + 1) = (1 + fps_X) * (1 + fps_X)^n" by auto
+ show ?case
+ using One_nat_def Suc_eq_plus1 Suc_pred add.commute binomial_Suc_Suc binomial_n_0
+ fps_mult_fps_X_plus_1_nth h.hyps neq0_conv start by (smt of_nat_add)
+qed
+
+subsection \<open>Freshman's Dream Lemma on FPS \<close>
+text \<open> The Freshman's dream lemma modulo a prime number $p$ is the well-known result that $(1 + x)^p \equiv 1 + x^p \mod p$.\<close>
+
+text \<open> We first prove that $\binom{p^n}{k} \equiv 0 \mod p$ for $1 \le k \le p^n - 1$. The final
+proof only requires the case $n = 1$.\<close>
+
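+text \<open>The key identity behind the next lemma (an informal sketch of the formal steps below):
+$k \binom{p^n}{k} = p^n \binom{p^n - 1}{k - 1}$, hence $p^n$ divides $k \binom{p^n}{k}$;
+since $p^n$ does not divide $k$ for $1 \le k \le p^n - 1$, the prime $p$ must divide
+$\binom{p^n}{k}$.\<close>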
+lemma pn_choose_k_modp_0:
+ fixes n k::nat
+ assumes "prime p"
+ "k \<ge> 1 \<and> k \<le> p^n - 1"
+ "n > 0"
+ shows "(p^n choose k) mod p = 0"
+proof -
+ have inequality: "k \<le> p^n" using assms (2) by arith
+ have choose_take_1: "((p^n - 1) choose ( k - 1))= fact (p^n - 1) div (fact (k - 1) * fact (p^n - k))"
+ using binomial_altdef_nat diff_le_mono inequality assms(2) by auto
+ have "k * (p^n choose k) = k * ((fact (p^n)) div (fact k * fact((p^n) - k)))"
+ using assms binomial_fact'[OF inequality] by auto
+ also have "... = k * fact (p^n) div (fact k * fact((p^n) - k))"
+ using binomial_fact_lemma div_mult_self_is_m fact_gt_zero inequality mult.assoc mult.commute
+ nat_0_less_mult_iff by smt
+ also have "... = k * fact (p^n) div (k * fact (k - 1) * fact((p^n) - k))"
+ by (metis assms(2) fact_nonzero fact_num_eq_if le0 le_antisym of_nat_id)
+ also have "... = fact (p^n) div (fact (k - 1) * fact((p^n) - k))"
+ using assms by auto
+ also have "... = ((p^n) * fact (p^n - 1)) div (fact (k - 1) * fact((p^n) - k))"
+ by (metis assms(2) fact_nonzero fact_num_eq_if inequality le0 le_antisym of_nat_id)
+ also have "... = (p^n) * (fact (p^n - 1) div (fact (k - 1) * fact((p^n) - k)))"
+ by (metis assms(2) calculation choose_take_1 neq0_conv not_one_le_zero times_binomial_minus1_eq)
+ finally have equality: "k * (p^n choose k) = p^n * ((p^n - 1) choose (k - 1))"
+ using assms(2) times_binomial_minus1_eq by auto
+ then have dvd_result: "p^n dvd (k * (p^n choose k))" by simp
+ have "\<not> (p^n dvd k)"
+ using assms (2) binomial_n_0 diff_diff_cancel nat_dvd_not_less neq0_conv by auto
+ then have "p dvd (p^n choose k)"
+ using mult.commute prime_imp_prime_elem prime_power_dvd_multD assms dvd_result by metis
+ thus "?thesis" by simp
+qed
+
+text \<open> Applying the above lemma to the coefficients of $(1 + X)^p$, it is easy to show that all
+coefficients other than the $0$th and the $p$th are $0$ modulo $p$. \<close>
+
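+text \<open>For example, with $p = 3$ we have $(1 + X)^3 = 1 + 3X + 3X^2 + X^3$: both middle
+coefficients are multiples of $3$, so modulo $3$ only the $0$th and $3$rd coefficients
+remain (a worked instance added for illustration).\<close>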
+lemma fps_middle_coeffs:
+ assumes "prime p"
+ "n \<noteq> 0 \<and> n \<noteq> p"
+ shows "((1 + fps_X :: int fps) ^p) $ n mod p = 0 mod p"
+proof -
+ let ?f = "(1 + fps_X :: int fps)^p"
+ have "\<forall> n. n > 0 \<and> n < p \<longrightarrow> (p choose n) mod p = 0" using pn_choose_k_modp_0
+ by (metis (no_types, lifting) add_le_imp_le_diff assms(1) diff_diff_cancel diff_is_0_eq'
+ discrete le_add_diff_inverse le_numeral_extra(4) power_one_right zero_le_one zero_less_one)
+ then have middle_0: "\<forall> n. n > 0 \<and> n < p \<longrightarrow> (?f $ n) mod p = 0"
+ using binomial_coeffs_induct by (metis of_nat_0 zmod_int)
+ have "\<forall> n. n > p \<longrightarrow> ?f $ n mod p = 0"
+ using binomial_eq_0_iff binomial_coeffs_induct mod_0 by (metis of_nat_eq_0_iff)
+ thus ?thesis using middle_0 assms(2) nat_neq_iff by auto
+qed
+
+text \<open>It follows that $(1 + X)^p$ is equivalent to $(1 + X^p)$ under our equivalence relation,
+as required to prove the Freshman's dream lemma. \<close>
+
+lemma fps_freshmans_dream:
+ assumes "prime p"
+ shows "(((1 + fps_X :: int fps ) ^p), (1 + (fps_X)^(p))) \<in> fpsmodrel p"
+proof -
+ let ?f = "(1 + fps_X :: int fps)^p"
+ let ?g = "(1 + (fps_X :: int fps)^p)"
+ have all_f_coeffs: "\<forall> n. n \<noteq> 0 \<and> n \<noteq> p \<longrightarrow> ?f $ n mod p = 0 mod p"
+ using fps_middle_coeffs assms by blast
+ have "?g $ 0 = 1" using assms by auto
+ then have "?g $ 0 mod p = 1 mod p"
+ using int_ops(2) zmod_int assms by presburger
+ then have "?g $ p mod p = 1 mod p" using assms by auto
+ then have "\<forall> n . ?f $ n mod p = ?g $ n mod p"
+ using all_f_coeffs by (simp add: binomial_coeffs_induct)
+ thus ?thesis using fpsrel_iff by blast
+qed
+
+section \<open>Lucas's Theorem Proof\<close>
+
+text \<open>A formalisation of Lucas's theorem based on a generating function proof using the existing formal power series (FPS) Isabelle library\<close>
+
+subsection \<open>Reasoning about Coefficients Helpers\<close>
+
+text \<open>A generating function proof of Lucas's theorem relies on a direct comparison between coefficients of FPS, which requires a number
+of helper lemmas to prove formally. In particular it compares the coefficients of
+$(1 + X)^n \mod p$ to $(1 + X^p)^N * (1 + X)^{rn} \mod p$, where $N = n / p$ (integer division) and $rn = n \mod p$.
+This section proves that the $k$th coefficient of $(1 + X^p)^N * (1 + X)^{rn}$ is $\binom{N}{K} * \binom{rn}{rk}$, where $K = k / p$ and $rk = k \mod p$.\<close>
+
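+text \<open>Informally, the computation that the following lemmas formalise (a sketch added for
+orientation): since $(1 + X^p)^N = \sum_i \binom{N}{i} X^{ip}$, the coefficient of $X^k$ in
+$(1 + X^p)^N * (1 + X)^{rn}$ is $\sum_i \binom{N}{i} \binom{rn}{k - ip}$; a summand can only
+be non-zero when $0 \le k - ip \le rn < p$, which forces $k - ip = rk$ and hence $i = K$, so
+the sum collapses to $\binom{N}{K} * \binom{rn}{rk}$.\<close>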
+text \<open>Applying the @{term "fps_compose"} operator enables reasoning about the coefficients of $(1 + X^p)^n$
+using the existing binomial theorem proof with $X^p$ instead of $X$.\<close>
+
+lemma fps_binomial_p_compose:
+ assumes "p \<noteq> 0"
+ shows "(1 + (fps_X:: ('a :: {idom} fps))^p)^n = ((1 + fps_X)^n) oo (fps_X^p)"
+proof -
+ have "(1::'a fps) + fps_X ^ p = 1 + fps_X oo fps_X ^ p"
+ by (simp add: assms fps_compose_add_distrib)
+ then show ?thesis
+ by (simp add: assms fps_compose_power)
+qed
+
+text \<open> Next the proof determines the value of the $k$th coefficient of $(1 + X^p)^N$. \<close>
+
+lemma fps_X_pow_binomial_coeffs:
+ assumes "prime p"
+ shows "(1 + (fps_X ::int fps)^p)^N $k = (if p dvd k then (N choose (k div p)) else 0)"
+proof -
+ let ?fx = "(fps_X :: int fps)"
+ have "(1 + ?fx^p)^N $ k = (((1 + ?fx)^N) oo (?fx^p)) $k"
+ by (metis assms fps_binomial_p_compose not_prime_0)
+ also have "... = (\<Sum>i=0..k.((1 + ?fx)^N)$i * ((?fx^p)^i$k))"
+ by (simp add: fps_compose_nth)
+ finally have coeffs: "(1 + ?fx^p)^N $ k = (\<Sum>i=0..k. (N choose i) * ((?fx^(p*i))$k))"
+ using binomial_coeffs_induct sum.cong by (metis (no_types, lifting) power_mult)
+ thus ?thesis
+ proof (cases "p dvd k")
+ case False \<comment> \<open>$p$ does not divide $k$ implies the $k$th term has a coefficient of 0\<close>
+ have "\<forall> i. \<not>(p dvd k) \<longrightarrow> (?fx^(p*i)) $ k = 0"
+ by auto
+ thus ?thesis using coeffs by (simp add: False)
+ next
+ case True \<comment> \<open>$p$ divides $k$ implies the $k$th term has a non-zero coefficient\<close>
+ have contained: "k div p \<in> {0.. k}" by simp
+ have "\<forall> i. i \<noteq> k div p \<longrightarrow> (?fx^(p*i)) $ k = 0" using assms by auto
+ then have notdivpis0: "\<forall> i \<in> ({0 .. k} - {k div p}). (?fx^(p*i)) $ k = 0" by simp
+ have "(1 + ?fx^p)^N $ k = (N choose (k div p)) * (?fx^(p * (k div p))) $ k + (\<Sum>i\<in>({0..k} -{k div p}). (N choose i) * ((?fx^(p*i))$k))"
+ using contained coeffs sum.remove by (metis (no_types, lifting) finite_atLeastAtMost)
+ thus ?thesis using notdivpis0 True by simp
+ qed
+qed
+
+text \<open> The final helper lemma proves that the $k$th coefficient equals $\binom{?N}{?K} * \binom{?rn}{?rk}$, as required.\<close>
+lemma fps_div_rep_coeffs:
+ assumes "prime p"
+ shows "((1 + (fps_X::int fps)^p)^(n div p) * (1 + fps_X)^(n mod p)) $ k =
+ ((n div p) choose (k div p)) * ((n mod p) choose (k mod p))"
+ (is "((1 + (fps_X::int fps)^p)^?N * (1 + fps_X)^?rn) $ k = (?N choose ?K) * (?rn choose ?rk)")
+proof -
+ \<comment> \<open>Initial facts with results around representation and 0 valued terms\<close>
+ let ?fx = "fps_X :: int fps"
+ have krep: "k - ?rk = ?K*p"
+ by (simp add: minus_mod_eq_mult_div)
+ have rk_in_range: "?rk \<in> {0..k}" by simp
+ have "\<forall> i \<ge> p. (?rn choose i) = 0"
+ using binomial_eq_0_iff
+ by (metis assms(1) leD le_less_trans linorder_cases mod_le_divisor mod_less_divisor prime_gt_0_nat)
+ then have ptok0: "\<forall> i \<in> {p..k}. ((?rn choose i) * (1 + ?fx^p)^?N $ (k - i)) = 0"
+ by simp
+ then have notrkis0: "\<forall>i \<in> {0.. k}. i \<noteq> ?rk \<longrightarrow> (?rn choose i) * (1 + ?fx^p)^?N $ (k - i) = 0"
+ proof (cases "k < p")
+ case True \<comment> \<open>When $k < p$, it presents a side case with regards to range of reasoning\<close>
+ then have k_value: "k = ?rk" by simp
+ then have "\<forall> i < k. \<not> (p dvd (k - i))"
+ using True by (metis diff_diff_cancel diff_is_0_eq dvd_imp_mod_0 less_imp_diff_less less_irrefl_nat mod_less)
+ then show ?thesis using fps_X_pow_binomial_coeffs assms(1) k_value by simp
+ next
+ case False
+ then have "\<forall> i < p. i \<noteq> ?rk \<longrightarrow> \<not>(p dvd (k - i))"
+ using mod_nat_eqI by auto
+ then have "\<forall> i \<in> {0..<p}. i \<noteq> ?rk \<longrightarrow> (1 + ?fx^p)^?N $ (k - i) = 0"
+ using assms fps_X_pow_binomial_coeffs by simp
+ then show ?thesis using ptok0 by auto
+ qed
+ \<comment> \<open>Main body of the proof, using helper facts above\<close>
+ have "((1 + fps_X^p)^?N * (1 + fps_X)^?rn) $ k = (((1 + fps_X)^?rn) * (1 + fps_X^p)^?N) $ k"
+ by (metis (no_types, hide_lams) distrib_left distrib_right fps_mult_fps_X_commute fps_one_mult(1)
+ fps_one_mult(2) power_commuting_commutes)
+ also have "... = (\<Sum>i=0..k.(of_nat(?rn choose i)) * ((1 + (fps_X)^p)^?N $ (k - i)))"
+ by (simp add: fps_mult_nth binomial_coeffs_induct)
+ also have "... = ((?rn choose ?rk) * (1 + ?fx^p)^?N $ (k - ?rk)) + (\<Sum>i\<in>({0..k} - {?rk}). (?rn choose i) * (1 + ?fx^p)^?N $ (k - i))"
+ using rk_in_range sum.remove by (metis (no_types, lifting) finite_atLeastAtMost)
+ finally have "((1 + ?fx^p)^?N * (1 + ?fx)^?rn) $ k = ((?rn choose ?rk) * (1 + ?fx^p)^?N $ (k - ?rk))"
+ using notrkis0 by simp
+ thus ?thesis using fps_X_pow_binomial_coeffs assms krep by auto
+qed
+
+(* Lucas theorem proof *)
+subsection \<open>Lucas Theorem Proof\<close>
+
+text \<open> The proof of Lucas's theorem combines a generating function approach, based on \cite{Fine}, with induction.
+For formalisation purposes, it was easier to first prove a well-known corollary of the main theorem (also
+often presented as an alternative statement of Lucas's theorem), from which the original statement
+can then be recovered by induction.
+This approach was adapted from P. Cameron's lecture notes on combinatorics \cite{petercameronNotesCombinatorics2007}. \<close>
+
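+text \<open>A small numeric instance (added for illustration): with $p = 3$, $n = 10 = (1,0,1)_3$
+and $k = 4 = (0,1,1)_3$, the digit-wise product is $\binom{1}{0} \binom{0}{1} \binom{1}{1}
+= 1 \cdot 0 \cdot 1 = 0$, matching $\binom{10}{4} = 210 \equiv 0 \mod 3$.\<close>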
+subsubsection \<open> Proof of the Corollary \<close>
+text \<open> This step makes use of the coefficient equivalence arguments proved in the previous sections \<close>
+corollary lucas_corollary:
+ fixes n k :: nat
+ assumes "prime p"
+ shows "(n choose k) mod p = (((n div p) choose (k div p)) * ((n mod p) choose (k mod p))) mod p"
+ (is "(n choose k) mod p = ((?N choose ?K) * (?rn choose ?rk)) mod p")
+proof -
+ let ?fx = "fps_X :: int fps"
+ have n_rep: "n = ?N * p + ?rn"
+ by simp
+ have k_rep: "k =?K * p + ?rk" by simp
+ have rhs_coeffs: "((1 + ?fx^p)^(?N) * (1 + ?fx)^(?rn)) $ k = (?N choose ?K) * (?rn choose ?rk)"
+ using assms fps_div_rep_coeffs k_rep n_rep by blast \<comment> \<open>Application of coefficient reasoning\<close>
+ have "((((1 + ?fx)^p)^(?N) * (1 + ?fx)^(?rn)),
+ ((1 + ?fx^p)^(?N) * (1 + ?fx)^(?rn))) \<in> fpsmodrel p"
+ using fps_freshmans_dream assms fps_mult_equiv fps_power_equiv by blast \<comment> \<open>Application of equivalence facts and the Freshman's dream lemma\<close>
+ then have modrel2: "((1 + ?fx)^n, ((1 + ?fx^p)^(?N) * (1 + ?fx)^(?rn)))
+ \<in> fpsmodrel p"
+ by (metis (mono_tags, hide_lams) mult_div_mod_eq power_add power_mult)
+ thus ?thesis
+ using fpsrel_iff binomial_coeffs_induct rhs_coeffs by (metis of_nat_eq_iff zmod_int)
+qed
+
+subsubsection \<open> Proof of the Theorem \<close>
+
+text \<open>The theorem statement requires a formalised way of referring to the base $p$ representation of a number.
+We use a definition that specifies the $i$th digit of the base $p$ representation. This definition originally
+comes from the Hilbert's 10th Problem formalisation project \cite{bayerDPRMTheoremIsabelle2019}, to which this work contributes.\<close>
+definition nth_digit_general :: "nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> nat" where
+ "nth_digit_general num i base = (num div (base ^ i)) mod base"
+
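+text \<open>For instance (illustration only), the base-$3$ digits of $10$ under this definition are
+$1$, $0$ and $1$ for $i = 0, 1, 2$: $(10\ div\ 1) \mod 3 = 1$, $(10\ div\ 3) \mod 3 = 0$ and
+$(10\ div\ 9) \mod 3 = 1$.\<close>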
+text \<open>By induction on $d$, where $d$ is the highest power required in the base $p$
+representation of either $n$ or $k$, @{thm lucas_corollary} can be used to prove the original theorem.\<close>
+
+theorem lucas_theorem:
+ fixes n k d::nat
+assumes "n < p ^ (Suc d)"
+assumes "k < p ^ (Suc d)"
+assumes "prime p"
+shows "(n choose k) mod p = (\<Prod>i\<le>d. ((nth_digit_general n i p) choose (nth_digit_general k i p))) mod p"
+ using assms
+proof (induct d arbitrary: n k)
+ case 0
+ thus ?case using nth_digit_general_def assms by simp
+next
+ case (Suc d)
+ \<comment> \<open>Representation Variables\<close>
+ let ?N = "n div p"
+ let ?K = "k div p"
+ let ?nr = "n mod p"
+ let ?kr = "k mod p"
+ \<comment> \<open>Required assumption facts\<close>
+ have Mlessthan: "?N < p ^ (Suc d)"
+ using less_mult_imp_div_less power_Suc2 assms(3) prime_ge_2_nat Suc.prems(1) by metis
+ have Nlessthan: "?K < p ^ (Suc d)"
+ using less_mult_imp_div_less power_Suc2 prime_ge_2_nat Suc.prems(2) assms(3) by metis
+ have shift_bounds_fact: "(\<Prod>i=(Suc 0)..(Suc (d )). ((nth_digit_general n i p) choose (nth_digit_general k i p))) =
+ (\<Prod>i=0..(d). (nth_digit_general n (Suc i) p) choose (nth_digit_general k (Suc i) p))"
+ using prod.shift_bounds_cl_Suc_ivl by blast \<comment> \<open>Product manipulation helper fact\<close>
+ have "(n choose k ) mod p = ((?N choose ?K) * (?nr choose ?kr)) mod p"
+ using lucas_corollary assms(3) by blast \<comment> \<open>Application of corollary\<close>
+ also have "...= ((\<Prod>i\<le>d. ((nth_digit_general ?N i p) choose (nth_digit_general ?K i p))) * (?nr choose ?kr)) mod p"
+ using Mlessthan Nlessthan Suc.hyps mod_mult_cong assms(3) by blast \<comment> \<open>Using Inductive Hypothesis\<close>
+ \<comment> \<open>Product manipulation steps\<close>
+ also have "... = ((\<Prod>i=0..(d). (nth_digit_general n (Suc i) p) choose (nth_digit_general k (Suc i) p)) * (?nr choose ?kr)) mod p"
+ using atMost_atLeast0 nth_digit_general_def div_mult2_eq by auto
+ also have "... = ((\<Prod>i=1..(d+1). (nth_digit_general n i p) choose (nth_digit_general k i p)) *
+ ((nth_digit_general n 0 p) choose (nth_digit_general k 0 p))) mod p"
+ using nth_digit_general_def shift_bounds_fact by simp
+ finally have "(n choose k ) mod p = ((\<Prod>i=0..(d+1). (nth_digit_general n i p) choose (nth_digit_general k i p))) mod p"
+ using One_nat_def atMost_atLeast0 mult.commute prod.atLeast1_atMost_eq prod.atMost_shift
+ by (smt Suc_eq_plus1 shift_bounds_fact)
+ thus ?case
+ using Suc_eq_plus1 atMost_atLeast0 by presburger
+qed
+
+end
\ No newline at end of file
diff --git a/thys/Lucas_Theorem/ROOT b/thys/Lucas_Theorem/ROOT
new file mode 100644
--- /dev/null
+++ b/thys/Lucas_Theorem/ROOT
@@ -0,0 +1,10 @@
+chapter AFP
+
+session Lucas_Theorem (AFP) = "HOL-Computational_Algebra" +
+ options [timeout = 600]
+ theories
+ Lucas_Theorem
+ document_files
+ "root.bib"
+ "root.tex"
+
diff --git a/thys/Lucas_Theorem/document/root.bib b/thys/Lucas_Theorem/document/root.bib
new file mode 100644
--- /dev/null
+++ b/thys/Lucas_Theorem/document/root.bib
@@ -0,0 +1,71 @@
+
+@article{Chaieb2011,
+ title = {Formal Power Series},
+ author = {Chaieb, Amine},
+ year = {2011},
+ month = oct,
+ volume = {47},
+ pages = {291--318},
+ issn = {1573-0670},
+ doi = {10.1007/s10817-010-9195-9},
+ journal = {Journal of Automated Reasoning},
+ number = {3}
+}
+
+@article{Fine,
+ title = {Binomial Coefficients modulo a Prime},
+ author = {Fine, N. J.},
+ year = {1947},
+ volume = {54},
+ pages = {589--592},
+ publisher = {{Mathematical Association of America}},
+ issn = {00029890, 19300972},
+ journal = {The American Mathematical Monthly},
+ number = {10}
+}
+
+@article{paulsonDefiningFunctionsEquivalence2006,
+ title = {Defining {{Functions}} on {{Equivalence Classes}}},
+ author = {Paulson, Lawrence C.},
+ year = {2006},
+ month = oct,
+ volume = {7},
+ pages = {658--675},
+ issn = {1529-3785, 1557-945X},
+ doi = {10.1145/1183278.1183280},
+ abstract = {A quotient construction defines an abstract type from a concrete type, using an equivalence relation to identify elements of the concrete type that are to be regarded as indistinguishable. The elements of a quotient type are \textbackslash{}emph\{equivalence classes\}: sets of equivalent concrete values. Simple techniques are presented for defining and reasoning about quotient constructions, based on a general lemma library concerning functions that operate on equivalence classes. The techniques are applied to a definition of the integers from the natural numbers, and then to the definition of a recursive datatype satisfying equational constraints.},
+ archivePrefix = {arXiv},
+ eprint = {1907.07591},
+ eprinttype = {arxiv},
+ journal = {ACM Transactions on Computational Logic (TOCL)},
+ keywords = {Computer Science - Logic in Computer Science,F.4.1,G.2.0},
+ number = {4}
+}
+
+@misc{petercameronNotesCombinatorics2007,
+ title = {Notes on {{Combinatorics}}},
+ author = {{Peter Cameron}},
+ year = {2007},
+ publisher = {{Queen Mary University of London}},
+ url = {http://www.maths.qmul.ac.uk/~pjc/notes/comb.pdf},
+ howpublished = {\url{http://www.maths.qmul.ac.uk/~pjc/notes/comb.pdf}}
+}
+
+@InProceedings{bayerDPRMTheoremIsabelle2019,
+ author = {Jonas Bayer and Marco David and Abhik Pal and Benedikt Stock and Dierk Schleicher},
+ title = {{The DPRM Theorem in Isabelle (Short Paper)}},
+ booktitle = {10th International Conference on Interactive Theorem Proving (ITP 2019)},
+ pages = {33:1--33:7},
+ series = {Leibniz International Proceedings in Informatics (LIPIcs)},
+ ISBN = {978-3-95977-122-1},
+ ISSN = {1868-8969},
+ year = {2019},
+ volume = {141},
+ editor = {John Harrison and John O'Leary and Andrew Tolmach},
+ publisher = {Schloss Dagstuhl--Leibniz-Zentrum für Informatik},
+ address = {Dagstuhl, Germany},
+ URL = {http://drops.dagstuhl.de/opus/volltexte/2019/11088},
+ URN = {urn:nbn:de:0030-drops-110883},
+ doi = {10.4230/LIPIcs.ITP.2019.33},
+ annote = {Keywords: DPRM theorem, Hilbert's tenth problem, Diophantine predicates, Register machines, Recursively enumerable sets, Isabelle, Formal verification}
+}
diff --git a/thys/Lucas_Theorem/document/root.tex b/thys/Lucas_Theorem/document/root.tex
new file mode 100644
--- /dev/null
+++ b/thys/Lucas_Theorem/document/root.tex
@@ -0,0 +1,66 @@
+\documentclass[11pt,a4paper]{article}
+\usepackage{isabelle,isabellesym}
+\usepackage{amsmath}
+
+% further packages required for unusual symbols (see also
+% isabellesym.sty), use only when needed
+
+%\usepackage{amssymb}
+ %for \<leadsto>, \<box>, \<diamond>, \<sqsupset>, \<mho>, \<Join>,
+ %\<lhd>, \<lesssim>, \<greatersim>, \<lessapprox>, \<greaterapprox>,
+ %\<triangleq>, \<yen>, \<lozenge>
+
+%\usepackage{eurosym}
+ %for \<euro>
+
+%\usepackage[only,bigsqcap]{stmaryrd}
+ %for \<Sqinter>
+
+%\usepackage{eufrak}
+ %for \<AA> ... \<ZZ>, \<aa> ... \<zz> (also included in amssymb)
+
+%\usepackage{textcomp}
+ %for \<onequarter>, \<onehalf>, \<threequarters>, \<degree>, \<cent>,
+ %\<currency>
+
+% this should be the last package used
+\usepackage{pdfsetup}
+
+% urls in roman style, theory text in math-similar italics
+\urlstyle{rm}
+\isabellestyle{it}
+
+% for uniform font size
+%\renewcommand{\isastyle}{\isastyleminor}
+
+
+\begin{document}
+
+\title{Lucas's Theorem}
+\author{Chelsea Edmonds}
+\maketitle
+
+\begin{abstract}
+ This work presents a formalisation of a generating function proof for Lucas's theorem. We first outline extensions to the existing Formal Power Series (FPS) library, including an equivalence relation for coefficients modulo $n$, an alternate binomial theorem statement, and a formalised proof of the Freshman's dream (mod $p$) lemma.
+
+ The second part of the work presents the formal proof of Lucas's Theorem. Working backwards, the formalisation first proves a well-known corollary of the theorem, which is easier to formalise, and then applies induction to prove the original theorem statement. The proof of the corollary aims to provide a good example of a formalised generating function equivalence proof using the FPS library. The final theorem statement is intended to be integrated into the formalised proof of Hilbert's 10th Problem \cite{bayerDPRMTheoremIsabelle2019}.
+\end{abstract}
+
+\tableofcontents
+
+% sane default for proof documents
+\parindent 0pt\parskip 0.5ex
+
+% generated text of all theories
+\input{session}
+
+% optional bibliography
+\bibliographystyle{abbrv}
+\bibliography{root}
+
+\end{document}
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: t
+%%% End:
diff --git a/thys/Native_Word/Bits_Integer.thy b/thys/Native_Word/Bits_Integer.thy
--- a/thys/Native_Word/Bits_Integer.thy
+++ b/thys/Native_Word/Bits_Integer.thy
@@ -1,651 +1,651 @@
(* Title: Bits_Integer.thy
Author: Andreas Lochbihler, ETH Zurich
*)
chapter \<open>Bit operations for target language integers\<close>
theory Bits_Integer imports
More_Bits_Int
Code_Symbolic_Bits_Int
begin
lemmas [transfer_rule] =
identity_quotient
fun_quotient
Quotient_integer[folded integer.pcr_cr_eq]
lemma undefined_transfer:
assumes "Quotient R Abs Rep T"
shows "T (Rep undefined) undefined"
using assms unfolding Quotient_alt_def by blast
bundle undefined_transfer = undefined_transfer[transfer_rule]
section \<open>More lemmas about @{typ integer}s\<close>
context
includes integer.lifting
begin
lemma bitval_integer_transfer [transfer_rule]:
"(rel_fun (=) pcr_integer) of_bool of_bool"
by(auto simp add: of_bool_def integer.pcr_cr_eq cr_integer_def)
lemma integer_of_nat_less_0_conv [simp]: "\<not> integer_of_nat n < 0"
by(transfer) simp
lemma int_of_integer_pow: "int_of_integer (x ^ n) = int_of_integer x ^ n"
by(induct n) simp_all
lemma pow_integer_transfer [transfer_rule]:
"(rel_fun pcr_integer (rel_fun (=) pcr_integer)) (^) (^)"
by(auto 4 3 simp add: integer.pcr_cr_eq cr_integer_def int_of_integer_pow)
lemma sub1_lt_0_iff [simp]: "Code_Numeral.sub n num.One < 0 \<longleftrightarrow> False"
by(cases n)(simp_all add: Code_Numeral.sub_code)
lemma nat_of_integer_numeral [simp]: "nat_of_integer (numeral n) = numeral n"
by transfer simp
lemma nat_of_integer_sub1_conv_pred_numeral [simp]:
"nat_of_integer (Code_Numeral.sub n num.One) = pred_numeral n"
by(cases n)(simp_all add: Code_Numeral.sub_code)
lemma nat_of_integer_1 [simp]: "nat_of_integer 1 = 1"
by transfer simp
lemma dup_1 [simp]: "Code_Numeral.dup 1 = 2"
by transfer simp
section \<open>Bit operations on @{typ integer}\<close>
text \<open>Bit operations on @{typ integer} are the same as on @{typ int}\<close>
lift_definition bin_rest_integer :: "integer \<Rightarrow> integer" is bin_rest .
lift_definition bin_last_integer :: "integer \<Rightarrow> bool" is bin_last .
lift_definition Bit_integer :: "integer \<Rightarrow> bool \<Rightarrow> integer" is Bit .
end
instantiation integer :: bit_operations begin
context includes integer.lifting begin
-lift_definition bitAND_integer :: "integer \<Rightarrow> integer \<Rightarrow> integer" is "bitAND" .
-lift_definition bitOR_integer :: "integer \<Rightarrow> integer \<Rightarrow> integer" is "bitOR" .
-lift_definition bitXOR_integer :: "integer \<Rightarrow> integer \<Rightarrow> integer" is "bitXOR" .
-lift_definition bitNOT_integer :: "integer \<Rightarrow> integer" is "bitNOT" .
+lift_definition bitAND_integer :: "integer \<Rightarrow> integer \<Rightarrow> integer" is "(AND)" .
+lift_definition bitOR_integer :: "integer \<Rightarrow> integer \<Rightarrow> integer" is "(OR)" .
+lift_definition bitXOR_integer :: "integer \<Rightarrow> integer \<Rightarrow> integer" is "(XOR)" .
+lift_definition bitNOT_integer :: "integer \<Rightarrow> integer" is "NOT" .
lift_definition test_bit_integer :: "integer \<Rightarrow> nat \<Rightarrow> bool" is test_bit .
lift_definition lsb_integer :: "integer \<Rightarrow> bool" is lsb .
lift_definition set_bit_integer :: "integer \<Rightarrow> nat \<Rightarrow> bool \<Rightarrow> integer" is set_bit .
lift_definition shiftl_integer :: "integer \<Rightarrow> nat \<Rightarrow> integer" is shiftl .
lift_definition shiftr_integer :: "integer \<Rightarrow> nat \<Rightarrow> integer" is shiftr .
lift_definition msb_integer :: "integer \<Rightarrow> bool" is msb .
instance ..
end
end
abbreviation (input) wf_set_bits_integer
where "wf_set_bits_integer \<equiv> wf_set_bits_int"
section \<open>Target language implementations\<close>
text \<open>
Implementing bit operations on @{typ integer} by mapping them to the APIs of the target languages is
unfortunately not straightforward, because these API functions have different signatures and preconditions on the parameters:
\begin{description}
\item[Standard ML] Shift amounts in IntInf are given as Word.word, not IntInf.
\item[Haskell] In the Data.Bits.Bits type class, shifts and bit indices are given as Int rather than Integer.
\end{description}
Additional constants take only parameters of type @{typ integer} rather than @{typ nat}
and check the preconditions as far as possible (e.g., being non-negative) in a portable way.
Manual implementations inside code\_printing perform the remaining range checks and convert
these @{typ integer}s into the right type.
For normalisation by evaluation, we derive custom code equations, because NBE
does not know these code\_printing serialisations and would otherwise loop.
\<close>
code_identifier code_module Bits_Integer \<rightharpoonup>
(SML) Bits_Int and (OCaml) Bits_Int and (Haskell) Bits_Int and (Scala) Bits_Int
code_printing code_module Bits_Integer \<rightharpoonup> (SML)
\<open>structure Bits_Integer : sig
val set_bit : IntInf.int -> IntInf.int -> bool -> IntInf.int
val shiftl : IntInf.int -> IntInf.int -> IntInf.int
val shiftr : IntInf.int -> IntInf.int -> IntInf.int
val test_bit : IntInf.int -> IntInf.int -> bool
end = struct
val maxWord = IntInf.pow (2, Word.wordSize);
fun set_bit x n b =
if n < maxWord then
if b then IntInf.orb (x, IntInf.<< (1, Word.fromLargeInt (IntInf.toLarge n)))
else IntInf.andb (x, IntInf.notb (IntInf.<< (1, Word.fromLargeInt (IntInf.toLarge n))))
else raise (Fail ("Bit index too large: " ^ IntInf.toString n));
fun shiftl x n =
if n < maxWord then IntInf.<< (x, Word.fromLargeInt (IntInf.toLarge n))
else raise (Fail ("Shift operand too large: " ^ IntInf.toString n));
fun shiftr x n =
if n < maxWord then IntInf.~>> (x, Word.fromLargeInt (IntInf.toLarge n))
else raise (Fail ("Shift operand too large: " ^ IntInf.toString n));
fun test_bit x n =
if n < maxWord then IntInf.andb (x, IntInf.<< (1, Word.fromLargeInt (IntInf.toLarge n))) <> 0
else raise (Fail ("Bit index too large: " ^ IntInf.toString n));
end; (*struct Bits_Integer*)\<close>
code_reserved SML Bits_Integer
code_printing code_module Bits_Integer \<rightharpoonup> (OCaml)
\<open>module Bits_Integer : sig
val shiftl : Z.t -> Z.t -> Z.t
val shiftr : Z.t -> Z.t -> Z.t
val test_bit : Z.t -> Z.t -> bool
end = struct
(* We do not need explicit range checks here,
because Z.to_int raises Overflow
if the argument does not fit into an int. *)
let shiftl x n = Z.shift_left x (Z.to_int n);;
let shiftr x n = Z.shift_right x (Z.to_int n);;
let test_bit x n = Z.testbit x (Z.to_int n);;
end;; (*struct Bits_Integer*)\<close>
code_reserved OCaml Bits_Integer
code_printing code_module Data_Bits \<rightharpoonup> (Haskell)
\<open>
module Data_Bits where {
import qualified Data.Bits;
{-
The ...Bounded functions assume that the Integer argument for the shift
or bit index fits into an Int, is non-negative and (for types of fixed bit width)
less than bitSize
-}
infixl 7 .&.;
infixl 6 `xor`;
infixl 5 .|.;
(.&.) :: Data.Bits.Bits a => a -> a -> a;
(.&.) = (Data.Bits..&.);
xor :: Data.Bits.Bits a => a -> a -> a;
xor = Data.Bits.xor;
(.|.) :: Data.Bits.Bits a => a -> a -> a;
(.|.) = (Data.Bits..|.);
complement :: Data.Bits.Bits a => a -> a;
complement = Data.Bits.complement;
testBitUnbounded :: Data.Bits.Bits a => a -> Integer -> Bool;
testBitUnbounded x b
| b <= toInteger (Prelude.maxBound :: Int) = Data.Bits.testBit x (fromInteger b)
| otherwise = error ("Bit index too large: " ++ show b)
;
testBitBounded :: Data.Bits.Bits a => a -> Integer -> Bool;
testBitBounded x b = Data.Bits.testBit x (fromInteger b);
setBitUnbounded :: Data.Bits.Bits a => a -> Integer -> Bool -> a;
setBitUnbounded x n b
| n <= toInteger (Prelude.maxBound :: Int) =
if b then Data.Bits.setBit x (fromInteger n) else Data.Bits.clearBit x (fromInteger n)
| otherwise = error ("Bit index too large: " ++ show n)
;
setBitBounded :: Data.Bits.Bits a => a -> Integer -> Bool -> a;
setBitBounded x n True = Data.Bits.setBit x (fromInteger n);
setBitBounded x n False = Data.Bits.clearBit x (fromInteger n);
shiftlUnbounded :: Data.Bits.Bits a => a -> Integer -> a;
shiftlUnbounded x n
| n <= toInteger (Prelude.maxBound :: Int) = Data.Bits.shiftL x (fromInteger n)
| otherwise = error ("Shift operand too large: " ++ show n)
;
shiftlBounded :: Data.Bits.Bits a => a -> Integer -> a;
shiftlBounded x n = Data.Bits.shiftL x (fromInteger n);
shiftrUnbounded :: Data.Bits.Bits a => a -> Integer -> a;
shiftrUnbounded x n
| n <= toInteger (Prelude.maxBound :: Int) = Data.Bits.shiftR x (fromInteger n)
| otherwise = error ("Shift operand too large: " ++ show n)
;
shiftrBounded :: (Ord a, Data.Bits.Bits a) => a -> Integer -> a;
shiftrBounded x n = Data.Bits.shiftR x (fromInteger n);
}\<close>
and \<comment> \<open>@{theory HOL.Quickcheck_Narrowing} maps @{typ integer} to
Haskell's Prelude.Int type instead of Integer. For compatibility
with the Haskell target, we nevertheless provide bounded and
unbounded functions.\<close>
(Haskell_Quickcheck)
\<open>
module Data_Bits where {
import qualified Data.Bits;
{-
The functions assume that the Int argument for the shift or bit index is
non-negative and (for types of fixed bit width) less than bitSize
-}
infixl 7 .&.;
infixl 6 `xor`;
infixl 5 .|.;
(.&.) :: Data.Bits.Bits a => a -> a -> a;
(.&.) = (Data.Bits..&.);
xor :: Data.Bits.Bits a => a -> a -> a;
xor = Data.Bits.xor;
(.|.) :: Data.Bits.Bits a => a -> a -> a;
(.|.) = (Data.Bits..|.);
complement :: Data.Bits.Bits a => a -> a;
complement = Data.Bits.complement;
testBitUnbounded :: Data.Bits.Bits a => a -> Prelude.Int -> Bool;
testBitUnbounded = Data.Bits.testBit;
testBitBounded :: Data.Bits.Bits a => a -> Prelude.Int -> Bool;
testBitBounded = Data.Bits.testBit;
setBitUnbounded :: Data.Bits.Bits a => a -> Prelude.Int -> Bool -> a;
setBitUnbounded x n True = Data.Bits.setBit x n;
setBitUnbounded x n False = Data.Bits.clearBit x n;
setBitBounded :: Data.Bits.Bits a => a -> Prelude.Int -> Bool -> a;
setBitBounded x n True = Data.Bits.setBit x n;
setBitBounded x n False = Data.Bits.clearBit x n;
shiftlUnbounded :: Data.Bits.Bits a => a -> Prelude.Int -> a;
shiftlUnbounded = Data.Bits.shiftL;
shiftlBounded :: Data.Bits.Bits a => a -> Prelude.Int -> a;
shiftlBounded = Data.Bits.shiftL;
shiftrUnbounded :: Data.Bits.Bits a => a -> Prelude.Int -> a;
shiftrUnbounded = Data.Bits.shiftR;
shiftrBounded :: (Ord a, Data.Bits.Bits a) => a -> Prelude.Int -> a;
shiftrBounded = Data.Bits.shiftR;
}\<close>
code_reserved Haskell Data_Bits
code_printing code_module Bits_Integer \<rightharpoonup> (Scala)
\<open>object Bits_Integer {
def setBit(x: BigInt, n: BigInt, b: Boolean) : BigInt =
if (n.isValidInt)
if (b)
x.setBit(n.toInt)
else
x.clearBit(n.toInt)
else
sys.error("Bit index too large: " + n.toString)
def shiftl(x: BigInt, n: BigInt) : BigInt =
if (n.isValidInt)
x << n.toInt
else
sys.error("Shift index too large: " + n.toString)
def shiftr(x: BigInt, n: BigInt) : BigInt =
if (n.isValidInt)
x >> n.toInt
else
sys.error("Shift index too large: " + n.toString)
def testBit(x: BigInt, n: BigInt) : Boolean =
if (n.isValidInt)
x.testBit(n.toInt)
else
sys.error("Bit index too large: " + n.toString)
} /* object Bits_Integer */\<close>
code_printing
- constant "bitAND :: integer \<Rightarrow> integer \<Rightarrow> integer" \<rightharpoonup>
+ constant "(AND) :: integer \<Rightarrow> integer \<Rightarrow> integer" \<rightharpoonup>
(SML) "IntInf.andb ((_),/ (_))" and
(OCaml) "Z.logand" and
(Haskell) "((Data'_Bits..&.) :: Integer -> Integer -> Integer)" and
(Haskell_Quickcheck) "((Data'_Bits..&.) :: Prelude.Int -> Prelude.Int -> Prelude.Int)" and
(Scala) infixl 3 "&"
-| constant "bitOR :: integer \<Rightarrow> integer \<Rightarrow> integer" \<rightharpoonup>
+| constant "(OR) :: integer \<Rightarrow> integer \<Rightarrow> integer" \<rightharpoonup>
(SML) "IntInf.orb ((_),/ (_))" and
(OCaml) "Z.logor" and
(Haskell) "((Data'_Bits..|.) :: Integer -> Integer -> Integer)" and
(Haskell_Quickcheck) "((Data'_Bits..|.) :: Prelude.Int -> Prelude.Int -> Prelude.Int)" and
(Scala) infixl 1 "|"
-| constant "bitXOR :: integer \<Rightarrow> integer \<Rightarrow> integer" \<rightharpoonup>
+| constant "(XOR) :: integer \<Rightarrow> integer \<Rightarrow> integer" \<rightharpoonup>
(SML) "IntInf.xorb ((_),/ (_))" and
(OCaml) "Z.logxor" and
(Haskell) "(Data'_Bits.xor :: Integer -> Integer -> Integer)" and
(Haskell_Quickcheck) "(Data'_Bits.xor :: Prelude.Int -> Prelude.Int -> Prelude.Int)" and
(Scala) infixl 2 "^"
-| constant "bitNOT :: integer \<Rightarrow> integer" \<rightharpoonup>
+| constant "NOT :: integer \<Rightarrow> integer" \<rightharpoonup>
(SML) "IntInf.notb" and
(OCaml) "Z.lognot" and
(Haskell) "(Data'_Bits.complement :: Integer -> Integer)" and
(Haskell_Quickcheck) "(Data'_Bits.complement :: Prelude.Int -> Prelude.Int)" and
(Scala) "_.unary'_~"
code_printing constant bin_rest_integer \<rightharpoonup>
(SML) "IntInf.div ((_), 2)" and
(OCaml) "Z.shift'_right/ _/ 1" and
(Haskell) "(Data'_Bits.shiftrUnbounded _ 1 :: Integer)" and
(Haskell_Quickcheck) "(Data'_Bits.shiftrUnbounded _ 1 :: Prelude.Int)" and
(Scala) "_ >> 1"
context
includes integer.lifting
begin
lemma bitNOT_integer_code [code]:
fixes i :: integer shows
"NOT i = - i - 1"
by transfer(simp add: int_not_def)
lemma bin_rest_integer_code [code nbe]:
"bin_rest_integer i = i div 2"
by transfer(simp add: bin_rest_def)
lemma bin_last_integer_code [code]:
"bin_last_integer i \<longleftrightarrow> i AND 1 \<noteq> 0"
by transfer(rule bin_last_conv_AND)
lemma bin_last_integer_nbe [code nbe]:
"bin_last_integer i \<longleftrightarrow> i mod 2 \<noteq> 0"
by transfer(simp add: bin_last_def)
lemma bitval_bin_last_integer [code_unfold]:
"of_bool (bin_last_integer i) = i AND 1"
by transfer(rule bitval_bin_last)
end
definition integer_test_bit :: "integer \<Rightarrow> integer \<Rightarrow> bool"
where "integer_test_bit x n = (if n < 0 then undefined x n else x !! nat_of_integer n)"
lemma test_bit_integer_code [code]:
"x !! n \<longleftrightarrow> integer_test_bit x (integer_of_nat n)"
by(simp add: integer_test_bit_def)
lemma integer_test_bit_code [code]:
"integer_test_bit x (Code_Numeral.Neg n) = undefined x (Code_Numeral.Neg n)"
"integer_test_bit 0 0 = False"
"integer_test_bit 0 (Code_Numeral.Pos n) = False"
"integer_test_bit (Code_Numeral.Pos num.One) 0 = True"
"integer_test_bit (Code_Numeral.Pos (num.Bit0 n)) 0 = False"
"integer_test_bit (Code_Numeral.Pos (num.Bit1 n)) 0 = True"
"integer_test_bit (Code_Numeral.Pos num.One) (Code_Numeral.Pos n') = False"
"integer_test_bit (Code_Numeral.Pos (num.Bit0 n)) (Code_Numeral.Pos n') =
integer_test_bit (Code_Numeral.Pos n) (Code_Numeral.sub n' num.One)"
"integer_test_bit (Code_Numeral.Pos (num.Bit1 n)) (Code_Numeral.Pos n') =
integer_test_bit (Code_Numeral.Pos n) (Code_Numeral.sub n' num.One)"
"integer_test_bit (Code_Numeral.Neg num.One) 0 = True"
"integer_test_bit (Code_Numeral.Neg (num.Bit0 n)) 0 = False"
"integer_test_bit (Code_Numeral.Neg (num.Bit1 n)) 0 = True"
"integer_test_bit (Code_Numeral.Neg num.One) (Code_Numeral.Pos n') = True"
"integer_test_bit (Code_Numeral.Neg (num.Bit0 n)) (Code_Numeral.Pos n') =
integer_test_bit (Code_Numeral.Neg n) (Code_Numeral.sub n' num.One)"
"integer_test_bit (Code_Numeral.Neg (num.Bit1 n)) (Code_Numeral.Pos n') =
integer_test_bit (Code_Numeral.Neg (n + num.One)) (Code_Numeral.sub n' num.One)"
by(simp_all add: integer_test_bit_def test_bit_integer_def)
code_printing constant integer_test_bit \<rightharpoonup>
(SML) "Bits'_Integer.test'_bit" and
(OCaml) "Bits'_Integer.test'_bit" and
(Haskell) "(Data'_Bits.testBitUnbounded :: Integer -> Integer -> Bool)" and
(Haskell_Quickcheck) "(Data'_Bits.testBitUnbounded :: Prelude.Int -> Prelude.Int -> Bool)" and
(Scala) "Bits'_Integer.testBit"
context
includes integer.lifting
begin
lemma lsb_integer_code [code]:
fixes x :: integer shows
"lsb x = x !! 0"
by transfer(simp add: lsb_int_def)
definition integer_set_bit :: "integer \<Rightarrow> integer \<Rightarrow> bool \<Rightarrow> integer"
where [code del]: "integer_set_bit x n b = (if n < 0 then undefined x n b else set_bit x (nat_of_integer n) b)"
lemma set_bit_integer_code [code]:
"set_bit x i b = integer_set_bit x (integer_of_nat i) b"
by(simp add: integer_set_bit_def)
lemma set_bit_integer_conv_masks:
fixes x :: integer shows
"set_bit x i b = (if b then x OR (1 << i) else x AND NOT (1 << i))"
by transfer(simp add: int_set_bit_conv_ops)
end
code_printing constant integer_set_bit \<rightharpoonup>
(SML) "Bits'_Integer.set'_bit" and
(Haskell) "(Data'_Bits.setBitUnbounded :: Integer -> Integer -> Bool -> Integer)" and
(Haskell_Quickcheck) "(Data'_Bits.setBitUnbounded :: Prelude.Int -> Prelude.Int -> Bool -> Prelude.Int)" and
(Scala) "Bits'_Integer.setBit"
text \<open>
OCaml's Big\_int library does not provide an operation for changing an individual bit, so we emulate it with masks.
We prefer an Isabelle implementation because it then takes care of the signs for AND and OR.
\<close>
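(* A small worked example of the mask encoding (illustration only):
     set_bit 5 1 True  = 5 OR (1 << 1)      = 5 OR 2     = 7
     set_bit 5 0 False = 5 AND NOT (1 << 0) = 5 AND (-2) = 4
   Signs matter for negative arguments, e.g. set_bit (-5) 2 True = (-5) OR 4 = -1,
   as exercised by bit_integer_test below. *)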
lemma integer_set_bit_code [code]:
"integer_set_bit x n b =
(if n < 0 then undefined x n b else
if b then x OR (1 << nat_of_integer n)
else x AND NOT (1 << nat_of_integer n))"
by(auto simp add: integer_set_bit_def set_bit_integer_conv_masks)
definition integer_shiftl :: "integer \<Rightarrow> integer \<Rightarrow> integer"
where [code del]: "integer_shiftl x n = (if n < 0 then undefined x n else x << nat_of_integer n)"
lemma shiftl_integer_code [code]:
fixes x :: integer shows
"x << n = integer_shiftl x (integer_of_nat n)"
by(auto simp add: integer_shiftl_def)
context
includes integer.lifting
begin
lemma shiftl_integer_conv_mult_pow2:
fixes x :: integer shows
"x << n = x * 2 ^ n"
by transfer(simp add: shiftl_int_def)
lemma integer_shiftl_code [code]:
"integer_shiftl x (Code_Numeral.Neg n) = undefined x (Code_Numeral.Neg n)"
"integer_shiftl x 0 = x"
"integer_shiftl x (Code_Numeral.Pos n) = integer_shiftl (Code_Numeral.dup x) (Code_Numeral.sub n num.One)"
"integer_shiftl 0 (Code_Numeral.Pos n) = 0"
by (simp_all add: integer_shiftl_def shiftl_integer_def shiftl_int_def numeral_eq_Suc)
(transfer, simp)
end
code_printing constant integer_shiftl \<rightharpoonup>
(SML) "Bits'_Integer.shiftl" and
(OCaml) "Bits'_Integer.shiftl" and
(Haskell) "(Data'_Bits.shiftlUnbounded :: Integer -> Integer -> Integer)" and
(Haskell_Quickcheck) "(Data'_Bits.shiftlUnbounded :: Prelude.Int -> Prelude.Int -> Prelude.Int)" and
(Scala) "Bits'_Integer.shiftl"
definition integer_shiftr :: "integer \<Rightarrow> integer \<Rightarrow> integer"
where [code del]: "integer_shiftr x n = (if n < 0 then undefined x n else x >> nat_of_integer n)"
lemma shiftr_integer_conv_div_pow2:
includes integer.lifting fixes x :: integer shows
"x >> n = x div 2 ^ n"
by transfer(simp add: shiftr_int_def)
lemma shiftr_integer_code [code]:
fixes x :: integer shows
"x >> n = integer_shiftr x (integer_of_nat n)"
by(auto simp add: integer_shiftr_def)
code_printing constant integer_shiftr \<rightharpoonup>
(SML) "Bits'_Integer.shiftr" and
(OCaml) "Bits'_Integer.shiftr" and
(Haskell) "(Data'_Bits.shiftrUnbounded :: Integer -> Integer -> Integer)" and
(Haskell_Quickcheck) "(Data'_Bits.shiftrUnbounded :: Prelude.Int -> Prelude.Int -> Prelude.Int)" and
(Scala) "Bits'_Integer.shiftr"
lemma integer_shiftr_code [code]:
"integer_shiftr x (Code_Numeral.Neg n) = undefined x (Code_Numeral.Neg n)"
"integer_shiftr x 0 = x"
"integer_shiftr 0 (Code_Numeral.Pos n) = 0"
"integer_shiftr (Code_Numeral.Pos num.One) (Code_Numeral.Pos n) = 0"
"integer_shiftr (Code_Numeral.Pos (num.Bit0 n')) (Code_Numeral.Pos n) =
integer_shiftr (Code_Numeral.Pos n') (Code_Numeral.sub n num.One)"
"integer_shiftr (Code_Numeral.Pos (num.Bit1 n')) (Code_Numeral.Pos n) =
integer_shiftr (Code_Numeral.Pos n') (Code_Numeral.sub n num.One)"
"integer_shiftr (Code_Numeral.Neg num.One) (Code_Numeral.Pos n) = -1"
"integer_shiftr (Code_Numeral.Neg (num.Bit0 n')) (Code_Numeral.Pos n) =
integer_shiftr (Code_Numeral.Neg n') (Code_Numeral.sub n num.One)"
"integer_shiftr (Code_Numeral.Neg (num.Bit1 n')) (Code_Numeral.Pos n) =
integer_shiftr (Code_Numeral.Neg (Num.inc n')) (Code_Numeral.sub n num.One)"
by(simp_all add: integer_shiftr_def shiftr_integer_def)
context
includes integer.lifting
begin
lemma Bit_integer_code [code]:
"Bit_integer i False = i << 1"
"Bit_integer i True = (i << 1) + 1"
by(transfer, simp add: Bit_def shiftl_int_def)+
lemma msb_integer_code [code]:
"msb (x :: integer) \<longleftrightarrow> x < 0"
by transfer(simp add: msb_int_def)
end
context
includes integer.lifting natural.lifting
begin
lemma bitAND_integer_unfold [code]:
"x AND y =
(if x = 0 then 0
else if x = - 1 then y
else Bit_integer (bin_rest_integer x AND bin_rest_integer y) (bin_last_integer x \<and> bin_last_integer y))"
by transfer (fact bitAND_int.simps)
lemma bitOR_integer_unfold [code]:
"x OR y =
(if x = 0 then y
else if x = - 1 then - 1
else Bit_integer (bin_rest_integer x OR bin_rest_integer y) (bin_last_integer x \<or> bin_last_integer y))"
proof transfer
fix x y :: int
from int_or_Bits [of "bin_rest x" "bin_last x" "bin_rest y" "bin_last y"]
have "(bin_rest x OR bin_rest y) BIT (bin_last x \<or> bin_last y) = x OR y"
by simp
then show "x OR y =
(if x = 0 then y
else if x = - 1 then - 1
else Bit (bin_rest x OR bin_rest y) (bin_last x \<or> bin_last y))"
by simp
qed
lemma bitXOR_integer_unfold [code]:
"x XOR y =
(if x = 0 then y
else if x = - 1 then NOT y
else Bit_integer (bin_rest_integer x XOR bin_rest_integer y)
(\<not> bin_last_integer x \<longleftrightarrow> bin_last_integer y))"
proof transfer
fix x y :: int
from int_xor_Bits [of "bin_rest x" "bin_last x" "bin_rest y" "bin_last y"]
have "(bin_rest x XOR bin_rest y) BIT
((bin_last x \<or> bin_last y) \<and> (bin_last x \<longrightarrow> \<not> bin_last y)) = x XOR y"
by simp
also have "(bin_last x \<or> bin_last y) \<and> (bin_last x \<longrightarrow> \<not> bin_last y) \<longleftrightarrow> (\<not> bin_last x \<longleftrightarrow> bin_last y)"
by auto
finally show "x XOR y =
(if x = 0 then y
else if x = - 1 then NOT y
else Bit (bin_rest x XOR bin_rest y)
(\<not> bin_last x \<longleftrightarrow> bin_last y))"
by simp
qed
end
section \<open>Test code generator setup\<close>
definition bit_integer_test :: "bool" where
"bit_integer_test =
(([ -1 AND 3, 1 AND -3, 3 AND 5, -3 AND (- 5)
, -3 OR 1, 1 OR -3, 3 OR 5, -3 OR (- 5)
, NOT 1, NOT (- 3)
, -1 XOR 3, 1 XOR (- 3), 3 XOR 5, -5 XOR (- 3)
, set_bit 5 4 True, set_bit (- 5) 2 True, set_bit 5 0 False, set_bit (- 5) 1 False
, 1 << 2, -1 << 3
, 100 >> 3, -100 >> 3] :: integer list)
= [ 3, 1, 1, -7
, -3, -3, 7, -1
, -2, 2
, -4, -4, 6, 6
, 21, -1, 4, -7
, 4, -8
, 12, -13] \<and>
[ (5 :: integer) !! 4, (5 :: integer) !! 2, (-5 :: integer) !! 4, (-5 :: integer) !! 2
, lsb (5 :: integer), lsb (4 :: integer), lsb (-1 :: integer), lsb (-2 :: integer),
msb (5 :: integer), msb (0 :: integer), msb (-1 :: integer), msb (-2 :: integer)]
= [ False, True, True, False,
True, False, True, False,
False, False, True, True])"
export_code bit_integer_test checking SML Haskell? Haskell_Quickcheck? OCaml? Scala
notepad begin
have bit_integer_test by eval
have bit_integer_test by normalization
have bit_integer_test by code_simp
end
ML_val \<open>val true = @{code bit_integer_test}\<close>
lemma "x AND y = x OR (y :: integer)"
quickcheck[random, expect=counterexample]
quickcheck[exhaustive, expect=counterexample]
oops
lemma "(x :: integer) AND x = x OR x"
quickcheck[narrowing, expect=no_counterexample]
oops
lemma "(f :: integer \<Rightarrow> unit) = g"
quickcheck[narrowing, size=3, expect=no_counterexample]
by(simp add: fun_eq_iff)
hide_const bit_integer_test
hide_fact bit_integer_test_def
end
diff --git a/thys/Native_Word/Code_Target_Bits_Int.thy b/thys/Native_Word/Code_Target_Bits_Int.thy
--- a/thys/Native_Word/Code_Target_Bits_Int.thy
+++ b/thys/Native_Word/Code_Target_Bits_Int.thy
@@ -1,75 +1,75 @@
(* Title: Code_Target_Bits_Int.thy
Author: Andreas Lochbihler, ETH Zurich
*)
chapter \<open>Implementation of bit operations on int by target language operations\<close>
theory Code_Target_Bits_Int
imports
Bits_Integer
"HOL-Library.Code_Target_Int"
begin
declare [[code drop:
- "bitAND :: int \<Rightarrow> _" "bitOR :: int \<Rightarrow> _" "bitXOR :: int \<Rightarrow> _" "bitNOT :: int \<Rightarrow> _"
+ "(AND) :: int \<Rightarrow> _" "(OR) :: int \<Rightarrow> _" "(XOR) :: int \<Rightarrow> _" "NOT :: int \<Rightarrow> _"
"lsb :: int \<Rightarrow> _" "set_bit :: int \<Rightarrow> _" "test_bit :: int \<Rightarrow> _"
"shiftl :: int \<Rightarrow> _" "shiftr :: int \<Rightarrow> _"
bin_last bin_rest bin_nth Bit
int_of_integer_symbolic
]]
context
includes integer.lifting
begin
lemma bitAND_int_code [code]:
"int_of_integer i AND int_of_integer j = int_of_integer (i AND j)"
by transfer simp
lemma bitOR_int_code [code]:
"int_of_integer i OR int_of_integer j = int_of_integer (i OR j)"
by transfer simp
lemma bitXOR_int_code [code]:
"int_of_integer i XOR int_of_integer j = int_of_integer (i XOR j)"
by transfer simp
lemma bitNOT_int_code [code]:
"NOT (int_of_integer i) = int_of_integer (NOT i)"
by transfer simp
declare bin_last_conv_AND [code]
lemma bin_rest_code [code]:
"bin_rest (int_of_integer i) = int_of_integer (bin_rest_integer i)"
by transfer simp
declare bitval_bin_last [code_unfold]
declare bin_nth_conv_AND [code]
lemma Bit_code [code]: "int_of_integer i BIT b = int_of_integer (Bit_integer i b)"
by transfer simp
lemma test_bit_int_code [code]: "int_of_integer x !! n = x !! n"
by transfer simp
lemma lsb_int_code [code]: "lsb (int_of_integer x) = lsb x"
by transfer simp
lemma set_bit_int_code [code]: "set_bit (int_of_integer x) n b = int_of_integer (set_bit x n b)"
by transfer simp
lemma shiftl_int_code [code]: "int_of_integer x << n = int_of_integer (x << n)"
by transfer simp
lemma shiftr_int_code [code]: "int_of_integer x >> n = int_of_integer (x >> n)"
by transfer simp
lemma int_of_integer_symbolic_code [code]:
"int_of_integer_symbolic = int_of_integer"
by(simp add: int_of_integer_symbolic_def)
end
end
diff --git a/thys/Native_Word/Uint.thy b/thys/Native_Word/Uint.thy
--- a/thys/Native_Word/Uint.thy
+++ b/thys/Native_Word/Uint.thy
@@ -1,794 +1,794 @@
(* Title: Uint.thy
Author: Peter Lammich, TU Munich
Author: Andreas Lochbihler, ETH Zurich
*)
chapter \<open>Unsigned words of default size\<close>
theory Uint imports
Code_Target_Word_Base
begin
text \<open>
This theory provides access to words in the target languages of the code generator
whose bit width is the default of the target language. To that end, the type \<open>uint\<close>
models words of width \<open>dflt_size\<close>, but \<open>dflt_size\<close> is known only to be positive.
Usage restrictions:
Default-size words (type \<open>uint\<close>) cannot be used for evaluation, because
the results depend on the particular choice of word size in the target language
and implementation. Symbolic evaluation has not yet been set up for \<open>uint\<close>.
\<close>
text \<open>The default size type\<close>
typedecl dflt_size
instantiation dflt_size :: typerep begin
definition "typerep_class.typerep \<equiv> \<lambda>_ :: dflt_size itself. Typerep.Typerep (STR ''Uint.dflt_size'') []"
instance ..
end
consts dflt_size_aux :: "nat"
specification (dflt_size_aux) dflt_size_aux_g0: "dflt_size_aux > 0"
by auto
hide_fact dflt_size_aux_def
instantiation dflt_size :: len begin
definition "len_of_dflt_size (_ :: dflt_size itself) \<equiv> dflt_size_aux"
instance by(intro_classes)(simp add: len_of_dflt_size_def dflt_size_aux_g0)
end
abbreviation "dflt_size \<equiv> len_of (TYPE (dflt_size))"
context includes integer.lifting begin
lift_definition dflt_size_integer :: integer is "int dflt_size" .
declare dflt_size_integer_def[code del]
\<comment> \<open>The code generator will substitute a machine-dependent value for this constant\<close>
lemma dflt_size_by_int[code]: "dflt_size = nat_of_integer dflt_size_integer"
by transfer simp
lemma dflt_size[simp]:
"dflt_size > 0"
"dflt_size \<ge> Suc 0"
"\<not> dflt_size < Suc 0"
using len_gt_0[where 'a=dflt_size]
by (simp_all del: len_gt_0)
end
declare prod.Quotient[transfer_rule]
section \<open>Type definition and primitive operations\<close>
typedef uint = "UNIV :: dflt_size word set" ..
setup_lifting type_definition_uint
text \<open>Use an abstract type for code generation to disable pattern matching on @{term Abs_uint}.\<close>
declare Rep_uint_inverse[code abstype]
declare Quotient_uint[transfer_rule]
instantiation uint :: "{neg_numeral, modulo, comm_monoid_mult, comm_ring}" begin
lift_definition zero_uint :: uint is "0 :: dflt_size word" .
lift_definition one_uint :: uint is "1" .
lift_definition plus_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is "(+) :: dflt_size word \<Rightarrow> _" .
lift_definition minus_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is "(-)" .
lift_definition uminus_uint :: "uint \<Rightarrow> uint" is uminus .
lift_definition times_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is "(*)" .
lift_definition divide_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is "(div)" .
lift_definition modulo_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is "(mod)" .
instance by standard (transfer, simp add: algebra_simps)+
end
instantiation uint :: linorder begin
lift_definition less_uint :: "uint \<Rightarrow> uint \<Rightarrow> bool" is "(<)" .
lift_definition less_eq_uint :: "uint \<Rightarrow> uint \<Rightarrow> bool" is "(\<le>)" .
instance by standard (transfer, simp add: less_le_not_le linear)+
end
lemmas [code] = less_uint.rep_eq less_eq_uint.rep_eq
instantiation uint :: bit_operations begin
-lift_definition bitNOT_uint :: "uint \<Rightarrow> uint" is bitNOT .
-lift_definition bitAND_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is bitAND .
-lift_definition bitOR_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is bitOR .
-lift_definition bitXOR_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is bitXOR .
+lift_definition bitNOT_uint :: "uint \<Rightarrow> uint" is NOT .
+lift_definition bitAND_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is \<open>(AND)\<close> .
+lift_definition bitOR_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is \<open>(OR)\<close> .
+lift_definition bitXOR_uint :: "uint \<Rightarrow> uint \<Rightarrow> uint" is \<open>(XOR)\<close> .
lift_definition test_bit_uint :: "uint \<Rightarrow> nat \<Rightarrow> bool" is test_bit .
lift_definition set_bit_uint :: "uint \<Rightarrow> nat \<Rightarrow> bool \<Rightarrow> uint" is set_bit .
lift_definition lsb_uint :: "uint \<Rightarrow> bool" is lsb .
lift_definition shiftl_uint :: "uint \<Rightarrow> nat \<Rightarrow> uint" is shiftl .
lift_definition shiftr_uint :: "uint \<Rightarrow> nat \<Rightarrow> uint" is shiftr .
lift_definition msb_uint :: "uint \<Rightarrow> bool" is msb .
instance ..
end
instantiation uint :: bit_comprehension begin
lift_definition set_bits_uint :: "(nat \<Rightarrow> bool) \<Rightarrow> uint" is "set_bits" .
instance ..
end
lemmas [code] = test_bit_uint.rep_eq lsb_uint.rep_eq msb_uint.rep_eq
instantiation uint :: equal begin
lift_definition equal_uint :: "uint \<Rightarrow> uint \<Rightarrow> bool" is "equal_class.equal" .
instance by standard (transfer, simp add: equal_eq)
end
lemmas [code] = equal_uint.rep_eq
instantiation uint :: size begin
lift_definition size_uint :: "uint \<Rightarrow> nat" is "size" .
instance ..
end
lemmas [code] = size_uint.rep_eq
lift_definition sshiftr_uint :: "uint \<Rightarrow> nat \<Rightarrow> uint" (infixl ">>>" 55) is sshiftr .
lift_definition uint_of_int :: "int \<Rightarrow> uint" is "word_of_int" .
lemma of_bool_integer_transfer [transfer_rule]:
"(rel_fun (=) pcr_integer) of_bool of_bool"
by(auto simp add: integer.pcr_cr_eq cr_integer_def split: bit.split)
text \<open>Use pretty numerals from integer for pretty printing\<close>
context includes integer.lifting begin
lift_definition Uint :: "integer \<Rightarrow> uint" is "word_of_int" .
lemma Rep_uint_numeral [simp]: "Rep_uint (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint_def Abs_uint_inverse numeral.simps plus_uint_def)
lemma numeral_uint_transfer [transfer_rule]:
"(rel_fun (=) cr_uint) numeral numeral"
by(auto simp add: cr_uint_def)
lemma numeral_uint [code_unfold]: "numeral n = Uint (numeral n)"
by transfer simp
lemma Rep_uint_neg_numeral [simp]: "Rep_uint (- numeral n) = - numeral n"
by(simp only: uminus_uint_def)(simp add: Abs_uint_inverse)
lemma neg_numeral_uint [code_unfold]: "- numeral n = Uint (- numeral n)"
by transfer(simp add: cr_uint_def)
end
lemma Abs_uint_numeral [code_post]: "Abs_uint (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint_def numeral.simps plus_uint_def Abs_uint_inverse)
lemma Abs_uint_0 [code_post]: "Abs_uint 0 = 0"
by(simp add: zero_uint_def)
lemma Abs_uint_1 [code_post]: "Abs_uint 1 = 1"
by(simp add: one_uint_def)
section \<open>Code setup\<close>
code_printing code_module Uint \<rightharpoonup> (SML)
\<open>
structure Uint : sig
val set_bit : Word.word -> IntInf.int -> bool -> Word.word
val shiftl : Word.word -> IntInf.int -> Word.word
val shiftr : Word.word -> IntInf.int -> Word.word
val shiftr_signed : Word.word -> IntInf.int -> Word.word
val test_bit : Word.word -> IntInf.int -> bool
end = struct
fun set_bit x n b =
let val mask = Word.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))
in if b then Word.orb (x, mask)
else Word.andb (x, Word.notb mask)
end
fun shiftl x n =
Word.<< (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr x n =
Word.>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr_signed x n =
Word.~>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun test_bit x n =
Word.andb (x, Word.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))) <> Word.fromInt 0
end; (* struct Uint *)\<close>
code_reserved SML Uint
code_printing code_module Uint \<rightharpoonup> (Haskell)
\<open>module Uint(Int, Word, dflt_size) where
import qualified Prelude
import Data.Int(Int)
import Data.Word(Word)
import qualified Data.Bits
dflt_size :: Prelude.Integer
dflt_size = Prelude.toInteger (bitSize_aux (0::Word)) where
bitSize_aux :: (Data.Bits.Bits a, Prelude.Bounded a) => a -> Int
bitSize_aux = Data.Bits.bitSize\<close>
and (Haskell_Quickcheck)
\<open>module Uint(Int, Word, dflt_size) where
import qualified Prelude
import Data.Int(Int)
import Data.Word(Word)
import qualified Data.Bits
dflt_size :: Prelude.Int
dflt_size = bitSize_aux (0::Word) where
bitSize_aux :: (Data.Bits.Bits a, Prelude.Bounded a) => a -> Int
bitSize_aux = Data.Bits.bitSize
\<close>
code_reserved Haskell Uint dflt_size
text \<open>
OCaml and Scala provide only signed machine integers, so we use these and
implement sign-sensitive operations such as comparisons manually.
\<close>
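(* For instance, the all-ones word is represented by the signed value -1 and must
   compare as the largest unsigned value: less 5 (-1) holds because (-1) < 0,
   whereas less (-1) 5 fails because 5 is non-negative. *)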
code_printing code_module "Uint" \<rightharpoonup> (OCaml)
\<open>module Uint : sig
type t = int
val dflt_size : Z.t
val less : t -> t -> bool
val less_eq : t -> t -> bool
val set_bit : t -> Z.t -> bool -> t
val shiftl : t -> Z.t -> t
val shiftr : t -> Z.t -> t
val shiftr_signed : t -> Z.t -> t
val test_bit : t -> Z.t -> bool
val int_mask : int
val int32_mask : int32
val int64_mask : int64
end = struct
type t = int
let dflt_size = Z.of_int Sys.int_size;;
(* negative numbers have their highest bit set,
so they are greater than positive ones *)
let less x y =
if x<0 then
y<0 && x<y
else y < 0 || x < y;;
let less_eq x y =
if x < 0 then
y < 0 && x <= y
else y < 0 || x <= y;;
let set_bit x n b =
let mask = 1 lsl (Z.to_int n)
in if b then x lor mask
else x land (lnot mask);;
let shiftl x n = x lsl (Z.to_int n);;
let shiftr x n = x lsr (Z.to_int n);;
let shiftr_signed x n = x asr (Z.to_int n);;
let test_bit x n = x land (1 lsl (Z.to_int n)) <> 0;;
let int_mask =
if Sys.int_size < 32 then lnot 0 else 0xFFFFFFFF;;
let int32_mask =
if Sys.int_size < 32 then Int32.pred (Int32.shift_left Int32.one Sys.int_size)
else Int32.of_string "0xFFFFFFFF";;
let int64_mask =
if Sys.int_size < 64 then Int64.pred (Int64.shift_left Int64.one Sys.int_size)
else Int64.of_string "0xFFFFFFFFFFFFFFFF";;
end;; (*struct Uint*)\<close>
code_reserved OCaml Uint
code_printing code_module Uint \<rightharpoonup> (Scala)
\<open>object Uint {
def dflt_size : BigInt = BigInt(32)
def less(x: Int, y: Int) : Boolean =
if (x < 0) y < 0 && x < y
else y < 0 || x < y
def less_eq(x: Int, y: Int) : Boolean =
if (x < 0) y < 0 && x <= y
else y < 0 || x <= y
def set_bit(x: Int, n: BigInt, b: Boolean) : Int =
if (b)
x | (1 << n.intValue)
else
x & (1 << n.intValue).unary_~
def shiftl(x: Int, n: BigInt) : Int = x << n.intValue
def shiftr(x: Int, n: BigInt) : Int = x >>> n.intValue
def shiftr_signed(x: Int, n: BigInt) : Int = x >> n.intValue
def test_bit(x: Int, n: BigInt) : Boolean =
(x & (1 << n.intValue)) != 0
} /* object Uint */\<close>
code_reserved Scala Uint
text \<open>
OCaml's conversion from arbitrary-precision integers to int (\<open>Z.to_int\<close>) requires that the value fit into a signed machine integer.
The following justifies the implementation.
\<close>
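(* Illustration with a hypothetical dflt_size = 4 (so wivs_mask = 15, wivs_shift = 16,
   wivs_index = 3): Uint 13 masks to i' = 13; since bit 3 of 13 is set, the code
   equation below passes 13 - 16 = -3 to Uint_signed, a value that fits into a signed
   machine int and still denotes 13 modulo 2^4. *)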
context includes integer.lifting begin
definition wivs_mask :: int where "wivs_mask = 2^ dflt_size - 1"
lift_definition wivs_mask_integer :: integer is wivs_mask .
lemma [code]: "wivs_mask_integer = 2 ^ dflt_size - 1"
by transfer (simp add: wivs_mask_def)
definition wivs_shift :: int where "wivs_shift = 2 ^ dflt_size"
lift_definition wivs_shift_integer :: integer is wivs_shift .
lemma [code]: "wivs_shift_integer = 2 ^ dflt_size"
by transfer (simp add: wivs_shift_def)
definition wivs_index :: nat where "wivs_index == dflt_size - 1"
lift_definition wivs_index_integer :: integer is "int wivs_index".
lemma wivs_index_integer_code[code]: "wivs_index_integer = dflt_size_integer - 1"
by transfer (simp add: wivs_index_def of_nat_diff)
definition wivs_overflow :: int where "wivs_overflow == 2^ (dflt_size - 1)"
lift_definition wivs_overflow_integer :: integer is wivs_overflow .
lemma [code]: "wivs_overflow_integer = 2 ^ (dflt_size - 1)"
by transfer (simp add: wivs_overflow_def)
definition wivs_least :: int where "wivs_least == - wivs_overflow"
lift_definition wivs_least_integer :: integer is wivs_least .
lemma [code]: "wivs_least_integer = - (2 ^ (dflt_size - 1))"
by transfer (simp add: wivs_overflow_def wivs_least_def)
definition Uint_signed :: "integer \<Rightarrow> uint" where
"Uint_signed i = (if i < wivs_least_integer \<or> wivs_overflow_integer \<le> i then undefined Uint i else Uint i)"
lemma Uint_code [code]:
"Uint i =
(let i' = i AND wivs_mask_integer in
if i' !! wivs_index then Uint_signed (i' - wivs_shift_integer) else Uint_signed i')"
including undefined_transfer
unfolding Uint_signed_def
apply transfer
apply (rule word_of_int_via_signed)
by (simp_all add: wivs_mask_def wivs_shift_def wivs_index_def wivs_overflow_def
wivs_least_def bin_mask_conv_pow2 shiftl_int_def)
lemma Uint_signed_code [code abstract]:
"Rep_uint (Uint_signed i) =
(if i < wivs_least_integer \<or> i \<ge> wivs_overflow_integer then Rep_uint (undefined Uint i) else word_of_int (int_of_integer_symbolic i))"
unfolding Uint_signed_def Uint_def int_of_integer_symbolic_def word_of_integer_def
by(simp add: Abs_uint_inverse)
end
text \<open>
Avoid @{term Abs_uint} in generated code, use @{term Rep_uint'} instead.
The symbolic implementations for code\_simp use @{term Rep_uint}.
The new destructor @{term Rep_uint'} is executable.
As the simplifier is given the [code abstract] equations literally,
we cannot implement @{term Rep_uint} directly, because that makes code\_simp loop.
If code generation raises Match, some equation probably contains @{term Rep_uint}
([code abstract] equations for @{typ uint} may use @{term Rep_uint} because
these instances will be folded away).
\<close>
definition Rep_uint' where [simp]: "Rep_uint' = Rep_uint"
lemma Rep_uint'_code [code]: "Rep_uint' x = (BITS n. x !! n)"
unfolding Rep_uint'_def by transfer simp
lift_definition Abs_uint' :: "dflt_size word \<Rightarrow> uint" is "\<lambda>x :: dflt_size word. x" .
lemma Abs_uint'_code [code]:
"Abs_uint' x = Uint (integer_of_int (uint x))"
including integer.lifting by transfer simp
declare [[code drop: "term_of_class.term_of :: uint \<Rightarrow> _"]]
lemma term_of_uint_code [code]:
defines "TR \<equiv> typerep.Typerep" and "bit0 \<equiv> STR ''Numeral_Type.bit0''"
shows
"term_of_class.term_of x =
Code_Evaluation.App (Code_Evaluation.Const (STR ''Uint.uint.Abs_uint'') (TR (STR ''fun'') [TR (STR ''Word.word'') [TR (STR ''Uint.dflt_size'') []], TR (STR ''Uint.uint'') []]))
(term_of_class.term_of (Rep_uint' x))"
by(simp add: term_of_anything)
text \<open>Important:
We must prevent the reflection oracle (eval-tac) from
using our machine-dependent type.
\<close>
code_printing
type_constructor uint \<rightharpoonup>
(SML) "Word.word" and
(Haskell) "Uint.Word" and
(OCaml) "Uint.t" and
(Scala) "Int" and
(Eval) "*** \"Error: Machine dependent type\" ***" and
(Quickcheck) "Word.word"
| constant dflt_size_integer \<rightharpoonup>
(SML) "(IntInf.fromLarge (Int.toLarge Word.wordSize))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.wordSize" and
(Haskell) "Uint.dflt'_size" and
(OCaml) "Uint.dflt'_size" and
(Scala) "Uint.dflt'_size"
| constant Uint \<rightharpoonup>
(SML) "Word.fromLargeInt (IntInf.toLarge _)" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.fromInt" and
(Haskell) "(Prelude.fromInteger _ :: Uint.Word)" and
(Haskell_Quickcheck) "(Prelude.fromInteger (Prelude.toInteger _) :: Uint.Word)" and
(Scala) "_.intValue"
| constant Uint_signed \<rightharpoonup>
(OCaml) "Z.to'_int"
| constant "0 :: uint" \<rightharpoonup>
(SML) "(Word.fromInt 0)" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "(Word.fromInt 0)" and
(Haskell) "(0 :: Uint.Word)" and
(OCaml) "0" and
(Scala) "0"
| constant "1 :: uint" \<rightharpoonup>
(SML) "(Word.fromInt 1)" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "(Word.fromInt 1)" and
(Haskell) "(1 :: Uint.Word)" and
(OCaml) "1" and
(Scala) "1"
| constant "plus :: uint \<Rightarrow> _ " \<rightharpoonup>
(SML) "Word.+ ((_), (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.+ ((_), (_))" and
(Haskell) infixl 6 "+" and
(OCaml) "Pervasives.(+)" and
(Scala) infixl 7 "+"
| constant "uminus :: uint \<Rightarrow> _" \<rightharpoonup>
(SML) "Word.~" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.~" and
(Haskell) "negate" and
(OCaml) "Pervasives.(~-)" and
(Scala) "!(- _)"
| constant "minus :: uint \<Rightarrow> _" \<rightharpoonup>
(SML) "Word.- ((_), (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.- ((_), (_))" and
(Haskell) infixl 6 "-" and
(OCaml) "Pervasives.(-)" and
(Scala) infixl 7 "-"
| constant "times :: uint \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML) "Word.* ((_), (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.* ((_), (_))" and
(Haskell) infixl 7 "*" and
(OCaml) "Pervasives.( * )" and
(Scala) infixl 8 "*"
| constant "HOL.equal :: uint \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "!((_ : Word.word) = _)" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "!((_ : Word.word) = _)" and
(Haskell) infix 4 "==" and
(OCaml) "(Pervasives.(=):Uint.t -> Uint.t -> bool)" and
(Scala) infixl 5 "=="
| class_instance uint :: equal \<rightharpoonup>
(Haskell) -
| constant "less_eq :: uint \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Word.<= ((_), (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.<= ((_), (_))" and
(Haskell) infix 4 "<=" and
(OCaml) "Uint.less'_eq" and
(Scala) "Uint.less'_eq"
| constant "less :: uint \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Word.< ((_), (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.< ((_), (_))" and
(Haskell) infix 4 "<" and
(OCaml) "Uint.less" and
(Scala) "Uint.less"
-| constant "bitNOT :: uint \<Rightarrow> _" \<rightharpoonup>
+| constant "NOT :: uint \<Rightarrow> _" \<rightharpoonup>
(SML) "Word.notb" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.notb" and
(Haskell) "Data'_Bits.complement" and
(OCaml) "Pervasives.lnot" and
(Scala) "_.unary'_~"
-| constant "bitAND :: uint \<Rightarrow> _" \<rightharpoonup>
+| constant "(AND) :: uint \<Rightarrow> _" \<rightharpoonup>
(SML) "Word.andb ((_),/ (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.andb ((_),/ (_))" and
(Haskell) infixl 7 "Data_Bits..&." and
(OCaml) "Pervasives.(land)" and
(Scala) infixl 3 "&"
-| constant "bitOR :: uint \<Rightarrow> _" \<rightharpoonup>
+| constant "(OR) :: uint \<Rightarrow> _" \<rightharpoonup>
(SML) "Word.orb ((_),/ (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.orb ((_),/ (_))" and
(Haskell) infixl 5 "Data_Bits..|." and
(OCaml) "Pervasives.(lor)" and
(Scala) infixl 1 "|"
-| constant "bitXOR :: uint \<Rightarrow> _" \<rightharpoonup>
+| constant "(XOR) :: uint \<Rightarrow> _" \<rightharpoonup>
(SML) "Word.xorb ((_),/ (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.xorb ((_),/ (_))" and
(Haskell) "Data'_Bits.xor" and
(OCaml) "Pervasives.(lxor)" and
(Scala) infixl 2 "^"
definition uint_divmod :: "uint \<Rightarrow> uint \<Rightarrow> uint \<times> uint" where
"uint_divmod x y =
(if y = 0 then (undefined ((div) :: uint \<Rightarrow> _) x (0 :: uint), undefined ((mod) :: uint \<Rightarrow> _) x (0 :: uint))
else (x div y, x mod y))"
definition uint_div :: "uint \<Rightarrow> uint \<Rightarrow> uint"
where "uint_div x y = fst (uint_divmod x y)"
definition uint_mod :: "uint \<Rightarrow> uint \<Rightarrow> uint"
where "uint_mod x y = snd (uint_divmod x y)"
lemma div_uint_code [code]: "x div y = (if y = 0 then 0 else uint_div x y)"
including undefined_transfer unfolding uint_divmod_def uint_div_def
by transfer(simp add: word_div_def)
lemma mod_uint_code [code]: "x mod y = (if y = 0 then x else uint_mod x y)"
including undefined_transfer unfolding uint_mod_def uint_divmod_def
by transfer(simp add: word_mod_def)
definition uint_sdiv :: "uint \<Rightarrow> uint \<Rightarrow> uint"
where [code del]:
"uint_sdiv x y =
(if y = 0 then undefined ((div) :: uint \<Rightarrow> _) x (0 :: uint)
else Abs_uint (Rep_uint x sdiv Rep_uint y))"
definition div0_uint :: "uint \<Rightarrow> uint"
where [code del]: "div0_uint x = undefined ((div) :: uint \<Rightarrow> _) x (0 :: uint)"
declare [[code abort: div0_uint]]
definition mod0_uint :: "uint \<Rightarrow> uint"
where [code del]: "mod0_uint x = undefined ((mod) :: uint \<Rightarrow> _) x (0 :: uint)"
declare [[code abort: mod0_uint]]
definition wivs_overflow_uint :: uint
where "wivs_overflow_uint \<equiv> 1 << (dflt_size - 1)"
(* TODO: Move to Word *)
lemma dflt_size_word_pow_ne_zero [simp]:
"(2 :: 'a word) ^ (LENGTH('a::len) - Suc 0) \<noteq> 0"
proof
assume "(2 :: 'a word) ^ (LENGTH('a::len) - Suc 0) = 0"
then have "unat ((2 :: 'a word) ^ (LENGTH('a::len) - Suc 0)) = unat 0"
by simp
then show False by (simp add: unat_p2)
qed
lemma uint_divmod_code [code]:
"uint_divmod x y =
(if wivs_overflow_uint \<le> y then if x < y then (0, x) else (1, x - y)
else if y = 0 then (div0_uint x, mod0_uint x)
else let q = (uint_sdiv (x >> 1) y) << 1;
r = x - q * y
in if r \<ge> y then (q + 1, r - y) else (q, r))"
including undefined_transfer
unfolding uint_divmod_def uint_sdiv_def div0_uint_def mod0_uint_def
wivs_overflow_uint_def
by transfer (simp add: divmod_via_sdivmod)
lemma uint_sdiv_code [code abstract]:
"Rep_uint (uint_sdiv x y) =
(if y = 0 then Rep_uint (undefined ((div) :: uint \<Rightarrow> _) x (0 :: uint))
else Rep_uint x sdiv Rep_uint y)"
unfolding uint_sdiv_def by(simp add: Abs_uint_inverse)
text \<open>
Note that we only need a translation for signed division, but not for the remainder
because @{thm uint_divmod_code} computes both with division only.
\<close>
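(* Illustration of uint_divmod_code with a hypothetical 8-bit word size: for x = 200
   and y = 3, the signed division only ever sees x >> 1 = 100, so
   q = (100 sdiv 3) << 1 = 66 and r = 200 - 66 * 3 = 2 < 3, giving (66, 2);
   the remainder thus falls out of the quotient computation for free. *)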
code_printing
constant uint_div \<rightharpoonup>
(SML) "Word.div ((_), (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.div ((_), (_))" and
(Haskell) "Prelude.div"
| constant uint_mod \<rightharpoonup>
(SML) "Word.mod ((_), (_))" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Word.mod ((_), (_))" and
(Haskell) "Prelude.mod"
| constant uint_divmod \<rightharpoonup>
(Haskell) "divmod"
| constant uint_sdiv \<rightharpoonup>
(OCaml) "Pervasives.('/)" and
(Scala) "_ '/ _"
definition uint_test_bit :: "uint \<Rightarrow> integer \<Rightarrow> bool"
where [code del]:
"uint_test_bit x n =
(if n < 0 \<or> dflt_size_integer \<le> n then undefined (test_bit :: uint \<Rightarrow> _) x n
else x !! (nat_of_integer n))"
lemma test_bit_uint_code [code]:
"test_bit x n \<longleftrightarrow> n < dflt_size \<and> uint_test_bit x (integer_of_nat n)"
including undefined_transfer integer.lifting unfolding uint_test_bit_def
by transfer (auto cong: conj_cong dest: test_bit_size simp add: word_size)
lemma uint_test_bit_code [code]:
"uint_test_bit w n =
(if n < 0 \<or> dflt_size_integer \<le> n then undefined (test_bit :: uint \<Rightarrow> _) w n else Rep_uint w !! nat_of_integer n)"
unfolding uint_test_bit_def
by(simp add: test_bit_uint.rep_eq)
code_printing constant uint_test_bit \<rightharpoonup>
(SML) "Uint.test'_bit" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Uint.test'_bit" and
(Haskell) "Data'_Bits.testBitBounded" and
(OCaml) "Uint.test'_bit" and
(Scala) "Uint.test'_bit"
definition uint_set_bit :: "uint \<Rightarrow> integer \<Rightarrow> bool \<Rightarrow> uint"
where [code del]:
"uint_set_bit x n b =
(if n < 0 \<or> dflt_size_integer \<le> n then undefined (set_bit :: uint \<Rightarrow> _) x n b
else set_bit x (nat_of_integer n) b)"
lemma set_bit_uint_code [code]:
"set_bit x n b = (if n < dflt_size then uint_set_bit x (integer_of_nat n) b else x)"
including undefined_transfer integer.lifting unfolding uint_set_bit_def
by (transfer) (auto cong: conj_cong simp add: not_less set_bit_beyond word_size)
lemma uint_set_bit_code [code abstract]:
"Rep_uint (uint_set_bit w n b) =
(if n < 0 \<or> dflt_size_integer \<le> n then Rep_uint (undefined (set_bit :: uint \<Rightarrow> _) w n b)
else set_bit (Rep_uint w) (nat_of_integer n) b)"
including undefined_transfer integer.lifting unfolding uint_set_bit_def by transfer simp
code_printing constant uint_set_bit \<rightharpoonup>
(SML) "Uint.set'_bit" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Uint.set'_bit" and
(Haskell) "Data'_Bits.setBitBounded" and
(OCaml) "Uint.set'_bit" and
(Scala) "Uint.set'_bit"
lift_definition uint_set_bits :: "(nat \<Rightarrow> bool) \<Rightarrow> uint \<Rightarrow> nat \<Rightarrow> uint" is set_bits_aux .
lemma uint_set_bits_code [code]:
"uint_set_bits f w n =
(if n = 0 then w
else let n' = n - 1 in uint_set_bits f ((w << 1) OR (if f n' then 1 else 0)) n')"
by(transfer fixing: n)(cases n, simp_all)
lemma set_bits_uint [code]:
"(BITS n. f n) = uint_set_bits f 0 dflt_size"
by transfer (simp add: set_bits_conv_set_bits_aux)
lemma lsb_code [code]: fixes x :: uint shows "lsb x = x !! 0"
by transfer(simp add: word_lsb_def word_test_bit_def)
definition uint_shiftl :: "uint \<Rightarrow> integer \<Rightarrow> uint"
where [code del]:
"uint_shiftl x n = (if n < 0 \<or> dflt_size_integer \<le> n then undefined (shiftl :: uint \<Rightarrow> _) x n else x << (nat_of_integer n))"
lemma shiftl_uint_code [code]: "x << n = (if n < dflt_size then uint_shiftl x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint_shiftl_def
by transfer(simp add: not_less shiftl_zero_size word_size)
lemma uint_shiftl_code [code abstract]:
"Rep_uint (uint_shiftl w n) =
(if n < 0 \<or> dflt_size_integer \<le> n then Rep_uint (undefined (shiftl :: uint \<Rightarrow> _) w n) else Rep_uint w << (nat_of_integer n))"
including undefined_transfer integer.lifting unfolding uint_shiftl_def by transfer simp
code_printing constant uint_shiftl \<rightharpoonup>
(SML) "Uint.shiftl" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Uint.shiftl" and
(Haskell) "Data'_Bits.shiftlBounded" and
(OCaml) "Uint.shiftl" and
(Scala) "Uint.shiftl"
definition uint_shiftr :: "uint \<Rightarrow> integer \<Rightarrow> uint"
where [code del]:
"uint_shiftr x n = (if n < 0 \<or> dflt_size_integer \<le> n then undefined (shiftr :: uint \<Rightarrow> _) x n else x >> (nat_of_integer n))"
lemma shiftr_uint_code [code]: "x >> n = (if n < dflt_size then uint_shiftr x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint_shiftr_def
by transfer(simp add: not_less shiftr_zero_size word_size)
lemma uint_shiftr_code [code abstract]:
"Rep_uint (uint_shiftr w n) =
(if n < 0 \<or> dflt_size_integer \<le> n then Rep_uint (undefined (shiftr :: uint \<Rightarrow> _) w n) else Rep_uint w >> nat_of_integer n)"
including undefined_transfer unfolding uint_shiftr_def by transfer simp
code_printing constant uint_shiftr \<rightharpoonup>
(SML) "Uint.shiftr" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Uint.shiftr" and
(Haskell) "Data'_Bits.shiftrBounded" and
(OCaml) "Uint.shiftr" and
(Scala) "Uint.shiftr"
definition uint_sshiftr :: "uint \<Rightarrow> integer \<Rightarrow> uint"
where [code del]:
"uint_sshiftr x n =
(if n < 0 \<or> dflt_size_integer \<le> n then undefined sshiftr_uint x n else sshiftr_uint x (nat_of_integer n))"
lemma sshiftr_beyond: fixes x :: "'a :: len word" shows
"size x \<le> n \<Longrightarrow> x >>> n = (if x !! (size x - 1) then -1 else 0)"
by(rule word_eqI)(simp add: nth_sshiftr word_size)
lemma sshiftr_uint_code [code]:
"x >>> n =
(if n < dflt_size then uint_sshiftr x (integer_of_nat n) else
if x !! wivs_index then -1 else 0)"
including undefined_transfer integer.lifting unfolding uint_sshiftr_def
by transfer(simp add: not_less sshiftr_beyond word_size wivs_index_def)
lemma uint_sshiftr_code [code abstract]:
"Rep_uint (uint_sshiftr w n) =
(if n < 0 \<or> dflt_size_integer \<le> n then Rep_uint (undefined sshiftr_uint w n) else Rep_uint w >>> (nat_of_integer n))"
including undefined_transfer unfolding uint_sshiftr_def by transfer simp
code_printing constant uint_sshiftr \<rightharpoonup>
(SML) "Uint.shiftr'_signed" and
(Eval) "(raise (Fail \"Machine dependent code\"))" and
(Quickcheck) "Uint.shiftr'_signed" and
(Haskell)
"(Prelude.fromInteger (Prelude.toInteger (Data'_Bits.shiftrBounded (Prelude.fromInteger (Prelude.toInteger _) :: Uint.Int) _)) :: Uint.Word)" and
(OCaml) "Uint.shiftr'_signed" and
(Scala) "Uint.shiftr'_signed"
lemma uint_msb_test_bit: "msb x \<longleftrightarrow> (x :: uint) !! wivs_index"
by transfer(simp add: msb_nth wivs_index_def)
lemma msb_uint_code [code]: "msb x \<longleftrightarrow> uint_test_bit x wivs_index_integer"
apply(simp add: uint_test_bit_def uint_msb_test_bit
wivs_index_integer_code dflt_size_integer_def wivs_index_def)
by (metis (full_types) One_nat_def dflt_size(2) less_iff_diff_less_0
nat_of_integer_of_nat of_nat_1 of_nat_diff of_nat_less_0_iff wivs_index_def)
lemma uint_of_int_code [code]: "uint_of_int i = (BITS n. i !! n)"
by transfer(simp add: word_of_int_conv_set_bits test_bit_int_def[abs_def])
section \<open>Quickcheck setup\<close>
definition uint_of_natural :: "natural \<Rightarrow> uint"
where "uint_of_natural x \<equiv> Uint (integer_of_natural x)"
instantiation uint :: "{random, exhaustive, full_exhaustive}" begin
definition "random_uint \<equiv> qc_random_cnv uint_of_natural"
definition "exhaustive_uint \<equiv> qc_exhaustive_cnv uint_of_natural"
definition "full_exhaustive_uint \<equiv> qc_full_exhaustive_cnv uint_of_natural"
instance ..
end
instantiation uint :: narrowing begin
interpretation quickcheck_narrowing_samples
"\<lambda>i. (Uint i, Uint (- i))" "0"
"Typerep.Typerep (STR ''Uint.uint'') []" .
definition "narrowing_uint d = qc_narrowing_drawn_from (narrowing_samples d) d"
declare [[code drop: "partial_term_of :: uint itself \<Rightarrow> _"]]
lemmas partial_term_of_uint [code] = partial_term_of_code
instance ..
end
no_notation sshiftr_uint (infixl ">>>" 55)
end
diff --git a/thys/Native_Word/Uint16.thy b/thys/Native_Word/Uint16.thy
--- a/thys/Native_Word/Uint16.thy
+++ b/thys/Native_Word/Uint16.thy
@@ -1,525 +1,525 @@
(* Title: Uint16.thy
Author: Andreas Lochbihler, ETH Zurich
*)
chapter \<open>Unsigned words of 16 bits\<close>
theory Uint16 imports
Code_Target_Word_Base
begin
text \<open>
Restriction for ML code generation:
This theory assumes that the ML system provides a Word16
implementation (mlton does, but PolyML 5.5 does not).
Therefore, the code setup lives in the target \<open>SML_word\<close>
rather than \<open>SML\<close>. This ensures that code generation still
works as long as \<open>uint16\<close> is not involved.
For the target \<open>SML\<close> itself, no special code generation
for this type is set up. Nevertheless, it should work by emulation via @{typ "16 word"}
if the theory \<open>Code_Target_Bits_Int\<close> is imported.
Restriction for OCaml code generation:
OCaml does not provide an int16 type, so no special code generation
for this type is set up.
\<close>
declare prod.Quotient[transfer_rule]
section \<open>Type definition and primitive operations\<close>
typedef uint16 = "UNIV :: 16 word set" ..
setup_lifting type_definition_uint16
text \<open>Use an abstract type for code generation to disable pattern matching on @{term Abs_uint16}.\<close>
declare Rep_uint16_inverse[code abstype]
declare Quotient_uint16[transfer_rule]
instantiation uint16 :: "{neg_numeral, modulo, comm_monoid_mult, comm_ring}" begin
lift_definition zero_uint16 :: uint16 is "0" .
lift_definition one_uint16 :: uint16 is "1" .
lift_definition plus_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is "(+)" .
lift_definition minus_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is "(-)" .
lift_definition uminus_uint16 :: "uint16 \<Rightarrow> uint16" is uminus .
lift_definition times_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is "(*)" .
lift_definition divide_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is "(div)" .
lift_definition modulo_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is "(mod)" .
instance by standard (transfer, simp add: algebra_simps)+
end
instantiation uint16 :: linorder begin
lift_definition less_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> bool" is "(<)" .
lift_definition less_eq_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> bool" is "(\<le>)" .
instance by standard (transfer, simp add: less_le_not_le linear)+
end
lemmas [code] = less_uint16.rep_eq less_eq_uint16.rep_eq
instantiation uint16 :: bit_operations begin
-lift_definition bitNOT_uint16 :: "uint16 \<Rightarrow> uint16" is bitNOT .
-lift_definition bitAND_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is bitAND .
-lift_definition bitOR_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is bitOR .
-lift_definition bitXOR_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is bitXOR .
+lift_definition bitNOT_uint16 :: "uint16 \<Rightarrow> uint16" is NOT .
+lift_definition bitAND_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is \<open>(AND)\<close> .
+lift_definition bitOR_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is \<open>(OR)\<close> .
+lift_definition bitXOR_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16" is \<open>(XOR)\<close> .
lift_definition test_bit_uint16 :: "uint16 \<Rightarrow> nat \<Rightarrow> bool" is test_bit .
lift_definition set_bit_uint16 :: "uint16 \<Rightarrow> nat \<Rightarrow> bool \<Rightarrow> uint16" is set_bit .
lift_definition lsb_uint16 :: "uint16 \<Rightarrow> bool" is lsb .
lift_definition shiftl_uint16 :: "uint16 \<Rightarrow> nat \<Rightarrow> uint16" is shiftl .
lift_definition shiftr_uint16 :: "uint16 \<Rightarrow> nat \<Rightarrow> uint16" is shiftr .
lift_definition msb_uint16 :: "uint16 \<Rightarrow> bool" is msb .
instance ..
end
instantiation uint16 :: bit_comprehension begin
lift_definition set_bits_uint16 :: "(nat \<Rightarrow> bool) \<Rightarrow> uint16" is "set_bits" .
instance ..
end
lemmas [code] = test_bit_uint16.rep_eq lsb_uint16.rep_eq msb_uint16.rep_eq
instantiation uint16 :: equal begin
lift_definition equal_uint16 :: "uint16 \<Rightarrow> uint16 \<Rightarrow> bool" is "equal_class.equal" .
instance by standard (transfer, simp add: equal_eq)
end
lemmas [code] = equal_uint16.rep_eq
instantiation uint16 :: size begin
lift_definition size_uint16 :: "uint16 \<Rightarrow> nat" is "size" .
instance ..
end
lemmas [code] = size_uint16.rep_eq
lift_definition sshiftr_uint16 :: "uint16 \<Rightarrow> nat \<Rightarrow> uint16" (infixl ">>>" 55) is sshiftr .
lift_definition uint16_of_int :: "int \<Rightarrow> uint16" is "word_of_int" .
definition uint16_of_nat :: "nat \<Rightarrow> uint16"
where "uint16_of_nat = uint16_of_int \<circ> int"
lift_definition int_of_uint16 :: "uint16 \<Rightarrow> int" is "uint" .
lift_definition nat_of_uint16 :: "uint16 \<Rightarrow> nat" is "unat" .
definition integer_of_uint16 :: "uint16 \<Rightarrow> integer"
where "integer_of_uint16 = integer_of_int o int_of_uint16"
text \<open>Use pretty numerals from integer for pretty printing\<close>
context includes integer.lifting begin
lift_definition Uint16 :: "integer \<Rightarrow> uint16" is "word_of_int" .
lemma Rep_uint16_numeral [simp]: "Rep_uint16 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint16_def Abs_uint16_inverse numeral.simps plus_uint16_def)
lemma Rep_uint16_neg_numeral [simp]: "Rep_uint16 (- numeral n) = - numeral n"
by(simp only: uminus_uint16_def)(simp add: Abs_uint16_inverse)
lemma numeral_uint16_transfer [transfer_rule]:
"(rel_fun (=) cr_uint16) numeral numeral"
by(auto simp add: cr_uint16_def)
lemma numeral_uint16 [code_unfold]: "numeral n = Uint16 (numeral n)"
by transfer simp
lemma neg_numeral_uint16 [code_unfold]: "- numeral n = Uint16 (- numeral n)"
by transfer(simp add: cr_uint16_def)
end
lemma Abs_uint16_numeral [code_post]: "Abs_uint16 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint16_def numeral.simps plus_uint16_def Abs_uint16_inverse)
lemma Abs_uint16_0 [code_post]: "Abs_uint16 0 = 0"
by(simp add: zero_uint16_def)
lemma Abs_uint16_1 [code_post]: "Abs_uint16 1 = 1"
by(simp add: one_uint16_def)
section \<open>Code setup\<close>
code_printing code_module Uint16 \<rightharpoonup> (SML_word)
\<open>(* Test that words can handle numbers between 0 and 15 *)
val _ = if 4 <= Word.wordSize then () else raise (Fail ("wordSize less than 4"));
structure Uint16 : sig
val set_bit : Word16.word -> IntInf.int -> bool -> Word16.word
val shiftl : Word16.word -> IntInf.int -> Word16.word
val shiftr : Word16.word -> IntInf.int -> Word16.word
val shiftr_signed : Word16.word -> IntInf.int -> Word16.word
val test_bit : Word16.word -> IntInf.int -> bool
end = struct
fun set_bit x n b =
let val mask = Word16.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))
in if b then Word16.orb (x, mask)
else Word16.andb (x, Word16.notb mask)
end
fun shiftl x n =
Word16.<< (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr x n =
Word16.>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr_signed x n =
Word16.~>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun test_bit x n =
Word16.andb (x, Word16.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))) <> Word16.fromInt 0
end; (* struct Uint16 *)\<close>
code_reserved SML_word Uint16
code_printing code_module Uint16 \<rightharpoonup> (Haskell)
\<open>module Uint16(Int16, Word16) where
import Data.Int(Int16)
import Data.Word(Word16)\<close>
code_reserved Haskell Uint16
text \<open>Scala provides unsigned 16-bit numbers as Char.\<close>
code_printing code_module Uint16 \<rightharpoonup> (Scala)
\<open>object Uint16 {
def set_bit(x: scala.Char, n: BigInt, b: Boolean) : scala.Char =
if (b)
(x | (1.toChar << n.intValue)).toChar
else
(x & (1.toChar << n.intValue).unary_~).toChar
def shiftl(x: scala.Char, n: BigInt) : scala.Char = (x << n.intValue).toChar
def shiftr(x: scala.Char, n: BigInt) : scala.Char = (x >>> n.intValue).toChar
def shiftr_signed(x: scala.Char, n: BigInt) : scala.Char = (x.toShort >> n.intValue).toChar
def test_bit(x: scala.Char, n: BigInt) : Boolean = (x & (1.toChar << n.intValue)) != 0
} /* object Uint16 */\<close>
code_reserved Scala Uint16
text \<open>
Avoid @{term Abs_uint16} in generated code, use @{term Rep_uint16'} instead.
The symbolic implementations for code\_simp use @{term Rep_uint16}.
The new destructor @{term Rep_uint16'} is executable.
As the simplifier is given the [code abstract] equations literally,
we cannot implement @{term Rep_uint16} directly, because that makes code\_simp loop.
If code generation raises Match, some equation probably contains @{term Rep_uint16}
([code abstract] equations for @{typ uint16} may use @{term Rep_uint16} because
these instances will be folded away).
To convert @{typ "16 word"} values into @{typ uint16}, use @{term "Abs_uint16'"}.
\<close>
definition Rep_uint16' where [simp]: "Rep_uint16' = Rep_uint16"
lemma Rep_uint16'_transfer [transfer_rule]:
"rel_fun cr_uint16 (=) (\<lambda>x. x) Rep_uint16'"
unfolding Rep_uint16'_def by(rule uint16.rep_transfer)
lemma Rep_uint16'_code [code]: "Rep_uint16' x = (BITS n. x !! n)"
by transfer simp
lift_definition Abs_uint16' :: "16 word \<Rightarrow> uint16" is "\<lambda>x :: 16 word. x" .
lemma Abs_uint16'_code [code]:
"Abs_uint16' x = Uint16 (integer_of_int (uint x))"
including integer.lifting by transfer simp
declare [[code drop: "term_of_class.term_of :: uint16 \<Rightarrow> _"]]
lemma term_of_uint16_code [code]:
defines "TR \<equiv> typerep.Typerep" and "bit0 \<equiv> STR ''Numeral_Type.bit0''" shows
"term_of_class.term_of x =
Code_Evaluation.App (Code_Evaluation.Const (STR ''Uint16.uint16.Abs_uint16'') (TR (STR ''fun'') [TR (STR ''Word.word'') [TR bit0 [TR bit0 [TR bit0 [TR bit0 [TR (STR ''Numeral_Type.num1'') []]]]]], TR (STR ''Uint16.uint16'') []]))
(term_of_class.term_of (Rep_uint16' x))"
by(simp add: term_of_anything)
lemma Uint16_code [code abstract]: "Rep_uint16 (Uint16 i) = word_of_int (int_of_integer_symbolic i)"
unfolding Uint16_def int_of_integer_symbolic_def by(simp add: Abs_uint16_inverse)
code_printing
type_constructor uint16 \<rightharpoonup>
(SML_word) "Word16.word" and
(Haskell) "Uint16.Word16" and
(Scala) "scala.Char"
| constant Uint16 \<rightharpoonup>
(SML_word) "Word16.fromLargeInt (IntInf.toLarge _)" and
(Haskell) "(Prelude.fromInteger _ :: Uint16.Word16)" and
(Haskell_Quickcheck) "(Prelude.fromInteger (Prelude.toInteger _) :: Uint16.Word16)" and
(Scala) "_.charValue"
| constant "0 :: uint16" \<rightharpoonup>
(SML_word) "(Word16.fromInt 0)" and
(Haskell) "(0 :: Uint16.Word16)" and
(Scala) "0"
| constant "1 :: uint16" \<rightharpoonup>
(SML_word) "(Word16.fromInt 1)" and
(Haskell) "(1 :: Uint16.Word16)" and
(Scala) "1"
| constant "plus :: uint16 \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML_word) "Word16.+ ((_), (_))" and
(Haskell) infixl 6 "+" and
(Scala) "(_ +/ _).toChar"
| constant "uminus :: uint16 \<Rightarrow> _" \<rightharpoonup>
(SML_word) "Word16.~" and
(Haskell) "negate" and
(Scala) "(- _).toChar"
| constant "minus :: uint16 \<Rightarrow> _" \<rightharpoonup>
(SML_word) "Word16.- ((_), (_))" and
(Haskell) infixl 6 "-" and
(Scala) "(_ -/ _).toChar"
| constant "times :: uint16 \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML_word) "Word16.* ((_), (_))" and
(Haskell) infixl 7 "*" and
(Scala) "(_ */ _).toChar"
| constant "HOL.equal :: uint16 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML_word) "!((_ : Word16.word) = _)" and
(Haskell) infix 4 "==" and
(Scala) infixl 5 "=="
| class_instance uint16 :: equal \<rightharpoonup> (Haskell) -
| constant "less_eq :: uint16 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML_word) "Word16.<= ((_), (_))" and
(Haskell) infix 4 "<=" and
(Scala) infixl 4 "<="
| constant "less :: uint16 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML_word) "Word16.< ((_), (_))" and
(Haskell) infix 4 "<" and
(Scala) infixl 4 "<"
-| constant "bitNOT :: uint16 \<Rightarrow> _" \<rightharpoonup>
+| constant "NOT :: uint16 \<Rightarrow> _" \<rightharpoonup>
(SML_word) "Word16.notb" and
(Haskell) "Data'_Bits.complement" and
(Scala) "_.unary'_~.toChar"
-| constant "bitAND :: uint16 \<Rightarrow> _" \<rightharpoonup>
+| constant "(AND) :: uint16 \<Rightarrow> _" \<rightharpoonup>
(SML_word) "Word16.andb ((_),/ (_))" and
(Haskell) infixl 7 "Data_Bits..&." and
(Scala) "(_ & _).toChar"
-| constant "bitOR :: uint16 \<Rightarrow> _" \<rightharpoonup>
+| constant "(OR) :: uint16 \<Rightarrow> _" \<rightharpoonup>
(SML_word) "Word16.orb ((_),/ (_))" and
(Haskell) infixl 5 "Data_Bits..|." and
(Scala) "(_ | _).toChar"
-| constant "bitXOR :: uint16 \<Rightarrow> _" \<rightharpoonup>
+| constant "(XOR) :: uint16 \<Rightarrow> _" \<rightharpoonup>
(SML_word) "Word16.xorb ((_),/ (_))" and
(Haskell) "Data'_Bits.xor" and
(Scala) "(_ ^ _).toChar"
definition uint16_div :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16"
where "uint16_div x y = (if y = 0 then undefined ((div) :: uint16 \<Rightarrow> _) x (0 :: uint16) else x div y)"
definition uint16_mod :: "uint16 \<Rightarrow> uint16 \<Rightarrow> uint16"
where "uint16_mod x y = (if y = 0 then undefined ((mod) :: uint16 \<Rightarrow> _) x (0 :: uint16) else x mod y)"
context includes undefined_transfer begin
lemma div_uint16_code [code]: "x div y = (if y = 0 then 0 else uint16_div x y)"
unfolding uint16_div_def by transfer (simp add: word_div_def)
lemma mod_uint16_code [code]: "x mod y = (if y = 0 then x else uint16_mod x y)"
unfolding uint16_mod_def by transfer (simp add: word_mod_def)
lemma uint16_div_code [code abstract]:
"Rep_uint16 (uint16_div x y) =
(if y = 0 then Rep_uint16 (undefined ((div) :: uint16 \<Rightarrow> _) x (0 :: uint16)) else Rep_uint16 x div Rep_uint16 y)"
unfolding uint16_div_def by transfer simp
lemma uint16_mod_code [code abstract]:
"Rep_uint16 (uint16_mod x y) =
(if y = 0 then Rep_uint16 (undefined ((mod) :: uint16 \<Rightarrow> _) x (0 :: uint16)) else Rep_uint16 x mod Rep_uint16 y)"
unfolding uint16_mod_def by transfer simp
end
code_printing constant uint16_div \<rightharpoonup>
(SML_word) "Word16.div ((_), (_))" and
(Haskell) "Prelude.div" and
(Scala) "(_ '/ _).toChar"
| constant uint16_mod \<rightharpoonup>
(SML_word) "Word16.mod ((_), (_))" and
(Haskell) "Prelude.mod" and
(Scala) "(_ % _).toChar"
definition uint16_test_bit :: "uint16 \<Rightarrow> integer \<Rightarrow> bool"
where [code del]:
"uint16_test_bit x n =
(if n < 0 \<or> 15 < n then undefined (test_bit :: uint16 \<Rightarrow> _) x n
else x !! (nat_of_integer n))"
lemma test_bit_uint16_code [code]:
"test_bit x n \<longleftrightarrow> n < 16 \<and> uint16_test_bit x (integer_of_nat n)"
unfolding uint16_test_bit_def including undefined_transfer integer.lifting
by transfer(auto cong: conj_cong dest: test_bit_size simp add: word_size)
lemma uint16_test_bit_code [code]:
"uint16_test_bit w n =
(if n < 0 \<or> 15 < n then undefined (test_bit :: uint16 \<Rightarrow> _) w n else Rep_uint16 w !! nat_of_integer n)"
unfolding uint16_test_bit_def by(simp add: test_bit_uint16.rep_eq)
code_printing constant uint16_test_bit \<rightharpoonup>
(SML_word) "Uint16.test'_bit" and
(Haskell) "Data'_Bits.testBitBounded" and
(Scala) "Uint16.test'_bit"
definition uint16_set_bit :: "uint16 \<Rightarrow> integer \<Rightarrow> bool \<Rightarrow> uint16"
where [code del]:
"uint16_set_bit x n b =
(if n < 0 \<or> 15 < n then undefined (set_bit :: uint16 \<Rightarrow> _) x n b
else set_bit x (nat_of_integer n) b)"
lemma set_bit_uint16_code [code]:
"set_bit x n b = (if n < 16 then uint16_set_bit x (integer_of_nat n) b else x)"
including undefined_transfer integer.lifting unfolding uint16_set_bit_def
by(transfer)(auto cong: conj_cong simp add: not_less set_bit_beyond word_size)
lemma uint16_set_bit_code [code abstract]:
"Rep_uint16 (uint16_set_bit w n b) =
(if n < 0 \<or> 15 < n then Rep_uint16 (undefined (set_bit :: uint16 \<Rightarrow> _) w n b)
else set_bit (Rep_uint16 w) (nat_of_integer n) b)"
including undefined_transfer unfolding uint16_set_bit_def by transfer simp
code_printing constant uint16_set_bit \<rightharpoonup>
(SML_word) "Uint16.set'_bit" and
(Haskell) "Data'_Bits.setBitBounded" and
(Scala) "Uint16.set'_bit"
lift_definition uint16_set_bits :: "(nat \<Rightarrow> bool) \<Rightarrow> uint16 \<Rightarrow> nat \<Rightarrow> uint16" is set_bits_aux .
lemma uint16_set_bits_code [code]:
"uint16_set_bits f w n =
(if n = 0 then w
else let n' = n - 1 in uint16_set_bits f ((w << 1) OR (if f n' then 1 else 0)) n')"
by(transfer fixing: n)(cases n, simp_all)
lemma set_bits_uint16 [code]:
"(BITS n. f n) = uint16_set_bits f 0 16"
by transfer(simp add: set_bits_conv_set_bits_aux)
lemma lsb_code [code]: fixes x :: uint16 shows "lsb x = x !! 0"
by transfer(simp add: word_lsb_def word_test_bit_def)
definition uint16_shiftl :: "uint16 \<Rightarrow> integer \<Rightarrow> uint16"
where [code del]:
"uint16_shiftl x n = (if n < 0 \<or> 16 \<le> n then undefined (shiftl :: uint16 \<Rightarrow> _) x n else x << (nat_of_integer n))"
lemma shiftl_uint16_code [code]: "x << n = (if n < 16 then uint16_shiftl x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint16_shiftl_def
by transfer(simp add: not_less shiftl_zero_size word_size)
lemma uint16_shiftl_code [code abstract]:
"Rep_uint16 (uint16_shiftl w n) =
(if n < 0 \<or> 16 \<le> n then Rep_uint16 (undefined (shiftl :: uint16 \<Rightarrow> _) w n)
else Rep_uint16 w << nat_of_integer n)"
including undefined_transfer unfolding uint16_shiftl_def by transfer simp
code_printing constant uint16_shiftl \<rightharpoonup>
(SML_word) "Uint16.shiftl" and
(Haskell) "Data'_Bits.shiftlBounded" and
(Scala) "Uint16.shiftl"
definition uint16_shiftr :: "uint16 \<Rightarrow> integer \<Rightarrow> uint16"
where [code del]:
"uint16_shiftr x n = (if n < 0 \<or> 16 \<le> n then undefined (shiftr :: uint16 \<Rightarrow> _) x n else x >> (nat_of_integer n))"
lemma shiftr_uint16_code [code]: "x >> n = (if n < 16 then uint16_shiftr x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint16_shiftr_def
by transfer(simp add: not_less shiftr_zero_size word_size)
lemma uint16_shiftr_code [code abstract]:
"Rep_uint16 (uint16_shiftr w n) =
(if n < 0 \<or> 16 \<le> n then Rep_uint16 (undefined (shiftr :: uint16 \<Rightarrow> _) w n)
else Rep_uint16 w >> nat_of_integer n)"
including undefined_transfer unfolding uint16_shiftr_def by transfer simp
code_printing constant uint16_shiftr \<rightharpoonup>
(SML_word) "Uint16.shiftr" and
(Haskell) "Data'_Bits.shiftrBounded" and
(Scala) "Uint16.shiftr"
definition uint16_sshiftr :: "uint16 \<Rightarrow> integer \<Rightarrow> uint16"
where [code del]:
"uint16_sshiftr x n =
(if n < 0 \<or> 16 \<le> n then undefined sshiftr_uint16 x n else sshiftr_uint16 x (nat_of_integer n))"
lemma sshiftr_beyond: fixes x :: "'a :: len word" shows
"size x \<le> n \<Longrightarrow> x >>> n = (if x !! (size x - 1) then -1 else 0)"
by(rule word_eqI)(simp add: nth_sshiftr word_size)
lemma sshiftr_uint16_code [code]:
"x >>> n =
(if n < 16 then uint16_sshiftr x (integer_of_nat n) else if x !! 15 then -1 else 0)"
including undefined_transfer integer.lifting unfolding uint16_sshiftr_def
by transfer (simp add: not_less sshiftr_beyond word_size)
lemma uint16_sshiftr_code [code abstract]:
"Rep_uint16 (uint16_sshiftr w n) =
(if n < 0 \<or> 16 \<le> n then Rep_uint16 (undefined sshiftr_uint16 w n)
else Rep_uint16 w >>> nat_of_integer n)"
including undefined_transfer unfolding uint16_sshiftr_def by transfer simp
code_printing constant uint16_sshiftr \<rightharpoonup>
(SML_word) "Uint16.shiftr'_signed" and
(Haskell)
"(Prelude.fromInteger (Prelude.toInteger (Data'_Bits.shiftrBounded (Prelude.fromInteger (Prelude.toInteger _) :: Uint16.Int16) _)) :: Uint16.Word16)" and
(Scala) "Uint16.shiftr'_signed"
lemma uint16_msb_test_bit: "msb x \<longleftrightarrow> (x :: uint16) !! 15"
by transfer(simp add: msb_nth)
lemma msb_uint16_code [code]: "msb x \<longleftrightarrow> uint16_test_bit x 15"
by(simp add: uint16_test_bit_def uint16_msb_test_bit)
lemma uint16_of_int_code [code]: "uint16_of_int i = Uint16 (integer_of_int i)"
including integer.lifting by transfer simp
lemma int_of_uint16_code [code]:
"int_of_uint16 x = int_of_integer (integer_of_uint16 x)"
by(simp add: integer_of_uint16_def)
lemma nat_of_uint16_code [code]:
"nat_of_uint16 x = nat_of_integer (integer_of_uint16 x)"
unfolding integer_of_uint16_def including integer.lifting by transfer (simp add: unat_def)
lemma integer_of_uint16_code [code]:
"integer_of_uint16 n = integer_of_int (uint (Rep_uint16' n))"
unfolding integer_of_uint16_def by transfer auto
code_printing
constant "integer_of_uint16" \<rightharpoonup>
(SML_word) "Word16.toInt _ : IntInf.int" and
(Haskell) "Prelude.toInteger" and
(Scala) "BigInt"
section \<open>Quickcheck setup\<close>
definition uint16_of_natural :: "natural \<Rightarrow> uint16"
where "uint16_of_natural x \<equiv> Uint16 (integer_of_natural x)"
instantiation uint16 :: "{random, exhaustive, full_exhaustive}" begin
definition "random_uint16 \<equiv> qc_random_cnv uint16_of_natural"
definition "exhaustive_uint16 \<equiv> qc_exhaustive_cnv uint16_of_natural"
definition "full_exhaustive_uint16 \<equiv> qc_full_exhaustive_cnv uint16_of_natural"
instance ..
end
instantiation uint16 :: narrowing begin
interpretation quickcheck_narrowing_samples
"\<lambda>i. let x = Uint16 i in (x, 0xFFFF - x)" "0"
"Typerep.Typerep (STR ''Uint16.uint16'') []" .
definition "narrowing_uint16 d = qc_narrowing_drawn_from (narrowing_samples d) d"
declare [[code drop: "partial_term_of :: uint16 itself \<Rightarrow> _"]]
lemmas partial_term_of_uint16 [code] = partial_term_of_code
instance ..
end
no_notation sshiftr_uint16 (infixl ">>>" 55)
end
diff --git a/thys/Native_Word/Uint32.thy b/thys/Native_Word/Uint32.thy
--- a/thys/Native_Word/Uint32.thy
+++ b/thys/Native_Word/Uint32.thy
@@ -1,659 +1,659 @@
(* Title: Uint32.thy
Author: Andreas Lochbihler, ETH Zurich
*)
chapter \<open>Unsigned words of 32 bits\<close>
theory Uint32 imports
Code_Target_Word_Base
begin
declare prod.Quotient[transfer_rule]
section \<open>Type definition and primitive operations\<close>
typedef uint32 = "UNIV :: 32 word set" ..
setup_lifting type_definition_uint32
text \<open>Use an abstract type for code generation to disable pattern matching on @{term Abs_uint32}.\<close>
declare Rep_uint32_inverse[code abstype]
declare Quotient_uint32[transfer_rule]
instantiation uint32 :: "{neg_numeral, modulo, comm_monoid_mult, comm_ring}" begin
lift_definition zero_uint32 :: uint32 is "0 :: 32 word" .
lift_definition one_uint32 :: uint32 is "1" .
lift_definition plus_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is "(+) :: 32 word \<Rightarrow> _" .
lift_definition minus_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is "(-)" .
lift_definition uminus_uint32 :: "uint32 \<Rightarrow> uint32" is uminus .
lift_definition times_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is "(*)" .
lift_definition divide_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is "(div)" .
lift_definition modulo_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is "(mod)" .
instance by standard (transfer, simp add: algebra_simps)+
end
instantiation uint32 :: linorder begin
lift_definition less_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> bool" is "(<)" .
lift_definition less_eq_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> bool" is "(\<le>)" .
instance by standard (transfer, simp add: less_le_not_le linear)+
end
lemmas [code] = less_uint32.rep_eq less_eq_uint32.rep_eq
instantiation uint32 :: bit_operations begin
-lift_definition bitNOT_uint32 :: "uint32 \<Rightarrow> uint32" is bitNOT .
-lift_definition bitAND_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is bitAND .
-lift_definition bitOR_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is bitOR .
-lift_definition bitXOR_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is bitXOR .
+lift_definition bitNOT_uint32 :: "uint32 \<Rightarrow> uint32" is NOT .
+lift_definition bitAND_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is \<open>(AND)\<close> .
+lift_definition bitOR_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is \<open>(OR)\<close> .
+lift_definition bitXOR_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32" is \<open>(XOR)\<close> .
lift_definition test_bit_uint32 :: "uint32 \<Rightarrow> nat \<Rightarrow> bool" is test_bit .
lift_definition set_bit_uint32 :: "uint32 \<Rightarrow> nat \<Rightarrow> bool \<Rightarrow> uint32" is set_bit .
lift_definition lsb_uint32 :: "uint32 \<Rightarrow> bool" is lsb .
lift_definition shiftl_uint32 :: "uint32 \<Rightarrow> nat \<Rightarrow> uint32" is shiftl .
lift_definition shiftr_uint32 :: "uint32 \<Rightarrow> nat \<Rightarrow> uint32" is shiftr .
lift_definition msb_uint32 :: "uint32 \<Rightarrow> bool" is msb .
instance ..
end
instantiation uint32 :: bit_comprehension begin
lift_definition set_bits_uint32 :: "(nat \<Rightarrow> bool) \<Rightarrow> uint32" is "set_bits" .
instance ..
end
lemmas [code] = test_bit_uint32.rep_eq lsb_uint32.rep_eq msb_uint32.rep_eq
instantiation uint32 :: equal begin
lift_definition equal_uint32 :: "uint32 \<Rightarrow> uint32 \<Rightarrow> bool" is "equal_class.equal" .
instance by standard (transfer, simp add: equal_eq)
end
lemmas [code] = equal_uint32.rep_eq
instantiation uint32 :: size begin
lift_definition size_uint32 :: "uint32 \<Rightarrow> nat" is "size" .
instance ..
end
lemmas [code] = size_uint32.rep_eq
lift_definition sshiftr_uint32 :: "uint32 \<Rightarrow> nat \<Rightarrow> uint32" (infixl ">>>" 55) is sshiftr .
lift_definition uint32_of_int :: "int \<Rightarrow> uint32" is "word_of_int" .
definition uint32_of_nat :: "nat \<Rightarrow> uint32"
where "uint32_of_nat = uint32_of_int \<circ> int"
lift_definition int_of_uint32 :: "uint32 \<Rightarrow> int" is "uint" .
lift_definition nat_of_uint32 :: "uint32 \<Rightarrow> nat" is "unat" .
definition integer_of_uint32 :: "uint32 \<Rightarrow> integer"
where "integer_of_uint32 = integer_of_int o int_of_uint32"
lemma bitval_integer_transfer [transfer_rule]:
"(rel_fun (=) pcr_integer) of_bool of_bool"
by(auto simp add: of_bool_def integer.pcr_cr_eq cr_integer_def split: bit.split)
text \<open>Use pretty numerals from integer for pretty printing\<close>
context includes integer.lifting begin
lift_definition Uint32 :: "integer \<Rightarrow> uint32" is "word_of_int" .
lemma Rep_uint32_numeral [simp]: "Rep_uint32 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint32_def Abs_uint32_inverse numeral.simps plus_uint32_def)
lemma numeral_uint32_transfer [transfer_rule]:
"(rel_fun (=) cr_uint32) numeral numeral"
by(auto simp add: cr_uint32_def)
lemma numeral_uint32 [code_unfold]: "numeral n = Uint32 (numeral n)"
by transfer simp
lemma Rep_uint32_neg_numeral [simp]: "Rep_uint32 (- numeral n) = - numeral n"
by(simp only: uminus_uint32_def)(simp add: Abs_uint32_inverse)
lemma neg_numeral_uint32 [code_unfold]: "- numeral n = Uint32 (- numeral n)"
by transfer(simp add: cr_uint32_def)
end
lemma Abs_uint32_numeral [code_post]: "Abs_uint32 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint32_def numeral.simps plus_uint32_def Abs_uint32_inverse)
lemma Abs_uint32_0 [code_post]: "Abs_uint32 0 = 0"
by(simp add: zero_uint32_def)
lemma Abs_uint32_1 [code_post]: "Abs_uint32 1 = 1"
by(simp add: one_uint32_def)
section \<open>Code setup\<close>
code_printing code_module Uint32 \<rightharpoonup> (SML)
\<open>(* Test that words can handle numbers between 0 and 31 *)
val _ = if 5 <= Word.wordSize then () else raise (Fail ("wordSize less than 5"));
structure Uint32 : sig
val set_bit : Word32.word -> IntInf.int -> bool -> Word32.word
val shiftl : Word32.word -> IntInf.int -> Word32.word
val shiftr : Word32.word -> IntInf.int -> Word32.word
val shiftr_signed : Word32.word -> IntInf.int -> Word32.word
val test_bit : Word32.word -> IntInf.int -> bool
end = struct
fun set_bit x n b =
let val mask = Word32.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))
in if b then Word32.orb (x, mask)
else Word32.andb (x, Word32.notb mask)
end
fun shiftl x n =
Word32.<< (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr x n =
Word32.>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr_signed x n =
Word32.~>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun test_bit x n =
Word32.andb (x, Word32.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))) <> Word32.fromInt 0
end; (* struct Uint32 *)\<close>
code_reserved SML Uint32
code_printing code_module Uint32 \<rightharpoonup> (Haskell)
\<open>module Uint32(Int32, Word32) where
import Data.Int(Int32)
import Data.Word(Word32)\<close>
code_reserved Haskell Uint32
text \<open>
OCaml and Scala provide only signed 32-bit numbers, so we use these and
implement sign-sensitive operations like comparisons manually.
\<close>
code_printing code_module "Uint32" \<rightharpoonup> (OCaml)
\<open>module Uint32 : sig
val less : int32 -> int32 -> bool
val less_eq : int32 -> int32 -> bool
val set_bit : int32 -> Z.t -> bool -> int32
val shiftl : int32 -> Z.t -> int32
val shiftr : int32 -> Z.t -> int32
val shiftr_signed : int32 -> Z.t -> int32
val test_bit : int32 -> Z.t -> bool
end = struct
(* negative numbers have their highest bit set,
so they are greater than positive ones *)
let less x y =
if Int32.compare x Int32.zero < 0 then
Int32.compare y Int32.zero < 0 && Int32.compare x y < 0
else Int32.compare y Int32.zero < 0 || Int32.compare x y < 0;;
let less_eq x y =
if Int32.compare x Int32.zero < 0 then
Int32.compare y Int32.zero < 0 && Int32.compare x y <= 0
else Int32.compare y Int32.zero < 0 || Int32.compare x y <= 0;;
let set_bit x n b =
let mask = Int32.shift_left Int32.one (Z.to_int n)
in if b then Int32.logor x mask
else Int32.logand x (Int32.lognot mask);;
let shiftl x n = Int32.shift_left x (Z.to_int n);;
let shiftr x n = Int32.shift_right_logical x (Z.to_int n);;
let shiftr_signed x n = Int32.shift_right x (Z.to_int n);;
let test_bit x n =
Int32.compare
(Int32.logand x (Int32.shift_left Int32.one (Z.to_int n)))
Int32.zero
<> 0;;
end;; (*struct Uint32*)\<close>
code_reserved OCaml Uint32
code_printing code_module Uint32 \<rightharpoonup> (Scala)
\<open>object Uint32 {
def less(x: Int, y: Int) : Boolean =
if (x < 0) y < 0 && x < y
else y < 0 || x < y
def less_eq(x: Int, y: Int) : Boolean =
if (x < 0) y < 0 && x <= y
else y < 0 || x <= y
def set_bit(x: Int, n: BigInt, b: Boolean) : Int =
if (b)
x | (1 << n.intValue)
else
x & (1 << n.intValue).unary_~
def shiftl(x: Int, n: BigInt) : Int = x << n.intValue
def shiftr(x: Int, n: BigInt) : Int = x >>> n.intValue
def shiftr_signed(x: Int, n: BigInt) : Int = x >> n.intValue
def test_bit(x: Int, n: BigInt) : Boolean =
(x & (1 << n.intValue)) != 0
} /* object Uint32 */\<close>
code_reserved Scala Uint32
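A small sanity check (illustration only, not generated code) for the comparison trick above: a signed Int with the most significant bit set represents a large unsigned value and must therefore compare as greater than any non-negative Int.

    // -1 is 0xFFFFFFFF, i.e. the largest unsigned 32-bit value
    assert(Uint32.less(1, -1))        // 1 <_u 0xFFFFFFFF
    assert(!Uint32.less(-1, 1))       // 0xFFFFFFFF is not <_u 1
    assert(Uint32.less_eq(-1, -1))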
text \<open>
OCaml's conversion from Big\_int to int32 demands that the value fits into a signed 32-bit integer.
The following justifies the implementation.
\<close>
definition Uint32_signed :: "integer \<Rightarrow> uint32"
where "Uint32_signed i = (if i < -(0x80000000) \<or> i \<ge> 0x80000000 then undefined Uint32 i else Uint32 i)"
lemma Uint32_code [code]:
"Uint32 i =
(let i' = i AND 0xFFFFFFFF
in if i' !! 31 then Uint32_signed (i' - 0x100000000) else Uint32_signed i')"
including undefined_transfer integer.lifting unfolding Uint32_signed_def
by transfer(rule word_of_int_via_signed, simp_all add: bin_mask_numeral)
lemma Uint32_signed_code [code abstract]:
"Rep_uint32 (Uint32_signed i) =
(if i < -(0x80000000) \<or> i \<ge> 0x80000000 then Rep_uint32 (undefined Uint32 i) else word_of_int (int_of_integer_symbolic i))"
unfolding Uint32_signed_def Uint32_def int_of_integer_symbolic_def word_of_integer_def
by(simp add: Abs_uint32_inverse)
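The same wrap-around can be phrased in plain Scala over BigInt (illustration only; the helper name toSignedRange32 is made up for this sketch): mask to 32 bits, then shift the upper half down into the signed range that conversions such as OCaml's Z.to_int32 accept.

    def toSignedRange32(i: BigInt): BigInt = {
      val masked = i & 0xFFFFFFFFL                       // i AND 0xFFFFFFFF
      if (masked.testBit(31)) masked - (BigInt(1) << 32) else masked
    }
    // toSignedRange32(BigInt("4294967295")) == BigInt(-1)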
text \<open>
Avoid @{term Abs_uint32} in generated code, use @{term Rep_uint32'} instead.
The symbolic implementations for code\_simp use @{term Rep_uint32}.
The new destructor @{term Rep_uint32'} is executable.
As the simplifier is given the [code abstract] equations literally,
we cannot implement @{term Rep_uint32} directly, because that makes code\_simp loop.
If code generation raises Match, some equation probably contains @{term Rep_uint32}
([code abstract] equations for @{typ uint32} may use @{term Rep_uint32} because
these instances will be folded away.)
To convert @{typ "32 word"} values into @{typ uint32}, use @{term "Abs_uint32'"}.
\<close>
definition Rep_uint32' where [simp]: "Rep_uint32' = Rep_uint32"
lemma Rep_uint32'_transfer [transfer_rule]:
"rel_fun cr_uint32 (=) (\<lambda>x. x) Rep_uint32'"
unfolding Rep_uint32'_def by(rule uint32.rep_transfer)
lemma Rep_uint32'_code [code]: "Rep_uint32' x = (BITS n. x !! n)"
by transfer simp
lift_definition Abs_uint32' :: "32 word \<Rightarrow> uint32" is "\<lambda>x :: 32 word. x" .
lemma Abs_uint32'_code [code]:
"Abs_uint32' x = Uint32 (integer_of_int (uint x))"
including integer.lifting by transfer simp
declare [[code drop: "term_of_class.term_of :: uint32 \<Rightarrow> _"]]
lemma term_of_uint32_code [code]:
defines "TR \<equiv> typerep.Typerep" and "bit0 \<equiv> STR ''Numeral_Type.bit0''"
shows
"term_of_class.term_of x =
Code_Evaluation.App (Code_Evaluation.Const (STR ''Uint32.uint32.Abs_uint32'') (TR (STR ''fun'') [TR (STR ''Word.word'') [TR bit0 [TR bit0 [TR bit0 [TR bit0 [TR bit0 [TR (STR ''Numeral_Type.num1'') []]]]]]], TR (STR ''Uint32.uint32'') []]))
(term_of_class.term_of (Rep_uint32' x))"
by(simp add: term_of_anything)
code_printing
type_constructor uint32 \<rightharpoonup>
(SML) "Word32.word" and
(Haskell) "Uint32.Word32" and
(OCaml) "int32" and
(Scala) "Int" and
(Eval) "Word32.word"
| constant Uint32 \<rightharpoonup>
(SML) "Word32.fromLargeInt (IntInf.toLarge _)" and
(Haskell) "(Prelude.fromInteger _ :: Uint32.Word32)" and
(Haskell_Quickcheck) "(Prelude.fromInteger (Prelude.toInteger _) :: Uint32.Word32)" and
(Scala) "_.intValue"
| constant Uint32_signed \<rightharpoonup>
(OCaml) "Z.to'_int32"
| constant "0 :: uint32" \<rightharpoonup>
(SML) "(Word32.fromInt 0)" and
(Haskell) "(0 :: Uint32.Word32)" and
(OCaml) "Int32.zero" and
(Scala) "0"
| constant "1 :: uint32" \<rightharpoonup>
(SML) "(Word32.fromInt 1)" and
(Haskell) "(1 :: Uint32.Word32)" and
(OCaml) "Int32.one" and
(Scala) "1"
| constant "plus :: uint32 \<Rightarrow> _ " \<rightharpoonup>
(SML) "Word32.+ ((_), (_))" and
(Haskell) infixl 6 "+" and
(OCaml) "Int32.add" and
(Scala) infixl 7 "+"
| constant "uminus :: uint32 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word32.~" and
(Haskell) "negate" and
(OCaml) "Int32.neg" and
(Scala) "!(- _)"
| constant "minus :: uint32 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word32.- ((_), (_))" and
(Haskell) infixl 6 "-" and
(OCaml) "Int32.sub" and
(Scala) infixl 7 "-"
| constant "times :: uint32 \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML) "Word32.* ((_), (_))" and
(Haskell) infixl 7 "*" and
(OCaml) "Int32.mul" and
(Scala) infixl 8 "*"
| constant "HOL.equal :: uint32 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "!((_ : Word32.word) = _)" and
(Haskell) infix 4 "==" and
(OCaml) "(Int32.compare _ _ = 0)" and
(Scala) infixl 5 "=="
| class_instance uint32 :: equal \<rightharpoonup>
(Haskell) -
| constant "less_eq :: uint32 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Word32.<= ((_), (_))" and
(Haskell) infix 4 "<=" and
(OCaml) "Uint32.less'_eq" and
(Scala) "Uint32.less'_eq"
| constant "less :: uint32 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Word32.< ((_), (_))" and
(Haskell) infix 4 "<" and
(OCaml) "Uint32.less" and
(Scala) "Uint32.less"
-| constant "bitNOT :: uint32 \<Rightarrow> _" \<rightharpoonup>
+| constant "NOT :: uint32 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word32.notb" and
(Haskell) "Data'_Bits.complement" and
(OCaml) "Int32.lognot" and
(Scala) "_.unary'_~"
-| constant "bitAND :: uint32 \<Rightarrow> _" \<rightharpoonup>
+| constant "(AND) :: uint32 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word32.andb ((_),/ (_))" and
(Haskell) infixl 7 "Data_Bits..&." and
(OCaml) "Int32.logand" and
(Scala) infixl 3 "&"
-| constant "bitOR :: uint32 \<Rightarrow> _" \<rightharpoonup>
+| constant "(OR) :: uint32 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word32.orb ((_),/ (_))" and
(Haskell) infixl 5 "Data_Bits..|." and
(OCaml) "Int32.logor" and
(Scala) infixl 1 "|"
-| constant "bitXOR :: uint32 \<Rightarrow> _" \<rightharpoonup>
+| constant "(XOR) :: uint32 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word32.xorb ((_),/ (_))" and
(Haskell) "Data'_Bits.xor" and
(OCaml) "Int32.logxor" and
(Scala) infixl 2 "^"
definition uint32_divmod :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32 \<times> uint32" where
"uint32_divmod x y =
(if y = 0 then (undefined ((div) :: uint32 \<Rightarrow> _) x (0 :: uint32), undefined ((mod) :: uint32 \<Rightarrow> _) x (0 :: uint32))
else (x div y, x mod y))"
definition uint32_div :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32"
where "uint32_div x y = fst (uint32_divmod x y)"
definition uint32_mod :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32"
where "uint32_mod x y = snd (uint32_divmod x y)"
lemma div_uint32_code [code]: "x div y = (if y = 0 then 0 else uint32_div x y)"
including undefined_transfer unfolding uint32_divmod_def uint32_div_def
by transfer (simp add: word_div_def)
lemma mod_uint32_code [code]: "x mod y = (if y = 0 then x else uint32_mod x y)"
including undefined_transfer unfolding uint32_mod_def uint32_divmod_def
by transfer (simp add: word_mod_def)
definition uint32_sdiv :: "uint32 \<Rightarrow> uint32 \<Rightarrow> uint32"
where [code del]:
"uint32_sdiv x y =
(if y = 0 then undefined ((div) :: uint32 \<Rightarrow> _) x (0 :: uint32)
else Abs_uint32 (Rep_uint32 x sdiv Rep_uint32 y))"
definition div0_uint32 :: "uint32 \<Rightarrow> uint32"
where [code del]: "div0_uint32 x = undefined ((div) :: uint32 \<Rightarrow> _) x (0 :: uint32)"
declare [[code abort: div0_uint32]]
definition mod0_uint32 :: "uint32 \<Rightarrow> uint32"
where [code del]: "mod0_uint32 x = undefined ((mod) :: uint32 \<Rightarrow> _) x (0 :: uint32)"
declare [[code abort: mod0_uint32]]
lemma uint32_divmod_code [code]:
"uint32_divmod x y =
(if 0x80000000 \<le> y then if x < y then (0, x) else (1, x - y)
else if y = 0 then (div0_uint32 x, mod0_uint32 x)
else let q = (uint32_sdiv (x >> 1) y) << 1;
r = x - q * y
in if r \<ge> y then (q + 1, r - y) else (q, r))"
including undefined_transfer unfolding uint32_divmod_def uint32_sdiv_def div0_uint32_def mod0_uint32_def
by transfer(simp add: divmod_via_sdivmod)
lemma uint32_sdiv_code [code abstract]:
"Rep_uint32 (uint32_sdiv x y) =
(if y = 0 then Rep_uint32 (undefined ((div) :: uint32 \<Rightarrow> _) x (0 :: uint32))
else Rep_uint32 x sdiv Rep_uint32 y)"
unfolding uint32_sdiv_def by(simp add: Abs_uint32_inverse)
text \<open>
Note that we only need a translation for signed division, but not for the remainder
because @{thm uint32_divmod_code} computes both with division only.
\<close>
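As an illustration of the division scheme in uint32_divmod_code (sketch only; the helper name divmod32 is made up), phrased over Scala Ints viewed as unsigned 32-bit words: halve the dividend so that signed division is safe, double the quotient, and correct the remainder at most once. Uint32.less refers to the Scala module defined above.

    def divmod32(x: Int, y: Int): (Int, Int) =
      if ((y & 0x80000000) != 0)                         // y >= 2^31 as an unsigned value
        if (Uint32.less(x, y)) (0, x) else (1, x - y)
      else if (y == 0)
        sys.error("division by zero")                    // the theory aborts here via div0_uint32/mod0_uint32
      else {
        val q = ((x >>> 1) / y) << 1                     // x >>> 1 is non-negative, so signed / is correct here
        val r = x - q * y
        if (Uint32.less(r, y)) (q, r) else (q + 1, r - y)
      }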
code_printing
constant uint32_div \<rightharpoonup>
(SML) "Word32.div ((_), (_))" and
(Haskell) "Prelude.div"
| constant uint32_mod \<rightharpoonup>
(SML) "Word32.mod ((_), (_))" and
(Haskell) "Prelude.mod"
| constant uint32_divmod \<rightharpoonup>
(Haskell) "divmod"
| constant uint32_sdiv \<rightharpoonup>
(OCaml) "Int32.div" and
(Scala) "_ '/ _"
definition uint32_test_bit :: "uint32 \<Rightarrow> integer \<Rightarrow> bool"
where [code del]:
"uint32_test_bit x n =
(if n < 0 \<or> 31 < n then undefined (test_bit :: uint32 \<Rightarrow> _) x n
else x !! (nat_of_integer n))"
lemma test_bit_uint32_code [code]:
"test_bit x n \<longleftrightarrow> n < 32 \<and> uint32_test_bit x (integer_of_nat n)"
including undefined_transfer integer.lifting unfolding uint32_test_bit_def
by transfer(auto cong: conj_cong dest: test_bit_size simp add: word_size)
lemma uint32_test_bit_code [code]:
"uint32_test_bit w n =
(if n < 0 \<or> 31 < n then undefined (test_bit :: uint32 \<Rightarrow> _) w n else Rep_uint32 w !! nat_of_integer n)"
unfolding uint32_test_bit_def
by(simp add: test_bit_uint32.rep_eq)
code_printing constant uint32_test_bit \<rightharpoonup>
(SML) "Uint32.test'_bit" and
(Haskell) "Data'_Bits.testBitBounded" and
(OCaml) "Uint32.test'_bit" and
(Scala) "Uint32.test'_bit" and
(Eval) "(fn w => fn n => if n < 0 orelse 32 <= n then raise (Fail \"argument to uint32'_test'_bit out of bounds\") else Uint32.test'_bit w n)"
definition uint32_set_bit :: "uint32 \<Rightarrow> integer \<Rightarrow> bool \<Rightarrow> uint32"
where [code del]:
"uint32_set_bit x n b =
(if n < 0 \<or> 31 < n then undefined (set_bit :: uint32 \<Rightarrow> _) x n b
else set_bit x (nat_of_integer n) b)"
lemma set_bit_uint32_code [code]:
"set_bit x n b = (if n < 32 then uint32_set_bit x (integer_of_nat n) b else x)"
including undefined_transfer integer.lifting unfolding uint32_set_bit_def
by(transfer)(auto cong: conj_cong simp add: not_less set_bit_beyond word_size)
lemma uint32_set_bit_code [code abstract]:
"Rep_uint32 (uint32_set_bit w n b) =
(if n < 0 \<or> 31 < n then Rep_uint32 (undefined (set_bit :: uint32 \<Rightarrow> _) w n b)
else set_bit (Rep_uint32 w) (nat_of_integer n) b)"
including undefined_transfer unfolding uint32_set_bit_def by transfer simp
code_printing constant uint32_set_bit \<rightharpoonup>
(SML) "Uint32.set'_bit" and
(Haskell) "Data'_Bits.setBitBounded" and
(OCaml) "Uint32.set'_bit" and
(Scala) "Uint32.set'_bit" and
(Eval) "(fn w => fn n => fn b => if n < 0 orelse 32 <= n then raise (Fail \"argument to uint32'_set'_bit out of bounds\") else Uint32.set'_bit x n b)"
lift_definition uint32_set_bits :: "(nat \<Rightarrow> bool) \<Rightarrow> uint32 \<Rightarrow> nat \<Rightarrow> uint32" is set_bits_aux .
lemma uint32_set_bits_code [code]:
"uint32_set_bits f w n =
(if n = 0 then w
else let n' = n - 1 in uint32_set_bits f ((w << 1) OR (if f n' then 1 else 0)) n')"
by(transfer fixing: n)(cases n, simp_all)
lemma set_bits_uint32 [code]:
"(BITS n. f n) = uint32_set_bits f 0 32"
by transfer(simp add: set_bits_conv_set_bits_aux)
lemma lsb_code [code]: fixes x :: uint32 shows "lsb x = x !! 0"
by transfer(simp add: word_lsb_def word_test_bit_def)
definition uint32_shiftl :: "uint32 \<Rightarrow> integer \<Rightarrow> uint32"
where [code del]:
"uint32_shiftl x n = (if n < 0 \<or> 32 \<le> n then undefined (shiftl :: uint32 \<Rightarrow> _) x n else x << (nat_of_integer n))"
lemma shiftl_uint32_code [code]: "x << n = (if n < 32 then uint32_shiftl x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint32_shiftl_def
by transfer(simp add: not_less shiftl_zero_size word_size)
lemma uint32_shiftl_code [code abstract]:
"Rep_uint32 (uint32_shiftl w n) =
(if n < 0 \<or> 32 \<le> n then Rep_uint32 (undefined (shiftl :: uint32 \<Rightarrow> _) w n) else Rep_uint32 w << (nat_of_integer n))"
including undefined_transfer unfolding uint32_shiftl_def by transfer simp
code_printing constant uint32_shiftl \<rightharpoonup>
(SML) "Uint32.shiftl" and
(Haskell) "Data'_Bits.shiftlBounded" and
(OCaml) "Uint32.shiftl" and
(Scala) "Uint32.shiftl" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 32 then raise Fail \"argument to uint32'_shiftl out of bounds\" else Uint32.shiftl x i)"
definition uint32_shiftr :: "uint32 \<Rightarrow> integer \<Rightarrow> uint32"
where [code del]:
"uint32_shiftr x n = (if n < 0 \<or> 32 \<le> n then undefined (shiftr :: uint32 \<Rightarrow> _) x n else x >> (nat_of_integer n))"
lemma shiftr_uint32_code [code]: "x >> n = (if n < 32 then uint32_shiftr x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint32_shiftr_def
by transfer(simp add: not_less shiftr_zero_size word_size)
lemma uint32_shiftr_code [code abstract]:
"Rep_uint32 (uint32_shiftr w n) =
(if n < 0 \<or> 32 \<le> n then Rep_uint32 (undefined (shiftr :: uint32 \<Rightarrow> _) w n) else Rep_uint32 w >> nat_of_integer n)"
including undefined_transfer unfolding uint32_shiftr_def by transfer simp
code_printing constant uint32_shiftr \<rightharpoonup>
(SML) "Uint32.shiftr" and
(Haskell) "Data'_Bits.shiftrBounded" and
(OCaml) "Uint32.shiftr" and
(Scala) "Uint32.shiftr" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 32 then raise Fail \"argument to uint32'_shiftr out of bounds\" else Uint32.shiftr x i)"
definition uint32_sshiftr :: "uint32 \<Rightarrow> integer \<Rightarrow> uint32"
where [code del]:
"uint32_sshiftr x n =
(if n < 0 \<or> 32 \<le> n then undefined sshiftr_uint32 x n else sshiftr_uint32 x (nat_of_integer n))"
lemma sshiftr_beyond: fixes x :: "'a :: len word" shows
"size x \<le> n \<Longrightarrow> x >>> n = (if x !! (size x - 1) then -1 else 0)"
by(rule word_eqI)(simp add: nth_sshiftr word_size)
lemma sshiftr_uint32_code [code]:
"x >>> n =
(if n < 32 then uint32_sshiftr x (integer_of_nat n) else if x !! 31 then -1 else 0)"
including undefined_transfer integer.lifting unfolding uint32_sshiftr_def
by transfer(simp add: not_less sshiftr_beyond word_size)
lemma uint32_sshiftr_code [code abstract]:
"Rep_uint32 (uint32_sshiftr w n) =
(if n < 0 \<or> 32 \<le> n then Rep_uint32 (undefined sshiftr_uint32 w n) else Rep_uint32 w >>> (nat_of_integer n))"
including undefined_transfer unfolding uint32_sshiftr_def by transfer simp
code_printing constant uint32_sshiftr \<rightharpoonup>
(SML) "Uint32.shiftr'_signed" and
(Haskell)
"(Prelude.fromInteger (Prelude.toInteger (Data'_Bits.shiftrBounded (Prelude.fromInteger (Prelude.toInteger _) :: Uint32.Int32) _)) :: Uint32.Word32)" and
(OCaml) "Uint32.shiftr'_signed" and
(Scala) "Uint32.shiftr'_signed" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 32 then raise Fail \"argument to uint32'_shiftr'_signed out of bounds\" else Uint32.shiftr'_signed x i)"
lemma uint32_msb_test_bit: "msb x \<longleftrightarrow> (x :: uint32) !! 31"
by transfer(simp add: msb_nth)
lemma msb_uint32_code [code]: "msb x \<longleftrightarrow> uint32_test_bit x 31"
by(simp add: uint32_test_bit_def uint32_msb_test_bit)
lemma uint32_of_int_code [code]: "uint32_of_int i = Uint32 (integer_of_int i)"
including integer.lifting by transfer simp
lemma int_of_uint32_code [code]:
"int_of_uint32 x = int_of_integer (integer_of_uint32 x)"
by(simp add: integer_of_uint32_def)
lemma nat_of_uint32_code [code]:
"nat_of_uint32 x = nat_of_integer (integer_of_uint32 x)"
unfolding integer_of_uint32_def including integer.lifting by transfer (simp add: unat_def)
definition integer_of_uint32_signed :: "uint32 \<Rightarrow> integer"
where
"integer_of_uint32_signed n = (if n !! 31 then undefined integer_of_uint32 n else integer_of_uint32 n)"
lemma integer_of_uint32_signed_code [code]:
"integer_of_uint32_signed n =
(if n !! 31 then undefined integer_of_uint32 n else integer_of_int (uint (Rep_uint32' n)))"
unfolding integer_of_uint32_signed_def integer_of_uint32_def
including undefined_transfer by transfer simp
lemma integer_of_uint32_code [code]:
"integer_of_uint32 n =
(if n !! 31 then integer_of_uint32_signed (n AND 0x7FFFFFFF) OR 0x80000000 else integer_of_uint32_signed n)"
unfolding integer_of_uint32_def integer_of_uint32_signed_def o_def
including undefined_transfer integer.lifting
by transfer(auto simp add: word_ao_nth uint_and_mask_or_full mask_numeral mask_Suc_0 intro!: uint_and_mask_or_full[symmetric])
code_printing
constant "integer_of_uint32" \<rightharpoonup>
(SML) "IntInf.fromLarge (Word32.toLargeInt _) : IntInf.int" and
(Haskell) "Prelude.toInteger"
| constant "integer_of_uint32_signed" \<rightharpoonup>
(OCaml) "Z.of'_int32" and
(Scala) "BigInt"
section \<open>Quickcheck setup\<close>
definition uint32_of_natural :: "natural \<Rightarrow> uint32"
where "uint32_of_natural x \<equiv> Uint32 (integer_of_natural x)"
instantiation uint32 :: "{random, exhaustive, full_exhaustive}" begin
definition "random_uint32 \<equiv> qc_random_cnv uint32_of_natural"
definition "exhaustive_uint32 \<equiv> qc_exhaustive_cnv uint32_of_natural"
definition "full_exhaustive_uint32 \<equiv> qc_full_exhaustive_cnv uint32_of_natural"
instance ..
end
instantiation uint32 :: narrowing begin
interpretation quickcheck_narrowing_samples
"\<lambda>i. let x = Uint32 i in (x, 0xFFFFFFFF - x)" "0"
"Typerep.Typerep (STR ''Uint32.uint32'') []" .
definition "narrowing_uint32 d = qc_narrowing_drawn_from (narrowing_samples d) d"
declare [[code drop: "partial_term_of :: uint32 itself \<Rightarrow> _"]]
lemmas partial_term_of_uint32 [code] = partial_term_of_code
instance ..
end
no_notation sshiftr_uint32 (infixl ">>>" 55)
end
diff --git a/thys/Native_Word/Uint64.thy b/thys/Native_Word/Uint64.thy
--- a/thys/Native_Word/Uint64.thy
+++ b/thys/Native_Word/Uint64.thy
@@ -1,860 +1,860 @@
(* Title: Uint64.thy
Author: Andreas Lochbihler, ETH Zurich
*)
chapter \<open>Unsigned words of 64 bits\<close>
theory Uint64 imports
Code_Target_Word_Base
begin
text \<open>
PolyML (in version 5.7) provides a Word64 structure only when run in 64-bit mode.
Therefore, by default we provide an implementation of 64-bit words using \verb$IntInf.int$ and
masking. The code target \texttt{SML\_word} replaces this implementation and maps the operations
directly to the \verb$Word64$ structure provided by the Standard ML implementations.
The \verb$Eval$ target used by @{command value} and @{method eval} dynamically tests the
PolyML version at runtime and uses PolyML's Word64 structure if it detects a 64-bit
version that does not suffer from the division bug found in PolyML 5.6.
\<close>
declare prod.Quotient[transfer_rule]
section \<open>Type definition and primitive operations\<close>
typedef uint64 = "UNIV :: 64 word set" ..
setup_lifting type_definition_uint64
text \<open>Use an abstract type for code generation to disable pattern matching on @{term Abs_uint64}.\<close>
declare Rep_uint64_inverse[code abstype]
declare Quotient_uint64[transfer_rule]
instantiation uint64 :: "{neg_numeral, modulo, comm_monoid_mult, comm_ring}" begin
lift_definition zero_uint64 :: uint64 is "0 :: 64 word" .
lift_definition one_uint64 :: uint64 is "1" .
lift_definition plus_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is "(+) :: 64 word \<Rightarrow> _" .
lift_definition minus_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is "(-)" .
lift_definition uminus_uint64 :: "uint64 \<Rightarrow> uint64" is uminus .
lift_definition times_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is "(*)" .
lift_definition divide_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is "(div)" .
lift_definition modulo_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is "(mod)" .
instance by standard (transfer, simp add: algebra_simps)+
end
instantiation uint64 :: linorder begin
lift_definition less_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> bool" is "(<)" .
lift_definition less_eq_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> bool" is "(\<le>)" .
instance by standard (transfer, simp add: less_le_not_le linear)+
end
lemmas [code] = less_uint64.rep_eq less_eq_uint64.rep_eq
instantiation uint64 :: bit_operations begin
-lift_definition bitNOT_uint64 :: "uint64 \<Rightarrow> uint64" is bitNOT .
-lift_definition bitAND_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is bitAND .
-lift_definition bitOR_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is bitOR .
-lift_definition bitXOR_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is bitXOR .
+lift_definition bitNOT_uint64 :: "uint64 \<Rightarrow> uint64" is NOT .
+lift_definition bitAND_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is \<open>(AND)\<close> .
+lift_definition bitOR_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is \<open>(OR)\<close> .
+lift_definition bitXOR_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64" is \<open>(XOR)\<close> .
lift_definition test_bit_uint64 :: "uint64 \<Rightarrow> nat \<Rightarrow> bool" is test_bit .
lift_definition set_bit_uint64 :: "uint64 \<Rightarrow> nat \<Rightarrow> bool \<Rightarrow> uint64" is set_bit .
lift_definition lsb_uint64 :: "uint64 \<Rightarrow> bool" is lsb .
lift_definition shiftl_uint64 :: "uint64 \<Rightarrow> nat \<Rightarrow> uint64" is shiftl .
lift_definition shiftr_uint64 :: "uint64 \<Rightarrow> nat \<Rightarrow> uint64" is shiftr .
lift_definition msb_uint64 :: "uint64 \<Rightarrow> bool" is msb .
instance ..
end
instantiation uint64 :: bit_comprehension begin
lift_definition set_bits_uint64 :: "(nat \<Rightarrow> bool) \<Rightarrow> uint64" is "set_bits" .
instance ..
end
lemmas [code] = test_bit_uint64.rep_eq lsb_uint64.rep_eq msb_uint64.rep_eq
instantiation uint64 :: equal begin
lift_definition equal_uint64 :: "uint64 \<Rightarrow> uint64 \<Rightarrow> bool" is "equal_class.equal" .
instance by standard (transfer, simp add: equal_eq)
end
lemmas [code] = equal_uint64.rep_eq
instantiation uint64 :: size begin
lift_definition size_uint64 :: "uint64 \<Rightarrow> nat" is "size" .
instance ..
end
lemmas [code] = size_uint64.rep_eq
lift_definition sshiftr_uint64 :: "uint64 \<Rightarrow> nat \<Rightarrow> uint64" (infixl ">>>" 55) is sshiftr .
lift_definition uint64_of_int :: "int \<Rightarrow> uint64" is "word_of_int" .
definition uint64_of_nat :: "nat \<Rightarrow> uint64"
where "uint64_of_nat = uint64_of_int \<circ> int"
lift_definition int_of_uint64 :: "uint64 \<Rightarrow> int" is "uint" .
lift_definition nat_of_uint64 :: "uint64 \<Rightarrow> nat" is "unat" .
definition integer_of_uint64 :: "uint64 \<Rightarrow> integer"
where "integer_of_uint64 = integer_of_int o int_of_uint64"
lemma bitval_integer_transfer [transfer_rule]:
"(rel_fun (=) pcr_integer) of_bool of_bool"
by(auto simp add: of_bool_def integer.pcr_cr_eq cr_integer_def split: bit.split)
text \<open>Use pretty numerals from integer for pretty printing\<close>
context includes integer.lifting begin
lift_definition Uint64 :: "integer \<Rightarrow> uint64" is "word_of_int" .
lemma Rep_uint64_numeral [simp]: "Rep_uint64 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint64_def Abs_uint64_inverse numeral.simps plus_uint64_def)
lemma numeral_uint64_transfer [transfer_rule]:
"(rel_fun (=) cr_uint64) numeral numeral"
by(auto simp add: cr_uint64_def)
lemma numeral_uint64 [code_unfold]: "numeral n = Uint64 (numeral n)"
by transfer simp
lemma Rep_uint64_neg_numeral [simp]: "Rep_uint64 (- numeral n) = - numeral n"
by(simp only: uminus_uint64_def)(simp add: Abs_uint64_inverse)
lemma neg_numeral_uint64 [code_unfold]: "- numeral n = Uint64 (- numeral n)"
by transfer(simp add: cr_uint64_def)
end
lemma Abs_uint64_numeral [code_post]: "Abs_uint64 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint64_def numeral.simps plus_uint64_def Abs_uint64_inverse)
lemma Abs_uint64_0 [code_post]: "Abs_uint64 0 = 0"
by(simp add: zero_uint64_def)
lemma Abs_uint64_1 [code_post]: "Abs_uint64 1 = 1"
by(simp add: one_uint64_def)
section \<open>Code setup\<close>
text \<open> For SML, we generate an implementation of unsigned 64-bit words using \verb$IntInf.int$.
If @{ML "LargeWord.wordSize > 63"} holds for the Isabelle/ML runtime environment, then we assume
that a \<open>Word64\<close> structure is also available and accordingly replace the implementation
for the target \verb$Eval$.
\<close>
code_printing code_module "Uint64" \<rightharpoonup> (SML) \<open>(* Test that words can handle numbers between 0 and 63 *)
val _ = if 6 <= Word.wordSize then () else raise (Fail ("wordSize less than 6"));
structure Uint64 : sig
eqtype uint64;
val zero : uint64;
val one : uint64;
val fromInt : IntInf.int -> uint64;
val toInt : uint64 -> IntInf.int;
val toLarge : uint64 -> LargeWord.word;
val fromLarge : LargeWord.word -> uint64
val plus : uint64 -> uint64 -> uint64;
val minus : uint64 -> uint64 -> uint64;
val times : uint64 -> uint64 -> uint64;
val divide : uint64 -> uint64 -> uint64;
val modulus : uint64 -> uint64 -> uint64;
val negate : uint64 -> uint64;
val less_eq : uint64 -> uint64 -> bool;
val less : uint64 -> uint64 -> bool;
val notb : uint64 -> uint64;
val andb : uint64 -> uint64 -> uint64;
val orb : uint64 -> uint64 -> uint64;
val xorb : uint64 -> uint64 -> uint64;
val shiftl : uint64 -> IntInf.int -> uint64;
val shiftr : uint64 -> IntInf.int -> uint64;
val shiftr_signed : uint64 -> IntInf.int -> uint64;
val set_bit : uint64 -> IntInf.int -> bool -> uint64;
val test_bit : uint64 -> IntInf.int -> bool;
end = struct
type uint64 = IntInf.int;
val mask = 0xFFFFFFFFFFFFFFFF : IntInf.int;
val zero = 0 : IntInf.int;
val one = 1 : IntInf.int;
fun fromInt x = IntInf.andb(x, mask);
fun toInt x = x
fun toLarge x = LargeWord.fromLargeInt (IntInf.toLarge x);
fun fromLarge x = IntInf.fromLarge (LargeWord.toLargeInt x);
fun plus x y = IntInf.andb(IntInf.+(x, y), mask);
fun minus x y = IntInf.andb(IntInf.-(x, y), mask);
fun negate x = IntInf.andb(IntInf.~(x), mask);
fun times x y = IntInf.andb(IntInf.*(x, y), mask);
fun divide x y = IntInf.div(x, y);
fun modulus x y = IntInf.mod(x, y);
fun less_eq x y = IntInf.<=(x, y);
fun less x y = IntInf.<(x, y);
fun notb x = IntInf.andb(IntInf.notb(x), mask);
fun orb x y = IntInf.orb(x, y);
fun andb x y = IntInf.andb(x, y);
fun xorb x y = IntInf.xorb(x, y);
val maxWord = IntInf.pow (2, Word.wordSize);
fun shiftl x n =
if n < maxWord then IntInf.andb(IntInf.<< (x, Word.fromLargeInt (IntInf.toLarge n)), mask)
else 0;
fun shiftr x n =
if n < maxWord then IntInf.~>> (x, Word.fromLargeInt (IntInf.toLarge n))
else 0;
val msb_mask = 0x8000000000000000 : IntInf.int;
fun shiftr_signed x i =
if IntInf.andb(x, msb_mask) = 0 then shiftr x i
else if i >= 64 then 0xFFFFFFFFFFFFFFFF
else let
val x' = shiftr x i
val m' = IntInf.andb(IntInf.<<(mask, Word.max(0w64 - Word.fromLargeInt (IntInf.toLarge i), 0w0)), mask)
in IntInf.orb(x', m') end;
fun test_bit x n =
if n < maxWord then IntInf.andb (x, IntInf.<< (1, Word.fromLargeInt (IntInf.toLarge n))) <> 0
else false;
fun set_bit x n b =
if n < 64 then
if b then IntInf.orb (x, IntInf.<< (1, Word.fromLargeInt (IntInf.toLarge n)))
else IntInf.andb (x, IntInf.notb (IntInf.<< (1, Word.fromLargeInt (IntInf.toLarge n))))
else x;
end
\<close>
code_reserved SML Uint64
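An illustration (sketch only, not part of the theory) of the masking idea behind this IntInf-based fallback, phrased in Scala over BigInt: unbounded integers combined with a 64-bit mask model Word64 wrap-around. The names mask64, plus64 and negate64 are made up for the sketch.

    val mask64 = (BigInt(1) << 64) - 1
    def plus64(x: BigInt, y: BigInt): BigInt = (x + y) & mask64
    assert(plus64(mask64, BigInt(1)) == BigInt(0))       // 0xFFFF...FF + 1 wraps to 0
    def negate64(x: BigInt): BigInt = (-x) & mask64
    assert(negate64(BigInt(1)) == mask64)                // two's complement of 1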
setup \<open>
let
val polyml64 = LargeWord.wordSize > 63;
(* PolyML 5.6 has bugs in its Word64 implementation. We test for one such bug and refrain
from using Word64 in that case. Testing is done with dynamic code evaluation such that
the compiler does not choke on the Word64 structure, which need not be present in a 32-bit
environment. *)
val error_msg = "Buggy Word64 structure";
val test_code =
"val _ = if Word64.div (0w18446744073709551611 : Word64.word, 0w3) = 0w6148914691236517203 then ()\n" ^
"else raise (Fail \"" ^ error_msg ^ "\");";
val f = Exn.interruptible_capture (fn () => ML_Compiler.eval ML_Compiler.flags Position.none (ML_Lex.tokenize test_code))
val use_Word64 = polyml64 andalso
(case f () of
Exn.Res _ => true
| Exn.Exn (e as ERROR m) => if String.isSuffix error_msg m then false else Exn.reraise e
| Exn.Exn e => Exn.reraise e)
;
val newline = "\n";
val content =
"structure Uint64 : sig" ^ newline ^
" eqtype uint64;" ^ newline ^
" val zero : uint64;" ^ newline ^
" val one : uint64;" ^ newline ^
" val fromInt : IntInf.int -> uint64;" ^ newline ^
" val toInt : uint64 -> IntInf.int;" ^ newline ^
" val toLarge : uint64 -> LargeWord.word;" ^ newline ^
" val fromLarge : LargeWord.word -> uint64" ^ newline ^
" val plus : uint64 -> uint64 -> uint64;" ^ newline ^
" val minus : uint64 -> uint64 -> uint64;" ^ newline ^
" val times : uint64 -> uint64 -> uint64;" ^ newline ^
" val divide : uint64 -> uint64 -> uint64;" ^ newline ^
" val modulus : uint64 -> uint64 -> uint64;" ^ newline ^
" val negate : uint64 -> uint64;" ^ newline ^
" val less_eq : uint64 -> uint64 -> bool;" ^ newline ^
" val less : uint64 -> uint64 -> bool;" ^ newline ^
" val notb : uint64 -> uint64;" ^ newline ^
" val andb : uint64 -> uint64 -> uint64;" ^ newline ^
" val orb : uint64 -> uint64 -> uint64;" ^ newline ^
" val xorb : uint64 -> uint64 -> uint64;" ^ newline ^
" val shiftl : uint64 -> IntInf.int -> uint64;" ^ newline ^
" val shiftr : uint64 -> IntInf.int -> uint64;" ^ newline ^
" val shiftr_signed : uint64 -> IntInf.int -> uint64;" ^ newline ^
" val set_bit : uint64 -> IntInf.int -> bool -> uint64;" ^ newline ^
" val test_bit : uint64 -> IntInf.int -> bool;" ^ newline ^
"end = struct" ^ newline ^
"" ^ newline ^
"type uint64 = Word64.word;" ^ newline ^
"" ^ newline ^
"val zero = (0wx0 : uint64);" ^ newline ^
"" ^ newline ^
"val one = (0wx1 : uint64);" ^ newline ^
"" ^ newline ^
"fun fromInt x = Word64.fromLargeInt (IntInf.toLarge x);" ^ newline ^
"" ^ newline ^
"fun toInt x = IntInf.fromLarge (Word64.toLargeInt x);" ^ newline ^
"" ^ newline ^
"fun fromLarge x = Word64.fromLarge x;" ^ newline ^
"" ^ newline ^
"fun toLarge x = Word64.toLarge x;" ^ newline ^
"" ^ newline ^
"fun plus x y = Word64.+(x, y);" ^ newline ^
"" ^ newline ^
"fun minus x y = Word64.-(x, y);" ^ newline ^
"" ^ newline ^
"fun negate x = Word64.~(x);" ^ newline ^
"" ^ newline ^
"fun times x y = Word64.*(x, y);" ^ newline ^
"" ^ newline ^
"fun divide x y = Word64.div(x, y);" ^ newline ^
"" ^ newline ^
"fun modulus x y = Word64.mod(x, y);" ^ newline ^
"" ^ newline ^
"fun less_eq x y = Word64.<=(x, y);" ^ newline ^
"" ^ newline ^
"fun less x y = Word64.<(x, y);" ^ newline ^
"" ^ newline ^
"fun set_bit x n b =" ^ newline ^
" let val mask = Word64.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))" ^ newline ^
" in if b then Word64.orb (x, mask)" ^ newline ^
" else Word64.andb (x, Word64.notb mask)" ^ newline ^
" end" ^ newline ^
"" ^ newline ^
"fun shiftl x n =" ^ newline ^
" Word64.<< (x, Word.fromLargeInt (IntInf.toLarge n))" ^ newline ^
"" ^ newline ^
"fun shiftr x n =" ^ newline ^
" Word64.>> (x, Word.fromLargeInt (IntInf.toLarge n))" ^ newline ^
"" ^ newline ^
"fun shiftr_signed x n =" ^ newline ^
" Word64.~>> (x, Word.fromLargeInt (IntInf.toLarge n))" ^ newline ^
"" ^ newline ^
"fun test_bit x n =" ^ newline ^
" Word64.andb (x, Word64.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))) <> Word64.fromInt 0" ^ newline ^
"" ^ newline ^
"val notb = Word64.notb" ^ newline ^
"" ^ newline ^
"fun andb x y = Word64.andb(x, y);" ^ newline ^
"" ^ newline ^
"fun orb x y = Word64.orb(x, y);" ^ newline ^
"" ^ newline ^
"fun xorb x y = Word64.xorb(x, y);" ^ newline ^
"" ^ newline ^
"end (*struct Uint64*)"
val target_SML64 = "SML_word";
in
(if use_Word64 then Code_Target.set_printings (Code_Symbol.Module ("Uint64", [(Code_Runtime.target, SOME (content, []))])) else I)
#> Code_Target.set_printings (Code_Symbol.Module ("Uint64", [(target_SML64, SOME (content, []))]))
end
\<close>
code_printing code_module Uint64 \<rightharpoonup> (Haskell)
\<open>module Uint64(Int64, Word64) where
import Data.Int(Int64)
import Data.Word(Word64)\<close>
code_reserved Haskell Uint64
text \<open>
OCaml and Scala provide only signed 64-bit numbers, so we use these and
implement sign-sensitive operations like comparisons manually.
\<close>
code_printing code_module "Uint64" \<rightharpoonup> (OCaml)
\<open>module Uint64 : sig
val less : int64 -> int64 -> bool
val less_eq : int64 -> int64 -> bool
val set_bit : int64 -> Z.t -> bool -> int64
val shiftl : int64 -> Z.t -> int64
val shiftr : int64 -> Z.t -> int64
val shiftr_signed : int64 -> Z.t -> int64
val test_bit : int64 -> Z.t -> bool
end = struct
(* negative numbers have their highest bit set,
so they are greater than positive ones *)
let less x y =
if Int64.compare x Int64.zero < 0 then
Int64.compare y Int64.zero < 0 && Int64.compare x y < 0
else Int64.compare y Int64.zero < 0 || Int64.compare x y < 0;;
let less_eq x y =
if Int64.compare x Int64.zero < 0 then
Int64.compare y Int64.zero < 0 && Int64.compare x y <= 0
else Int64.compare y Int64.zero < 0 || Int64.compare x y <= 0;;
let set_bit x n b =
let mask = Int64.shift_left Int64.one (Z.to_int n)
in if b then Int64.logor x mask
else Int64.logand x (Int64.lognot mask);;
let shiftl x n = Int64.shift_left x (Z.to_int n);;
let shiftr x n = Int64.shift_right_logical x (Z.to_int n);;
let shiftr_signed x n = Int64.shift_right x (Z.to_int n);;
let test_bit x n =
Int64.compare
(Int64.logand x (Int64.shift_left Int64.one (Z.to_int n)))
Int64.zero
<> 0;;
end;; (*struct Uint64*)\<close>
code_reserved OCaml Uint64
code_printing code_module Uint64 \<rightharpoonup> (Scala)
\<open>object Uint64 {
def less(x: Long, y: Long) : Boolean =
if (x < 0) y < 0 && x < y
else y < 0 || x < y
def less_eq(x: Long, y: Long) : Boolean =
if (x < 0) y < 0 && x <= y
else y < 0 || x <= y
def set_bit(x: Long, n: BigInt, b: Boolean) : Long =
if (b)
x | (1L << n.intValue)
else
x & (1L << n.intValue).unary_~
def shiftl(x: Long, n: BigInt) : Long = x << n.intValue
def shiftr(x: Long, n: BigInt) : Long = x >>> n.intValue
def shiftr_signed(x: Long, n: BigInt) : Long = x >> n.intValue
def test_bit(x: Long, n: BigInt) : Boolean =
(x & (1L << n.intValue)) != 0
} /* object Uint64 */\<close>
code_reserved Scala Uint64
text \<open>
OCaml's conversion from Big\_int to int64 demands that the value fits into a signed 64-bit integer.
The following justifies the implementation.
\<close>
definition Uint64_signed :: "integer \<Rightarrow> uint64"
where "Uint64_signed i = (if i < -(0x8000000000000000) \<or> i \<ge> 0x8000000000000000 then undefined Uint64 i else Uint64 i)"
lemma Uint64_code [code]:
"Uint64 i =
(let i' = i AND 0xFFFFFFFFFFFFFFFF
in if i' !! 63 then Uint64_signed (i' - 0x10000000000000000) else Uint64_signed i')"
including undefined_transfer integer.lifting unfolding Uint64_signed_def
by transfer(rule word_of_int_via_signed, simp_all add: bin_mask_numeral)
lemma Uint64_signed_code [code abstract]:
"Rep_uint64 (Uint64_signed i) =
(if i < -(0x8000000000000000) \<or> i \<ge> 0x8000000000000000 then Rep_uint64 (undefined Uint64 i) else word_of_int (int_of_integer_symbolic i))"
unfolding Uint64_signed_def Uint64_def int_of_integer_symbolic_def word_of_integer_def
by(simp add: Abs_uint64_inverse)
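(* The code equation Uint64_code above first reduces its argument modulo 2^64 and then,
   if bit 63 is set, shifts it down by 2^64, so the target-language conversion behind
   Uint64_signed (mapped to OCaml's Z.to_int64 below) only ever sees values inside the
   signed 64-bit range.  A Scala sketch of the same arithmetic, with the invented name
   toSigned64:

def toSigned64(i: BigInt): Long = {
  val reduced = i & ((BigInt(1) << 64) - 1)                                 // i mod 2^64
  val folded  = if (reduced.testBit(63)) reduced - (BigInt(1) << 64) else reduced
  folded.toLong                                                             // fits a signed Long
}

assert(toSigned64((BigInt(1) << 64) - 1) == -1L)   // 2^64 - 1 maps to the bit pattern of -1
assert(toSigned64(BigInt(42)) == 42L)
*)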
text \<open>
Avoid @{term Abs_uint64} in generated code, use @{term Rep_uint64'} instead.
The symbolic implementations for code\_simp use @{term Rep_uint64}.
The new destructor @{term Rep_uint64'} is executable.
As the simplifier is given the [code abstract] equations literally,
we cannot implement @{term Rep_uint64} directly, because that makes code\_simp loop.
If code generation raises Match, some equation probably contains @{term Rep_uint64}
([code abstract] equations for @{typ uint64} may use @{term Rep_uint64} because
these instances will be folded away).
To convert @{typ "64 word"} values into @{typ uint64}, use @{term "Abs_uint64'"}.
\<close>
definition Rep_uint64' where [simp]: "Rep_uint64' = Rep_uint64"
lemma Rep_uint64'_transfer [transfer_rule]:
"rel_fun cr_uint64 (=) (\<lambda>x. x) Rep_uint64'"
unfolding Rep_uint64'_def by(rule uint64.rep_transfer)
lemma Rep_uint64'_code [code]: "Rep_uint64' x = (BITS n. x !! n)"
by transfer simp
lift_definition Abs_uint64' :: "64 word \<Rightarrow> uint64" is "\<lambda>x :: 64 word. x" .
lemma Abs_uint64'_code [code]:
"Abs_uint64' x = Uint64 (integer_of_int (uint x))"
including integer.lifting by transfer simp
declare [[code drop: "term_of_class.term_of :: uint64 \<Rightarrow> _"]]
lemma term_of_uint64_code [code]:
defines "TR \<equiv> typerep.Typerep" and "bit0 \<equiv> STR ''Numeral_Type.bit0''"
shows
"term_of_class.term_of x =
Code_Evaluation.App (Code_Evaluation.Const (STR ''Uint64.uint64.Abs_uint64'') (TR (STR ''fun'') [TR (STR ''Word.word'') [TR bit0 [TR bit0 [TR bit0 [TR bit0 [TR bit0 [TR bit0 [TR (STR ''Numeral_Type.num1'') []]]]]]]], TR (STR ''Uint64.uint64'') []]))
(term_of_class.term_of (Rep_uint64' x))"
by(simp add: term_of_anything)
code_printing
type_constructor uint64 \<rightharpoonup>
(SML) "Uint64.uint64" and
(Haskell) "Uint64.Word64" and
(OCaml) "int64" and
(Scala) "Long"
| constant Uint64 \<rightharpoonup>
(SML) "Uint64.fromInt" and
(Haskell) "(Prelude.fromInteger _ :: Uint64.Word64)" and
(Haskell_Quickcheck) "(Prelude.fromInteger (Prelude.toInteger _) :: Uint64.Word64)" and
(Scala) "_.longValue"
| constant Uint64_signed \<rightharpoonup>
(OCaml) "Z.to'_int64"
| constant "0 :: uint64" \<rightharpoonup>
(SML) "Uint64.zero" and
(Haskell) "(0 :: Uint64.Word64)" and
(OCaml) "Int64.zero" and
(Scala) "0"
| constant "1 :: uint64" \<rightharpoonup>
(SML) "Uint64.one" and
(Haskell) "(1 :: Uint64.Word64)" and
(OCaml) "Int64.one" and
(Scala) "1"
| constant "plus :: uint64 \<Rightarrow> _ " \<rightharpoonup>
(SML) "Uint64.plus" and
(Haskell) infixl 6 "+" and
(OCaml) "Int64.add" and
(Scala) infixl 7 "+"
| constant "uminus :: uint64 \<Rightarrow> _" \<rightharpoonup>
(SML) "Uint64.negate" and
(Haskell) "negate" and
(OCaml) "Int64.neg" and
(Scala) "!(- _)"
| constant "minus :: uint64 \<Rightarrow> _" \<rightharpoonup>
(SML) "Uint64.minus" and
(Haskell) infixl 6 "-" and
(OCaml) "Int64.sub" and
(Scala) infixl 7 "-"
| constant "times :: uint64 \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML) "Uint64.times" and
(Haskell) infixl 7 "*" and
(OCaml) "Int64.mul" and
(Scala) infixl 8 "*"
| constant "HOL.equal :: uint64 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "!((_ : Uint64.uint64) = _)" and
(Haskell) infix 4 "==" and
(OCaml) "(Int64.compare _ _ = 0)" and
(Scala) infixl 5 "=="
| class_instance uint64 :: equal \<rightharpoonup>
(Haskell) -
| constant "less_eq :: uint64 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Uint64.less'_eq" and
(Haskell) infix 4 "<=" and
(OCaml) "Uint64.less'_eq" and
(Scala) "Uint64.less'_eq"
| constant "less :: uint64 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Uint64.less" and
(Haskell) infix 4 "<" and
(OCaml) "Uint64.less" and
(Scala) "Uint64.less"
-| constant "bitNOT :: uint64 \<Rightarrow> _" \<rightharpoonup>
+| constant "NOT :: uint64 \<Rightarrow> _" \<rightharpoonup>
(SML) "Uint64.notb" and
(Haskell) "Data'_Bits.complement" and
(OCaml) "Int64.lognot" and
(Scala) "_.unary'_~"
-| constant "bitAND :: uint64 \<Rightarrow> _" \<rightharpoonup>
+| constant "(AND) :: uint64 \<Rightarrow> _" \<rightharpoonup>
(SML) "Uint64.andb" and
(Haskell) infixl 7 "Data_Bits..&." and
(OCaml) "Int64.logand" and
(Scala) infixl 3 "&"
-| constant "bitOR :: uint64 \<Rightarrow> _" \<rightharpoonup>
+| constant "(OR) :: uint64 \<Rightarrow> _" \<rightharpoonup>
(SML) "Uint64.orb" and
(Haskell) infixl 5 "Data_Bits..|." and
(OCaml) "Int64.logor" and
(Scala) infixl 1 "|"
-| constant "bitXOR :: uint64 \<Rightarrow> _" \<rightharpoonup>
+| constant "(XOR) :: uint64 \<Rightarrow> _" \<rightharpoonup>
(SML) "Uint64.xorb" and
(Haskell) "Data'_Bits.xor" and
(OCaml) "Int64.logxor" and
(Scala) infixl 2 "^"
definition uint64_divmod :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64 \<times> uint64" where
"uint64_divmod x y =
(if y = 0 then (undefined ((div) :: uint64 \<Rightarrow> _) x (0 :: uint64), undefined ((mod) :: uint64 \<Rightarrow> _) x (0 :: uint64))
else (x div y, x mod y))"
definition uint64_div :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64"
where "uint64_div x y = fst (uint64_divmod x y)"
definition uint64_mod :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64"
where "uint64_mod x y = snd (uint64_divmod x y)"
lemma div_uint64_code [code]: "x div y = (if y = 0 then 0 else uint64_div x y)"
including undefined_transfer unfolding uint64_divmod_def uint64_div_def
by transfer (simp add: word_div_def)
lemma mod_uint64_code [code]: "x mod y = (if y = 0 then x else uint64_mod x y)"
including undefined_transfer unfolding uint64_mod_def uint64_divmod_def
by transfer (simp add: word_mod_def)
definition uint64_sdiv :: "uint64 \<Rightarrow> uint64 \<Rightarrow> uint64"
where [code del]:
"uint64_sdiv x y =
(if y = 0 then undefined ((div) :: uint64 \<Rightarrow> _) x (0 :: uint64)
else Abs_uint64 (Rep_uint64 x sdiv Rep_uint64 y))"
definition div0_uint64 :: "uint64 \<Rightarrow> uint64"
where [code del]: "div0_uint64 x = undefined ((div) :: uint64 \<Rightarrow> _) x (0 :: uint64)"
declare [[code abort: div0_uint64]]
definition mod0_uint64 :: "uint64 \<Rightarrow> uint64"
where [code del]: "mod0_uint64 x = undefined ((mod) :: uint64 \<Rightarrow> _) x (0 :: uint64)"
declare [[code abort: mod0_uint64]]
lemma uint64_divmod_code [code]:
"uint64_divmod x y =
(if 0x8000000000000000 \<le> y then if x < y then (0, x) else (1, x - y)
else if y = 0 then (div0_uint64 x, mod0_uint64 x)
else let q = (uint64_sdiv (x >> 1) y) << 1;
r = x - q * y
in if r \<ge> y then (q + 1, r - y) else (q, r))"
including undefined_transfer unfolding uint64_divmod_def uint64_sdiv_def div0_uint64_def mod0_uint64_def
by transfer(simp add: divmod_via_sdivmod)
lemma uint64_sdiv_code [code abstract]:
"Rep_uint64 (uint64_sdiv x y) =
(if y = 0 then Rep_uint64 (undefined ((div) :: uint64 \<Rightarrow> _) x (0 :: uint64))
else Rep_uint64 x sdiv Rep_uint64 y)"
unfolding uint64_sdiv_def by(simp add: Abs_uint64_inverse)
text \<open>
Note that we only need a translation for signed division, but not for the remainder
because @{thm uint64_divmod_code} computes both with division only.
\<close>
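(* A Scala sketch of the division scheme behind uint64_divmod_code: halve the dividend
   so that signed division is safe, double the quotient, and correct the remainder by at
   most one subtraction.  The name unsignedDivMod is invented for this illustration; the
   JDK helpers compareUnsigned/divideUnsigned/remainderUnsigned exist since Java 8.

def unsignedDivMod(x: Long, y: Long): (Long, Long) =
  if (y < 0) {                       // y >= 2^63 unsigned: the quotient is 0 or 1
    if (java.lang.Long.compareUnsigned(x, y) < 0) (0L, x) else (1L, x - y)
  } else if (y == 0L) {
    throw new ArithmeticException("division by zero")
  } else {
    val q = ((x >>> 1) / y) << 1     // both operands non-negative, so signed / is unsigned /
    val r = x - q * y
    if (java.lang.Long.compareUnsigned(r, y) >= 0) (q + 1, r - y) else (q, r)
  }

assert(unsignedDivMod(-1L, 10L) ==
  (java.lang.Long.divideUnsigned(-1L, 10L), java.lang.Long.remainderUnsigned(-1L, 10L)))
*)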
code_printing
constant uint64_div \<rightharpoonup>
(SML) "Uint64.divide" and
(Haskell) "Prelude.div"
| constant uint64_mod \<rightharpoonup>
(SML) "Uint64.modulus" and
(Haskell) "Prelude.mod"
| constant uint64_divmod \<rightharpoonup>
(Haskell) "divmod"
| constant uint64_sdiv \<rightharpoonup>
(OCaml) "Int64.div" and
(Scala) "_ '/ _"
definition uint64_test_bit :: "uint64 \<Rightarrow> integer \<Rightarrow> bool"
where [code del]:
"uint64_test_bit x n =
(if n < 0 \<or> 63 < n then undefined (test_bit :: uint64 \<Rightarrow> _) x n
else x !! (nat_of_integer n))"
lemma test_bit_uint64_code [code]:
"test_bit x n \<longleftrightarrow> n < 64 \<and> uint64_test_bit x (integer_of_nat n)"
including undefined_transfer integer.lifting unfolding uint64_test_bit_def
by transfer(auto cong: conj_cong dest: test_bit_size simp add: word_size)
lemma uint64_test_bit_code [code]:
"uint64_test_bit w n =
(if n < 0 \<or> 63 < n then undefined (test_bit :: uint64 \<Rightarrow> _) w n else Rep_uint64 w !! nat_of_integer n)"
unfolding uint64_test_bit_def
by(simp add: test_bit_uint64.rep_eq)
code_printing constant uint64_test_bit \<rightharpoonup>
(SML) "Uint64.test'_bit" and
(Haskell) "Data'_Bits.testBitBounded" and
(OCaml) "Uint64.test'_bit" and
(Scala) "Uint64.test'_bit" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 64 then raise (Fail \"argument to uint64'_test'_bit out of bounds\") else Uint64.test'_bit x i)"
definition uint64_set_bit :: "uint64 \<Rightarrow> integer \<Rightarrow> bool \<Rightarrow> uint64"
where [code del]:
"uint64_set_bit x n b =
(if n < 0 \<or> 63 < n then undefined (set_bit :: uint64 \<Rightarrow> _) x n b
else set_bit x (nat_of_integer n) b)"
lemma set_bit_uint64_code [code]:
"set_bit x n b = (if n < 64 then uint64_set_bit x (integer_of_nat n) b else x)"
including undefined_transfer integer.lifting unfolding uint64_set_bit_def
by(transfer)(auto cong: conj_cong simp add: not_less set_bit_beyond word_size)
lemma uint64_set_bit_code [code abstract]:
"Rep_uint64 (uint64_set_bit w n b) =
(if n < 0 \<or> 63 < n then Rep_uint64 (undefined (set_bit :: uint64 \<Rightarrow> _) w n b)
else set_bit (Rep_uint64 w) (nat_of_integer n) b)"
including undefined_transfer unfolding uint64_set_bit_def by transfer simp
code_printing constant uint64_set_bit \<rightharpoonup>
(SML) "Uint64.set'_bit" and
(Haskell) "Data'_Bits.setBitBounded" and
(OCaml) "Uint64.set'_bit" and
(Scala) "Uint64.set'_bit" and
(Eval) "(fn x => fn i => fn b => if i < 0 orelse i >= 64 then raise (Fail \"argument to uint64'_set'_bit out of bounds\") else Uint64.set'_bit x i b)"
lift_definition uint64_set_bits :: "(nat \<Rightarrow> bool) \<Rightarrow> uint64 \<Rightarrow> nat \<Rightarrow> uint64" is set_bits_aux .
lemma uint64_set_bits_code [code]:
"uint64_set_bits f w n =
(if n = 0 then w
else let n' = n - 1 in uint64_set_bits f ((w << 1) OR (if f n' then 1 else 0)) n')"
by(transfer fixing: n)(cases n, simp_all)
lemma set_bits_uint64 [code]:
"(BITS n. f n) = uint64_set_bits f 0 64"
by transfer(simp add: set_bits_conv_set_bits_aux)
lemma lsb_code [code]: fixes x :: uint64 shows "lsb x = x !! 0"
by transfer(simp add: word_lsb_def word_test_bit_def)
definition uint64_shiftl :: "uint64 \<Rightarrow> integer \<Rightarrow> uint64"
where [code del]:
"uint64_shiftl x n = (if n < 0 \<or> 64 \<le> n then undefined (shiftl :: uint64 \<Rightarrow> _) x n else x << (nat_of_integer n))"
lemma shiftl_uint64_code [code]: "x << n = (if n < 64 then uint64_shiftl x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint64_shiftl_def
by transfer(simp add: not_less shiftl_zero_size word_size)
lemma uint64_shiftl_code [code abstract]:
"Rep_uint64 (uint64_shiftl w n) =
(if n < 0 \<or> 64 \<le> n then Rep_uint64 (undefined (shiftl :: uint64 \<Rightarrow> _) w n) else Rep_uint64 w << (nat_of_integer n))"
including undefined_transfer unfolding uint64_shiftl_def by transfer simp
code_printing constant uint64_shiftl \<rightharpoonup>
(SML) "Uint64.shiftl" and
(Haskell) "Data'_Bits.shiftlBounded" and
(OCaml) "Uint64.shiftl" and
(Scala) "Uint64.shiftl" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 64 then raise (Fail \"argument to uint64'_shiftl out of bounds\") else Uint64.shiftl x i)"
definition uint64_shiftr :: "uint64 \<Rightarrow> integer \<Rightarrow> uint64"
where [code del]:
"uint64_shiftr x n = (if n < 0 \<or> 64 \<le> n then undefined (shiftr :: uint64 \<Rightarrow> _) x n else x >> (nat_of_integer n))"
lemma shiftr_uint64_code [code]: "x >> n = (if n < 64 then uint64_shiftr x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint64_shiftr_def
by transfer(simp add: not_less shiftr_zero_size word_size)
lemma uint64_shiftr_code [code abstract]:
"Rep_uint64 (uint64_shiftr w n) =
(if n < 0 \<or> 64 \<le> n then Rep_uint64 (undefined (shiftr :: uint64 \<Rightarrow> _) w n) else Rep_uint64 w >> nat_of_integer n)"
including undefined_transfer unfolding uint64_shiftr_def by transfer simp
code_printing constant uint64_shiftr \<rightharpoonup>
(SML) "Uint64.shiftr" and
(Haskell) "Data'_Bits.shiftrBounded" and
(OCaml) "Uint64.shiftr" and
(Scala) "Uint64.shiftr" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 64 then raise (Fail \"argument to uint64'_shiftr out of bounds\") else Uint64.shiftr x i)"
definition uint64_sshiftr :: "uint64 \<Rightarrow> integer \<Rightarrow> uint64"
where [code del]:
"uint64_sshiftr x n =
(if n < 0 \<or> 64 \<le> n then undefined sshiftr_uint64 x n else sshiftr_uint64 x (nat_of_integer n))"
lemma sshiftr_beyond: fixes x :: "'a :: len word" shows
"size x \<le> n \<Longrightarrow> x >>> n = (if x !! (size x - 1) then -1 else 0)"
by(rule word_eqI)(simp add: nth_sshiftr word_size)
lemma sshiftr_uint64_code [code]:
"x >>> n =
(if n < 64 then uint64_sshiftr x (integer_of_nat n) else if x !! 63 then -1 else 0)"
including undefined_transfer integer.lifting unfolding uint64_sshiftr_def
by transfer(simp add: not_less sshiftr_beyond word_size)
lemma uint64_sshiftr_code [code abstract]:
"Rep_uint64 (uint64_sshiftr w n) =
(if n < 0 \<or> 64 \<le> n then Rep_uint64 (undefined sshiftr_uint64 w n) else Rep_uint64 w >>> (nat_of_integer n))"
including undefined_transfer unfolding uint64_sshiftr_def by transfer simp
code_printing constant uint64_sshiftr \<rightharpoonup>
(SML) "Uint64.shiftr'_signed" and
(Haskell)
"(Prelude.fromInteger (Prelude.toInteger (Data'_Bits.shiftrBounded (Prelude.fromInteger (Prelude.toInteger _) :: Uint64.Int64) _)) :: Uint64.Word64)" and
(OCaml) "Uint64.shiftr'_signed" and
(Scala) "Uint64.shiftr'_signed" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 64 then raise (Fail \"argument to uint64'_shiftr'_signed out of bounds\") else Uint64.shiftr'_signed x i)"
lemma uint64_msb_test_bit: "msb x \<longleftrightarrow> (x :: uint64) !! 63"
by transfer(simp add: msb_nth)
lemma msb_uint64_code [code]: "msb x \<longleftrightarrow> uint64_test_bit x 63"
by(simp add: uint64_test_bit_def uint64_msb_test_bit)
lemma uint64_of_int_code [code]: "uint64_of_int i = Uint64 (integer_of_int i)"
including integer.lifting by transfer simp
lemma int_of_uint64_code [code]:
"int_of_uint64 x = int_of_integer (integer_of_uint64 x)"
by(simp add: integer_of_uint64_def)
lemma nat_of_uint64_code [code]:
"nat_of_uint64 x = nat_of_integer (integer_of_uint64 x)"
unfolding integer_of_uint64_def including integer.lifting by transfer (simp add: unat_def)
definition integer_of_uint64_signed :: "uint64 \<Rightarrow> integer"
where
"integer_of_uint64_signed n = (if n !! 63 then undefined integer_of_uint64 n else integer_of_uint64 n)"
lemma integer_of_uint64_signed_code [code]:
"integer_of_uint64_signed n =
(if n !! 63 then undefined integer_of_uint64 n else integer_of_int (uint (Rep_uint64' n)))"
unfolding integer_of_uint64_signed_def integer_of_uint64_def
including undefined_transfer by transfer simp
lemma integer_of_uint64_code [code]:
"integer_of_uint64 n =
(if n !! 63 then integer_of_uint64_signed (n AND 0x7FFFFFFFFFFFFFFF) OR 0x8000000000000000 else integer_of_uint64_signed n)"
unfolding integer_of_uint64_def integer_of_uint64_signed_def o_def
including undefined_transfer integer.lifting
by transfer(auto simp add: word_ao_nth uint_and_mask_or_full mask_numeral mask_Suc_0 intro!: uint_and_mask_or_full[symmetric])
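(* The code equation above clears bit 63 before invoking the signed conversion and ors
   the bit back in afterwards, so a target conversion that only understands signed
   values (Scala's BigInt below, OCaml's Z.of_int64) never sees a word with the top bit
   set.  A Scala sketch of the same idea, with the invented name unsignedBig:

def unsignedBig(n: Long): BigInt =
  if (n < 0) BigInt(n & 0x7FFFFFFFFFFFFFFFL) | (BigInt(1) << 63)
  else BigInt(n)

assert(unsignedBig(-1L) == (BigInt(1) << 64) - 1)
assert(unsignedBig(42L) == BigInt(42))
*)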
code_printing
constant "integer_of_uint64" \<rightharpoonup>
(SML) "Uint64.toInt" and
(Haskell) "Prelude.toInteger"
| constant "integer_of_uint64_signed" \<rightharpoonup>
(OCaml) "Z.of'_int64" and
(Scala) "BigInt"
section \<open>Quickcheck setup\<close>
definition uint64_of_natural :: "natural \<Rightarrow> uint64"
where "uint64_of_natural x \<equiv> Uint64 (integer_of_natural x)"
instantiation uint64 :: "{random, exhaustive, full_exhaustive}" begin
definition "random_uint64 \<equiv> qc_random_cnv uint64_of_natural"
definition "exhaustive_uint64 \<equiv> qc_exhaustive_cnv uint64_of_natural"
definition "full_exhaustive_uint64 \<equiv> qc_full_exhaustive_cnv uint64_of_natural"
instance ..
end
instantiation uint64 :: narrowing begin
interpretation quickcheck_narrowing_samples
"\<lambda>i. let x = Uint64 i in (x, 0xFFFFFFFFFFFFFFFF - x)" "0"
"Typerep.Typerep (STR ''Uint64.uint64'') []" .
definition "narrowing_uint64 d = qc_narrowing_drawn_from (narrowing_samples d) d"
declare [[code drop: "partial_term_of :: uint64 itself \<Rightarrow> _"]]
lemmas partial_term_of_uint64 [code] = partial_term_of_code
instance ..
end
no_notation sshiftr_uint64 (infixl ">>>" 55)
end
diff --git a/thys/Native_Word/Uint8.thy b/thys/Native_Word/Uint8.thy
--- a/thys/Native_Word/Uint8.thy
+++ b/thys/Native_Word/Uint8.thy
@@ -1,579 +1,579 @@
(* Title: Uint8.thy
Author: Andreas Lochbihler, ETH Zurich
*)
chapter \<open>Unsigned words of 8 bits\<close>
theory Uint8 imports
Code_Target_Word_Base
begin
text \<open>
Restriction for OCaml code generation:
OCaml does not provide an int8 type, so no special code generation
for this type is set up. If the theory \<open>Code_Target_Bits_Int\<close>
is imported, the type \<open>uint8\<close> is emulated via @{typ "8 word"}.
\<close>
declare prod.Quotient[transfer_rule]
section \<open>Type definition and primitive operations\<close>
typedef uint8 = "UNIV :: 8 word set" ..
setup_lifting type_definition_uint8
text \<open>Use an abstract type for code generation to disable pattern matching on @{term Abs_uint8}.\<close>
declare Rep_uint8_inverse[code abstype]
declare Quotient_uint8[transfer_rule]
instantiation uint8 :: "{neg_numeral, modulo, comm_monoid_mult, comm_ring}" begin
lift_definition zero_uint8 :: uint8 is "0" .
lift_definition one_uint8 :: uint8 is "1" .
lift_definition plus_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "(+)" .
lift_definition minus_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "(-)" .
lift_definition uminus_uint8 :: "uint8 \<Rightarrow> uint8" is uminus .
lift_definition times_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "(*)" .
lift_definition divide_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "(div)" .
lift_definition modulo_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "(mod)" .
instance by standard (transfer, simp add: algebra_simps)+
end
instantiation uint8 :: linorder begin
lift_definition less_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> bool" is "(<)" .
lift_definition less_eq_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> bool" is "(\<le>)" .
instance by standard (transfer, simp add: less_le_not_le linear)+
end
lemmas [code] = less_uint8.rep_eq less_eq_uint8.rep_eq
instantiation uint8 :: bit_operations begin
-lift_definition bitNOT_uint8 :: "uint8 \<Rightarrow> uint8" is bitNOT .
-lift_definition bitAND_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is bitAND .
-lift_definition bitOR_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is bitOR .
-lift_definition bitXOR_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is bitXOR .
+lift_definition bitNOT_uint8 :: "uint8 \<Rightarrow> uint8" is NOT .
+lift_definition bitAND_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is \<open>(AND)\<close> .
+lift_definition bitOR_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is \<open>(OR)\<close> .
+lift_definition bitXOR_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is \<open>(XOR)\<close> .
lift_definition test_bit_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> bool" is test_bit .
lift_definition set_bit_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> bool \<Rightarrow> uint8" is set_bit .
lift_definition lsb_uint8 :: "uint8 \<Rightarrow> bool" is lsb .
lift_definition shiftl_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> uint8" is shiftl .
lift_definition shiftr_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> uint8" is shiftr .
lift_definition msb_uint8 :: "uint8 \<Rightarrow> bool" is msb .
instance ..
end
instantiation uint8 :: bit_comprehension begin
lift_definition set_bits_uint8 :: "(nat \<Rightarrow> bool) \<Rightarrow> uint8" is "set_bits" .
instance ..
end
lemmas [code] = test_bit_uint8.rep_eq lsb_uint8.rep_eq msb_uint8.rep_eq
instantiation uint8 :: equal begin
lift_definition equal_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> bool" is "equal_class.equal" .
instance by standard (transfer, simp add: equal_eq)
end
lemmas [code] = equal_uint8.rep_eq
instantiation uint8 :: size begin
lift_definition size_uint8 :: "uint8 \<Rightarrow> nat" is "size" .
instance ..
end
lemmas [code] = size_uint8.rep_eq
lift_definition sshiftr_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> uint8" (infixl ">>>" 55) is sshiftr .
lift_definition uint8_of_int :: "int \<Rightarrow> uint8" is "word_of_int" .
definition uint8_of_nat :: "nat \<Rightarrow> uint8"
where "uint8_of_nat = uint8_of_int \<circ> int"
lift_definition int_of_uint8 :: "uint8 \<Rightarrow> int" is "uint" .
lift_definition nat_of_uint8 :: "uint8 \<Rightarrow> nat" is "unat" .
definition integer_of_uint8 :: "uint8 \<Rightarrow> integer"
where "integer_of_uint8 = integer_of_int o int_of_uint8"
text \<open>Use pretty numerals from integer for pretty printing\<close>
context includes integer.lifting begin
lift_definition Uint8 :: "integer \<Rightarrow> uint8" is "word_of_int" .
lemma Rep_uint8_numeral [simp]: "Rep_uint8 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint8_def Abs_uint8_inverse numeral.simps plus_uint8_def)
lemma numeral_uint8_transfer [transfer_rule]:
"(rel_fun (=) cr_uint8) numeral numeral"
by(auto simp add: cr_uint8_def)
lemma numeral_uint8 [code_unfold]: "numeral n = Uint8 (numeral n)"
by transfer simp
lemma Rep_uint8_neg_numeral [simp]: "Rep_uint8 (- numeral n) = - numeral n"
by(simp only: uminus_uint8_def)(simp add: Abs_uint8_inverse)
lemma neg_numeral_uint8 [code_unfold]: "- numeral n = Uint8 (- numeral n)"
by transfer(simp add: cr_uint8_def)
end
lemma Abs_uint8_numeral [code_post]: "Abs_uint8 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint8_def numeral.simps plus_uint8_def Abs_uint8_inverse)
lemma Abs_uint8_0 [code_post]: "Abs_uint8 0 = 0"
by(simp add: zero_uint8_def)
lemma Abs_uint8_1 [code_post]: "Abs_uint8 1 = 1"
by(simp add: one_uint8_def)
section \<open>Code setup\<close>
code_printing code_module Uint8 \<rightharpoonup> (SML)
\<open>(* Test that words can handle numbers between 0 and 3 *)
val _ = if 3 <= Word.wordSize then () else raise (Fail ("wordSize less than 3"));
structure Uint8 : sig
val set_bit : Word8.word -> IntInf.int -> bool -> Word8.word
val shiftl : Word8.word -> IntInf.int -> Word8.word
val shiftr : Word8.word -> IntInf.int -> Word8.word
val shiftr_signed : Word8.word -> IntInf.int -> Word8.word
val test_bit : Word8.word -> IntInf.int -> bool
end = struct
fun set_bit x n b =
let val mask = Word8.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))
in if b then Word8.orb (x, mask)
else Word8.andb (x, Word8.notb mask)
end
fun shiftl x n =
Word8.<< (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr x n =
Word8.>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr_signed x n =
Word8.~>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun test_bit x n =
Word8.andb (x, Word8.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))) <> Word8.fromInt 0
end; (* struct Uint8 *)\<close>
code_reserved SML Uint8
code_printing code_module Uint8 \<rightharpoonup> (Haskell)
\<open>module Uint8(Int8, Word8) where
import Data.Int(Int8)
import Data.Word(Word8)\<close>
code_reserved Haskell Uint8
text \<open>
Scala provides only signed 8-bit numbers, so we use these and
implement sign-sensitive operations like comparisons manually.
\<close>
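(* One Byte-specific subtlety in the module below: the JVM widens a Byte to Int with
   sign extension before any shift, so a logical right shift has to mask with 255 first.
   A small Scala sketch, with the invented names byteShiftr and b:

def byteShiftr(x: Byte, n: Int): Byte = ((x & 255) >>> n).toByte

val b: Byte = -128                 // bit pattern 0x80, i.e. 128 when read unsigned
assert(byteShiftr(b, 1) == 64)     // logical shift yields 0x40
assert((b >> 1).toByte == -64)     // an arithmetic shift would sign-extend instead
*)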
code_printing code_module Uint8 \<rightharpoonup> (Scala)
\<open>object Uint8 {
def less(x: Byte, y: Byte) : Boolean =
if (x < 0) y < 0 && x < y
else y < 0 || x < y
def less_eq(x: Byte, y: Byte) : Boolean =
if (x < 0) y < 0 && x <= y
else y < 0 || x <= y
def set_bit(x: Byte, n: BigInt, b: Boolean) : Byte =
if (b)
(x | (1 << n.intValue)).toByte
else
(x & (1 << n.intValue).unary_~).toByte
def shiftl(x: Byte, n: BigInt) : Byte = (x << n.intValue).toByte
def shiftr(x: Byte, n: BigInt) : Byte = ((x & 255) >>> n.intValue).toByte
def shiftr_signed(x: Byte, n: BigInt) : Byte = (x >> n.intValue).toByte
def test_bit(x: Byte, n: BigInt) : Boolean =
(x & (1 << n.intValue)) != 0
} /* object Uint8 */\<close>
code_reserved Scala Uint8
text \<open>
Avoid @{term Abs_uint8} in generated code, use @{term Rep_uint8'} instead.
The symbolic implementations for code\_simp use @{term Rep_uint8}.
The new destructor @{term Rep_uint8'} is executable.
As the simplifier is given the [code abstract] equations literally,
we cannot implement @{term Rep_uint8} directly, because that makes code\_simp loop.
If code generation raises Match, some equation probably contains @{term Rep_uint8}
([code abstract] equations for @{typ uint8} may use @{term Rep_uint8} because
these instances will be folded away).
To convert @{typ "8 word"} values into @{typ uint8}, use @{term "Abs_uint8'"}.
\<close>
definition Rep_uint8' where [simp]: "Rep_uint8' = Rep_uint8"
lemma Rep_uint8'_transfer [transfer_rule]:
"rel_fun cr_uint8 (=) (\<lambda>x. x) Rep_uint8'"
unfolding Rep_uint8'_def by(rule uint8.rep_transfer)
lemma Rep_uint8'_code [code]: "Rep_uint8' x = (BITS n. x !! n)"
by transfer simp
lift_definition Abs_uint8' :: "8 word \<Rightarrow> uint8" is "\<lambda>x :: 8 word. x" .
lemma Abs_uint8'_code [code]: "Abs_uint8' x = Uint8 (integer_of_int (uint x))"
including integer.lifting by transfer simp
declare [[code drop: "term_of_class.term_of :: uint8 \<Rightarrow> _"]]
lemma term_of_uint8_code [code]:
defines "TR \<equiv> typerep.Typerep" and "bit0 \<equiv> STR ''Numeral_Type.bit0''" shows
"term_of_class.term_of x =
Code_Evaluation.App (Code_Evaluation.Const (STR ''Uint8.uint8.Abs_uint8'') (TR (STR ''fun'') [TR (STR ''Word.word'') [TR bit0 [TR bit0 [TR bit0 [TR (STR ''Numeral_Type.num1'') []]]]], TR (STR ''Uint8.uint8'') []]))
(term_of_class.term_of (Rep_uint8' x))"
by(simp add: term_of_anything)
lemma Uint8_code [code abstract]: "Rep_uint8 (Uint8 i) = word_of_int (int_of_integer_symbolic i)"
unfolding Uint8_def int_of_integer_symbolic_def by(simp add: Abs_uint8_inverse)
code_printing type_constructor uint8 \<rightharpoonup>
(SML) "Word8.word" and
(Haskell) "Uint8.Word8" and
(Scala) "Byte"
| constant Uint8 \<rightharpoonup>
(SML) "Word8.fromLargeInt (IntInf.toLarge _)" and
(Haskell) "(Prelude.fromInteger _ :: Uint8.Word8)" and
(Haskell_Quickcheck) "(Prelude.fromInteger (Prelude.toInteger _) :: Uint8.Word8)" and
(Scala) "_.byteValue"
| constant "0 :: uint8" \<rightharpoonup>
(SML) "(Word8.fromInt 0)" and
(Haskell) "(0 :: Uint8.Word8)" and
(Scala) "0.toByte"
| constant "1 :: uint8" \<rightharpoonup>
(SML) "(Word8.fromInt 1)" and
(Haskell) "(1 :: Uint8.Word8)" and
(Scala) "1.toByte"
| constant "plus :: uint8 \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.+ ((_), (_))" and
(Haskell) infixl 6 "+" and
(Scala) "(_ +/ _).toByte"
| constant "uminus :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.~" and
(Haskell) "negate" and
(Scala) "(- _).toByte"
| constant "minus :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.- ((_), (_))" and
(Haskell) infixl 6 "-" and
(Scala) "(_ -/ _).toByte"
| constant "times :: uint8 \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.* ((_), (_))" and
(Haskell) infixl 7 "*" and
(Scala) "(_ */ _).toByte"
| constant "HOL.equal :: uint8 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "!((_ : Word8.word) = _)" and
(Haskell) infix 4 "==" and
(Scala) infixl 5 "=="
| class_instance uint8 :: equal \<rightharpoonup> (Haskell) -
| constant "less_eq :: uint8 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Word8.<= ((_), (_))" and
(Haskell) infix 4 "<=" and
(Scala) "Uint8.less'_eq"
| constant "less :: uint8 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Word8.< ((_), (_))" and
(Haskell) infix 4 "<" and
(Scala) "Uint8.less"
-| constant "bitNOT :: uint8 \<Rightarrow> _" \<rightharpoonup>
+| constant "NOT :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.notb" and
(Haskell) "Data'_Bits.complement" and
(Scala) "_.unary'_~.toByte"
-| constant "bitAND :: uint8 \<Rightarrow> _" \<rightharpoonup>
+| constant "(AND) :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.andb ((_),/ (_))" and
(Haskell) infixl 7 "Data_Bits..&." and
(Scala) "(_ & _).toByte"
-| constant "bitOR :: uint8 \<Rightarrow> _" \<rightharpoonup>
+| constant "(OR) :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.orb ((_),/ (_))" and
(Haskell) infixl 5 "Data_Bits..|." and
(Scala) "(_ | _).toByte"
-| constant "bitXOR :: uint8 \<Rightarrow> _" \<rightharpoonup>
+| constant "(XOR) :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.xorb ((_),/ (_))" and
(Haskell) "Data'_Bits.xor" and
(Scala) "(_ ^ _).toByte"
definition uint8_divmod :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8 \<times> uint8" where
"uint8_divmod x y =
(if y = 0 then (undefined ((div) :: uint8 \<Rightarrow> _) x (0 :: uint8), undefined ((mod) :: uint8 \<Rightarrow> _) x (0 :: uint8))
else (x div y, x mod y))"
definition uint8_div :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8"
where "uint8_div x y = fst (uint8_divmod x y)"
definition uint8_mod :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8"
where "uint8_mod x y = snd (uint8_divmod x y)"
lemma div_uint8_code [code]: "x div y = (if y = 0 then 0 else uint8_div x y)"
including undefined_transfer unfolding uint8_divmod_def uint8_div_def
by transfer (simp add: word_div_def)
lemma mod_uint8_code [code]: "x mod y = (if y = 0 then x else uint8_mod x y)"
including undefined_transfer unfolding uint8_mod_def uint8_divmod_def
by transfer (simp add: word_mod_def)
definition uint8_sdiv :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8"
where
"uint8_sdiv x y =
(if y = 0 then undefined ((div) :: uint8 \<Rightarrow> _) x (0 :: uint8)
else Abs_uint8 (Rep_uint8 x sdiv Rep_uint8 y))"
definition div0_uint8 :: "uint8 \<Rightarrow> uint8"
where [code del]: "div0_uint8 x = undefined ((div) :: uint8 \<Rightarrow> _) x (0 :: uint8)"
declare [[code abort: div0_uint8]]
definition mod0_uint8 :: "uint8 \<Rightarrow> uint8"
where [code del]: "mod0_uint8 x = undefined ((mod) :: uint8 \<Rightarrow> _) x (0 :: uint8)"
declare [[code abort: mod0_uint8]]
lemma uint8_divmod_code [code]:
"uint8_divmod x y =
(if 0x80 \<le> y then if x < y then (0, x) else (1, x - y)
else if y = 0 then (div0_uint8 x, mod0_uint8 x)
else let q = (uint8_sdiv (x >> 1) y) << 1;
r = x - q * y
in if r \<ge> y then (q + 1, r - y) else (q, r))"
including undefined_transfer unfolding uint8_divmod_def uint8_sdiv_def div0_uint8_def mod0_uint8_def
by transfer(simp add: divmod_via_sdivmod)
lemma uint8_sdiv_code [code abstract]:
"Rep_uint8 (uint8_sdiv x y) =
(if y = 0 then Rep_uint8 (undefined ((div) :: uint8 \<Rightarrow> _) x (0 :: uint8))
else Rep_uint8 x sdiv Rep_uint8 y)"
unfolding uint8_sdiv_def by(simp add: Abs_uint8_inverse)
text \<open>
Note that we only need a translation for signed division, but not for the remainder
because @{thm uint8_divmod_code} computes both with division only.
\<close>
code_printing
constant uint8_div \<rightharpoonup>
(SML) "Word8.div ((_), (_))" and
(Haskell) "Prelude.div"
| constant uint8_mod \<rightharpoonup>
(SML) "Word8.mod ((_), (_))" and
(Haskell) "Prelude.mod"
| constant uint8_divmod \<rightharpoonup>
(Haskell) "divmod"
| constant uint8_sdiv \<rightharpoonup>
(Scala) "(_ '/ _).toByte"
definition uint8_test_bit :: "uint8 \<Rightarrow> integer \<Rightarrow> bool"
where [code del]:
"uint8_test_bit x n =
(if n < 0 \<or> 7 < n then undefined (test_bit :: uint8 \<Rightarrow> _) x n
else x !! (nat_of_integer n))"
lemma test_bit_uint8_code [code]:
"test_bit x n \<longleftrightarrow> n < 8 \<and> uint8_test_bit x (integer_of_nat n)"
including undefined_transfer integer.lifting unfolding uint8_test_bit_def
by transfer(auto cong: conj_cong dest: test_bit_size simp add: word_size)
lemma uint8_test_bit_code [code]:
"uint8_test_bit w n =
(if n < 0 \<or> 7 < n then undefined (test_bit :: uint8 \<Rightarrow> _) w n else Rep_uint8 w !! nat_of_integer n)"
unfolding uint8_test_bit_def by(simp add: test_bit_uint8.rep_eq)
code_printing constant uint8_test_bit \<rightharpoonup>
(SML) "Uint8.test'_bit" and
(Haskell) "Data'_Bits.testBitBounded" and
(Scala) "Uint8.test'_bit" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 8 then raise (Fail \"argument to uint8'_test'_bit out of bounds\") else Uint8.test'_bit x i)"
definition uint8_set_bit :: "uint8 \<Rightarrow> integer \<Rightarrow> bool \<Rightarrow> uint8"
where [code del]:
"uint8_set_bit x n b =
(if n < 0 \<or> 7 < n then undefined (set_bit :: uint8 \<Rightarrow> _) x n b
else set_bit x (nat_of_integer n) b)"
lemma set_bit_uint8_code [code]:
"set_bit x n b = (if n < 8 then uint8_set_bit x (integer_of_nat n) b else x)"
including undefined_transfer integer.lifting unfolding uint8_set_bit_def
by(transfer)(auto cong: conj_cong simp add: not_less set_bit_beyond word_size)
lemma uint8_set_bit_code [code abstract]:
"Rep_uint8 (uint8_set_bit w n b) =
(if n < 0 \<or> 7 < n then Rep_uint8 (undefined (set_bit :: uint8 \<Rightarrow> _) w n b)
else set_bit (Rep_uint8 w) (nat_of_integer n) b)"
including undefined_transfer unfolding uint8_set_bit_def by transfer simp
code_printing constant uint8_set_bit \<rightharpoonup>
(SML) "Uint8.set'_bit" and
(Haskell) "Data'_Bits.setBitBounded" and
(Scala) "Uint8.set'_bit" and
(Eval) "(fn x => fn i => fn b => if i < 0 orelse i >= 8 then raise (Fail \"argument to uint8'_set'_bit out of bounds\") else Uint8.set'_bit x i b)"
lift_definition uint8_set_bits :: "(nat \<Rightarrow> bool) \<Rightarrow> uint8 \<Rightarrow> nat \<Rightarrow> uint8" is set_bits_aux .
lemma uint8_set_bits_code [code]:
"uint8_set_bits f w n =
(if n = 0 then w
else let n' = n - 1 in uint8_set_bits f ((w << 1) OR (if f n' then 1 else 0)) n')"
by(transfer fixing: n)(cases n, simp_all)
lemma set_bits_uint8 [code]:
"(BITS n. f n) = uint8_set_bits f 0 8"
by transfer(simp add: set_bits_conv_set_bits_aux)
lemma lsb_code [code]: fixes x :: uint8 shows "lsb x = x !! 0"
by transfer(simp add: word_lsb_def word_test_bit_def)
definition uint8_shiftl :: "uint8 \<Rightarrow> integer \<Rightarrow> uint8"
where [code del]:
"uint8_shiftl x n = (if n < 0 \<or> 8 \<le> n then undefined (shiftl :: uint8 \<Rightarrow> _) x n else x << (nat_of_integer n))"
lemma shiftl_uint8_code [code]: "x << n = (if n < 8 then uint8_shiftl x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint8_shiftl_def
by transfer(simp add: not_less shiftl_zero_size word_size)
lemma uint8_shiftl_code [code abstract]:
"Rep_uint8 (uint8_shiftl w n) =
(if n < 0 \<or> 8 \<le> n then Rep_uint8 (undefined (shiftl :: uint8 \<Rightarrow> _) w n)
else Rep_uint8 w << nat_of_integer n)"
including undefined_transfer unfolding uint8_shiftl_def by transfer simp
code_printing constant uint8_shiftl \<rightharpoonup>
(SML) "Uint8.shiftl" and
(Haskell) "Data'_Bits.shiftlBounded" and
(Scala) "Uint8.shiftl" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 8 then raise (Fail \"argument to uint8'_shiftl out of bounds\") else Uint8.shiftl x i)"
definition uint8_shiftr :: "uint8 \<Rightarrow> integer \<Rightarrow> uint8"
where [code del]:
"uint8_shiftr x n = (if n < 0 \<or> 8 \<le> n then undefined (shiftr :: uint8 \<Rightarrow> _) x n else x >> (nat_of_integer n))"
lemma shiftr_uint8_code [code]: "x >> n = (if n < 8 then uint8_shiftr x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint8_shiftr_def
by transfer(simp add: not_less shiftr_zero_size word_size)
lemma uint8_shiftr_code [code abstract]:
"Rep_uint8 (uint8_shiftr w n) =
(if n < 0 \<or> 8 \<le> n then Rep_uint8 (undefined (shiftr :: uint8 \<Rightarrow> _) w n)
else Rep_uint8 w >> nat_of_integer n)"
including undefined_transfer unfolding uint8_shiftr_def by transfer simp
code_printing constant uint8_shiftr \<rightharpoonup>
(SML) "Uint8.shiftr" and
(Haskell) "Data'_Bits.shiftrBounded" and
(Scala) "Uint8.shiftr" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 8 then raise (Fail \"argument to uint8'_shiftr out of bounds\") else Uint8.shiftr x i)"
definition uint8_sshiftr :: "uint8 \<Rightarrow> integer \<Rightarrow> uint8"
where [code del]:
"uint8_sshiftr x n =
(if n < 0 \<or> 8 \<le> n then undefined sshiftr_uint8 x n else sshiftr_uint8 x (nat_of_integer n))"
lemma sshiftr_beyond: fixes x :: "'a :: len word" shows
"size x \<le> n \<Longrightarrow> x >>> n = (if x !! (size x - 1) then -1 else 0)"
by(rule word_eqI)(simp add: nth_sshiftr word_size)
lemma sshiftr_uint8_code [code]:
"x >>> n =
(if n < 8 then uint8_sshiftr x (integer_of_nat n) else if x !! 7 then -1 else 0)"
including undefined_transfer integer.lifting unfolding uint8_sshiftr_def
by transfer (simp add: not_less sshiftr_beyond word_size)
lemma uint8_sshiftr_code [code abstract]:
"Rep_uint8 (uint8_sshiftr w n) =
(if n < 0 \<or> 8 \<le> n then Rep_uint8 (undefined sshiftr_uint8 w n)
else Rep_uint8 w >>> nat_of_integer n)"
including undefined_transfer unfolding uint8_sshiftr_def by transfer simp
code_printing constant uint8_sshiftr \<rightharpoonup>
(SML) "Uint8.shiftr'_signed" and
(Haskell)
"(Prelude.fromInteger (Prelude.toInteger (Data'_Bits.shiftrBounded (Prelude.fromInteger (Prelude.toInteger _) :: Uint8.Int8) _)) :: Uint8.Word8)" and
(Scala) "Uint8.shiftr'_signed" and
(Eval) "(fn x => fn i => if i < 0 orelse i >= 8 then raise (Fail \"argument to uint8'_sshiftr out of bounds\") else Uint8.shiftr'_signed x i)"
lemma uint8_msb_test_bit: "msb x \<longleftrightarrow> (x :: uint8) !! 7"
by transfer(simp add: msb_nth)
lemma msb_uint8_code [code]: "msb x \<longleftrightarrow> uint8_test_bit x 7"
by(simp add: uint8_test_bit_def uint8_msb_test_bit)
lemma uint8_of_int_code [code]: "uint8_of_int i = Uint8 (integer_of_int i)"
including integer.lifting by transfer simp
lemma int_of_uint8_code [code]:
"int_of_uint8 x = int_of_integer (integer_of_uint8 x)"
by(simp add: integer_of_uint8_def)
lemma nat_of_uint8_code [code]:
"nat_of_uint8 x = nat_of_integer (integer_of_uint8 x)"
unfolding integer_of_uint8_def including integer.lifting by transfer (simp add: unat_def)
definition integer_of_uint8_signed :: "uint8 \<Rightarrow> integer"
where
"integer_of_uint8_signed n = (if n !! 7 then undefined integer_of_uint8 n else integer_of_uint8 n)"
lemma integer_of_uint8_signed_code [code]:
"integer_of_uint8_signed n =
(if n !! 7 then undefined integer_of_uint8 n else integer_of_int (uint (Rep_uint8' n)))"
unfolding integer_of_uint8_signed_def integer_of_uint8_def
including undefined_transfer by transfer simp
lemma integer_of_uint8_code [code]:
"integer_of_uint8 n =
(if n !! 7 then integer_of_uint8_signed (n AND 0x7F) OR 0x80 else integer_of_uint8_signed n)"
unfolding integer_of_uint8_def integer_of_uint8_signed_def o_def
including undefined_transfer integer.lifting
by transfer(auto simp add: word_ao_nth uint_and_mask_or_full mask_numeral mask_Suc_0 intro!: uint_and_mask_or_full[symmetric])
code_printing
constant "integer_of_uint8" \<rightharpoonup>
(SML) "IntInf.fromLarge (Word8.toLargeInt _)" and
(Haskell) "Prelude.toInteger"
| constant "integer_of_uint8_signed" \<rightharpoonup>
(Scala) "BigInt"
section \<open>Quickcheck setup\<close>
definition uint8_of_natural :: "natural \<Rightarrow> uint8"
where "uint8_of_natural x \<equiv> Uint8 (integer_of_natural x)"
instantiation uint8 :: "{random, exhaustive, full_exhaustive}" begin
definition "random_uint8 \<equiv> qc_random_cnv uint8_of_natural"
definition "exhaustive_uint8 \<equiv> qc_exhaustive_cnv uint8_of_natural"
definition "full_exhaustive_uint8 \<equiv> qc_full_exhaustive_cnv uint8_of_natural"
instance ..
end
instantiation uint8 :: narrowing begin
interpretation quickcheck_narrowing_samples
"\<lambda>i. let x = Uint8 i in (x, 0xFF - x)" "0"
"Typerep.Typerep (STR ''Uint8.uint8'') []" .
definition "narrowing_uint8 d = qc_narrowing_drawn_from (narrowing_samples d) d"
declare [[code drop: "partial_term_of :: uint8 itself \<Rightarrow> _"]]
lemmas partial_term_of_uint8 [code] = partial_term_of_code
instance ..
end
no_notation sshiftr_uint8 (infixl ">>>" 55)
end
diff --git a/thys/Nested_Multisets_Ordinals/Multiset_More.thy b/thys/Nested_Multisets_Ordinals/Multiset_More.thy
--- a/thys/Nested_Multisets_Ordinals/Multiset_More.thy
+++ b/thys/Nested_Multisets_Ordinals/Multiset_More.thy
@@ -1,1027 +1,1032 @@
(* Title: More about Multisets
Author: Mathias Fleury <mathias.fleury at mpi-inf.mpg.de>, 2015
Author: Jasmin Blanchette <blanchette at in.tum.de>, 2014, 2015
Author: Anders Schlichtkrull <andschl at dtu.dk>, 2017
Author: Dmitriy Traytel <traytel at in.tum.de>, 2014
Maintainer: Mathias Fleury <mathias.fleury at mpi-inf.mpg.de>
*)
section \<open>More about Multisets\<close>
theory Multiset_More
imports
"HOL-Library.Multiset_Order"
"HOL-Library.Sublist"
begin
text \<open>
Isabelle's theory of finite multisets is not as developed as other areas, such as lists and sets.
The present theory introduces some missing concepts and lemmas. Some of it is expected to move to
Isabelle's library.
\<close>
subsection \<open>Basic Setup\<close>
declare
diff_single_trivial [simp]
in_image_mset [iff]
image_mset.compositionality [simp]
(*To have the same rules as the set counterpart*)
mset_subset_eqD[dest, intro?] (*@{thm subsetD}*)
Multiset.in_multiset_in_set[simp]
inter_add_left1[simp]
inter_add_left2[simp]
inter_add_right1[simp]
inter_add_right2[simp]
sum_mset_sum_list[simp]
subsection \<open>Lemmas about Intersection, Union and Pointwise Inclusion\<close>
lemma subset_mset_imp_subset_add_mset: "A \<subseteq># B \<Longrightarrow> A \<subseteq># add_mset x B"
by (metis add_mset_diff_bothsides diff_subset_eq_self multiset_inter_def subset_mset.inf.absorb2)
lemma subset_add_mset_notin_subset_mset: \<open>A \<subseteq># add_mset b B \<Longrightarrow> b \<notin># A \<Longrightarrow> A \<subseteq># B\<close>
by (simp add: subset_mset.le_iff_sup)
lemma subset_msetE: "\<lbrakk>A \<subset># B; \<lbrakk>A \<subseteq># B; \<not> B \<subseteq># A\<rbrakk> \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
by (simp add: subset_mset.less_le_not_le)
lemma Diff_triv_mset: "M \<inter># N = {#} \<Longrightarrow> M - N = M"
by (metis diff_intersect_left_idem diff_zero)
lemma diff_intersect_sym_diff: "(A - B) \<inter># (B - A) = {#}"
unfolding multiset_inter_def
proof -
have "A - (B - (B - A)) = A - B"
by (metis diff_intersect_right_idem multiset_inter_def)
then show "A - B - (A - B - (B - A)) = {#}"
by (metis diff_add diff_cancel diff_subset_eq_self subset_mset.diff_add)
qed
declare subset_msetE [elim!]
lemma subseq_mset_subseteq_mset: "subseq xs ys \<Longrightarrow> mset xs \<subseteq># mset ys"
proof (induct xs arbitrary: ys)
case (Cons x xs)
note Outer_Cons = this
then show ?case
proof (induct ys)
case (Cons y ys)
have "subseq xs ys"
by (metis Cons.prems(2) subseq_Cons' subseq_Cons2_iff)
then show ?case
using Cons by (metis mset.simps(2) mset_subset_eq_add_mset_cancel subseq_Cons2_iff
subset_mset_imp_subset_add_mset)
qed simp
qed simp
subsection \<open>Lemmas about Filter and Image\<close>
lemma count_image_mset_ge_count: "count (image_mset f A) (f b) \<ge> count A b"
by (induction A) auto
lemma count_image_mset_inj:
assumes \<open>inj f\<close>
shows \<open>count (image_mset f M) (f x) = count M x\<close>
by (induct M) (use assms in \<open>auto simp: inj_on_def\<close>)
lemma count_image_mset_le_count_inj_on:
"inj_on f (set_mset M) \<Longrightarrow> count (image_mset f M) y \<le> count M (inv_into (set_mset M) f y)"
proof (induct M)
case (add x M)
note ih = this(1) and inj_xM = this(2)
have inj_M: "inj_on f (set_mset M)"
using inj_xM by simp
show ?case
proof (cases "x \<in># M")
case x_in_M: True
show ?thesis
proof (cases "y = f x")
case y_eq_fx: True
show ?thesis
using x_in_M ih[OF inj_M] unfolding y_eq_fx by (simp add: inj_M insert_absorb)
next
case y_ne_fx: False
show ?thesis
using x_in_M ih[OF inj_M] y_ne_fx insert_absorb by fastforce
qed
next
case x_ni_M: False
show ?thesis
proof (cases "y = f x")
case y_eq_fx: True
have "f x \<notin># image_mset f M"
using x_ni_M inj_xM by force
thus ?thesis
unfolding y_eq_fx
by (metis (no_types) inj_xM count_add_mset count_greater_eq_Suc_zero_iff count_inI
image_mset_add_mset inv_into_f_f union_single_eq_member)
next
case y_ne_fx: False
show ?thesis
proof (rule ccontr)
assume neg_conj: "\<not> count (image_mset f (add_mset x M)) y
\<le> count (add_mset x M) (inv_into (set_mset (add_mset x M)) f y)"
have cnt_y: "count (add_mset (f x) (image_mset f M)) y = count (image_mset f M) y"
using y_ne_fx by simp
have "inv_into (set_mset M) f y \<in># add_mset x M \<Longrightarrow>
inv_into (set_mset (add_mset x M)) f (f (inv_into (set_mset M) f y)) =
inv_into (set_mset M) f y"
by (meson inj_xM inv_into_f_f)
hence "0 < count (image_mset f (add_mset x M)) y \<Longrightarrow>
count M (inv_into (set_mset M) f y) = 0 \<or> x = inv_into (set_mset M) f y"
using neg_conj cnt_y ih[OF inj_M]
by (metis (no_types) count_add_mset count_greater_zero_iff count_inI f_inv_into_f
image_mset_add_mset set_image_mset)
thus False
using neg_conj cnt_y x_ni_M ih[OF inj_M]
by (metis (no_types) count_greater_zero_iff count_inI eq_iff image_mset_add_mset
less_imp_le)
qed
qed
qed
qed simp
lemma mset_filter_compl: "mset (filter p xs) + mset (filter (Not \<circ> p) xs) = mset xs"
by (induction xs) (auto simp: ac_simps)
text \<open>Near duplicate of @{thm [source] filter_eq_replicate_mset}: @{thm filter_eq_replicate_mset}.\<close>
lemma filter_mset_eq: "filter_mset ((=) L) A = replicate_mset (count A L) L"
by (auto simp: multiset_eq_iff)
lemma filter_mset_cong[fundef_cong]:
assumes "M = M'" "\<And>a. a \<in># M \<Longrightarrow> P a = Q a"
shows "filter_mset P M = filter_mset Q M"
proof -
have "M - filter_mset Q M = filter_mset (\<lambda>a. \<not>Q a) M"
by (metis multiset_partition add_diff_cancel_left')
then show ?thesis
by (auto simp: filter_mset_eq_conv assms)
qed
lemma image_mset_filter_swap: "image_mset f {# x \<in># M. P (f x)#} = {# x \<in># image_mset f M. P x#}"
by (induction M) auto
lemma image_mset_cong2:
"(\<And>x. x \<in># M \<Longrightarrow> f x = g x) \<Longrightarrow> M = N \<Longrightarrow> image_mset f M = image_mset g N"
by (hypsubst, rule image_mset_cong)
lemma filter_mset_empty_conv: \<open>(filter_mset P M = {#}) = (\<forall>L\<in>#M. \<not> P L)\<close>
by (induction M) auto
lemma multiset_filter_mono2: \<open>filter_mset P A \<subseteq># filter_mset Q A \<longleftrightarrow> (\<forall>a\<in>#A. P a \<longrightarrow> Q a)\<close>
by (induction A) (auto intro: subset_mset.order.trans)
lemma image_filter_cong:
assumes \<open>\<And>C. C \<in># M \<Longrightarrow> P C \<Longrightarrow> f C = g C\<close>
shows \<open>{#f C. C \<in># {#C \<in># M. P C#}#} = {#g C | C\<in># M. P C#}\<close>
using assms by (induction M) auto
lemma image_mset_filter_swap2: \<open>{#C \<in># {#P x. x \<in># D#}. Q C #} = {#P x. x \<in># {#C| C \<in># D. Q (P C)#}#}\<close>
by (simp add: image_mset_filter_swap)
declare image_mset_cong2 [cong]
lemma filter_mset_empty_if_finite_and_filter_set_empty:
assumes
"{x \<in> X. P x} = {}" and
"finite X"
shows "{#x \<in># mset_set X. P x#} = {#}"
proof -
have empty_empty: "\<And>Y. set_mset Y = {} \<Longrightarrow> Y = {#}"
by auto
from assms have "set_mset {#x \<in># mset_set X. P x#} = {}"
by auto
then show ?thesis
by (rule empty_empty)
qed
subsection \<open>Lemmas about Sum\<close>
lemma sum_image_mset_sum_map[simp]: "sum_mset (image_mset f (mset xs)) = sum_list (map f xs)"
by (metis mset_map sum_mset_sum_list)
lemma sum_image_mset_mono:
fixes f :: "'a \<Rightarrow> 'b::canonically_ordered_monoid_add"
assumes sub: "A \<subseteq># B"
shows "(\<Sum>m \<in># A. f m) \<le> (\<Sum>m \<in># B. f m)"
by (metis image_mset_union le_iff_add sub subset_mset.add_diff_inverse sum_mset.union)
lemma sum_image_mset_mono_mem:
"n \<in># M \<Longrightarrow> f n \<le> (\<Sum>m \<in># M. f m)" for f :: "'a \<Rightarrow> 'b::canonically_ordered_monoid_add"
using le_iff_add multi_member_split by fastforce
lemma count_sum_mset_if_1_0: \<open>count M a = (\<Sum>x\<in>#M. if x = a then 1 else 0)\<close>
by (induction M) auto
lemma sum_mset_dvd:
fixes k :: "'a::comm_semiring_1_cancel"
assumes "\<forall>m \<in># M. k dvd f m"
shows "k dvd (\<Sum>m \<in># M. f m)"
using assms by (induct M) auto
lemma sum_mset_distrib_div_if_dvd:
fixes k :: "'a::unique_euclidean_semiring"
assumes "\<forall>m \<in># M. k dvd f m"
shows "(\<Sum>m \<in># M. f m) div k = (\<Sum>m \<in># M. f m div k)"
using assms by (induct M) (auto simp: div_plus_div_distrib_dvd_left)
subsection \<open>Lemmas about Remove\<close>
lemma set_mset_minus_replicate_mset[simp]:
"n \<ge> count A a \<Longrightarrow> set_mset (A - replicate_mset n a) = set_mset A - {a}"
"n < count A a \<Longrightarrow> set_mset (A - replicate_mset n a) = set_mset A"
unfolding set_mset_def by (auto split: if_split simp: not_in_iff)
abbreviation removeAll_mset :: "'a \<Rightarrow> 'a multiset \<Rightarrow> 'a multiset" where
"removeAll_mset C M \<equiv> M - replicate_mset (count M C) C"
lemma mset_removeAll[simp, code]: "removeAll_mset C (mset L) = mset (removeAll C L)"
by (induction L) (auto simp: ac_simps multiset_eq_iff split: if_split_asm)
lemma removeAll_mset_filter_mset: "removeAll_mset C M = filter_mset ((\<noteq>) C) M"
by (induction M) (auto simp: ac_simps multiset_eq_iff)
abbreviation remove1_mset :: "'a \<Rightarrow> 'a multiset \<Rightarrow> 'a multiset" where
"remove1_mset C M \<equiv> M - {#C#}"
lemma removeAll_subseteq_remove1_mset: "removeAll_mset x M \<subseteq># remove1_mset x M"
by (auto simp: subseteq_mset_def)
lemma in_remove1_mset_neq:
assumes ab: "a \<noteq> b"
shows "a \<in># remove1_mset b C \<longleftrightarrow> a \<in># C"
by (metis assms diff_single_trivial in_diffD insert_DiffM insert_noteq_member)
lemma size_mset_removeAll_mset_le_iff: "size (removeAll_mset x M) < size M \<longleftrightarrow> x \<in># M"
by (auto intro: count_inI mset_subset_size simp: subset_mset_def multiset_eq_iff)
lemma size_remove1_mset_If: \<open>size (remove1_mset x M) = size M - (if x \<in># M then 1 else 0)\<close>
by (auto simp: size_Diff_subset_Int)
lemma size_mset_remove1_mset_le_iff: "size (remove1_mset x M) < size M \<longleftrightarrow> x \<in># M"
using less_irrefl
by (fastforce intro!: mset_subset_size elim: in_countE simp: subset_mset_def multiset_eq_iff)
lemma remove_1_mset_id_iff_notin: "remove1_mset a M = M \<longleftrightarrow> a \<notin># M"
by (meson diff_single_trivial multi_drop_mem_not_eq)
lemma id_remove_1_mset_iff_notin: "M = remove1_mset a M \<longleftrightarrow> a \<notin># M"
using remove_1_mset_id_iff_notin by metis
lemma remove1_mset_eqE:
"remove1_mset L x1 = M \<Longrightarrow>
(L \<in># x1 \<Longrightarrow> x1 = M + {#L#} \<Longrightarrow> P) \<Longrightarrow>
(L \<notin># x1 \<Longrightarrow> x1 = M \<Longrightarrow> P) \<Longrightarrow>
P"
by (cases "L \<in># x1") auto
lemma image_filter_ne_mset[simp]:
"image_mset f {#x \<in># M. f x \<noteq> y#} = removeAll_mset y (image_mset f M)"
by (induction M) simp_all
lemma image_mset_remove1_mset_if:
"image_mset f (remove1_mset a M) =
(if a \<in># M then remove1_mset (f a) (image_mset f M) else image_mset f M)"
by (auto simp: image_mset_Diff)
lemma filter_mset_neq: "{#x \<in># M. x \<noteq> y#} = removeAll_mset y M"
by (metis add_diff_cancel_left' filter_eq_replicate_mset multiset_partition)
lemma filter_mset_neq_cond: "{#x \<in># M. P x \<and> x \<noteq> y#} = removeAll_mset y {# x\<in>#M. P x#}"
by (metis filter_filter_mset filter_mset_neq)
lemma remove1_mset_add_mset_If:
"remove1_mset L (add_mset L' C) = (if L = L' then C else remove1_mset L C + {#L'#})"
by (auto simp: multiset_eq_iff)
lemma minus_remove1_mset_if:
"A - remove1_mset b B = (if b \<in># B \<and> b \<in># A \<and> count A b \<ge> count B b then {#b#} + (A - B) else A - B)"
by (auto simp: multiset_eq_iff count_greater_zero_iff[symmetric]
simp del: count_greater_zero_iff)
lemma add_mset_eq_add_mset_ne:
"a \<noteq> b \<Longrightarrow> add_mset a A = add_mset b B \<longleftrightarrow> a \<in># B \<and> b \<in># A \<and> A = add_mset b (B - {#a#})"
by (metis (no_types, lifting) diff_single_eq_union diff_union_swap multi_self_add_other_not_self
remove_1_mset_id_iff_notin union_single_eq_diff)
lemma add_mset_eq_add_mset: \<open>add_mset a M = add_mset b M' \<longleftrightarrow>
(a = b \<and> M = M') \<or> (a \<noteq> b \<and> b \<in># M \<and> add_mset a (M - {#b#}) = M')\<close>
by (metis add_mset_eq_add_mset_ne add_mset_remove_trivial union_single_eq_member)
(* TODO move to Multiset: could replace add_mset_remove_trivial_eq? *)
lemma add_mset_remove_trivial_iff: \<open>N = add_mset a (N - {#b#}) \<longleftrightarrow> a \<in># N \<and> a = b\<close>
by (metis add_left_cancel add_mset_remove_trivial insert_DiffM2 single_eq_single
size_mset_remove1_mset_le_iff union_single_eq_member)
lemma trivial_add_mset_remove_iff: \<open>add_mset a (N - {#b#}) = N \<longleftrightarrow> a \<in># N \<and> a = b\<close>
by (subst eq_commute) (fact add_mset_remove_trivial_iff)
lemma remove1_single_empty_iff[simp]: \<open>remove1_mset L {#L'#} = {#} \<longleftrightarrow> L = L'\<close>
using add_mset_remove_trivial_iff by fastforce
lemma add_mset_less_imp_less_remove1_mset:
assumes xM_lt_N: "add_mset x M < N"
shows "M < remove1_mset x N"
proof -
have "M < N"
using assms le_multiset_right_total mset_le_trans by blast
then show ?thesis
by (metis add_less_cancel_right add_mset_add_single diff_single_trivial insert_DiffM2 xM_lt_N)
qed
subsection \<open>Lemmas about Replicate\<close>
lemma replicate_mset_minus_replicate_mset_same[simp]:
"replicate_mset m x - replicate_mset n x = replicate_mset (m - n) x"
by (induct m arbitrary: n, simp, metis left_diff_repeat_mset_distrib' repeat_mset_replicate_mset)
lemma replicate_mset_subset_iff_lt[simp]: "replicate_mset m x \<subset># replicate_mset n x \<longleftrightarrow> m < n"
by (induct n m rule: diff_induct) (auto intro: subset_mset.gr_zeroI)
lemma replicate_mset_subseteq_iff_le[simp]: "replicate_mset m x \<subseteq># replicate_mset n x \<longleftrightarrow> m \<le> n"
by (induct n m rule: diff_induct) auto
lemma replicate_mset_lt_iff_lt[simp]: "replicate_mset m x < replicate_mset n x \<longleftrightarrow> m < n"
by (induct n m rule: diff_induct) (auto intro: subset_mset.gr_zeroI gr_zeroI)
lemma replicate_mset_le_iff_le[simp]: "replicate_mset m x \<le> replicate_mset n x \<longleftrightarrow> m \<le> n"
by (induct n m rule: diff_induct) auto
lemma replicate_mset_eq_iff[simp]:
"replicate_mset m x = replicate_mset n y \<longleftrightarrow> m = n \<and> (m \<noteq> 0 \<longrightarrow> x = y)"
by (cases m; cases n; simp)
(metis in_replicate_mset insert_noteq_member size_replicate_mset union_single_eq_diff)
lemma replicate_mset_plus: "replicate_mset (a + b) C = replicate_mset a C + replicate_mset b C"
by (induct a) (auto simp: ac_simps)
lemma mset_replicate_replicate_mset: "mset (replicate n L) = replicate_mset n L"
by (induction n) auto
lemma set_mset_single_iff_replicate_mset: "set_mset U = {a} \<longleftrightarrow> (\<exists>n > 0. U = replicate_mset n a)"
by (rule, metis count_greater_zero_iff count_replicate_mset insertI1 multi_count_eq singletonD
zero_less_iff_neq_zero, force)
lemma ex_replicate_mset_if_all_elems_eq:
assumes "\<forall>x \<in># M. x = y"
shows "\<exists>n. M = replicate_mset n y"
using assms by (metis count_replicate_mset mem_Collect_eq multiset_eqI neq0_conv set_mset_def)
subsection \<open>Multiset and Set Conversions\<close>
lemma count_mset_set_if: "count (mset_set A) a = (if a \<in> A \<and> finite A then 1 else 0)"
by auto
lemma mset_set_set_mset_empty_mempty[iff]: "mset_set (set_mset D) = {#} \<longleftrightarrow> D = {#}"
by (simp add: mset_set_empty_iff)
lemma count_mset_set_le_one: "count (mset_set A) x \<le> 1"
by (simp add: count_mset_set_if)
lemma mset_set_set_mset_subseteq[simp]: "mset_set (set_mset A) \<subseteq># A"
by (simp add: mset_set_set_mset_msubset)
lemma mset_sorted_list_of_set[simp]: "mset (sorted_list_of_set A) = mset_set A"
by (metis mset_sorted_list_of_multiset sorted_list_of_mset_set)
lemma sorted_sorted_list_of_multiset[simp]:
"sorted (sorted_list_of_multiset (M :: 'a::linorder multiset))"
by (metis mset_sorted_list_of_multiset sorted_list_of_multiset_mset sorted_sort)
lemma mset_take_subseteq: "mset (take n xs) \<subseteq># mset xs"
apply (induct xs arbitrary: n)
apply simp
by (case_tac n) simp_all
lemma sorted_list_of_multiset_eq_Nil[simp]: "sorted_list_of_multiset M = [] \<longleftrightarrow> M = {#}"
by (metis mset_sorted_list_of_multiset sorted_list_of_multiset_empty)
subsection \<open>Duplicate Removal\<close>
(* TODO: use abbreviation? *)
definition remdups_mset :: "'v multiset \<Rightarrow> 'v multiset" where
"remdups_mset S = mset_set (set_mset S)"
lemma set_mset_remdups_mset[simp]: \<open>set_mset (remdups_mset A) = set_mset A\<close>
unfolding remdups_mset_def by auto
lemma count_remdups_mset_eq_1: "a \<in># remdups_mset A \<longleftrightarrow> count (remdups_mset A) a = 1"
unfolding remdups_mset_def by (auto simp: count_eq_zero_iff intro: count_inI)
lemma remdups_mset_empty[simp]: "remdups_mset {#} = {#}"
unfolding remdups_mset_def by auto
lemma remdups_mset_singleton[simp]: "remdups_mset {#a#} = {#a#}"
unfolding remdups_mset_def by auto
lemma remdups_mset_eq_empty[iff]: "remdups_mset D = {#} \<longleftrightarrow> D = {#}"
unfolding remdups_mset_def by blast
lemma remdups_mset_singleton_sum[simp]:
"remdups_mset (add_mset a A) = (if a \<in># A then remdups_mset A else add_mset a (remdups_mset A))"
unfolding remdups_mset_def by (simp_all add: insert_absorb)
lemma mset_remdups_remdups_mset[simp]: "mset (remdups D) = remdups_mset (mset D)"
by (induction D) (auto simp add: ac_simps)
declare mset_remdups_remdups_mset[symmetric, code]
definition distinct_mset :: "'a multiset \<Rightarrow> bool" where
"distinct_mset S \<longleftrightarrow> (\<forall>a. a \<in># S \<longrightarrow> count S a = 1)"
lemma distinct_mset_count_less_1: "distinct_mset S \<longleftrightarrow> (\<forall>a. count S a \<le> 1)"
using eq_iff nat_le_linear unfolding distinct_mset_def by fastforce
lemma distinct_mset_empty[simp]: "distinct_mset {#}"
unfolding distinct_mset_def by auto
lemma distinct_mset_singleton: "distinct_mset {#a#}"
unfolding distinct_mset_def by auto
lemma distinct_mset_union:
assumes dist: "distinct_mset (A + B)"
shows "distinct_mset A"
unfolding distinct_mset_count_less_1
proof (rule allI)
fix a
have \<open>count A a \<le> count (A + B) a\<close> by auto
moreover have \<open>count (A + B) a \<le> 1\<close>
using dist unfolding distinct_mset_count_less_1 by auto
ultimately show \<open>count A a \<le> 1\<close>
by simp
qed
lemma distinct_mset_minus[simp]: "distinct_mset A \<Longrightarrow> distinct_mset (A - B)"
by (metis diff_subset_eq_self mset_subset_eq_exists_conv distinct_mset_union)
lemma count_remdups_mset_If: \<open>count (remdups_mset A) a = (if a \<in># A then 1 else 0)\<close>
unfolding remdups_mset_def by auto
lemma distinct_mset_rempdups_union_mset:
assumes "distinct_mset A" and "distinct_mset B"
shows "A \<union># B = remdups_mset (A + B)"
using assms nat_le_linear unfolding remdups_mset_def
by (force simp add: multiset_eq_iff max_def count_mset_set_if distinct_mset_def not_in_iff)
lemma distinct_mset_add_mset[simp]: "distinct_mset (add_mset a L) \<longleftrightarrow> a \<notin># L \<and> distinct_mset L"
unfolding distinct_mset_def
apply (rule iffI)
apply (auto split: if_split_asm; fail)[]
by (auto simp: not_in_iff; fail)
lemma distinct_mset_size_eq_card: "distinct_mset C \<Longrightarrow> size C = card (set_mset C)"
by (induction C) auto
lemma distinct_mset_add:
"distinct_mset (L + L') \<longleftrightarrow> distinct_mset L \<and> distinct_mset L' \<and> L \<inter># L' = {#}"
by (induction L arbitrary: L') auto
lemma distinct_mset_set_mset_ident[simp]: "distinct_mset M \<Longrightarrow> mset_set (set_mset M) = M"
by (induction M) auto
lemma distinct_finite_set_mset_subseteq_iff[iff]:
assumes "distinct_mset M" "finite N"
shows "set_mset M \<subseteq> N \<longleftrightarrow> M \<subseteq># mset_set N"
by (metis assms distinct_mset_set_mset_ident finite_set_mset msubset_mset_set_iff)
lemma distinct_mem_diff_mset:
assumes dist: "distinct_mset M" and mem: "x \<in> set_mset (M - N)"
shows "x \<notin> set_mset N"
proof -
have "count M x = 1"
using dist mem by (meson distinct_mset_def in_diffD)
then show ?thesis
using mem by (metis count_greater_eq_one_iff in_diff_count not_less)
qed
lemma distinct_set_mset_eq:
assumes "distinct_mset M" "distinct_mset N" "set_mset M = set_mset N"
shows "M = N"
using assms distinct_mset_set_mset_ident by fastforce
lemma distinct_mset_union_mset[simp]:
\<open>distinct_mset (D \<union># C) \<longleftrightarrow> distinct_mset D \<and> distinct_mset C\<close>
unfolding distinct_mset_count_less_1 by force
lemma distinct_mset_inter_mset:
"distinct_mset C \<Longrightarrow> distinct_mset (C \<inter># D)"
"distinct_mset D \<Longrightarrow> distinct_mset (C \<inter># D)"
by (simp_all add: multiset_inter_def,
metis distinct_mset_minus multiset_inter_commute multiset_inter_def)
lemma distinct_mset_remove1_All: "distinct_mset C \<Longrightarrow> remove1_mset L C = removeAll_mset L C"
by (auto simp: multiset_eq_iff distinct_mset_count_less_1)
lemma distinct_mset_size_2: "distinct_mset {#a, b#} \<longleftrightarrow> a \<noteq> b"
by auto
lemma distinct_mset_filter: "distinct_mset M \<Longrightarrow> distinct_mset {# L \<in># M. P L#}"
by (simp add: distinct_mset_def)
lemma distinct_mset_mset_distinct[simp]: \<open>distinct_mset (mset xs) = distinct xs\<close>
by (induction xs) auto
lemma distinct_image_mset_inj:
\<open>inj_on f (set_mset M) \<Longrightarrow> distinct_mset (image_mset f M) \<longleftrightarrow> distinct_mset M\<close>
by (induction M) (auto simp: inj_on_def)
subsection \<open>Repeat Operation\<close>
lemma repeat_mset_compower: "repeat_mset n A = (((+) A) ^^ n) {#}"
by (induction n) auto
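(* Illustrative example (informal): by the lemma above, repeat_mset 3 A unfolds to
   A + (A + (A + {#})), i.e. three copies of A added together. *)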
lemma repeat_mset_prod: "repeat_mset (m * n) A = (((+) (repeat_mset n A)) ^^ m) {#}"
by (induction m) (auto simp: repeat_mset_distrib)
subsection \<open>Cartesian Product\<close>
text \<open>Definition of the cartesian product over multisets. The construction mimics the cartesian
product on sets and uses the same theorem names (adding only the suffix \<open>_mset\<close> to Sigma
and Times). See file @{file \<open>~~/src/HOL/Product_Type.thy\<close>}\<close>
definition Sigma_mset :: "'a multiset \<Rightarrow> ('a \<Rightarrow> 'b multiset) \<Rightarrow> ('a \<times> 'b) multiset" where
"Sigma_mset A B \<equiv> \<Union># {#{#(a, b). b \<in># B a#}. a \<in># A #}"
abbreviation Times_mset :: "'a multiset \<Rightarrow> 'b multiset \<Rightarrow> ('a \<times> 'b) multiset" (infixr "\<times>#" 80) where
"Times_mset A B \<equiv> Sigma_mset A (\<lambda>_. B)"
hide_const (open) Times_mset
text \<open>Contrary to the set version @{term \<open>SIGMA x:A. B\<close>}, we use the non-ASCII symbol \<open>\<in>#\<close>.\<close>
syntax
"_Sigma_mset" :: "[pttrn, 'a multiset, 'b multiset] => ('a * 'b) multiset"
("(3SIGMAMSET _\<in>#_./ _)" [0, 0, 10] 10)
translations
"SIGMAMSET x\<in>#A. B" == "CONST Sigma_mset A (\<lambda>x. B)"
text \<open>Link between the multiset and the set cartesian product:\<close>
lemma Times_mset_Times: "set_mset (A \<times># B) = set_mset A \<times> set_mset B"
unfolding Sigma_mset_def by auto
lemma Sigma_msetI [intro!]: "\<lbrakk>a \<in># A; b \<in># B a\<rbrakk> \<Longrightarrow> (a, b) \<in># Sigma_mset A B"
by (unfold Sigma_mset_def) auto
lemma Sigma_msetE[elim!]: "\<lbrakk>c \<in># Sigma_mset A B; \<And>x y. \<lbrakk>x \<in># A; y \<in># B x; c = (x, y)\<rbrakk> \<Longrightarrow> P\<rbrakk> \<Longrightarrow> P"
by (unfold Sigma_mset_def) auto
text \<open>Elimination of @{term "(a, b) \<in># A \<times># B"} -- introduces no eigenvariables.\<close>
lemma Sigma_msetD1: "(a, b) \<in># Sigma_mset A B \<Longrightarrow> a \<in># A"
by blast
lemma Sigma_msetD2: "(a, b) \<in># Sigma_mset A B \<Longrightarrow> b \<in># B a"
by blast
lemma Sigma_msetE2: "\<lbrakk>(a, b) \<in># Sigma_mset A B; \<lbrakk>a \<in># A; b \<in># B a\<rbrakk> \<Longrightarrow> P\<rbrakk> \<Longrightarrow> P"
by blast
lemma Sigma_mset_cong:
"\<lbrakk>A = B; \<And>x. x \<in># B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> (SIGMAMSET x \<in># A. C x) = (SIGMAMSET x \<in># B. D x)"
by (metis (mono_tags, lifting) Sigma_mset_def image_mset_cong)
lemma count_sum_mset: "count (\<Union># M) b = (\<Sum>P \<in># M. count P b)"
by (induction M) auto
lemma Sigma_mset_plus_distrib1[simp]: "Sigma_mset (A + B) C = Sigma_mset A C + Sigma_mset B C"
unfolding Sigma_mset_def by auto
lemma Sigma_mset_plus_distrib2[simp]:
"Sigma_mset A (\<lambda>i. B i + C i) = Sigma_mset A B + Sigma_mset A C"
unfolding Sigma_mset_def by (induction A) (auto simp: multiset_eq_iff)
lemma Times_mset_single_left: "{#a#} \<times># B = image_mset (Pair a) B"
unfolding Sigma_mset_def by auto
lemma Times_mset_single_right: "A \<times># {#b#} = image_mset (\<lambda>a. Pair a b) A"
unfolding Sigma_mset_def by (induction A) auto
lemma Times_mset_single_single[simp]: "{#a#} \<times># {#b#} = {#(a, b)#}"
unfolding Sigma_mset_def by simp
lemma count_image_mset_Pair:
"count (image_mset (Pair a) B) (x, b) = (if x = a then count B b else 0)"
by (induction B) auto
lemma count_Sigma_mset: "count (Sigma_mset A B) (a, b) = count A a * count (B a) b"
by (induction A) (auto simp: Sigma_mset_def count_image_mset_Pair)
lemma Sigma_mset_empty1[simp]: "Sigma_mset {#} B = {#}"
unfolding Sigma_mset_def by auto
lemma Sigma_mset_empty2[simp]: "A \<times># {#} = {#}"
by (auto simp: multiset_eq_iff count_Sigma_mset)
lemma Sigma_mset_mono:
assumes "A \<subseteq># C" and "\<And>x. x \<in># A \<Longrightarrow> B x \<subseteq># D x"
shows "Sigma_mset A B \<subseteq># Sigma_mset C D"
proof -
have "count A a * count (B a) b \<le> count C a * count (D a) b" for a b
using assms unfolding subseteq_mset_def by (metis count_inI eq_iff mult_eq_0_iff mult_le_mono)
then show ?thesis
by (auto simp: subseteq_mset_def count_Sigma_mset)
qed
lemma mem_Sigma_mset_iff[iff]: "((a,b) \<in># Sigma_mset A B) = (a \<in># A \<and> b \<in># B a)"
by blast
lemma mem_Times_mset_iff: "x \<in># A \<times># B \<longleftrightarrow> fst x \<in># A \<and> snd x \<in># B"
by (induct x) simp
lemma Sigma_mset_empty_iff: "(SIGMAMSET i\<in>#I. X i) = {#} \<longleftrightarrow> (\<forall>i\<in>#I. X i = {#})"
by (auto simp: Sigma_mset_def)
lemma Times_mset_subset_mset_cancel1: "x \<in># A \<Longrightarrow> (A \<times># B \<subseteq># A \<times># C) = (B \<subseteq># C)"
by (auto simp: subseteq_mset_def count_Sigma_mset)
lemma Times_mset_subset_mset_cancel2: "x \<in># C \<Longrightarrow> (A \<times># C \<subseteq># B \<times># C) = (A \<subseteq># B)"
by (auto simp: subseteq_mset_def count_Sigma_mset)
lemma Times_mset_eq_cancel2: "x \<in># C \<Longrightarrow> (A \<times># C = B \<times># C) = (A = B)"
by (auto simp: multiset_eq_iff count_Sigma_mset dest!: in_countE)
lemma split_paired_Ball_mset_Sigma_mset[simp]:
"(\<forall>z\<in>#Sigma_mset A B. P z) \<longleftrightarrow> (\<forall>x\<in>#A. \<forall>y\<in>#B x. P (x, y))"
by blast
lemma split_paired_Bex_mset_Sigma_mset[simp]:
"(\<exists>z\<in>#Sigma_mset A B. P z) \<longleftrightarrow> (\<exists>x\<in>#A. \<exists>y\<in>#B x. P (x, y))"
by blast
lemma sum_mset_if_eq_constant:
"(\<Sum>x\<in>#M. if a = x then (f x) else 0) = (((+) (f a)) ^^ (count M a)) 0"
by (induction M) (auto simp: ac_simps)
lemma iterate_op_plus: "(((+) k) ^^ m) 0 = k * m"
by (induction m) auto
lemma untion_image_mset_Pair_distribute:
"\<Union>#{#image_mset (Pair x) (C x). x \<in># J - I#} =
\<Union># {#image_mset (Pair x) (C x). x \<in># J#} - \<Union>#{#image_mset (Pair x) (C x). x \<in># I#}"
by (auto simp: multiset_eq_iff count_sum_mset count_image_mset_Pair sum_mset_if_eq_constant
iterate_op_plus diff_mult_distrib2)
lemma Sigma_mset_Un_distrib1: "Sigma_mset (I \<union># J) C = Sigma_mset I C \<union># Sigma_mset J C"
by (auto simp: Sigma_mset_def sup_subset_mset_def untion_image_mset_Pair_distribute)
lemma Sigma_mset_Un_distrib2: "(SIGMAMSET i\<in>#I. A i \<union># B i) = Sigma_mset I A \<union># Sigma_mset I B"
by (auto simp: multiset_eq_iff count_sum_mset count_image_mset_Pair sum_mset_if_eq_constant
Sigma_mset_def diff_mult_distrib2 iterate_op_plus max_def not_in_iff)
lemma Sigma_mset_Int_distrib1: "Sigma_mset (I \<inter># J) C = Sigma_mset I C \<inter># Sigma_mset J C"
by (auto simp: multiset_eq_iff count_sum_mset count_image_mset_Pair sum_mset_if_eq_constant
Sigma_mset_def iterate_op_plus min_def not_in_iff)
lemma Sigma_mset_Int_distrib2: "(SIGMAMSET i\<in>#I. A i \<inter># B i) = Sigma_mset I A \<inter># Sigma_mset I B"
by (auto simp: multiset_eq_iff count_sum_mset count_image_mset_Pair sum_mset_if_eq_constant
Sigma_mset_def iterate_op_plus min_def not_in_iff)
lemma Sigma_mset_Diff_distrib1: "Sigma_mset (I - J) C = Sigma_mset I C - Sigma_mset J C"
by (auto simp: multiset_eq_iff count_sum_mset count_image_mset_Pair sum_mset_if_eq_constant
Sigma_mset_def iterate_op_plus min_def not_in_iff diff_mult_distrib2)
lemma Sigma_mset_Diff_distrib2: "(SIGMAMSET i\<in>#I. A i - B i) = Sigma_mset I A - Sigma_mset I B"
by (auto simp: multiset_eq_iff count_sum_mset count_image_mset_Pair sum_mset_if_eq_constant
Sigma_mset_def iterate_op_plus min_def not_in_iff diff_mult_distrib)
lemma Sigma_mset_Union: "Sigma_mset (\<Union>#X) B = (\<Union># (image_mset (\<lambda>A. Sigma_mset A B) X))"
by (auto simp: multiset_eq_iff count_sum_mset count_image_mset_Pair sum_mset_if_eq_constant
Sigma_mset_def iterate_op_plus min_def not_in_iff sum_mset_distrib_left)
lemma Times_mset_Un_distrib1: "(A \<union># B) \<times># C = A \<times># C \<union># B \<times># C"
by (fact Sigma_mset_Un_distrib1)
lemma Times_mset_Int_distrib1: "(A \<inter># B) \<times># C = A \<times># C \<inter># B \<times># C"
by (fact Sigma_mset_Int_distrib1)
lemma Times_mset_Diff_distrib1: "(A - B) \<times># C = A \<times># C - B \<times># C"
by (fact Sigma_mset_Diff_distrib1)
lemma Times_mset_empty[simp]: "A \<times># B = {#} \<longleftrightarrow> A = {#} \<or> B = {#}"
by (auto simp: Sigma_mset_empty_iff)
lemma Times_insert_left: "A \<times># add_mset x B = A \<times># B + image_mset (\<lambda>a. Pair a x) A"
unfolding add_mset_add_single[of x B] Sigma_mset_plus_distrib2
by (simp add: Times_mset_single_right)
lemma Times_insert_right: "add_mset a A \<times># B = A \<times># B + image_mset (Pair a) B"
unfolding add_mset_add_single[of a A] Sigma_mset_plus_distrib1
by (simp add: Times_mset_single_left)
lemma fst_image_mset_times_mset [simp]:
"image_mset fst (A \<times># B) = (if B = {#} then {#} else repeat_mset (size B) A)"
by (induct B) (auto simp: Times_mset_single_right ac_simps Times_insert_left)
lemma snd_image_mset_times_mset [simp]:
"image_mset snd (A \<times># B) = (if A = {#} then {#} else repeat_mset (size A) B)"
by (induct B) (auto simp add: Times_mset_single_right Times_insert_left image_mset_const_eq)
lemma product_swap_mset: "image_mset prod.swap (A \<times># B) = B \<times># A"
by (induction A) (auto simp add: Times_mset_single_left Times_mset_single_right
Times_insert_right Times_insert_left)
context
begin
qualified definition product_mset :: "'a multiset \<Rightarrow> 'b multiset \<Rightarrow> ('a \<times> 'b) multiset" where
[code_abbrev]: "product_mset A B = A \<times># B"
lemma member_product_mset: "x \<in># product_mset A B \<longleftrightarrow> x \<in># A \<times># B"
by (simp add: Multiset_More.product_mset_def)
end
lemma count_Sigma_mset_abs_def: "count (Sigma_mset A B) = (\<lambda>(a, b) \<Rightarrow> count A a * count (B a) b)"
by (auto simp: fun_eq_iff count_Sigma_mset)
lemma Times_mset_image_mset1: "image_mset f A \<times># B = image_mset (\<lambda>(a, b). (f a, b)) (A \<times># B)"
by (induct B) (auto simp: Times_insert_left)
lemma Times_mset_image_mset2: "A \<times># image_mset f B = image_mset (\<lambda>(a, b). (a, f b)) (A \<times># B)"
by (induct A) (auto simp: Times_insert_right)
lemma sum_le_singleton: "A \<subseteq> {x} \<Longrightarrow> sum f A = (if x \<in> A then f x else 0)"
by (auto simp: subset_singleton_iff elim: finite_subset)
lemma Times_mset_assoc: "(A \<times># B) \<times># C = image_mset (\<lambda>(a, b, c). ((a, b), c)) (A \<times># B \<times># C)"
by (auto simp: multiset_eq_iff count_Sigma_mset count_image_mset vimage_def Times_mset_Times
Int_commute count_eq_zero_iff intro!: trans[OF _ sym[OF sum_le_singleton[of _ "(_, _, _)"]]]
cong: sum.cong if_cong)
subsection \<open>Transfer Rules\<close>
lemma plus_multiset_transfer[transfer_rule]:
"(rel_fun (rel_mset R) (rel_fun (rel_mset R) (rel_mset R))) (+) (+)"
by (unfold rel_fun_def rel_mset_def)
(force dest: list_all2_appendI intro: exI[of _ "_ @ _"] conjI[rotated])
lemma minus_multiset_transfer[transfer_rule]:
assumes [transfer_rule]: "bi_unique R"
shows "(rel_fun (rel_mset R) (rel_fun (rel_mset R) (rel_mset R))) (-) (-)"
proof (unfold rel_fun_def rel_mset_def, safe)
fix xs ys xs' ys'
assume [transfer_rule]: "list_all2 R xs ys" "list_all2 R xs' ys'"
have "list_all2 R (fold remove1 xs' xs) (fold remove1 ys' ys)"
by transfer_prover
moreover have "mset (fold remove1 xs' xs) = mset xs - mset xs'"
by (induct xs' arbitrary: xs) auto
moreover have "mset (fold remove1 ys' ys) = mset ys - mset ys'"
by (induct ys' arbitrary: ys) auto
ultimately show "\<exists>xs'' ys''.
mset xs'' = mset xs - mset xs' \<and> mset ys'' = mset ys - mset ys' \<and> list_all2 R xs'' ys''"
by blast
qed
declare rel_mset_Zero[transfer_rule]
lemma count_transfer[transfer_rule]:
assumes "bi_unique R"
shows "(rel_fun (rel_mset R) (rel_fun R (=))) count count"
unfolding rel_fun_def rel_mset_def proof safe
fix x y xs ys
assume "list_all2 R xs ys" "R x y"
then show "count (mset xs) x = count (mset ys) y"
proof (induct xs ys rule: list.rel_induct)
case (Cons x' xs y' ys)
then show ?case
using assms unfolding bi_unique_alt_def2 by (auto simp: rel_fun_def)
qed simp
qed
lemma subseteq_multiset_transfer[transfer_rule]:
assumes [transfer_rule]: "bi_unique R" "right_total R"
shows "(rel_fun (rel_mset R) (rel_fun (rel_mset R) (=)))
(\<lambda>M N. filter_mset (Domainp R) M \<subseteq># filter_mset (Domainp R) N) (\<subseteq>#)"
proof -
have count_filter_mset_less:
"(\<forall>a. count (filter_mset (Domainp R) M) a \<le> count (filter_mset (Domainp R) N) a) \<longleftrightarrow>
(\<forall>a \<in> {x. Domainp R x}. count M a \<le> count N a)" for M and N by auto
show ?thesis unfolding subseteq_mset_def count_filter_mset_less
by transfer_prover
qed
lemma sum_mset_transfer[transfer_rule]:
"R 0 0 \<Longrightarrow> rel_fun R (rel_fun R R) (+) (+) \<Longrightarrow> (rel_fun (rel_mset R) R) sum_mset sum_mset"
using sum_list_transfer[of R] unfolding rel_fun_def rel_mset_def by auto
lemma Sigma_mset_transfer[transfer_rule]:
"(rel_fun (rel_mset R) (rel_fun (rel_fun R (rel_mset S)) (rel_mset (rel_prod R S))))
Sigma_mset Sigma_mset"
by (unfold Sigma_mset_def) transfer_prover
subsection \<open>Even More about Multisets\<close>
subsubsection \<open>Multisets and Functions\<close>
lemma range_image_mset:
assumes "set_mset Ds \<subseteq> range f"
shows "Ds \<in> range (image_mset f)"
proof -
have "\<forall>D. D \<in># Ds \<longrightarrow> (\<exists>C. f C = D)"
using assms by blast
then obtain f_i where
f_p: "\<forall>D. D \<in># Ds \<longrightarrow> (f (f_i D) = D)"
by metis
define Cs where
"Cs \<equiv> image_mset f_i Ds"
from f_p Cs_def have "image_mset f Cs = Ds"
by auto
then show ?thesis
by blast
qed
subsubsection \<open>Multisets and Lists\<close>
lemma length_sorted_list_of_multiset[simp]: "length (sorted_list_of_multiset A) = size A"
by (metis mset_sorted_list_of_multiset size_mset)
definition list_of_mset :: "'a multiset \<Rightarrow> 'a list" where
"list_of_mset m = (SOME l. m = mset l)"
lemma list_of_mset_exi: "\<exists>l. m = mset l"
using ex_mset by metis
-lemma mset_list_of_mset [simp]: "mset (list_of_mset m) = m"
+lemma mset_list_of_mset[simp]: "mset (list_of_mset m) = m"
by (metis (mono_tags, lifting) ex_mset list_of_mset_def someI_ex)
lemma length_list_of_mset[simp]: "length (list_of_mset A) = size A"
unfolding list_of_mset_def by (metis (mono_tags) ex_mset size_mset someI_ex)
lemma range_mset_map:
assumes "set_mset Ds \<subseteq> range f"
shows "Ds \<in> range (\<lambda>Cl. mset (map f Cl))"
proof -
have "Ds \<in> range (image_mset f)"
by (simp add: assms range_image_mset)
then obtain Cs where Cs_p: "image_mset f Cs = Ds"
by auto
define Cl where "Cl = list_of_mset Cs"
then have "mset Cl = Cs"
by auto
then have "image_mset f (mset Cl) = Ds"
using Cs_p by auto
then have "mset (map f Cl) = Ds"
by auto
then show ?thesis
by auto
qed
lemma list_of_mset_empty[iff]: "list_of_mset m = [] \<longleftrightarrow> m = {#}"
by (metis (mono_tags, lifting) ex_mset list_of_mset_def mset_zero_iff_right someI_ex)
lemma in_mset_conv_nth: "(x \<in># mset xs) = (\<exists>i<length xs. xs ! i = x)"
by (auto simp: in_set_conv_nth)
lemma in_mset_sum_list:
assumes "L \<in># LL"
assumes "LL \<in> set Ci"
shows "L \<in># sum_list Ci"
using assms by (induction Ci) auto
lemma in_mset_sum_list2:
assumes "L \<in># sum_list Ci"
obtains LL where
"LL \<in> set Ci"
"L \<in># LL"
using assms by (induction Ci) auto
+(* TODO: Make [simp]. *)
+lemma in_mset_sum_list_iff: "a \<in># sum_list \<A> \<longleftrightarrow> (\<exists>A \<in> set \<A>. a \<in># A)"
+ by (metis in_mset_sum_list in_mset_sum_list2)
+
lemma subseteq_list_Union_mset:
assumes "length Ci = n"
assumes "length CAi = n"
assumes "\<forall>i<n. Ci ! i \<subseteq># CAi ! i "
shows "\<Union># (mset Ci) \<subseteq># \<Union># (mset CAi)"
using assms proof (induction n arbitrary: Ci CAi)
case 0
then show ?case by auto
next
case (Suc n)
from Suc have "\<forall>i<n. tl Ci ! i \<subseteq># tl CAi ! i"
by (simp add: nth_tl)
hence "\<Union>#(mset (tl Ci)) \<subseteq># \<Union>#(mset (tl CAi))" using Suc by auto
moreover
have "hd Ci \<subseteq># hd CAi" using Suc
by (metis hd_conv_nth length_greater_0_conv zero_less_Suc)
ultimately
show "\<Union>#(mset Ci) \<subseteq># \<Union>#(mset CAi)"
using Suc by (cases Ci; cases CAi) (auto intro: subset_mset.add_mono)
qed
subsubsection \<open>More on Multisets and Functions\<close>
lemma subseteq_mset_size_eql: "X \<subseteq># Y \<Longrightarrow> size Y = size X \<Longrightarrow> X = Y"
using mset_subset_size subset_mset_def by fastforce
lemma image_mset_of_subset_list:
assumes "image_mset \<eta> C' = mset lC"
shows "\<exists>qC'. map \<eta> qC' = lC \<and> mset qC' = C'"
using assms apply (induction lC arbitrary: C')
subgoal by simp
subgoal by (fastforce dest!: msed_map_invR intro: exI[of _ \<open>_ # _\<close>])
done
lemma image_mset_of_subset:
assumes "A \<subseteq># image_mset \<eta> C'"
shows "\<exists>A'. image_mset \<eta> A' = A \<and> A' \<subseteq># C'"
proof -
define C where "C = image_mset \<eta> C'"
define lA where "lA = list_of_mset A"
define lD where "lD = list_of_mset (C-A)"
define lC where "lC = lA @ lD"
have "mset lC = C"
using C_def assms unfolding lD_def lC_def lA_def by auto
then have "\<exists>qC'. map \<eta> qC' = lC \<and> mset qC' = C'"
using assms image_mset_of_subset_list unfolding C_def by metis
then obtain qC' where qC'_p: "map \<eta> qC' = lC \<and> mset qC' = C'"
by auto
let ?lA' = "take (length lA) qC'"
have m: "map \<eta> ?lA' = lA"
using qC'_p lC_def
by (metis append_eq_conv_conj take_map)
let ?A' = "mset ?lA'"
have "image_mset \<eta> ?A' = A"
using m using lA_def
by (metis (full_types) ex_mset list_of_mset_def mset_map someI_ex)
moreover have "?A' \<subseteq># C'"
using qC'_p unfolding lA_def
using mset_take_subseteq by blast
ultimately show ?thesis by blast
qed
lemma all_the_same: "\<forall>x \<in># X. x = y \<Longrightarrow> card (set_mset X) \<le> Suc 0"
by (metis card.empty card.insert card_mono finite.intros(1) finite_insert le_SucI singletonI subsetI)
lemma Melem_subseteq_Union_mset[simp]:
assumes "x \<in># T"
shows "x \<subseteq># \<Union>#T"
using assms sum_mset.remove by force
lemma Melem_subset_eq_sum_list[simp]:
assumes "x \<in># mset T"
shows "x \<subseteq># sum_list T"
using assms by (metis mset_subset_eq_add_left sum_mset.remove sum_mset_sum_list)
lemma less_subset_eq_Union_mset[simp]:
assumes "i < length CAi"
shows "CAi ! i \<subseteq># \<Union>#(mset CAi)"
proof -
from assms have "CAi ! i \<in># mset CAi"
by auto
then show ?thesis
by auto
qed
lemma less_subset_eq_sum_list[simp]:
assumes "i < length CAi"
shows "CAi ! i \<subseteq># sum_list CAi"
proof -
from assms have "CAi ! i \<in># mset CAi"
by auto
then show ?thesis
by auto
qed
subsubsection \<open>More on Multiset Order\<close>
lemma less_multiset_doubletons:
assumes
- "y < t \<or> y < s"
- "x < t \<or> x < s"
+ "y < t \<or> y < s"
+ "x < t \<or> x < s"
shows
- "{# y, x#} < {# t, s#}"
+ "{#y, x#} < {#t, s#}"
unfolding less_multiset\<^sub>D\<^sub>M
proof (intro exI)
- let ?X = "{# t, s#}"
+ let ?X = "{#t, s#}"
let ?Y = "{#y, x#}"
- show "?X \<noteq> {#} \<and> ?X \<subseteq># {#t, s#} \<and> {#y, x#} = {#t, s#} - ?X + ?Y \<and> (\<forall>k. k \<in># ?Y \<longrightarrow> (\<exists>a. a \<in># ?X \<and> k < a))"
- using add_eq_conv_diff assms(1) assms(2) by auto
+ show "?X \<noteq> {#} \<and> ?X \<subseteq># {#t, s#} \<and> {#y, x#} = {#t, s#} - ?X + ?Y
+ \<and> (\<forall>k. k \<in># ?Y \<longrightarrow> (\<exists>a. a \<in># ?X \<and> k < a))"
+ using add_eq_conv_diff assms by auto
qed
end
diff --git a/thys/Neumann_Morgenstern_Utility/Neumann_Morgenstern_Utility_Theorem.thy b/thys/Neumann_Morgenstern_Utility/Neumann_Morgenstern_Utility_Theorem.thy
--- a/thys/Neumann_Morgenstern_Utility/Neumann_Morgenstern_Utility_Theorem.thy
+++ b/thys/Neumann_Morgenstern_Utility/Neumann_Morgenstern_Utility_Theorem.thy
@@ -1,1508 +1,1508 @@
(* License: LGPL *)
(* Author: Julian Parsert *)
theory Neumann_Morgenstern_Utility_Theorem
imports
"HOL-Probability.Probability"
"First_Welfare_Theorem.Utility_Functions"
Lotteries
begin
section \<open> Properties of Preferences \<close>
subsection \<open> Independent Preferences\<close>
text \<open> Independence is sometimes called substitution \<close>
text \<open> Notice how r is "added" to the right of mix-pmf and the element to the left q/p changes \<close>
definition independent_vnm
where
"independent_vnm C P =
(\<forall>p \<in> C. \<forall>q \<in> C. \<forall>r \<in> C. \<forall>(\<alpha>::real) \<in> {0<..1}. p \<succeq>[P] q \<longleftrightarrow> mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r)"
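(* Informal reading (not part of the formal development): if p is weakly preferred to q, then for
   every lottery r and every mixing weight \<alpha> \<in> (0, 1], the compound lottery mix_pmf \<alpha> p r is still
   weakly preferred to mix_pmf \<alpha> q r, and conversely; mixing in a common component r cannot
   change the ranking. *)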
lemma independent_vnmI1:
assumes "(\<forall>p \<in> C. \<forall>q \<in> C. \<forall>r \<in> C. \<forall>\<alpha> \<in> {0<..1}. p \<succeq>[P] q \<longleftrightarrow> mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r)"
shows "independent_vnm C P"
using assms independent_vnm_def by blast
lemma independent_vnmI2:
assumes "\<And>p q r \<alpha>. p \<in> C \<Longrightarrow> q \<in> C \<Longrightarrow> r \<in> C \<Longrightarrow> \<alpha> \<in> {0<..1} \<Longrightarrow> p \<succeq>[P] q \<longleftrightarrow> mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r"
shows "independent_vnm C P"
by (rule independent_vnmI1, standard, standard, standard,
standard, simp add: assms) (meson assms greaterThanAtMost_iff)
lemma independent_vnm_alt_def:
shows "independent_vnm C P \<longleftrightarrow> (\<forall>p \<in> C. \<forall>q \<in> C. \<forall>r \<in> C. \<forall>\<alpha> \<in> {0<..<1}.
p \<succeq>[P] q \<longleftrightarrow> mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r)" (is "?L \<longleftrightarrow> ?R")
proof (rule iffI)
assume a: "?R"
have "independent_vnm C P"
by(rule independent_vnmI2, simp add: a) (metis a greaterThanLessThan_iff
linorder_neqE_linordered_idom not_le pmf_mix_1)
then show "?L" by auto
qed (simp add: independent_vnm_def)
lemma independece_dest_alt:
assumes "independent_vnm C P"
shows "(\<forall>p \<in> C. \<forall>q \<in> C. \<forall>r \<in> C. \<forall>(\<alpha>::real) \<in> {0<..1}. p \<succeq>[P] q \<longleftrightarrow> mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r)"
proof (standard, standard, standard, standard)
fix p q r \<alpha>
assume as1: "p \<in> C"
assume as2: "q \<in> C"
assume as3: "r \<in> C"
assume as4: "(\<alpha>::real) \<in> {0<..1}"
then show "p \<succeq>[P] q = mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r"
using as1 as2 as3 assms(1) independent_vnm_def by blast
qed
lemma independent_vnmD1:
assumes "independent_vnm C P"
shows "(\<forall>p \<in> C. \<forall>q \<in> C. \<forall>r \<in> C. \<forall>\<alpha> \<in> {0<..1}. p \<succeq>[P] q \<longleftrightarrow> mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r)"
using assms independent_vnm_def by blast
lemma independent_vnmD2:
fixes p q r \<alpha>
assumes "\<alpha> \<in> {0<..1}"
and "p \<in> C"
and "q \<in> C"
and "r \<in> C"
assumes "independent_vnm C P"
assumes "p \<succeq>[P] q"
shows "mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r"
using assms independece_dest_alt by blast
lemma independent_vnmD3:
fixes p q r \<alpha>
assumes "\<alpha> \<in> {0<..1}"
and "p \<in> C"
and "q \<in> C"
and "r \<in> C"
assumes "independent_vnm C P"
assumes "mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r"
shows "p \<succeq>[P] q"
using assms independece_dest_alt by blast
lemma independent_vnmD4:
assumes "independent_vnm C P"
assumes "refl_on C P"
assumes "p \<in> C"
and "q \<in> C"
and "r \<in> C"
and "\<alpha> \<in> {0..1}"
and "p \<succeq>[P] q"
shows "mix_pmf \<alpha> p r \<succeq>[P] mix_pmf \<alpha> q r"
using assms
by (cases "\<alpha> = 0 \<or> \<alpha> \<in> {0<..1}",metis assms(1,2,3,4)
independece_dest_alt pmf_mix_0 refl_onD, auto)
lemma approx_indep_ge:
assumes "x \<approx>[\<R>] y"
assumes "\<alpha> \<in> {0..(1::real)}"
assumes rpr: "rational_preference (lotteries_on outcomes) \<R>"
and ind: "independent_vnm (lotteries_on outcomes) \<R>"
shows "\<forall>r \<in> lotteries_on outcomes. (mix_pmf \<alpha> y r) \<succeq>[\<R>] (mix_pmf \<alpha> x r)"
proof
fix r
assume a: "r \<in> lotteries_on outcomes" (is "r \<in> ?lo")
have clct: "y \<succeq>[\<R>] x \<and> independent_vnm ?lo \<R> \<and> y \<in> ?lo \<and> x \<in> ?lo \<and> r \<in> ?lo"
by (meson a assms(1) assms(2) atLeastAtMost_iff greaterThanAtMost_iff
ind preference_def rational_preference_def rpr)
then have in_lo: "mix_pmf \<alpha> y r \<in> ?lo" "(mix_pmf \<alpha> x r) \<in> ?lo"
by (metis assms(2) atLeastAtMost_iff greaterThanLessThan_iff
less_eq_real_def mix_pmf_in_lotteries pmf_mix_0 pmf_mix_1 a)+
have "0 = \<alpha> \<or> 0 < \<alpha>"
using assms by auto
then show "mix_pmf \<alpha> y r \<succeq>[\<R>] mix_pmf \<alpha> x r"
using in_lo(2) rational_preference.compl rpr
by (auto,blast) (meson assms(2) atLeastAtMost_iff clct
greaterThanAtMost_iff independent_vnmD2)
qed
lemma approx_imp_approx_ind:
assumes "x \<approx>[\<R>] y"
assumes "\<alpha> \<in> {0..(1::real)}"
assumes rpr: "rational_preference (lotteries_on outcomes) \<R>"
and ind: "independent_vnm (lotteries_on outcomes) \<R>"
shows "\<forall>r \<in> lotteries_on outcomes. (mix_pmf \<alpha> y r) \<approx>[\<R>] (mix_pmf \<alpha> x r)"
using approx_indep_ge assms(1) assms(2) ind rpr by blast
lemma geq_imp_mix_geq_right:
assumes "x \<succeq>[\<R>] y"
assumes rpr: "rational_preference (lotteries_on outcomes) \<R>"
assumes ind: "independent_vnm (lotteries_on outcomes) \<R>"
assumes "\<alpha> \<in> {0..(1::real)}"
shows "(mix_pmf \<alpha> x y) \<succeq>[\<R>] y"
proof -
have xy_p: "x \<in> (lotteries_on outcomes)" "y \<in> (lotteries_on outcomes)"
by (meson assms(1) preference.not_outside rational_preference_def rpr)
(meson assms(1) preference_def rational_preference_def rpr)
have "(mix_pmf \<alpha> x y) \<in> (lotteries_on outcomes)" (is "?mpf \<in> ?lot")
using mix_pmf_in_lotteries [of x outcomes y \<alpha>] xy_p assms(2)
by (meson approx_indep_ge assms(4) ind preference.not_outside
rational_preference.compl rational_preference_def)
have all: "\<forall>r \<in> ?lot. (mix_pmf \<alpha> x r) \<succeq>[\<R>] (mix_pmf \<alpha> y r)"
by (metis assms assms(2) atLeastAtMost_iff greaterThanAtMost_iff independece_dest_alt
less_eq_real_def pmf_mix_0 rational_preference.compl rpr ind xy_p)
thus ?thesis
by (metis all assms(4) set_pmf_mix_eq xy_p(2))
qed
lemma geq_imp_mix_geq_left:
assumes "x \<succeq>[\<R>] y"
assumes rpr: "rational_preference (lotteries_on outcomes) \<R>"
assumes ind: "independent_vnm (lotteries_on outcomes) \<R>"
assumes "\<alpha> \<in> {0..(1::real)}"
shows "(mix_pmf \<alpha> y x) \<succeq>[\<R>] y"
proof -
define \<beta> where
b: "\<beta> = 1 - \<alpha>"
have "\<beta> \<in> {0..1}"
using assms(4) b by auto
then have "mix_pmf \<beta> x y \<succeq>[\<R>] y"
using geq_imp_mix_geq_right[OF assms] assms(1) geq_imp_mix_geq_right ind rpr by blast
moreover have "mix_pmf \<beta> x y = mix_pmf \<alpha> y x"
by (metis assms(4) b pmf_inverse_switch_eqals)
ultimately show ?thesis
by simp
qed
lemma sg_imp_mix_sg:
assumes "x \<succ>[\<R>] y"
assumes rpr: "rational_preference (lotteries_on outcomes) \<R>"
assumes ind: "independent_vnm (lotteries_on outcomes) \<R>"
assumes "\<alpha> \<in> {0<..(1::real)}"
shows "(mix_pmf \<alpha> x y) \<succ>[\<R>] y"
proof -
have xy_p: "x \<in> (lotteries_on outcomes)" "y \<in> (lotteries_on outcomes)"
by (meson assms(1) preference.not_outside rational_preference_def rpr)
(meson assms(1) preference_def rational_preference_def rpr)
have "(mix_pmf \<alpha> x y) \<in> (lotteries_on outcomes)" (is "?mpf \<in> ?lot")
using mix_pmf_in_lotteries [of x outcomes y \<alpha>] xy_p assms(2)
using assms(4) by fastforce
have all: "\<forall>r \<in> ?lot. (mix_pmf \<alpha> x r) \<succeq>[\<R>] (mix_pmf \<alpha> y r)"
by (metis assms(1,3,4) independece_dest_alt ind xy_p)
have "(mix_pmf \<alpha> x y) \<succeq>[\<R>] y"
by (metis all assms(4) atLeastAtMost_iff greaterThanAtMost_iff
less_eq_real_def set_pmf_mix_eq xy_p(2))
have all2: "\<forall>r \<in> ?lot. (mix_pmf \<alpha> x r) \<succ>[\<R>] (mix_pmf \<alpha> y r)"
using assms(1) assms(4) ind independece_dest_alt xy_p(1) xy_p(2) by blast
then show ?thesis
by (metis assms(4) atLeastAtMost_iff greaterThanAtMost_iff
less_eq_real_def set_pmf_mix_eq xy_p(2))
qed
subsection \<open> Continuity \<close>
text \<open> Continuity is sometimes called the Archimedean axiom \<close>
definition continuous_vnm
where
"continuous_vnm C P = (\<forall>p \<in> C. \<forall>q \<in> C. \<forall>r \<in> C. p \<succeq>[P] q \<and> q \<succeq>[P] r \<longrightarrow>
(\<exists>\<alpha> \<in> {0..1}. (mix_pmf \<alpha> p r) \<approx>[P] q))"
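(* Informal reading (not part of the formal development): whenever p \<succeq> q \<succeq> r, some mixture
   mix_pmf \<alpha> p r of the best and the worst of the three, with \<alpha> \<in> [0, 1], is indifferent to the
   middle lottery q; intuitively, no lottery is infinitely better or worse than another. *)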
lemma continuous_vnmD:
assumes "continuous_vnm C P"
shows "(\<forall>p \<in> C. \<forall>q \<in> C. \<forall>r \<in> C. p \<succeq>[P] q \<and> q \<succeq>[P] r \<longrightarrow>
(\<exists>\<alpha> \<in> {0..1}. (mix_pmf \<alpha> p r) \<approx>[P] q))"
using continuous_vnm_def assms by blast
lemma continuous_vnmI:
assumes "\<And>p q r. p \<in> C \<Longrightarrow> q \<in> C \<Longrightarrow> r \<in> C \<Longrightarrow> p \<succeq>[P] q \<and> q \<succeq>[P] r \<Longrightarrow>
\<exists>\<alpha> \<in> {0..1}. (mix_pmf \<alpha> p r) \<approx>[P] q"
shows "continuous_vnm C P"
by (simp add: assms continuous_vnm_def)
lemma mix_in_lot:
assumes "x \<in> lotteries_on outcomes"
and "y \<in> lotteries_on outcomes"
and "\<alpha> \<in> {0..1}"
shows "(mix_pmf \<alpha> x y) \<in> lotteries_on outcomes"
using assms(1) assms(2) assms(3) less_eq_real_def mix_pmf_in_lotteries by fastforce
lemma non_unique_continuous_unfolding:
assumes cnt: "continuous_vnm (lotteries_on outcomes) \<R>"
assumes "rational_preference (lotteries_on outcomes) \<R>"
assumes "p \<succeq>[\<R>] q"
and "q \<succeq>[\<R>] r"
and "p \<succ>[\<R>] r"
shows "\<exists>\<alpha> \<in> {0..1}. q \<approx>[\<R>] mix_pmf \<alpha> p r"
using assms(1) assms(2) cnt continuous_vnmD assms
proof -
have "\<forall>p q. p\<in> (lotteries_on outcomes) \<and> q \<in> (lotteries_on outcomes) \<longleftrightarrow> p \<succeq>[\<R>] q \<or> q \<succeq>[\<R>] p"
using assms rational_preference.compl[of "lotteries_on outcomes" \<R>]
by (metis (no_types, hide_lams) preference_def rational_preference_def)
then show ?thesis
using continuous_vnmD[OF assms(1)] by (metis assms(3) assms(4))
qed
section \<open> System U start, as per vNM\<close>
text \<open> These are the first two assumptions that we use to derive the first results.
We assume rationality and independence. In this system U the von Neumann-Morgenstern
Utility Theorem is proven. \<close>
context
fixes outcomes :: "'a set"
fixes \<R>
assumes rpr: "rational_preference (lotteries_on outcomes) \<R>"
assumes ind: "independent_vnm (lotteries_on outcomes) \<R>"
begin
abbreviation "\<P> \<equiv> lotteries_on outcomes"
lemma relation_in_carrier:
"x \<succeq>[\<R>] y \<Longrightarrow> x \<in> \<P> \<and> y \<in> \<P>"
by (meson preference_def rational_preference_def rpr)
lemma mix_pmf_preferred_independence:
assumes "r \<in> \<P>"
and "\<alpha> \<in> {0..1}"
assumes "p \<succeq>[\<R>] q"
shows "mix_pmf \<alpha> p r \<succeq>[\<R>] mix_pmf \<alpha> q r"
using ind by (metis relation_in_carrier antisym_conv1 assms atLeastAtMost_iff
greaterThanAtMost_iff independece_dest_alt pmf_mix_0
rational_preference.no_better_thansubset_rel rpr subsetI)
lemma mix_pmf_strict_preferred_independence:
assumes "r \<in> \<P>"
and "\<alpha> \<in> {0<..1}"
assumes "p \<succ>[\<R>] q"
shows "mix_pmf \<alpha> p r \<succ>[\<R>] mix_pmf \<alpha> q r"
by (meson assms(1) assms(2) assms(3) ind independent_vnmD2
independent_vnmD3 relation_in_carrier)
lemma mix_pmf_preferred_independence_rev:
assumes "p \<in> \<P>"
and "q \<in> \<P>"
and "r \<in> \<P>"
and "\<alpha> \<in> {0<..1}"
assumes "mix_pmf \<alpha> p r \<succeq>[\<R>] mix_pmf \<alpha> q r"
shows "p \<succeq>[\<R>] q"
proof -
have "mix_pmf \<alpha> p r \<in> \<P>"
using assms mix_in_lot relation_in_carrier by blast
moreover have "mix_pmf \<alpha> q r \<in> \<P>"
using assms mix_in_lot assms(2) relation_in_carrier by blast
ultimately show ?thesis
using ind independent_vnmD3[of \<alpha> p \<P> q r \<R>] assms by blast
qed
lemma x_sg_y_sg_mpmf_right:
assumes "x \<succ>[\<R>] y"
assumes "b \<in> {0<..(1::real)}"
shows "x \<succ>[\<R>] mix_pmf b y x"
proof -
consider "b = 1" | "b \<noteq> 1"
by blast
then show ?thesis
proof (cases)
case 2
have sg: "(mix_pmf b x y) \<succ>[\<R>] y"
using assms(1) assms(2) assms ind rpr sg_imp_mix_sg "2" by fastforce
have "mix_pmf b x y \<in> \<P>"
by (meson sg preference_def rational_preference_def rpr)
have "mix_pmf b x x \<in> \<P>"
using relation_in_carrier assms(2) mix_in_lot assms by fastforce
have "b \<in> {0<..<1}"
using "2" assms(2) by auto
have "mix_pmf b x x \<succ>[\<R>] mix_pmf b y x"
using mix_pmf_preferred_independence[of x b] assms
by (meson \<open>b \<in> {0<..<1}\<close> greaterThanAtMost_iff greaterThanLessThan_iff ind
independece_dest_alt less_eq_real_def preference_def
rational_preference.axioms(1) relation_in_carrier rpr)
then show ?thesis
using mix_pmf_preferred_independence
by (metis assms(2) atLeastAtMost_iff greaterThanAtMost_iff less_eq_real_def set_pmf_mix_eq)
qed (simp add: assms(1))
qed
lemma neumann_3B_b:
assumes "u \<succ>[\<R>] v"
assumes "\<alpha> \<in> {0<..<1}"
shows "u \<succ>[\<R>] mix_pmf \<alpha> u v"
proof -
have *: "preorder_on \<P> \<R> \<and> rational_preference_axioms \<P> \<R>"
by (metis (no_types) preference_def rational_preference_def rpr)
have "1 - \<alpha> \<in> {0<..1}"
using assms(2) by auto
then show ?thesis
using * assms by (metis atLeastAtMost_iff greaterThanLessThan_iff
less_eq_real_def pmf_inverse_switch_eqals x_sg_y_sg_mpmf_right)
qed
lemma neumann_3B_b_non_strict:
assumes "u \<succeq>[\<R>] v"
assumes "\<alpha> \<in> {0..1}"
shows "u \<succeq>[\<R>] mix_pmf \<alpha> u v"
proof -
have f2: "mix_pmf \<alpha> (u::'a pmf) v = mix_pmf (1 - \<alpha>) v u"
using pmf_inverse_switch_eqals assms(2) by auto
have "1 - \<alpha> \<in> {0..1}"
using assms(2) by force
then show ?thesis
using f2 relation_in_carrier
by (metis (no_types) assms(1) mix_pmf_preferred_independence set_pmf_mix_eq)
qed
lemma greater_mix_pmf_greater_step_1_aux:
assumes "v \<succ>[\<R>] u"
assumes "\<alpha> \<in> {0<..<(1::real)}"
and "\<beta> \<in> {0<..<(1::real)}"
assumes "\<beta> > \<alpha>"
shows "(mix_pmf \<beta> v u) \<succ>[\<R>] (mix_pmf \<alpha> v u)"
proof -
define t where
t: "t = mix_pmf \<beta> v u"
obtain \<gamma> where
g: "\<alpha> = \<beta> * \<gamma>"
by (metis assms(2) assms(4) greaterThanLessThan_iff
mult.commute nonzero_eq_divide_eq not_less_iff_gr_or_eq)
have g1: "\<gamma> > 0 \<and> \<gamma> < 1"
by (metis (full_types) assms(2) assms(4) g greaterThanLessThan_iff
less_trans mult.right_neutral mult_less_cancel_left_pos not_le
sgn_le_0_iff sgn_pos zero_le_one zero_le_sgn_iff zero_less_mult_iff)
have t_in: "mix_pmf \<beta> v u \<in> \<P>"
by (meson assms(1) assms(3) mix_pmf_in_lotteries preference_def rational_preference_def rpr)
have "v \<succ>[\<R>] mix_pmf (1 - \<beta>) v u"
using x_sg_y_sg_mpmf_right[of u v "1-\<beta>"] assms
by (metis atLeastAtMost_iff greaterThanAtMost_iff greaterThanLessThan_iff
less_eq_real_def pmf_inverse_switch_eqals x_sg_y_sg_mpmf_right)
have "t \<succ>[\<R>] u"
using assms(1) assms(3) ind rpr sg_imp_mix_sg t by fastforce
hence t_s: "t \<succ>[\<R>] (mix_pmf \<gamma> t u)"
proof -
have "(mix_pmf \<gamma> t u) \<in> \<P>"
by (metis assms(1) assms(3) atLeastAtMost_iff g1 mix_in_lot mix_pmf_in_lotteries
not_less order.asym preference_def rational_preference_def rpr t)
have "t \<succ>[\<R>] mix_pmf \<gamma> (mix_pmf \<beta> v u) u"
using neumann_3B_b[of t u \<gamma>] assms t g1
by (meson greaterThanAtMost_iff greaterThanLessThan_iff
ind less_eq_real_def rpr sg_imp_mix_sg)
thus ?thesis
using t by blast
qed
from product_mix_pmf_prob_distrib[of _ \<beta> v u] assms
have "mix_pmf \<beta> v u \<succ>[\<R>] mix_pmf \<alpha> v u"
by (metis t_s atLeastAtMost_iff g g1 greaterThanLessThan_iff less_eq_real_def mult.commute t)
then show ?thesis by blast
qed
section \<open> This lemma is called step 1 in the literature.
In Von Neumann and Morgenstern's book this is A:A (albeit more general). \<close>
lemma step_1_most_general:
assumes "x \<succ>[\<R>] y"
assumes "\<alpha> \<in> {0..(1::real)}"
and "\<beta> \<in> {0..(1::real)}"
assumes "\<alpha> > \<beta>"
shows "(mix_pmf \<alpha> x y) \<succ>[\<R>] (mix_pmf \<beta> x y)"
proof -
consider (ex) "\<alpha> = 1 \<and> \<beta> = 0" | (m) "\<alpha> \<noteq> 1 \<or> \<beta> \<noteq> 0"
by blast
then show ?thesis
proof (cases)
case m
consider "\<beta> = 0" | "\<beta> \<noteq> 0"
by blast
then show ?thesis
proof (cases)
case 1
then show ?thesis
using assms(1) assms(2) assms(4) ind rpr sg_imp_mix_sg by fastforce
next
case 2
let ?d = "(\<beta>/\<alpha>)"
have sg: "(mix_pmf \<alpha> x y) \<succ>[\<R>] y"
using assms(1) assms(2) assms(3) assms(4) ind rpr sg_imp_mix_sg by fastforce
have a: "\<alpha> > 0"
using assms(3) assms(4) by auto
then have div_in: "?d \<in> {0<..1}"
using assms(3) assms(4) 2 by auto
have mx_p: "(mix_pmf \<alpha> x y) \<in> \<P>"
by (meson sg preference_def rational_preference_def rpr)
have y_P: "y \<in> \<P>"
by (meson assms(1) preference_def rational_preference_def rpr)
hence "(mix_pmf ?d (mix_pmf \<alpha> x y) y) \<in> \<P>"
using div_in mx_p by (simp add: mix_in_lot)
have " mix_pmf \<beta> (mix_pmf \<alpha> x y) y \<succ>[\<R>] y"
using sg_imp_mix_sg[of "(mix_pmf \<alpha> x y)" y \<R> outcomes \<beta>] sg div_in rpr ind
a assms(2) "2" assms(3) by auto
have al1: "\<forall>r \<in> \<P>. (mix_pmf \<alpha> x r) \<succ>[\<R>] (mix_pmf \<alpha> y r)"
by (meson a assms(1) assms(2) atLeastAtMost_iff greaterThanAtMost_iff ind
independece_dest_alt preference.not_outside rational_preference_def rpr y_P)
then show ?thesis
using greater_mix_pmf_greater_step_1_aux assms
by (metis a div_in divide_less_eq_1_pos greaterThanAtMost_iff
greaterThanLessThan_iff mix_pmf_comp_with_dif_equiv neumann_3B_b sg)
qed
qed (simp add: assms(1))
qed
text \<open> Kreps refers to this lemma as 5.6 c.
The lemma after that is also significant.\<close>
lemma approx_remains_after_same_comp:
assumes "p \<approx>[\<R>] q"
and "r \<in> \<P>"
and "\<alpha> \<in> {0..1}"
shows "mix_pmf \<alpha> p r \<approx>[\<R>] mix_pmf \<alpha> q r"
using approx_indep_ge assms(1) assms(2) assms(3) ind rpr by blast
text \<open> This lemma is the symmetric version of the previous lemma.
It is not mentioned anywhere in the literature.
Even though it looks trivial now, the independence axiom is asymmetric (the common component
is always mixed in on the right), so this left-handed version is not immediate and is worth
stating explicitly. \<close>
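(* Proof idea (informal): the left-handed mixture is reduced to the right-handed one via
   mix_pmf \<alpha> r p = mix_pmf (1 - \<alpha>) p r (pmf_inverse_switch_eqals), after which the previous
   lemma applies. *)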
lemma approx_remains_after_same_comp_left:
assumes "p \<approx>[\<R>] q"
and "r \<in> \<P>"
and "\<alpha> \<in> {0..1}"
shows "mix_pmf \<alpha> r p \<approx>[\<R>] mix_pmf \<alpha> r q"
proof -
have 1: "\<alpha> \<le> 1 \<and> \<alpha> \<ge> 0" "1 - \<alpha> \<in> {0..1}"
using assms(3) by auto+
have fst: "mix_pmf \<alpha> r p \<approx>[\<R>] mix_pmf (1-\<alpha>) p r"
using assms by (metis mix_in_lot pmf_inverse_switch_eqals
rational_preference.compl relation_in_carrier rpr)
moreover have "mix_pmf \<alpha> r p \<approx>[\<R>] mix_pmf \<alpha> r q"
using approx_remains_after_same_comp[of _ _ _ \<alpha>] pmf_inverse_switch_eqals[of \<alpha> p q] 1
pmf_inverse_switch_eqals rpr mix_pmf_preferred_independence[of _ \<alpha> _ _]
by (metis assms(1) assms(2) assms(3) mix_pmf_preferred_independence)
thus ?thesis
by blast
qed
lemma mix_of_preferred_is_preferred:
assumes "p \<succeq>[\<R>] w"
assumes "q \<succeq>[\<R>] w"
assumes "\<alpha> \<in> {0..1}"
shows "mix_pmf \<alpha> p q \<succeq>[\<R>] w"
proof -
consider "p \<succeq>[\<R>] q" | "q \<succeq>[\<R>] p"
using rpr assms(1) assms(2) rational_preference.compl relation_in_carrier by blast
then show ?thesis
proof (cases)
case 1
have "mix_pmf \<alpha> p q \<succeq>[\<R>] q"
using "1" assms(3) geq_imp_mix_geq_right ind rpr by blast
moreover have "q \<succeq>[\<R>] w"
using assms by auto
ultimately show ?thesis using rpr preference.transitivity[of \<P> \<R>]
by (meson rational_preference_def transE)
next
case 2
have "mix_pmf \<alpha> p q \<succeq>[\<R>] p"
using "2" assms geq_imp_mix_geq_left ind rpr by blast
moreover have "p \<succeq>[\<R>] w"
using assms by auto
ultimately show ?thesis using rpr preference.transitivity[of \<P> \<R>]
by (meson rational_preference_def transE)
qed
qed
lemma mix_of_not_preferred_is_not_preferred:
assumes "w \<succeq>[\<R>] p"
assumes "w \<succeq>[\<R>] q"
assumes "\<alpha> \<in> {0..1}"
shows "w \<succeq>[\<R>] mix_pmf \<alpha> p q"
proof -
consider "p \<succeq>[\<R>] q" | "q \<succeq>[\<R>] p"
using rpr assms(1) assms(2) rational_preference.compl relation_in_carrier by blast
then show ?thesis
proof (cases)
case 1
moreover have "p \<succeq>[\<R>] mix_pmf \<alpha> p q"
using assms(3) neumann_3B_b_non_strict calculation by blast
moreover show ?thesis
using rpr preference.transitivity[of \<P> \<R>]
by (meson assms(1) calculation(2) rational_preference_def transE)
next
case 2
moreover have "q \<succeq>[\<R>] mix_pmf \<alpha> p q"
using assms(3) neumann_3B_b_non_strict calculation
by (metis mix_pmf_preferred_independence relation_in_carrier set_pmf_mix_eq)
moreover show ?thesis
using rpr preference.transitivity[of \<P> \<R>]
by (meson assms(2) calculation(2) rational_preference_def transE)
qed
qed
private definition degenerate_lotteries where
"degenerate_lotteries = {x \<in> \<P>. card (set_pmf x) = 1}"
private definition best where
"best = {x \<in> \<P>. (\<forall>y \<in> \<P>. x \<succeq>[\<R>] y)}"
private definition worst where
"worst = {x \<in> \<P>. (\<forall>y \<in> \<P>. y \<succeq>[\<R>] x)}"
lemma degenerate_total:
"\<forall>e \<in> degenerate_lotteries. \<forall>m \<in> \<P>. e \<succeq>[\<R>] m \<or> m \<succeq>[\<R>] e"
using degenerate_lotteries_def rational_preference.compl rpr by fastforce
lemma degen_outcome_cardinalities:
"card degenerate_lotteries = card outcomes"
using card_degen_lotteries_equals_outcomes degenerate_lotteries_def by auto
lemma degenerate_lots_subset_all: "degenerate_lotteries \<subseteq> \<P>"
by (simp add: degenerate_lotteries_def)
lemma alt_definition_of_degenerate_lotteries[iff]:
"{return_pmf x |x. x\<in> outcomes} = degenerate_lotteries"
proof (standard, goal_cases)
case 1
have "\<forall>x \<in> {return_pmf x |x. x \<in> outcomes}. x \<in> degenerate_lotteries"
proof
fix x
assume a: "x \<in> {return_pmf x |x. x \<in> outcomes}"
then have "card (set_pmf x) = 1"
by auto
moreover have "set_pmf x \<subseteq> outcomes"
using a set_pmf_subset_singleton by auto
moreover have "x \<in> \<P>"
by (simp add: lotteries_on_def calculation)
ultimately show "x \<in> degenerate_lotteries"
by (simp add: degenerate_lotteries_def)
qed
then show ?case by blast
next
case 2
have "\<forall>x \<in> degenerate_lotteries. x \<in> {return_pmf x |x. x \<in> outcomes}"
proof
fix x
assume a: "x \<in> degenerate_lotteries"
hence "card (set_pmf x) = 1"
using degenerate_lotteries_def by blast
moreover have "set_pmf x \<subseteq> outcomes"
by (meson a degenerate_lots_subset_all subset_iff support_in_outcomes)
moreover obtain e where "{e} = set_pmf x"
using calculation
by (metis card_1_singletonE)
moreover have "e \<in> outcomes"
using calculation(2) calculation(3) by blast
moreover have "x = return_pmf e"
- using calculation(3) set_pmf_subset_singleton by fastforce
+ using calculation(3) set_pmf_subset_singleton by fast
ultimately show "x \<in> {return_pmf x |x. x \<in> outcomes}"
by blast
qed
then show ?case by blast
qed
lemma best_indifferent:
"\<forall>x \<in> best. \<forall>y \<in> best. x \<approx>[\<R>] y"
by (simp add: best_def)
lemma worst_indifferent:
"\<forall>x \<in> worst. \<forall>y \<in> worst. x \<approx>[\<R>] y"
by (simp add: worst_def)
lemma best_worst_indiff_all_indiff:
assumes "b \<in> best"
and "w \<in> worst"
and "b \<approx>[\<R>] w"
shows "\<forall>e \<in> \<P>. e \<approx>[\<R>] w" "\<forall>e \<in> \<P>. e \<approx>[\<R>] b"
proof -
show "\<forall>e \<in> \<P>. e \<approx>[\<R>] w"
proof (standard)
fix e
assume a: "e \<in> \<P>"
then have "b \<succeq>[\<R>] e"
using a best_def assms by blast
moreover have "e \<succeq>[\<R>] w"
using a assms worst_def by auto
moreover have "b \<succeq>[\<R>] e"
by (simp add: calculation(1))
moreover show "e \<approx>[\<R>] w"
proof (rule ccontr)
assume "\<not> e \<approx>[\<R>] w"
then consider "e \<succ>[\<R>] w" | "w \<succ>[\<R>] e"
by (simp add: calculation(2))
then show False
proof (cases)
case 2
then show ?thesis
using calculation(2) by blast
qed (meson assms(3) calculation(1)
rational_preference.strict_is_neg_transitive relation_in_carrier rpr)
qed
qed
then show "\<forall>e\<in>local.\<P>. e \<approx>[\<R>] b"
using assms by (meson rational_preference.compl
rational_preference.strict_is_neg_transitive relation_in_carrier rpr)
qed
text \<open> Like step_1_most_general, but stated as an iff. \<close>
lemma mix_pmf_pref_iff_more_likely [iff]:
assumes "b \<succ>[\<R>] w"
assumes "\<alpha> \<in> {0..1}"
and "\<beta> \<in> {0..1}"
shows "\<alpha> > \<beta> \<longleftrightarrow> mix_pmf \<alpha> b w \<succ>[\<R>] mix_pmf \<beta> b w" (is "?L \<longleftrightarrow> ?R")
using assms step_1_most_general[of b w \<alpha> \<beta>]
by (metis linorder_neqE_linordered_idom step_1_most_general)
lemma better_worse_good_mix_preferred[iff]:
assumes "b \<succeq>[\<R>] w"
assumes "\<alpha> \<in> {0..1}"
and "\<beta> \<in> {0..1}"
assumes "\<alpha> \<ge> \<beta>"
shows "mix_pmf \<alpha> b w \<succeq>[\<R>] mix_pmf \<beta> b w"
proof-
have "(0::real) \<le> 1"
by simp
then show ?thesis
by (metis (no_types) assms assms(1) assms(2) assms(3) atLeastAtMost_iff
less_eq_real_def mix_of_not_preferred_is_not_preferred
mix_of_preferred_is_preferred mix_pmf_preferred_independence
pmf_mix_0 relation_in_carrier step_1_most_general)
qed
subsection \<open> Add finiteness and non-emptiness of outcomes \<close>
context
assumes fnt: "finite outcomes"
assumes nempty: "outcomes \<noteq> {}"
begin
lemma finite_degenerate_lotteries:
"finite degenerate_lotteries"
using degen_outcome_cardinalities fnt nempty by fastforce
lemma degenerate_has_max_preferred:
"{x \<in> degenerate_lotteries. (\<forall>y \<in> degenerate_lotteries. x \<succeq>[\<R>] y)} \<noteq> {}" (is "?l \<noteq> {}")
proof
assume a: "?l = {}"
let ?DG = "degenerate_lotteries"
obtain R where
R: "rational_preference ?DG R" "R \<subseteq> \<R>"
using degenerate_lots_subset_all rational_preference.all_carrier_ex_sub_rel rpr by blast
then have "\<exists>e \<in> ?DG. \<forall>e' \<in> ?DG. e \<succeq>[\<R>] e'"
by (metis R(1) R(2) card_0_eq degen_outcome_cardinalities
finite_degenerate_lotteries fnt nempty subset_eq
rational_preference.finite_nonempty_carrier_has_maximum )
then show False
using a by auto
qed
lemma degenerate_has_min_preferred:
"{x \<in> degenerate_lotteries. (\<forall>y \<in> degenerate_lotteries. y \<succeq>[\<R>] x)} \<noteq> {}" (is "?l \<noteq> {}")
proof
assume a: "?l = {}"
let ?DG = "degenerate_lotteries"
obtain R where
R: "rational_preference ?DG R" "R \<subseteq> \<R>"
using degenerate_lots_subset_all rational_preference.all_carrier_ex_sub_rel rpr by blast
have "\<exists>e \<in> ?DG. \<forall>e' \<in> ?DG. e' \<succeq>[\<R>] e"
by (metis R(1) R(2) card_0_eq degen_outcome_cardinalities
finite_degenerate_lotteries fnt nempty subset_eq
rational_preference.finite_nonempty_carrier_has_minimum )
then show False
using a by auto
qed
lemma exists_best_degenerate:
"\<exists>x \<in> degenerate_lotteries. \<forall>y \<in> degenerate_lotteries. x \<succeq>[\<R>] y"
using degenerate_has_max_preferred by blast
lemma exists_worst_degenerate:
"\<exists>x \<in> degenerate_lotteries. \<forall>y \<in> degenerate_lotteries. y \<succeq>[\<R>] x"
using degenerate_has_min_preferred by blast
lemma best_degenerate_in_best_overall:
"\<exists>x \<in> degenerate_lotteries. \<forall>y \<in> \<P>. x \<succeq>[\<R>] y"
proof -
obtain b where
b: "b \<in> degenerate_lotteries" "\<forall>y \<in> degenerate_lotteries. b \<succeq>[\<R>] y"
using exists_best_degenerate by blast
have asm: "finite outcomes" "set_pmf b \<subseteq> outcomes"
by (simp add: fnt) (meson b(1) degenerate_lots_subset_all subset_iff support_in_outcomes)
obtain B where B: "set_pmf b = {B}"
using b card_1_singletonE degenerate_lotteries_def by blast
have deg: "\<forall>d\<in>outcomes. b \<succeq>[\<R>] return_pmf d"
using alt_definition_of_degenerate_lotteries b(2) by blast
define P where
"P = (\<lambda>p. p \<in> \<P> \<longrightarrow> return_pmf B \<succeq>[\<R>] p)"
have "P p" for p
proof -
consider "set_pmf p \<subseteq> outcomes" | "\<not>set_pmf p \<subseteq> outcomes"
by blast
then show ?thesis
proof (cases)
case 1
have "finite outcomes" "set_pmf p \<subseteq> outcomes"
by (auto simp: 1 asm)
then show ?thesis
proof (induct rule: pmf_mix_induct')
case (degenerate x)
then show ?case
using B P_def deg set_pmf_subset_singleton by fastforce
qed (simp add: P_def lotteries_on_def mix_of_not_preferred_is_not_preferred
mix_of_not_preferred_is_not_preferred[of b p q a])
qed (simp add: lotteries_on_def P_def)
qed
moreover have "\<forall>e \<in> \<P>. b \<succeq>[\<R>] e"
using calculation B P_def set_pmf_subset_singleton by fastforce
ultimately show ?thesis
using b degenerate_lots_subset_all by blast
qed
lemma worst_degenerate_in_worst_overall:
"\<exists>x \<in> degenerate_lotteries. \<forall>y \<in> \<P>. y \<succeq>[\<R>] x"
proof -
obtain b where
b: "b \<in> degenerate_lotteries" "\<forall>y \<in> degenerate_lotteries. y \<succeq>[\<R>] b"
using exists_worst_degenerate by blast
have asm: "finite outcomes" "set_pmf b \<subseteq> outcomes"
by (simp add: fnt) (meson b(1) degenerate_lots_subset_all subset_iff support_in_outcomes)
obtain B where B: "set_pmf b = {B}"
using b card_1_singletonE degenerate_lotteries_def by blast
have deg: "\<forall>d\<in>outcomes. return_pmf d \<succeq>[\<R>] b"
using alt_definition_of_degenerate_lotteries b(2) by blast
define P where
"P = (\<lambda>p. p \<in> \<P> \<longrightarrow> p \<succeq>[\<R>] return_pmf B)"
have "P p" for p
proof -
consider "set_pmf p \<subseteq> outcomes" | "\<not>set_pmf p \<subseteq> outcomes"
by blast
then show ?thesis
proof (cases)
case 1
have "finite outcomes" "set_pmf p \<subseteq> outcomes"
by (auto simp: 1 asm)
then show ?thesis
proof (induct rule: pmf_mix_induct')
case (degenerate x)
then show ?case
using B P_def deg set_pmf_subset_singleton by fastforce
next
qed (simp add: P_def lotteries_on_def mix_of_preferred_is_preferred
mix_of_not_preferred_is_not_preferred[of b p])
qed (simp add: lotteries_on_def P_def)
qed
moreover have "\<forall>e \<in> \<P>. e \<succeq>[\<R>] b"
using calculation B P_def set_pmf_subset_singleton by fastforce
ultimately show ?thesis
using b degenerate_lots_subset_all by blast
qed
lemma overall_best_nonempty:
"best \<noteq> {}"
using best_def best_degenerate_in_best_overall degenerate_lots_subset_all by blast
lemma overall_worst_nonempty:
"worst \<noteq> {}"
using degenerate_lots_subset_all worst_def worst_degenerate_in_worst_overall by auto
lemma trans_approx:
assumes "x\<approx>[\<R>] y"
and " y \<approx>[\<R>] z"
shows "x \<approx>[\<R>] z"
using preference.indiff_trans[of \<P> \<R> x y z] assms rpr rational_preference_def by blast
text \<open> First EXPLICIT use of the axiom of choice \<close>
private definition some_best where
"some_best = (SOME x. x \<in> degenerate_lotteries \<and> x \<in> best)"
private definition some_worst where
"some_worst = (SOME x. x \<in> degenerate_lotteries \<and> x \<in> worst)"
private definition my_U :: "'a pmf \<Rightarrow> real"
where
"my_U p = (SOME \<alpha>. \<alpha>\<in>{0..1} \<and> p \<approx>[\<R>] mix_pmf \<alpha> some_best some_worst)"
lemma exists_best_and_degenerate: "degenerate_lotteries \<inter> best \<noteq> {}"
using best_def best_degenerate_in_best_overall degenerate_lots_subset_all by blast
lemma exists_worst_and_degenerate: "degenerate_lotteries \<inter> worst \<noteq> {}"
using worst_def worst_degenerate_in_worst_overall degenerate_lots_subset_all by blast
lemma some_best_in_best: "some_best \<in> best"
using exists_best_and_degenerate some_best_def
by (metis (mono_tags, lifting) Int_emptyI some_eq_ex)
lemma some_worst_in_worst: "some_worst \<in> worst"
using exists_worst_and_degenerate some_worst_def
by (metis (mono_tags, lifting) Int_emptyI some_eq_ex)
lemma best_always_at_least_as_good_mix:
assumes "\<alpha> \<in> {0..1}"
and "p \<in> \<P>"
shows "mix_pmf \<alpha> some_best p \<succeq>[\<R>] p"
using assms(1) assms(2) best_def mix_of_preferred_is_preferred
rational_preference.compl rpr some_best_in_best by fastforce
lemma geq_mix_imp_weak_pref:
assumes "\<alpha> \<in> {0..1}"
and "\<beta> \<in> {0..1}"
assumes "\<alpha> \<ge> \<beta>"
shows "mix_pmf \<alpha> some_best some_worst \<succeq>[\<R>] mix_pmf \<beta> some_best some_worst"
using assms(1) assms(2) assms(3) best_def some_best_in_best some_worst_in_worst worst_def by auto
lemma gamma_inverse:
assumes "\<alpha> \<in> {0<..<1}"
and "\<beta> \<in> {0<..<1}"
shows "(1::real) - (\<alpha> - \<beta>) / (1 - \<beta>) = (1 - \<alpha>) / (1 - \<beta>)"
proof -
have "1 - (\<alpha> - \<beta>) / (1 - \<beta>) = (1 - \<beta>)/(1 - \<beta>) - (\<alpha> - \<beta>) / (1 - \<beta>)"
using assms(2) by auto
also have "... = (1 - \<beta> - (\<alpha> - \<beta>)) / (1 - \<beta>)"
by (metis diff_divide_distrib)
also have "... = (1 - \<alpha>) / (1 - \<beta>)"
by simp
finally show ?thesis .
qed
lemma all_mix_pmf_indiff_indiff_best_worst:
assumes "l \<in> \<P>"
assumes "b \<in> best"
assumes "w \<in> worst"
assumes "b \<approx>[\<R>] w"
shows "\<forall>\<alpha> \<in>{0..1}. l \<approx>[\<R>] mix_pmf \<alpha> b w"
by (meson assms best_worst_indiff_all_indiff(1) mix_of_preferred_is_preferred
best_worst_indiff_all_indiff(2) mix_of_not_preferred_is_not_preferred)
lemma indiff_imp_same_utility_value:
assumes "some_best \<succ>[\<R>] some_worst"
assumes "\<alpha> \<in> {0..1}"
assumes "\<beta> \<in> {0..1}"
assumes "mix_pmf \<beta> some_best some_worst \<approx>[\<R>] mix_pmf \<alpha> some_best some_worst"
shows "\<beta> = \<alpha>"
using assms(1) assms(2) assms(3) assms(4) linorder_neqE_linordered_idom by blast
lemma leq_mix_imp_weak_inferior:
assumes "some_best \<succ>[\<R>] some_worst"
assumes "\<alpha> \<in> {0..1}"
and "\<beta> \<in> {0..1}"
assumes "mix_pmf \<beta> some_best some_worst \<succeq>[\<R>] mix_pmf \<alpha> some_best some_worst"
shows "\<beta> \<ge> \<alpha>"
proof -
have *: "mix_pmf \<beta> some_best some_worst \<approx>[\<R>] mix_pmf \<alpha> some_best some_worst \<Longrightarrow> \<alpha> \<le> \<beta>"
using assms(1) assms(2) assms(3) indiff_imp_same_utility_value by blast
consider "mix_pmf \<beta> some_best some_worst \<succ>[\<R>] mix_pmf \<alpha> some_best some_worst" |
"mix_pmf \<beta> some_best some_worst \<approx>[\<R>] mix_pmf \<alpha> some_best some_worst"
using assms(4) by blast
then show ?thesis
by(cases) (meson assms(2) assms(3) geq_mix_imp_weak_pref le_cases *)+
qed
lemma ge_mix_pmf_preferred:
assumes "x \<succ>[\<R>] y"
assumes "\<alpha> \<in> {0..1}"
and "\<beta> \<in> {0..1}"
assumes "\<alpha> \<ge> \<beta>"
shows "(mix_pmf \<alpha> x y) \<succeq>[\<R>] (mix_pmf \<beta> x y)"
using assms(1) assms(2) assms(3) assms(4) by blast
subsection \<open> Add continuity to assumptions \<close>
context
assumes cnt: "continuous_vnm (lotteries_on outcomes) \<R>"
begin
text \<open> In the literature this is referred to as step 2. \<close>
lemma step_2_unique_continuous_unfolding:
assumes "p \<succeq>[\<R>] q"
and "q \<succeq>[\<R>] r"
and "p \<succ>[\<R>] r"
shows "\<exists>!\<alpha> \<in> {0..1}. q \<approx>[\<R>] mix_pmf \<alpha> p r"
proof (rule ccontr)
assume neg_a: "\<nexists>!\<alpha>. \<alpha> \<in> {0..1} \<and> q \<approx>[\<R>] mix_pmf \<alpha> p r"
have "\<exists>\<alpha> \<in> {0..1}. q \<approx>[\<R>] mix_pmf \<alpha> p r"
using non_unique_continuous_unfolding[of outcomes \<R> p q r]
assms cnt rpr by blast
then obtain \<alpha> \<beta> :: real where
a_b: "\<alpha>\<in>{0..1}" "\<beta> \<in>{0..1}" "q \<approx>[\<R>] mix_pmf \<alpha> p r" "q \<approx>[\<R>] mix_pmf \<beta> p r" "\<alpha> \<noteq> \<beta>"
using neg_a by blast
consider "\<alpha> > \<beta>" | "\<beta> > \<alpha>"
using a_b by linarith
then show False
proof (cases)
case 1
with step_1_most_general[of p r \<alpha> \<beta>] assms
have "mix_pmf \<alpha> p r \<succ>[\<R>] mix_pmf \<beta> p r"
using a_b(1) a_b(2) by blast
then show ?thesis using a_b
by (meson rational_preference.strict_is_neg_transitive relation_in_carrier rpr)
next
case 2
with step_1_most_general[of p r \<beta> \<alpha>] assms have "mix_pmf \<beta> p r \<succ>[\<R>]mix_pmf \<alpha> p r"
using a_b(1) a_b(2) by blast
then show ?thesis using a_b
by (meson rational_preference.strict_is_neg_transitive relation_in_carrier rpr)
qed
qed
text \<open> The following two lemmas are also sometimes referred to as step 2. \<close>
lemma create_unique_indiff_using_distinct_best_worst:
assumes "l \<in> \<P>"
assumes "b \<in> best"
assumes "w \<in> worst"
assumes "b \<succ>[\<R>] w"
shows "\<exists>!\<alpha> \<in>{0..1}. l \<approx>[\<R>] mix_pmf \<alpha> b w"
proof -
have "b \<succeq>[\<R>] l"
using best_def
using assms by blast
moreover have "l \<succeq>[\<R>] w"
using worst_def assms by blast
ultimately show "\<exists>!\<alpha>\<in>{0..1}. l \<approx>[\<R>] mix_pmf \<alpha> b w"
using step_2_unique_continuous_unfolding[of b l w] assms by linarith
qed
lemma exists_element_bw_mix_is_approx:
assumes "l \<in> \<P>"
assumes "b \<in> best"
assumes "w \<in> worst"
shows "\<exists>\<alpha> \<in>{0..1}. l \<approx>[\<R>] mix_pmf \<alpha> b w"
proof -
consider "b \<succ>[\<R>] w" | "b \<approx>[\<R>] w"
using assms(2) assms(3) best_def worst_def by auto
then show ?thesis
proof (cases)
case 1
then show ?thesis
using create_unique_indiff_using_distinct_best_worst assms by blast
qed (auto simp: all_mix_pmf_indiff_indiff_best_worst assms)
qed
lemma my_U_is_defined:
assumes "p \<in> \<P>"
shows "my_U p \<in> {0..1}" "p \<approx>[\<R>] mix_pmf (my_U p) some_best some_worst"
proof -
have "some_best \<in> best"
by (simp add: some_best_in_best)
moreover have "some_worst \<in> worst"
by (simp add: some_worst_in_worst)
with exists_element_bw_mix_is_approx[of p "some_best" "some_worst"] calculation assms
have e: "\<exists>\<alpha>\<in>{0..1}. p \<approx>[\<R>] mix_pmf \<alpha> some_best some_worst" by blast
then show "my_U p \<in> {0..1}"
by (metis (mono_tags, lifting) my_U_def someI_ex)
show "p \<approx>[\<R>] mix_pmf (my_U p) some_best some_worst"
by (metis (mono_tags, lifting) e my_U_def someI_ex)
qed
lemma weak_pref_mix_with_my_U_weak_pref:
assumes "p \<succeq>[\<R>] q"
shows "mix_pmf (my_U p) some_best some_worst \<succeq>[\<R>] mix_pmf (my_U q) some_best some_worst"
by (meson assms my_U_is_defined(2) relation_in_carrier rpr
rational_preference.weak_is_transitive)
lemma preferred_greater_my_U:
assumes "p \<in> \<P>"
and "q \<in> \<P>"
assumes "mix_pmf (my_U p) some_best some_worst \<succ>[\<R>] mix_pmf (my_U q) some_best some_worst"
shows "my_U p > my_U q"
proof (rule ccontr)
assume "\<not> my_U p > my_U q"
then consider "my_U p = my_U q" | "my_U p < my_U q"
by linarith
then show False
proof (cases)
case 1
then have "mix_pmf (my_U p) some_best some_worst \<approx>[\<R>] mix_pmf (my_U q) some_best some_worst"
using assms by auto
then show ?thesis using assms by blast
next
case 2
moreover have "my_U q \<in> {0..1}"
using assms(2) my_U_is_defined(1) by blast
moreover have "my_U p \<in> {0..1}"
using assms(1) my_U_is_defined(1) by blast
moreover have "mix_pmf (my_U q) some_best some_worst \<succeq>[\<R>] mix_pmf (my_U p) some_best some_worst"
using calculation geq_mix_imp_weak_pref by auto
then show ?thesis using assms by blast
qed
qed
lemma geq_my_U_imp_weak_preference:
assumes "p \<in> \<P>"
and "q \<in> \<P>"
assumes "some_best \<succ>[\<R>] some_worst"
assumes "my_U p \<ge> my_U q"
shows "p \<succeq>[\<R>] q"
proof -
have p_q: "my_U p \<in> {0..1}" "my_U q \<in> {0..1}"
using assms my_U_is_defined(1) by blast+
with ge_mix_pmf_preferred[of "some_best" "some_worst" "my_U p" "my_U q"]
p_q assms(1) assms(3) assms(4)
have "mix_pmf (my_U p) some_best some_worst \<succeq>[\<R>] mix_pmf (my_U q) some_best some_worst" by blast
consider "my_U p = my_U q" | "my_U p > my_U q"
using assms by linarith
then show ?thesis
proof (cases)
case 2
then show ?thesis
by (meson assms(1) assms(2) assms(3) p_q(1) p_q(2) rational_preference.compl
rpr step_1_most_general weak_pref_mix_with_my_U_weak_pref)
qed (metis assms(1) assms(2) my_U_is_defined(2) trans_approx)
qed
lemma my_U_represents_pref:
assumes "some_best \<succ>[\<R>] some_worst"
assumes "p \<in> \<P>"
and "q \<in> \<P>"
shows "p \<succeq>[\<R>] q \<longleftrightarrow> my_U p \<ge> my_U q" (is "?L \<longleftrightarrow> ?R")
proof -
have p_def: "my_U p\<in> {0..1}" "my_U q \<in> {0..1}"
using assms my_U_is_defined by blast+
show ?thesis
proof
assume a: ?L
hence "mix_pmf (my_U p) some_best some_worst \<succeq>[\<R>] mix_pmf (my_U q) some_best some_worst"
using weak_pref_mix_with_my_U_weak_pref by auto
then show ?R using leq_mix_imp_weak_inferior[of "my_U p" "my_U q"] p_def a
assms(1) leq_mix_imp_weak_inferior by blast
next
assume ?R
then show ?L using geq_my_U_imp_weak_preference
using assms(1) assms(2) assms(3) by blast
qed
qed
lemma first_iff_u_greater_strict_preff:
assumes "p \<in> \<P>"
and "q \<in> \<P>"
assumes "some_best \<succ>[\<R>] some_worst"
shows "my_U p > my_U q \<longleftrightarrow> mix_pmf (my_U p) some_best some_worst \<succ>[\<R>] mix_pmf (my_U q) some_best some_worst"
proof
assume a: "my_U p > my_U q"
have "my_U p \<in> {0..1}" "my_U q \<in> {0..1}"
using assms my_U_is_defined(1) by blast+
then show "mix_pmf (my_U p) some_best some_worst \<succ>[\<R>] mix_pmf (my_U q) some_best some_worst"
using a assms(3) by blast
next
assume a: "mix_pmf (my_U p) some_best some_worst \<succ>[\<R>] mix_pmf (my_U q) some_best some_worst"
have "my_U p \<in> {0..1}" "my_U q \<in> {0..1}"
using assms my_U_is_defined(1) by blast+
then show "my_U p > my_U q "
using preferred_greater_my_U[of p q] assms a by blast
qed
lemma second_iff_calib_mix_pref_strict_pref:
assumes "p \<in> \<P>"
and "q \<in> \<P>"
assumes "some_best \<succ>[\<R>] some_worst"
shows "mix_pmf (my_U p) some_best some_worst \<succ>[\<R>] mix_pmf (my_U q) some_best some_worst \<longleftrightarrow> p \<succ>[\<R>] q"
proof
assume a: "mix_pmf (my_U p) some_best some_worst \<succ>[\<R>] mix_pmf (my_U q) some_best some_worst"
have "my_U p \<in> {0..1}" "my_U q \<in> {0..1}"
using assms my_U_is_defined(1) by blast+
then show "p \<succ>[\<R>] q"
using a assms(3) assms(1) assms(2) geq_my_U_imp_weak_preference
leq_mix_imp_weak_inferior weak_pref_mix_with_my_U_weak_pref by blast
next
assume a: "p \<succ>[\<R>] q"
have "my_U p \<in> {0..1}" "my_U q \<in> {0..1}"
using assms my_U_is_defined(1) by blast+
then show "mix_pmf (my_U p) some_best some_worst \<succ>[\<R>] mix_pmf (my_U q) some_best some_worst"
using a assms(1) assms(2) assms(3) leq_mix_imp_weak_inferior my_U_represents_pref by blast
qed
lemma my_U_is_linear_function:
assumes "p \<in> \<P>"
and "q \<in> \<P>"
and "\<alpha> \<in> {0..1}"
assumes "some_best \<succ>[\<R>] some_worst"
shows "my_U (mix_pmf \<alpha> p q) = \<alpha> * my_U p + (1 - \<alpha>) * my_U q"
proof -
define B where B: "B = some_best"
define W where W:"W = some_worst"
define Up where Up: "Up = my_U p"
define Uq where Uq: "Uq = my_U q"
have long_in: "(\<alpha> * Up + (1 - \<alpha>) * Uq) \<in> {0..1}"
proof -
have "Up \<in> {0..1}"
using assms Up my_U_is_defined(1) by blast
moreover have "Uq \<in> {0..1}"
using assms Uq my_U_is_defined(1) by blast
moreover have "\<alpha> * Up \<in> {0..1}"
using \<open>Up \<in> {0..1}\<close> assms(3) mult_le_one by auto
moreover have "1-\<alpha> \<in> {0..1}"
using assms(3) by auto
moreover have "(1 - \<alpha>) * Uq \<in> {0..1}"
using mult_le_one[of "1-\<alpha>" Uq] calculation(2) calculation(4) by auto
ultimately show ?thesis
using add_nonneg_nonneg[of "\<alpha> * Up" "(1 - \<alpha>) * Uq"]
convex_bound_le[of Up 1 Uq \<alpha> "1-\<alpha>"] by simp
qed
have fst: "p \<approx>[\<R>] (mix_pmf Up B W)"
using assms my_U_is_defined[of p] B W Up by simp
have snd: "q \<approx>[\<R>] (mix_pmf Uq B W)"
using assms my_U_is_defined[of q] B W Uq by simp
have mp_in: "(mix_pmf Up B W) \<in> \<P>"
using fst relation_in_carrier by blast
have f2: "mix_pmf \<alpha> p q \<approx>[\<R>] mix_pmf \<alpha> (mix_pmf Up B W) q"
using fst assms(2) assms(3) mix_pmf_preferred_independence by blast
have **: "mix_pmf \<alpha> (mix_pmf Up B W) (mix_pmf Uq B W) =
mix_pmf (\<alpha> * Up + (1-\<alpha>) * Uq) B W" (is "?L = ?R")
proof -
let ?mixPQ = "(mix_pmf (\<alpha> * Up + (1 - \<alpha>) * Uq) B W)"
have "\<forall>e\<in>set_pmf ?L. pmf (?L) e = pmf ?mixPQ e"
proof
fix e
assume asm: "e \<in> set_pmf ?L"
have i1: "pmf (?L) e = \<alpha> * pmf (mix_pmf Up B W) e +
pmf (mix_pmf Uq B W) e - \<alpha> * pmf (mix_pmf Uq B W) e"
using pmf_mix_deeper[of \<alpha> "mix_pmf Up B W" "(mix_pmf Uq B W)" e] assms(3) by blast
have i3: "... = \<alpha> * Up * pmf B e + \<alpha> * pmf W e - \<alpha> * Up * pmf W e + Uq * pmf B e +
pmf W e - Uq * pmf W e - \<alpha> * Uq * pmf B e - \<alpha> * pmf W e + \<alpha> * Uq * pmf W e"
using left_diff_distrib' pmf_mix_deeper[of Up B W e] pmf_mix_deeper[of Uq B W e]
assms Up Uq my_U_is_defined(1) by (simp add: distrib_left right_diff_distrib)
have j4: "pmf ?mixPQ e = (\<alpha> * Up + (1 - \<alpha>) * Uq) * pmf B e +
pmf W e - (\<alpha> * Up + (1 - \<alpha>) * Uq) * pmf W e"
using pmf_mix_deeper[of "(\<alpha> * Up + (1 - \<alpha>) * Uq)" B W e] long_in by blast
then show "pmf (?L) e = pmf ?mixPQ e"
by (simp add: i1 i3 mult.commute right_diff_distrib' ring_class.ring_distribs(1))
qed
then show ?thesis using pmf_equiv_intro1 by blast
qed
have "mix_pmf \<alpha> (mix_pmf Up B W) q \<approx>[\<R>] ?L"
using approx_remains_after_same_comp_left assms(3) mp_in snd by blast
hence *: "mix_pmf \<alpha> p q \<approx>[\<R>] mix_pmf \<alpha> (mix_pmf (my_U p) B W) (mix_pmf (my_U q) B W)"
using Up Uq f2 trans_approx by blast
have "mix_pmf \<alpha> (mix_pmf (my_U p) B W) (mix_pmf (my_U q) B W) = ?R"
using Up Uq ** by blast
hence "my_U (mix_pmf \<alpha> p q) = \<alpha> * Up + (1-\<alpha>) * Uq"
by (metis * B W assms(4) indiff_imp_same_utility_value long_in
my_U_is_defined(1) my_U_is_defined(2) my_U_represents_pref relation_in_carrier)
then show ?thesis
using Up Uq by blast
qed
text \<open> Now we define a more general utility
function that also takes the degenerate case into account. \<close>
private definition general_U
where
"general_U p = (if some_best \<approx>[\<R>] some_worst then 1 else my_U p)"
lemma general_U_is_linear_function:
assumes "p \<in> \<P>"
and "q \<in> \<P>"
and "\<alpha> \<in> {0..1}"
shows "general_U (mix_pmf \<alpha> p q) = \<alpha> * (general_U p) + (1 - \<alpha>) * (general_U q)"
proof -
consider "some_best \<succ>[\<R>] some_worst" | "some_best \<approx>[\<R>] some_worst"
using best_def some_best_in_best some_worst_in_worst worst_def by auto
then show ?thesis
proof (cases, goal_cases)
case 1
then show ?case
using assms(1) assms(2) assms(3) general_U_def my_U_is_linear_function by auto
next
case 2
then show ?case
using assms(1) assms(2) assms(3) general_U_def by auto
qed
qed
lemma general_U_ordinal_Utility:
shows "ordinal_utility \<P> \<R> general_U"
proof (standard, goal_cases)
case (1 x y)
consider (a) "some_best \<succ>[\<R>] some_worst" | (b) "some_best \<approx>[\<R>] some_worst"
using best_def some_best_in_best some_worst_in_worst worst_def by auto
then show ?case
proof (cases, goal_cases)
case a
have "some_best \<succ>[\<R>] some_worst"
using a by auto
then show "x \<succeq>[\<R>] y = (general_U y \<le> general_U x)"
using 1 my_U_represents_pref[of x y] general_U_def by simp
next
case b
have "general_U x = 1" "general_U y = 1"
by (simp add: b general_U_def)+
moreover have "x \<approx>[\<R>] y" using b
by (meson "1"(1) "1"(2) best_worst_indiff_all_indiff(1)
some_best_in_best some_worst_in_worst trans_approx)
ultimately show "x \<succeq>[\<R>] y = (general_U y \<le> general_U x)"
using general_U_def by linarith
qed
next
case (2 x y)
then show ?case
using relation_in_carrier by blast
next
case (3 x y)
then show ?case
using relation_in_carrier by blast
qed
text \<open> Proof of the linearity of general_U.
If we consider the definition of expected utility
functions from Maschler, Solan, and Zamir, we are done. \<close>
theorem is_linear:
assumes "p \<in> \<P>"
and "q \<in> \<P>"
and "\<alpha> \<in> {0..1}"
shows "\<exists>u. u (mix_pmf \<alpha> p q) = \<alpha> * (u p) + (1-\<alpha>) * (u q)"
proof
let ?u = "general_U"
consider "some_best \<succ>[\<R>] some_worst" | "some_best \<approx>[\<R>] some_worst"
using best_def some_best_in_best some_worst_in_worst worst_def by auto
then show "?u (mix_pmf \<alpha> p q) = \<alpha> * ?u p + (1 - \<alpha>) * ?u q"
proof (cases)
case 1
then show ?thesis
using assms(1) assms(2) assms(3) general_U_def my_U_is_linear_function by auto
next
case 2
then show ?thesis
by (simp add: general_U_def)
qed
qed
text \<open> Now we define a utility function that assigns a utility to every outcome;
by assumption there are only finitely many of these. \<close>
private definition ocU
where
"ocU p = general_U (return_pmf p)"
lemma geral_U_is_expected_value_of_ocU:
assumes "set_pmf p \<subseteq> outcomes"
shows "general_U p = measure_pmf.expectation p ocU"
using fnt assms
proof (induct rule: pmf_mix_induct')
case (mix p q a)
hence "general_U (mix_pmf a p q) = a * general_U p + (1-a) * general_U q"
using general_U_is_linear_function[of p q a] mix.hyps assms lotteries_on_def mix.hyps by auto
also have "... = a * measure_pmf.expectation p ocU + (1-a) * measure_pmf.expectation q ocU"
by (simp add: mix.hyps(4) mix.hyps(5))
also have "... = measure_pmf.expectation (mix_pmf a p q) ocU"
using general_U_is_linear_function expected_value_mix_pmf_distrib fnt infinite_super mix.hyps(1)
by (metis fnt mix.hyps(2) mix.hyps(3))
finally show ?case .
qed (auto simp: support_in_outcomes assms fnt integral_measure_pmf_real ocU_def)
lemma ordinal_utility_expected_value:
"ordinal_utility \<P> \<R> (\<lambda>x. measure_pmf.expectation x ocU)"
proof (standard, goal_cases)
case (1 x y)
have ocs: "set_pmf x \<subseteq> outcomes" "set_pmf y \<subseteq> outcomes"
by (meson "1" subsetI support_in_outcomes)+
have "x \<succeq>[\<R>] y \<Longrightarrow> (measure_pmf.expectation y ocU \<le> measure_pmf.expectation x ocU)"
proof -
assume "x \<succeq>[\<R>] y"
have "general_U x \<ge> general_U y"
by (meson \<open>x \<succeq>[\<R>] y\<close> general_U_ordinal_Utility ordinal_utility_def)
then show "(measure_pmf.expectation y ocU \<le> measure_pmf.expectation x ocU)"
using geral_U_is_expected_value_of_ocU ocs by auto
qed
moreover have "(measure_pmf.expectation y ocU \<le> measure_pmf.expectation x ocU) \<Longrightarrow> x \<succeq>[\<R>] y"
proof -
assume "(measure_pmf.expectation y ocU \<le> measure_pmf.expectation x ocU)"
then have "general_U x \<ge> general_U y"
by (simp add: geral_U_is_expected_value_of_ocU ocs(1) ocs(2))
then show "x \<succeq>[\<R>] y"
by (meson "1"(1) "1"(2) general_U_ordinal_Utility ordinal_utility.util_def)
qed
ultimately show ?case
by blast
next
case (2 x y)
then show ?case
using relation_in_carrier by blast
next
case (3 x y)
then show ?case
using relation_in_carrier by auto
qed
lemma ordinal_utility_expected_value':
"\<exists>u. ordinal_utility \<P> \<R> (\<lambda>x. measure_pmf.expectation x u)"
using ordinal_utility_expected_value by blast
lemma ocU_is_expected_utility_bernoulli:
shows "\<forall>x \<in> \<P>. \<forall>y \<in> \<P>. x \<succeq>[\<R>] y \<longleftrightarrow>
measure_pmf.expectation x ocU \<ge> measure_pmf.expectation y ocU"
using ordinal_utility_expected_value by (meson ordinal_utility.util_def)
end (* continuous *)
end (* finite outcomes *)
end (* system U *)
lemma expected_value_is_utility_function:
assumes fnt: "finite outcomes" and "outcomes \<noteq> {}"
assumes "x \<in> lotteries_on outcomes" and "y \<in> lotteries_on outcomes"
assumes "ordinal_utility (lotteries_on outcomes) \<R> (\<lambda>x. measure_pmf.expectation x u)"
shows "measure_pmf.expectation x u \<ge> measure_pmf.expectation y u \<longleftrightarrow> x \<succeq>[\<R>] y" (is "?L \<longleftrightarrow> ?R")
using assms(3) assms(4) assms(5) ordinal_utility.util_def_conf
ordinal_utility.ordinal_utility_left iffI by (metis (no_types, lifting))
lemma system_U_implies_vNM_utility:
assumes fnt: "finite outcomes" and "outcomes \<noteq> {}"
assumes rpr: "rational_preference (lotteries_on outcomes) \<R>"
assumes ind: "independent_vnm (lotteries_on outcomes) \<R>"
assumes cnt: "continuous_vnm (lotteries_on outcomes) \<R>"
shows "\<exists>u. ordinal_utility (lotteries_on outcomes) \<R> (\<lambda>x. measure_pmf.expectation x u)"
using ordinal_utility_expected_value'[of outcomes \<R>] assms by blast
lemma vNM_utility_implies_rationality:
assumes fnt: "finite outcomes" and "outcomes \<noteq> {}"
assumes "\<exists>u. ordinal_utility (lotteries_on outcomes) \<R> (\<lambda>x. measure_pmf.expectation x u)"
shows "rational_preference (lotteries_on outcomes) \<R>"
using assms(3) ordinal_util_imp_rat_prefs by blast
theorem vNM_utility_implies_independence:
assumes fnt: "finite outcomes" and "outcomes \<noteq> {}"
assumes "\<exists>u. ordinal_utility (lotteries_on outcomes) \<R> (\<lambda>x. measure_pmf.expectation x u)"
shows "independent_vnm (lotteries_on outcomes) \<R>"
proof (rule independent_vnmI2)
fix p q r
and \<alpha>::real
assume a1: "p \<in> \<P> outcomes"
assume a2: "q \<in> \<P> outcomes"
assume a3: "r \<in> \<P> outcomes"
assume a4: "\<alpha> \<in> {0<..1}"
have in_lots: "mix_pmf \<alpha> p r \<in> lotteries_on outcomes" "mix_pmf \<alpha> q r \<in> lotteries_on outcomes"
using a1 a3 a4 mix_in_lot apply fastforce
using a2 a3 a4 mix_in_lot by fastforce
have fnts: "finite (set_pmf p)" "finite (set_pmf q)" "finite (set_pmf r)"
using a1 a2 a3 fnt infinite_super lotteries_on_def by blast+
obtain u where
u: "ordinal_utility (lotteries_on outcomes) \<R> (\<lambda>x. measure_pmf.expectation x u)"
using assms by blast
have "p \<succeq>[\<R>] q \<Longrightarrow> mix_pmf \<alpha> p r \<succeq>[\<R>] mix_pmf \<alpha> q r"
proof -
assume "p \<succeq>[\<R>] q"
hence f: "measure_pmf.expectation p u \<ge> measure_pmf.expectation q u"
using u a1 a2 ordinal_utility.util_def by fastforce
have "measure_pmf.expectation (mix_pmf \<alpha> p r) u \<ge> measure_pmf.expectation (mix_pmf \<alpha> q r) u"
proof -
have "measure_pmf.expectation (mix_pmf \<alpha> p r) u =
\<alpha> * measure_pmf.expectation p u + (1 - \<alpha>) * measure_pmf.expectation r u"
using expected_value_mix_pmf_distrib[of p r \<alpha> u] assms fnts a4 by fastforce
moreover have "measure_pmf.expectation (mix_pmf \<alpha> q r) u =
\<alpha> * measure_pmf.expectation q u + (1 - \<alpha>) * measure_pmf.expectation r u"
using expected_value_mix_pmf_distrib[of q r \<alpha> u] assms fnts a4 by fastforce
ultimately show ?thesis using f using a4 by auto
qed
then show "mix_pmf \<alpha> p r \<succeq>[\<R>] mix_pmf \<alpha> q r"
using u ordinal_utility_expected_value' ocU_is_expected_utility_bernoulli in_lots
by (simp add: in_lots ordinal_utility_def)
qed
moreover have "mix_pmf \<alpha> p r \<succeq>[\<R>] mix_pmf \<alpha> q r \<Longrightarrow> p \<succeq>[\<R>] q"
proof -
assume "mix_pmf \<alpha> p r \<succeq>[\<R>] mix_pmf \<alpha> q r"
hence f:"measure_pmf.expectation (mix_pmf \<alpha> p r) u \<ge> measure_pmf.expectation (mix_pmf \<alpha> q r) u"
using ordinal_utility.ordinal_utility_left u by fastforce
hence "measure_pmf.expectation p u \<ge> measure_pmf.expectation q u"
proof -
have "measure_pmf.expectation (mix_pmf \<alpha> p r) u =
\<alpha> * measure_pmf.expectation p u + (1 - \<alpha>) * measure_pmf.expectation r u"
using expected_value_mix_pmf_distrib[of p r \<alpha> u] assms fnts a4 by fastforce
moreover have "measure_pmf.expectation (mix_pmf \<alpha> q r) u =
\<alpha> * measure_pmf.expectation q u + (1 - \<alpha>) * measure_pmf.expectation r u"
using expected_value_mix_pmf_distrib[of q r \<alpha> u] assms fnts a4 by fastforce
ultimately show ?thesis using f using a4 by auto
qed
then show "p \<succeq>[\<R>] q"
using a1 a2 ordinal_utility.util_def_conf u by fastforce
qed
ultimately show "p \<succeq>[\<R>] q = mix_pmf \<alpha> p r \<succeq>[\<R>] mix_pmf \<alpha> q r"
by blast
qed
lemma exists_weight_for_equality:
assumes "a > c" and "a \<ge> b" and "b \<ge> c"
shows "\<exists>(e::real) \<in> {0..1}. (1-e) * a + e * c = b"
proof -
from assms have "b \<in> closed_segment a c"
by (simp add: closed_segment_eq_real_ivl)
thus ?thesis by (auto simp: closed_segment_def)
qed
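text \<open> Reading note (informal): an explicit witness for the lemma above is
e = (a - b) / (a - c). Then (1 - e) * a + e * c = a - e * (a - c) = a - (a - b) = b,
and a \<ge> b \<ge> c together with a > c give 0 \<le> e \<le> 1. The proof obtains such an e more
abstractly, from the fact that b lies on the closed segment between a and c. \<close>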
lemma vNM_utilty_implies_continuity:
assumes fnt: "finite outcomes" and "outcomes \<noteq> {}"
assumes "\<exists>u. ordinal_utility (lotteries_on outcomes) \<R> (\<lambda>x. measure_pmf.expectation x u)"
shows "continuous_vnm (lotteries_on outcomes) \<R>"
proof (rule continuous_vnmI)
fix p q r
assume a1: "p \<in> \<P> outcomes"
assume a2: "q \<in> \<P> outcomes"
assume a3: "r \<in> \<P> outcomes "
assume a4: "p \<succeq>[\<R>] q \<and> q \<succeq>[\<R>] r"
then have g: "p \<succeq>[\<R>] r"
by (meson assms(3) ordinal_utility.util_imp_trans transD)
obtain u where
u: "ordinal_utility (lotteries_on outcomes) \<R> (\<lambda>x. measure_pmf.expectation x u)"
using assms by blast
have geqa: "measure_pmf.expectation p u \<ge> measure_pmf.expectation q u"
"measure_pmf.expectation q u \<ge> measure_pmf.expectation r u"
using a4 u by (meson ordinal_utility.ordinal_utility_left)+
have fnts: "finite p" "finite q" "finite r"
using a1 a2 a3 fnt infinite_super lotteries_on_def by auto+
consider "p \<succ>[\<R>] r" | "p \<approx>[\<R>] r"
using g by auto
then show "\<exists>\<alpha>\<in>{0..1}. mix_pmf \<alpha> p r \<approx>[\<R>] q"
proof (cases)
case 1
define a where a: "a = measure_pmf.expectation p u"
define b where b: "b = measure_pmf.expectation r u"
define c where c: "c = measure_pmf.expectation q u"
have "a > b"
using "1" a1 a2 a3 a b ordinal_utility.util_def_conf u by force
have "c \<le> a" "b \<le> c"
using geqa a b c by blast+
then obtain e ::real where
e: "e \<in> {0..1}" "(1-e) * a + e * b = c"
using exists_weight_for_equality[of b a c] \<open>b < a\<close> by blast
have *:"1-e \<in> {0..1}"
using e(1) by auto
hence "measure_pmf.expectation (mix_pmf (1-e) p r) u =
(1-e) * measure_pmf.expectation p u + e * measure_pmf.expectation r u"
using expected_value_mix_pmf_distrib[of p r "1-e" u] fnts by fastforce
also have "... = (1-e) * a + e * b"
using a b by auto
also have "... = c"
using c e by auto
finally have f: "measure_pmf.expectation (mix_pmf (1-e) p r) u = measure_pmf.expectation q u"
using c by blast
hence "mix_pmf (1-e) p r \<approx>[\<R>] q"
using expected_value_is_utility_function[of outcomes "mix_pmf (1-e) p r" q \<R> u] *
proof -
have "mix_pmf (1 - e) p r \<in> \<P> outcomes"
using \<open>1 - e \<in> {0..1}\<close> a1 a3 mix_in_lot by blast
then show ?thesis
using f a2 ordinal_utility.util_def u by fastforce
qed
then show ?thesis
using exists_weight_for_equality expected_value_mix_pmf_distrib * by blast
next
case 2
have "r \<approx>[\<R>] q"
by (meson "2" a4 assms(3) ordinal_utility.util_imp_trans transD)
then show ?thesis by force
qed
qed
theorem Von_Neumann_Morgenstern_Utility_Theorem:
assumes fnt: "finite outcomes" and "outcomes \<noteq> {}"
shows "rational_preference (lotteries_on outcomes) \<R> \<and>
independent_vnm (lotteries_on outcomes) \<R> \<and>
continuous_vnm (lotteries_on outcomes) \<R> \<longleftrightarrow>
(\<exists>u. ordinal_utility (lotteries_on outcomes) \<R> (\<lambda>x. measure_pmf.expectation x u))"
using vNM_utility_implies_independence[OF assms, of \<R>]
system_U_implies_vNM_utility[OF assms, of \<R>]
vNM_utilty_implies_continuity[OF assms, of \<R>]
ordinal_util_imp_rat_prefs[of "lotteries_on outcomes" \<R>] by auto
end
diff --git a/thys/Ordered_Resolution_Prover/Abstract_Substitution.thy b/thys/Ordered_Resolution_Prover/Abstract_Substitution.thy
--- a/thys/Ordered_Resolution_Prover/Abstract_Substitution.thy
+++ b/thys/Ordered_Resolution_Prover/Abstract_Substitution.thy
@@ -1,1152 +1,1270 @@
(* Title: Abstract Substitutions
Author: Dmitriy Traytel <traytel at inf.ethz.ch>, 2014
Author: Jasmin Blanchette <j.c.blanchette at vu.nl>, 2014, 2017
Author: Anders Schlichtkrull <andschl at dtu.dk>, 2016, 2017
Maintainer: Anders Schlichtkrull <andschl at dtu.dk>
*)
section \<open>Abstract Substitutions\<close>
theory Abstract_Substitution
imports Clausal_Logic Map2
begin
text \<open>
Atoms and substitutions are abstracted away behind some locales, to avoid having a direct dependency
on the IsaFoR library.
Conventions: \<open>'s\<close> substitutions, \<open>'a\<close> atoms.
\<close>
subsection \<open>Library\<close>
lemma f_Suc_decr_eventually_const:
fixes f :: "nat \<Rightarrow> nat"
assumes leq: "\<forall>i. f (Suc i) \<le> f i"
shows "\<exists>l. \<forall>l' \<ge> l. f l' = f (Suc l')"
proof (rule ccontr)
assume a: "\<nexists>l. \<forall>l' \<ge> l. f l' = f (Suc l')"
have "\<forall>i. \<exists>i'. i' > i \<and> f i' < f i"
proof
fix i
from a have "\<exists>l' \<ge> i. f l' \<noteq> f (Suc l')"
by auto
then obtain l' where
l'_p: "l' \<ge> i \<and> f l' \<noteq> f (Suc l')"
by metis
then have "f l' > f (Suc l')"
using leq le_eq_less_or_eq by auto
moreover have "f i \<ge> f l'"
using leq l'_p by (induction l' arbitrary: i) (blast intro: lift_Suc_antimono_le)+
ultimately show "\<exists>i' > i. f i' < f i"
using l'_p less_le_trans by blast
qed
then obtain g_sm :: "nat \<Rightarrow> nat" where
g_sm_p: "\<forall>i. g_sm i > i \<and> f (g_sm i) < f i"
by metis
define c :: "nat \<Rightarrow> nat" where
"\<And>n. c n = (g_sm ^^ n) 0"
have "f (c i) > f (c (Suc i))" for i
by (induction i) (auto simp: c_def g_sm_p)
then have "\<forall>i. (f \<circ> c) i > (f \<circ> c) (Suc i)"
by auto
then have "\<exists>fc :: nat \<Rightarrow> nat. \<forall>i. fc i > fc (Suc i)"
by metis
then show False
using wf_less_than by (simp add: wf_iff_no_infinite_down_chain)
qed
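text \<open> Reading note (informal): the lemma above says that a weakly decreasing function
f :: nat \<Rightarrow> nat is eventually constant. The proof argues by contradiction: if f never
stabilised, one could repeatedly pick a later index with a strictly smaller value and
obtain an infinite strictly decreasing chain of natural numbers, contradicting the
well-foundedness of < on nat. \<close>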
subsection \<open>Substitution Operators\<close>
locale substitution_ops =
fixes
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: 's and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's"
begin
abbreviation subst_atm_abbrev :: "'a \<Rightarrow> 's \<Rightarrow> 'a" (infixl "\<cdot>a" 67) where
"subst_atm_abbrev \<equiv> subst_atm"
abbreviation comp_subst_abbrev :: "'s \<Rightarrow> 's \<Rightarrow> 's" (infixl "\<odot>" 67) where
"comp_subst_abbrev \<equiv> comp_subst"
definition comp_substs :: "'s list \<Rightarrow> 's list \<Rightarrow> 's list" (infixl "\<odot>s" 67) where
"\<sigma>s \<odot>s \<tau>s = map2 comp_subst \<sigma>s \<tau>s"
definition subst_atms :: "'a set \<Rightarrow> 's \<Rightarrow> 'a set" (infixl "\<cdot>as" 67) where
"AA \<cdot>as \<sigma> = (\<lambda>A. A \<cdot>a \<sigma>) ` AA"
definition subst_atmss :: "'a set set \<Rightarrow> 's \<Rightarrow> 'a set set" (infixl "\<cdot>ass" 67) where
"AAA \<cdot>ass \<sigma> = (\<lambda>AA. AA \<cdot>as \<sigma>) ` AAA"
definition subst_atm_list :: "'a list \<Rightarrow> 's \<Rightarrow> 'a list" (infixl "\<cdot>al" 67) where
"As \<cdot>al \<sigma> = map (\<lambda>A. A \<cdot>a \<sigma>) As"
definition subst_atm_mset :: "'a multiset \<Rightarrow> 's \<Rightarrow> 'a multiset" (infixl "\<cdot>am" 67) where
"AA \<cdot>am \<sigma> = image_mset (\<lambda>A. A \<cdot>a \<sigma>) AA"
definition
subst_atm_mset_list :: "'a multiset list \<Rightarrow> 's \<Rightarrow> 'a multiset list" (infixl "\<cdot>aml" 67)
where
"AAA \<cdot>aml \<sigma> = map (\<lambda>AA. AA \<cdot>am \<sigma>) AAA"
definition
subst_atm_mset_lists :: "'a multiset list \<Rightarrow> 's list \<Rightarrow> 'a multiset list" (infixl "\<cdot>\<cdot>aml" 67)
where
"AAs \<cdot>\<cdot>aml \<sigma>s = map2 (\<cdot>am) AAs \<sigma>s"
definition subst_lit :: "'a literal \<Rightarrow> 's \<Rightarrow> 'a literal" (infixl "\<cdot>l" 67) where
"L \<cdot>l \<sigma> = map_literal (\<lambda>A. A \<cdot>a \<sigma>) L"
lemma atm_of_subst_lit[simp]: "atm_of (L \<cdot>l \<sigma>) = atm_of L \<cdot>a \<sigma>"
unfolding subst_lit_def by (cases L) simp+
definition subst_cls :: "'a clause \<Rightarrow> 's \<Rightarrow> 'a clause" (infixl "\<cdot>" 67) where
"AA \<cdot> \<sigma> = image_mset (\<lambda>A. A \<cdot>l \<sigma>) AA"
definition subst_clss :: "'a clause set \<Rightarrow> 's \<Rightarrow> 'a clause set" (infixl "\<cdot>cs" 67) where
"AA \<cdot>cs \<sigma> = (\<lambda>A. A \<cdot> \<sigma>) ` AA"
definition subst_cls_list :: "'a clause list \<Rightarrow> 's \<Rightarrow> 'a clause list" (infixl "\<cdot>cl" 67) where
"Cs \<cdot>cl \<sigma> = map (\<lambda>A. A \<cdot> \<sigma>) Cs"
definition subst_cls_lists :: "'a clause list \<Rightarrow> 's list \<Rightarrow> 'a clause list" (infixl "\<cdot>\<cdot>cl" 67) where
"Cs \<cdot>\<cdot>cl \<sigma>s = map2 (\<cdot>) Cs \<sigma>s"
definition subst_cls_mset :: "'a clause multiset \<Rightarrow> 's \<Rightarrow> 'a clause multiset" (infixl "\<cdot>cm" 67) where
"CC \<cdot>cm \<sigma> = image_mset (\<lambda>A. A \<cdot> \<sigma>) CC"
lemma subst_cls_add_mset[simp]: "add_mset L C \<cdot> \<sigma> = add_mset (L \<cdot>l \<sigma>) (C \<cdot> \<sigma>)"
unfolding subst_cls_def by simp
lemma subst_cls_mset_add_mset[simp]: "add_mset C CC \<cdot>cm \<sigma> = add_mset (C \<cdot> \<sigma>) (CC \<cdot>cm \<sigma>)"
unfolding subst_cls_mset_def by simp
definition generalizes_atm :: "'a \<Rightarrow> 'a \<Rightarrow> bool" where
"generalizes_atm A B \<longleftrightarrow> (\<exists>\<sigma>. A \<cdot>a \<sigma> = B)"
definition strictly_generalizes_atm :: "'a \<Rightarrow> 'a \<Rightarrow> bool" where
"strictly_generalizes_atm A B \<longleftrightarrow> generalizes_atm A B \<and> \<not> generalizes_atm B A"
definition generalizes_lit :: "'a literal \<Rightarrow> 'a literal \<Rightarrow> bool" where
"generalizes_lit L M \<longleftrightarrow> (\<exists>\<sigma>. L \<cdot>l \<sigma> = M)"
definition strictly_generalizes_lit :: "'a literal \<Rightarrow> 'a literal \<Rightarrow> bool" where
"strictly_generalizes_lit L M \<longleftrightarrow> generalizes_lit L M \<and> \<not> generalizes_lit M L"
-definition generalizes_cls :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
- "generalizes_cls C D \<longleftrightarrow> (\<exists>\<sigma>. C \<cdot> \<sigma> = D)"
+definition generalizes :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
+ "generalizes C D \<longleftrightarrow> (\<exists>\<sigma>. C \<cdot> \<sigma> = D)"
-definition strictly_generalizes_cls :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
- "strictly_generalizes_cls C D \<longleftrightarrow> generalizes_cls C D \<and> \<not> generalizes_cls D C"
+definition strictly_generalizes :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
+ "strictly_generalizes C D \<longleftrightarrow> generalizes C D \<and> \<not> generalizes D C"
definition subsumes :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
"subsumes C D \<longleftrightarrow> (\<exists>\<sigma>. C \<cdot> \<sigma> \<subseteq># D)"
definition strictly_subsumes :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
"strictly_subsumes C D \<longleftrightarrow> subsumes C D \<and> \<not> subsumes D C"
definition variants :: "'a clause \<Rightarrow> 'a clause \<Rightarrow> bool" where
- "variants C D \<longleftrightarrow> generalizes_cls C D \<and> generalizes_cls D C"
+ "variants C D \<longleftrightarrow> generalizes C D \<and> generalizes D C"
definition is_renaming :: "'s \<Rightarrow> bool" where
"is_renaming \<sigma> \<longleftrightarrow> (\<exists>\<tau>. \<sigma> \<odot> \<tau> = id_subst)"
definition is_renaming_list :: "'s list \<Rightarrow> bool" where
"is_renaming_list \<sigma>s \<longleftrightarrow> (\<forall>\<sigma> \<in> set \<sigma>s. is_renaming \<sigma>)"
definition inv_renaming :: "'s \<Rightarrow> 's" where
"inv_renaming \<sigma> = (SOME \<tau>. \<sigma> \<odot> \<tau> = id_subst)"
definition is_ground_atm :: "'a \<Rightarrow> bool" where
"is_ground_atm A \<longleftrightarrow> (\<forall>\<sigma>. A = A \<cdot>a \<sigma>)"
definition is_ground_atms :: "'a set \<Rightarrow> bool" where
"is_ground_atms AA = (\<forall>A \<in> AA. is_ground_atm A)"
definition is_ground_atm_list :: "'a list \<Rightarrow> bool" where
"is_ground_atm_list As \<longleftrightarrow> (\<forall>A \<in> set As. is_ground_atm A)"
definition is_ground_atm_mset :: "'a multiset \<Rightarrow> bool" where
"is_ground_atm_mset AA \<longleftrightarrow> (\<forall>A. A \<in># AA \<longrightarrow> is_ground_atm A)"
definition is_ground_lit :: "'a literal \<Rightarrow> bool" where
"is_ground_lit L \<longleftrightarrow> is_ground_atm (atm_of L)"
definition is_ground_cls :: "'a clause \<Rightarrow> bool" where
"is_ground_cls C \<longleftrightarrow> (\<forall>L. L \<in># C \<longrightarrow> is_ground_lit L)"
definition is_ground_clss :: "'a clause set \<Rightarrow> bool" where
"is_ground_clss CC \<longleftrightarrow> (\<forall>C \<in> CC. is_ground_cls C)"
definition is_ground_cls_list :: "'a clause list \<Rightarrow> bool" where
"is_ground_cls_list CC \<longleftrightarrow> (\<forall>C \<in> set CC. is_ground_cls C)"
definition is_ground_subst :: "'s \<Rightarrow> bool" where
"is_ground_subst \<sigma> \<longleftrightarrow> (\<forall>A. is_ground_atm (A \<cdot>a \<sigma>))"
definition is_ground_subst_list :: "'s list \<Rightarrow> bool" where
"is_ground_subst_list \<sigma>s \<longleftrightarrow> (\<forall>\<sigma> \<in> set \<sigma>s. is_ground_subst \<sigma>)"
definition grounding_of_cls :: "'a clause \<Rightarrow> 'a clause set" where
- "grounding_of_cls C = {C \<cdot> \<sigma> | \<sigma>. is_ground_subst \<sigma>}"
+ "grounding_of_cls C = {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>}"
definition grounding_of_clss :: "'a clause set \<Rightarrow> 'a clause set" where
"grounding_of_clss CC = (\<Union>C \<in> CC. grounding_of_cls C)"
definition is_unifier :: "'s \<Rightarrow> 'a set \<Rightarrow> bool" where
"is_unifier \<sigma> AA \<longleftrightarrow> card (AA \<cdot>as \<sigma>) \<le> 1"
definition is_unifiers :: "'s \<Rightarrow> 'a set set \<Rightarrow> bool" where
"is_unifiers \<sigma> AAA \<longleftrightarrow> (\<forall>AA \<in> AAA. is_unifier \<sigma> AA)"
definition is_mgu :: "'s \<Rightarrow> 'a set set \<Rightarrow> bool" where
"is_mgu \<sigma> AAA \<longleftrightarrow> is_unifiers \<sigma> AAA \<and> (\<forall>\<tau>. is_unifiers \<tau> AAA \<longrightarrow> (\<exists>\<gamma>. \<tau> = \<sigma> \<odot> \<gamma>))"
definition var_disjoint :: "'a clause list \<Rightarrow> bool" where
"var_disjoint Cs \<longleftrightarrow>
(\<forall>\<sigma>s. length \<sigma>s = length Cs \<longrightarrow> (\<exists>\<tau>. \<forall>i < length Cs. \<forall>S. S \<subseteq># Cs ! i \<longrightarrow> S \<cdot> \<sigma>s ! i = S \<cdot> \<tau>))"
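text \<open> Reading note (informal): var_disjoint Cs is a semantic rendering of the clauses in
Cs sharing no variables: whatever list \<sigma>s of per-clause substitutions one chooses, a single
substitution \<tau> can mimic all of them simultaneously on the respective clauses (and on all
of their subclauses). \<close>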
end
subsection \<open>Substitution Lemmas\<close>
locale substitution = substitution_ops subst_atm id_subst comp_subst
for
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: 's and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" +
fixes
renamings_apart :: "'a clause list \<Rightarrow> 's list" and
atm_of_atms :: "'a list \<Rightarrow> 'a"
assumes
subst_atm_id_subst[simp]: "A \<cdot>a id_subst = A" and
subst_atm_comp_subst[simp]: "A \<cdot>a (\<sigma> \<odot> \<tau>) = (A \<cdot>a \<sigma>) \<cdot>a \<tau>" and
subst_ext: "(\<And>A. A \<cdot>a \<sigma> = A \<cdot>a \<tau>) \<Longrightarrow> \<sigma> = \<tau>" and
make_ground_subst: "is_ground_cls (C \<cdot> \<sigma>) \<Longrightarrow> \<exists>\<tau>. is_ground_subst \<tau> \<and>C \<cdot> \<tau> = C \<cdot> \<sigma>" and
wf_strictly_generalizes_atm: "wfP strictly_generalizes_atm" and
renamings_apart_length: "length (renamings_apart Cs) = length Cs" and
renamings_apart_renaming: "\<rho> \<in> set (renamings_apart Cs) \<Longrightarrow> is_renaming \<rho>" and
renamings_apart_var_disjoint: "var_disjoint (Cs \<cdot>\<cdot>cl (renamings_apart Cs))" and
atm_of_atms_subst:
"\<And>As Bs. atm_of_atms As \<cdot>a \<sigma> = atm_of_atms Bs \<longleftrightarrow> map (\<lambda>A. A \<cdot>a \<sigma>) As = Bs"
begin
lemma subst_ext_iff: "\<sigma> = \<tau> \<longleftrightarrow> (\<forall>A. A \<cdot>a \<sigma> = A \<cdot>a \<tau>)"
by (blast intro: subst_ext)
subsubsection \<open>Identity Substitution\<close>
lemma id_subst_comp_subst[simp]: "id_subst \<odot> \<sigma> = \<sigma>"
by (rule subst_ext) simp
lemma comp_subst_id_subst[simp]: "\<sigma> \<odot> id_subst = \<sigma>"
by (rule subst_ext) simp
lemma id_subst_comp_substs[simp]: "replicate (length \<sigma>s) id_subst \<odot>s \<sigma>s = \<sigma>s"
using comp_substs_def by (induction \<sigma>s) auto
lemma comp_substs_id_subst[simp]: "\<sigma>s \<odot>s replicate (length \<sigma>s) id_subst = \<sigma>s"
using comp_substs_def by (induction \<sigma>s) auto
lemma subst_atms_id_subst[simp]: "AA \<cdot>as id_subst = AA"
unfolding subst_atms_def by simp
lemma subst_atmss_id_subst[simp]: "AAA \<cdot>ass id_subst = AAA"
unfolding subst_atmss_def by simp
lemma subst_atm_list_id_subst[simp]: "As \<cdot>al id_subst = As"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_id_subst[simp]: "AA \<cdot>am id_subst = AA"
unfolding subst_atm_mset_def by simp
lemma subst_atm_mset_list_id_subst[simp]: "AAs \<cdot>aml id_subst = AAs"
unfolding subst_atm_mset_list_def by simp
lemma subst_atm_mset_lists_id_subst[simp]: "AAs \<cdot>\<cdot>aml replicate (length AAs) id_subst = AAs"
unfolding subst_atm_mset_lists_def by (induct AAs) auto
lemma subst_lit_id_subst[simp]: "L \<cdot>l id_subst = L"
unfolding subst_lit_def by (simp add: literal.map_ident)
lemma subst_cls_id_subst[simp]: "C \<cdot> id_subst = C"
unfolding subst_cls_def by simp
lemma subst_clss_id_subst[simp]: "CC \<cdot>cs id_subst = CC"
unfolding subst_clss_def by simp
lemma subst_cls_list_id_subst[simp]: "Cs \<cdot>cl id_subst = Cs"
unfolding subst_cls_list_def by simp
lemma subst_cls_lists_id_subst[simp]: "Cs \<cdot>\<cdot>cl replicate (length Cs) id_subst = Cs"
unfolding subst_cls_lists_def by (induct Cs) auto
lemma subst_cls_mset_id_subst[simp]: "CC \<cdot>cm id_subst = CC"
unfolding subst_cls_mset_def by simp
subsubsection \<open>Associativity of Composition\<close>
lemma comp_subst_assoc[simp]: "\<sigma> \<odot> (\<tau> \<odot> \<gamma>) = \<sigma> \<odot> \<tau> \<odot> \<gamma>"
by (rule subst_ext) simp
subsubsection \<open>Compatibility of Substitution and Composition\<close>
lemma subst_atms_comp_subst[simp]: "AA \<cdot>as (\<tau> \<odot> \<sigma>) = AA \<cdot>as \<tau> \<cdot>as \<sigma>"
unfolding subst_atms_def by auto
lemma subst_atmss_comp_subst[simp]: "AAA \<cdot>ass (\<tau> \<odot> \<sigma>) = AAA \<cdot>ass \<tau> \<cdot>ass \<sigma>"
unfolding subst_atmss_def by auto
lemma subst_atm_list_comp_subst[simp]: "As \<cdot>al (\<tau> \<odot> \<sigma>) = As \<cdot>al \<tau> \<cdot>al \<sigma>"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_comp_subst[simp]: "AA \<cdot>am (\<tau> \<odot> \<sigma>) = AA \<cdot>am \<tau> \<cdot>am \<sigma>"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list_comp_subst[simp]: "AAs \<cdot>aml (\<tau> \<odot> \<sigma>) = (AAs \<cdot>aml \<tau>) \<cdot>aml \<sigma>"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_comp_substs[simp]: "AAs \<cdot>\<cdot>aml (\<tau>s \<odot>s \<sigma>s) = AAs \<cdot>\<cdot>aml \<tau>s \<cdot>\<cdot>aml \<sigma>s"
unfolding subst_atm_mset_lists_def comp_substs_def map_zip_map map_zip_map2 map_zip_assoc
by (simp add: split_def)
lemma subst_lit_comp_subst[simp]: "L \<cdot>l (\<tau> \<odot> \<sigma>) = L \<cdot>l \<tau> \<cdot>l \<sigma>"
unfolding subst_lit_def by (auto simp: literal.map_comp o_def)
lemma subst_cls_comp_subst[simp]: "C \<cdot> (\<tau> \<odot> \<sigma>) = C \<cdot> \<tau> \<cdot> \<sigma>"
unfolding subst_cls_def by auto
lemma subst_clsscomp_subst[simp]: "CC \<cdot>cs (\<tau> \<odot> \<sigma>) = CC \<cdot>cs \<tau> \<cdot>cs \<sigma>"
unfolding subst_clss_def by auto
lemma subst_cls_list_comp_subst[simp]: "Cs \<cdot>cl (\<tau> \<odot> \<sigma>) = Cs \<cdot>cl \<tau> \<cdot>cl \<sigma>"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_comp_substs[simp]: "Cs \<cdot>\<cdot>cl (\<tau>s \<odot>s \<sigma>s) = Cs \<cdot>\<cdot>cl \<tau>s \<cdot>\<cdot>cl \<sigma>s"
unfolding subst_cls_lists_def comp_substs_def map_zip_map map_zip_map2 map_zip_assoc
by (simp add: split_def)
lemma subst_cls_mset_comp_subst[simp]: "CC \<cdot>cm (\<tau> \<odot> \<sigma>) = CC \<cdot>cm \<tau> \<cdot>cm \<sigma>"
unfolding subst_cls_mset_def by auto
subsubsection \<open>``Commutativity'' of Membership and Substitution\<close>
lemma Melem_subst_atm_mset[simp]: "A \<in># AA \<cdot>am \<sigma> \<longleftrightarrow> (\<exists>B. B \<in># AA \<and> A = B \<cdot>a \<sigma>)"
unfolding subst_atm_mset_def by auto
lemma Melem_subst_cls[simp]: "L \<in># C \<cdot> \<sigma> \<longleftrightarrow> (\<exists>M. M \<in># C \<and> L = M \<cdot>l \<sigma>)"
unfolding subst_cls_def by auto
lemma Melem_subst_cls_mset[simp]: "AA \<in># CC \<cdot>cm \<sigma> \<longleftrightarrow> (\<exists>BB. BB \<in># CC \<and> AA = BB \<cdot> \<sigma>)"
unfolding subst_cls_mset_def by auto
subsubsection \<open>Signs and Substitutions\<close>
lemma subst_lit_is_neg[simp]: "is_neg (L \<cdot>l \<sigma>) = is_neg L"
unfolding subst_lit_def by auto
lemma subst_lit_is_pos[simp]: "is_pos (L \<cdot>l \<sigma>) = is_pos L"
unfolding subst_lit_def by auto
lemma subst_minus[simp]: "(- L) \<cdot>l \<mu> = - (L \<cdot>l \<mu>)"
by (simp add: literal.map_sel subst_lit_def uminus_literal_def)
subsubsection \<open>Substitution on Literal(s)\<close>
lemma eql_neg_lit_eql_atm[simp]: "(Neg A' \<cdot>l \<eta>) = Neg A \<longleftrightarrow> A' \<cdot>a \<eta> = A"
by (simp add: subst_lit_def)
lemma eql_pos_lit_eql_atm[simp]: "(Pos A' \<cdot>l \<eta>) = Pos A \<longleftrightarrow> A' \<cdot>a \<eta> = A"
by (simp add: subst_lit_def)
lemma subst_cls_negs[simp]: "(negs AA) \<cdot> \<sigma> = negs (AA \<cdot>am \<sigma>)"
unfolding subst_cls_def subst_lit_def subst_atm_mset_def by auto
lemma subst_cls_poss[simp]: "(poss AA) \<cdot> \<sigma> = poss (AA \<cdot>am \<sigma>)"
unfolding subst_cls_def subst_lit_def subst_atm_mset_def by auto
lemma atms_of_subst_atms: "atms_of C \<cdot>as \<sigma> = atms_of (C \<cdot> \<sigma>)"
proof -
have "atms_of (C \<cdot> \<sigma>) = set_mset (image_mset atm_of (image_mset (map_literal (\<lambda>A. A \<cdot>a \<sigma>)) C))"
unfolding subst_cls_def subst_atms_def subst_lit_def atms_of_def by auto
also have "... = set_mset (image_mset (\<lambda>A. A \<cdot>a \<sigma>) (image_mset atm_of C))"
by simp (meson literal.map_sel)
finally show "atms_of C \<cdot>as \<sigma> = atms_of (C \<cdot> \<sigma>)"
unfolding subst_atms_def atms_of_def by auto
qed
lemma in_image_Neg_is_neg[simp]: "L \<cdot>l \<sigma> \<in> Neg ` AA \<Longrightarrow> is_neg L"
by (metis bex_imageD literal.disc(2) literal.map_disc_iff subst_lit_def)
lemma subst_lit_in_negs_subst_is_neg: "L \<cdot>l \<sigma> \<in># (negs AA) \<cdot> \<tau> \<Longrightarrow> is_neg L"
by simp
lemma subst_lit_in_negs_is_neg: "L \<cdot>l \<sigma> \<in># negs AA \<Longrightarrow> is_neg L"
by simp
subsubsection \<open>Substitution on Empty\<close>
lemma subst_atms_empty[simp]: "{} \<cdot>as \<sigma> = {}"
unfolding subst_atms_def by auto
lemma subst_atmss_empty[simp]: "{} \<cdot>ass \<sigma> = {}"
unfolding subst_atmss_def by auto
lemma comp_substs_empty_iff[simp]: "\<sigma>s \<odot>s \<eta>s = [] \<longleftrightarrow> \<sigma>s = [] \<or> \<eta>s = []"
using comp_substs_def map2_empty_iff by auto
lemma subst_atm_list_empty[simp]: "[] \<cdot>al \<sigma> = []"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_empty[simp]: "{#} \<cdot>am \<sigma> = {#}"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list_empty[simp]: "[] \<cdot>aml \<sigma> = []"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_empty[simp]: "[] \<cdot>\<cdot>aml \<sigma>s = []"
unfolding subst_atm_mset_lists_def by auto
lemma subst_cls_empty[simp]: "{#} \<cdot> \<sigma> = {#}"
unfolding subst_cls_def by auto
lemma subst_clss_empty[simp]: "{} \<cdot>cs \<sigma> = {}"
unfolding subst_clss_def by auto
lemma subst_cls_list_empty[simp]: "[] \<cdot>cl \<sigma> = []"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_empty[simp]: "[] \<cdot>\<cdot>cl \<sigma>s = []"
unfolding subst_cls_lists_def by auto
lemma subst_scls_mset_empty[simp]: "{#} \<cdot>cm \<sigma> = {#}"
unfolding subst_cls_mset_def by auto
lemma subst_atms_empty_iff[simp]: "AA \<cdot>as \<eta> = {} \<longleftrightarrow> AA = {}"
unfolding subst_atms_def by auto
lemma subst_atmss_empty_iff[simp]: "AAA \<cdot>ass \<eta> = {} \<longleftrightarrow> AAA = {}"
unfolding subst_atmss_def by auto
lemma subst_atm_list_empty_iff[simp]: "As \<cdot>al \<eta> = [] \<longleftrightarrow> As = []"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_empty_iff[simp]: "AA \<cdot>am \<eta> = {#} \<longleftrightarrow> AA = {#}"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list_empty_iff[simp]: "AAs \<cdot>aml \<eta> = [] \<longleftrightarrow> AAs = []"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_empty_iff[simp]: "AAs \<cdot>\<cdot>aml \<eta>s = [] \<longleftrightarrow> (AAs = [] \<or> \<eta>s = [])"
using map2_empty_iff subst_atm_mset_lists_def by auto
lemma subst_cls_empty_iff[simp]: "C \<cdot> \<eta> = {#} \<longleftrightarrow> C = {#}"
unfolding subst_cls_def by auto
lemma subst_clss_empty_iff[simp]: "CC \<cdot>cs \<eta> = {} \<longleftrightarrow> CC = {}"
unfolding subst_clss_def by auto
lemma subst_cls_list_empty_iff[simp]: "Cs \<cdot>cl \<eta> = [] \<longleftrightarrow> Cs = []"
unfolding subst_cls_list_def by auto
-lemma subst_cls_lists_empty_iff[simp]: "Cs \<cdot>\<cdot>cl \<eta>s = [] \<longleftrightarrow> (Cs = [] \<or> \<eta>s = [])"
+lemma subst_cls_lists_empty_iff[simp]: "Cs \<cdot>\<cdot>cl \<eta>s = [] \<longleftrightarrow> Cs = [] \<or> \<eta>s = []"
using map2_empty_iff subst_cls_lists_def by auto
lemma subst_cls_mset_empty_iff[simp]: "CC \<cdot>cm \<eta> = {#} \<longleftrightarrow> CC = {#}"
unfolding subst_cls_mset_def by auto
subsubsection \<open>Substitution on a Union\<close>
lemma subst_atms_union[simp]: "(AA \<union> BB) \<cdot>as \<sigma> = AA \<cdot>as \<sigma> \<union> BB \<cdot>as \<sigma>"
unfolding subst_atms_def by auto
lemma subst_atmss_union[simp]: "(AAA \<union> BBB) \<cdot>ass \<sigma> = AAA \<cdot>ass \<sigma> \<union> BBB \<cdot>ass \<sigma>"
unfolding subst_atmss_def by auto
lemma subst_atm_list_append[simp]: "(As @ Bs) \<cdot>al \<sigma> = As \<cdot>al \<sigma> @ Bs \<cdot>al \<sigma>"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_union[simp]: "(AA + BB) \<cdot>am \<sigma> = AA \<cdot>am \<sigma> + BB \<cdot>am \<sigma>"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list_append[simp]: "(AAs @ BBs) \<cdot>aml \<sigma> = AAs \<cdot>aml \<sigma> @ BBs \<cdot>aml \<sigma>"
unfolding subst_atm_mset_list_def by auto
lemma subst_cls_union[simp]: "(C + D) \<cdot> \<sigma> = C \<cdot> \<sigma> + D \<cdot> \<sigma>"
unfolding subst_cls_def by auto
lemma subst_clss_union[simp]: "(CC \<union> DD) \<cdot>cs \<sigma> = CC \<cdot>cs \<sigma> \<union> DD \<cdot>cs \<sigma>"
unfolding subst_clss_def by auto
lemma subst_cls_list_append[simp]: "(Cs @ Ds) \<cdot>cl \<sigma> = Cs \<cdot>cl \<sigma> @ Ds \<cdot>cl \<sigma>"
unfolding subst_cls_list_def by auto
+lemma subst_cls_lists_append[simp]:
+ "length Cs = length \<sigma>s \<Longrightarrow> length Cs' = length \<sigma>s' \<Longrightarrow>
+ (Cs @ Cs') \<cdot>\<cdot>cl (\<sigma>s @ \<sigma>s') = Cs \<cdot>\<cdot>cl \<sigma>s @ Cs' \<cdot>\<cdot>cl \<sigma>s'"
+ unfolding subst_cls_lists_def by auto
+
lemma subst_cls_mset_union[simp]: "(CC + DD) \<cdot>cm \<sigma> = CC \<cdot>cm \<sigma> + DD \<cdot>cm \<sigma>"
unfolding subst_cls_mset_def by auto
subsubsection \<open>Substitution on a Singleton\<close>
lemma subst_atms_single[simp]: "{A} \<cdot>as \<sigma> = {A \<cdot>a \<sigma>}"
unfolding subst_atms_def by auto
lemma subst_atmss_single[simp]: "{AA} \<cdot>ass \<sigma> = {AA \<cdot>as \<sigma>}"
unfolding subst_atmss_def by auto
lemma subst_atm_list_single[simp]: "[A] \<cdot>al \<sigma> = [A \<cdot>a \<sigma>]"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_single[simp]: "{#A#} \<cdot>am \<sigma> = {#A \<cdot>a \<sigma>#}"
unfolding subst_atm_mset_def by auto
lemma subst_atm_mset_list[simp]: "[AA] \<cdot>aml \<sigma> = [AA \<cdot>am \<sigma>]"
unfolding subst_atm_mset_list_def by auto
lemma subst_cls_single[simp]: "{#L#} \<cdot> \<sigma> = {#L \<cdot>l \<sigma>#}"
by simp
lemma subst_clss_single[simp]: "{C} \<cdot>cs \<sigma> = {C \<cdot> \<sigma>}"
unfolding subst_clss_def by auto
lemma subst_cls_list_single[simp]: "[C] \<cdot>cl \<sigma> = [C \<cdot> \<sigma>]"
unfolding subst_cls_list_def by auto
+lemma subst_cls_lists_single[simp]: "[C] \<cdot>\<cdot>cl [\<sigma>] = [C \<cdot> \<sigma>]"
+ unfolding subst_cls_lists_def by auto
+
lemma subst_cls_mset_single[simp]: "{#C#} \<cdot>cm \<sigma> = {#C \<cdot> \<sigma>#}"
by simp
subsubsection \<open>Substitution on @{term Cons}\<close>
lemma subst_atm_list_Cons[simp]: "(A # As) \<cdot>al \<sigma> = A \<cdot>a \<sigma> # As \<cdot>al \<sigma>"
unfolding subst_atm_list_def by auto
lemma subst_atm_mset_list_Cons[simp]: "(A # As) \<cdot>aml \<sigma> = A \<cdot>am \<sigma> # As \<cdot>aml \<sigma>"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_Cons[simp]: "(C # Cs) \<cdot>\<cdot>aml (\<sigma> # \<sigma>s) = C \<cdot>am \<sigma> # Cs \<cdot>\<cdot>aml \<sigma>s"
unfolding subst_atm_mset_lists_def by auto
lemma subst_cls_list_Cons[simp]: "(C # Cs) \<cdot>cl \<sigma> = C \<cdot> \<sigma> # Cs \<cdot>cl \<sigma>"
unfolding subst_cls_list_def by auto
lemma subst_cls_lists_Cons[simp]: "(C # Cs) \<cdot>\<cdot>cl (\<sigma> # \<sigma>s) = C \<cdot> \<sigma> # Cs \<cdot>\<cdot>cl \<sigma>s"
unfolding subst_cls_lists_def by auto
subsubsection \<open>Substitution on @{term tl}\<close>
-lemma subst_atm_list_tl[simp]: "tl (As \<cdot>al \<eta>) = tl As \<cdot>al \<eta>"
- by (induction As) auto
+lemma subst_atm_list_tl[simp]: "tl (As \<cdot>al \<sigma>) = tl As \<cdot>al \<sigma>"
+ by (cases As) auto
-lemma subst_atm_mset_list_tl[simp]: "tl (AAs \<cdot>aml \<eta>) = tl AAs \<cdot>aml \<eta>"
- by (induction AAs) auto
+lemma subst_atm_mset_list_tl[simp]: "tl (AAs \<cdot>aml \<sigma>) = tl AAs \<cdot>aml \<sigma>"
+ by (cases AAs) auto
+
+lemma subst_cls_list_tl[simp]: "tl (Cs \<cdot>cl \<sigma>) = tl Cs \<cdot>cl \<sigma>"
+ by (cases Cs) auto
+
+lemma subst_cls_lists_tl[simp]: "length Cs = length \<sigma>s \<Longrightarrow> tl (Cs \<cdot>\<cdot>cl \<sigma>s) = tl Cs \<cdot>\<cdot>cl tl \<sigma>s"
+ by (cases Cs; cases \<sigma>s) auto
subsubsection \<open>Substitution on @{term nth}\<close>
lemma comp_substs_nth[simp]:
"length \<tau>s = length \<sigma>s \<Longrightarrow> i < length \<tau>s \<Longrightarrow> (\<tau>s \<odot>s \<sigma>s) ! i = (\<tau>s ! i) \<odot> (\<sigma>s ! i)"
by (simp add: comp_substs_def)
lemma subst_atm_list_nth[simp]: "i < length As \<Longrightarrow> (As \<cdot>al \<tau>) ! i = As ! i \<cdot>a \<tau>"
unfolding subst_atm_list_def using less_Suc_eq_0_disj nth_map by force
lemma subst_atm_mset_list_nth[simp]: "i < length AAs \<Longrightarrow> (AAs \<cdot>aml \<eta>) ! i = (AAs ! i) \<cdot>am \<eta>"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_nth[simp]:
"length AAs = length \<sigma>s \<Longrightarrow> i < length AAs \<Longrightarrow> (AAs \<cdot>\<cdot>aml \<sigma>s) ! i = (AAs ! i) \<cdot>am (\<sigma>s ! i)"
unfolding subst_atm_mset_lists_def by auto
lemma subst_cls_list_nth[simp]: "i < length Cs \<Longrightarrow> (Cs \<cdot>cl \<tau>) ! i = (Cs ! i) \<cdot> \<tau>"
unfolding subst_cls_list_def using less_Suc_eq_0_disj nth_map by (induction Cs) auto
lemma subst_cls_lists_nth[simp]:
"length Cs = length \<sigma>s \<Longrightarrow> i < length Cs \<Longrightarrow> (Cs \<cdot>\<cdot>cl \<sigma>s) ! i = (Cs ! i) \<cdot> (\<sigma>s ! i)"
unfolding subst_cls_lists_def by auto
subsubsection \<open>Substitution on Various Other Functions\<close>
lemma subst_clss_image[simp]: "image f X \<cdot>cs \<sigma> = {f x \<cdot> \<sigma> | x. x \<in> X}"
unfolding subst_clss_def by auto
lemma subst_cls_mset_image_mset[simp]: "image_mset f X \<cdot>cm \<sigma> = {# f x \<cdot> \<sigma>. x \<in># X #}"
unfolding subst_cls_mset_def by auto
lemma mset_subst_atm_list_subst_atm_mset[simp]: "mset (As \<cdot>al \<sigma>) = mset (As) \<cdot>am \<sigma>"
unfolding subst_atm_list_def subst_atm_mset_def by auto
lemma mset_subst_cls_list_subst_cls_mset: "mset (Cs \<cdot>cl \<sigma>) = (mset Cs) \<cdot>cm \<sigma>"
unfolding subst_cls_mset_def subst_cls_list_def by auto
lemma sum_list_subst_cls_list_subst_cls[simp]: "sum_list (Cs \<cdot>cl \<eta>) = sum_list Cs \<cdot> \<eta>"
unfolding subst_cls_list_def by (induction Cs) auto
lemma set_mset_subst_cls_mset_subst_clss: "set_mset (CC \<cdot>cm \<mu>) = (set_mset CC) \<cdot>cs \<mu>"
by (simp add: subst_cls_mset_def subst_clss_def)
lemma Neg_Melem_subst_atm_subst_cls[simp]: "Neg A \<in># C \<Longrightarrow> Neg (A \<cdot>a \<sigma>) \<in># C \<cdot> \<sigma> "
by (metis Melem_subst_cls eql_neg_lit_eql_atm)
lemma Pos_Melem_subst_atm_subst_cls[simp]: "Pos A \<in># C \<Longrightarrow> Pos (A \<cdot>a \<sigma>) \<in># C \<cdot> \<sigma> "
by (metis Melem_subst_cls eql_pos_lit_eql_atm)
lemma in_atms_of_subst[simp]: "B \<in> atms_of C \<Longrightarrow> B \<cdot>a \<sigma> \<in> atms_of (C \<cdot> \<sigma>)"
by (metis atms_of_subst_atms image_iff subst_atms_def)
subsubsection \<open>Renamings\<close>
lemma is_renaming_id_subst[simp]: "is_renaming id_subst"
unfolding is_renaming_def by simp
lemma is_renamingD: "is_renaming \<sigma> \<Longrightarrow> (\<forall>A1 A2. A1 \<cdot>a \<sigma> = A2 \<cdot>a \<sigma> \<longleftrightarrow> A1 = A2)"
by (metis is_renaming_def subst_atm_comp_subst subst_atm_id_subst)
lemma inv_renaming_cancel_r[simp]: "is_renaming r \<Longrightarrow> r \<odot> inv_renaming r = id_subst"
unfolding inv_renaming_def is_renaming_def by (metis (mono_tags) someI_ex)
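(* Editor's note, not part of the original theory text: inv_renaming r appears to
   be obtained by Hilbert choice as some substitution \<tau> with r \<odot> \<tau> = id_subst,
   and is_renaming r asserts that such a \<tau> exists; the someI_ex step above just
   instantiates that choice. The *_inv_renaming_cancel_* lemmas further down lift
   this cancellation law through the various substitution operators (atoms,
   literals, clauses, lists, multisets). *)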
lemma inv_renaming_cancel_r_list[simp]:
"is_renaming_list rs \<Longrightarrow> rs \<odot>s map inv_renaming rs = replicate (length rs) id_subst"
unfolding is_renaming_list_def by (induction rs) (auto simp add: comp_substs_def)
lemma Nil_comp_substs[simp]: "[] \<odot>s s = []"
unfolding comp_substs_def by auto
lemma comp_substs_Nil[simp]: "s \<odot>s [] = []"
unfolding comp_substs_def by auto
lemma is_renaming_idempotent_id_subst: "is_renaming r \<Longrightarrow> r \<odot> r = r \<Longrightarrow> r = id_subst"
by (metis comp_subst_assoc comp_subst_id_subst inv_renaming_cancel_r)
lemma is_renaming_left_id_subst_right_id_subst:
"is_renaming r \<Longrightarrow> s \<odot> r = id_subst \<Longrightarrow> r \<odot> s = id_subst"
by (metis comp_subst_assoc comp_subst_id_subst is_renaming_def)
lemma is_renaming_closure: "is_renaming r1 \<Longrightarrow> is_renaming r2 \<Longrightarrow> is_renaming (r1 \<odot> r2)"
unfolding is_renaming_def by (metis comp_subst_assoc comp_subst_id_subst)
lemma is_renaming_inv_renaming_cancel_atm[simp]: "is_renaming \<rho> \<Longrightarrow> A \<cdot>a \<rho> \<cdot>a inv_renaming \<rho> = A"
by (metis inv_renaming_cancel_r subst_atm_comp_subst subst_atm_id_subst)
lemma is_renaming_inv_renaming_cancel_atms[simp]: "is_renaming \<rho> \<Longrightarrow> AA \<cdot>as \<rho> \<cdot>as inv_renaming \<rho> = AA"
by (metis inv_renaming_cancel_r subst_atms_comp_subst subst_atms_id_subst)
lemma is_renaming_inv_renaming_cancel_atmss[simp]: "is_renaming \<rho> \<Longrightarrow> AAA \<cdot>ass \<rho> \<cdot>ass inv_renaming \<rho> = AAA"
by (metis inv_renaming_cancel_r subst_atmss_comp_subst subst_atmss_id_subst)
lemma is_renaming_inv_renaming_cancel_atm_list[simp]: "is_renaming \<rho> \<Longrightarrow> As \<cdot>al \<rho> \<cdot>al inv_renaming \<rho> = As"
by (metis inv_renaming_cancel_r subst_atm_list_comp_subst subst_atm_list_id_subst)
lemma is_renaming_inv_renaming_cancel_atm_mset[simp]: "is_renaming \<rho> \<Longrightarrow> AA \<cdot>am \<rho> \<cdot>am inv_renaming \<rho> = AA"
by (metis inv_renaming_cancel_r subst_atm_mset_comp_subst subst_atm_mset_id_subst)
lemma is_renaming_inv_renaming_cancel_atm_mset_list[simp]: "is_renaming \<rho> \<Longrightarrow> (AAs \<cdot>aml \<rho>) \<cdot>aml inv_renaming \<rho> = AAs"
by (metis inv_renaming_cancel_r subst_atm_mset_list_comp_subst subst_atm_mset_list_id_subst)
lemma is_renaming_list_inv_renaming_cancel_atm_mset_lists[simp]:
"length AAs = length \<rho>s \<Longrightarrow> is_renaming_list \<rho>s \<Longrightarrow> AAs \<cdot>\<cdot>aml \<rho>s \<cdot>\<cdot>aml map inv_renaming \<rho>s = AAs"
- by (metis inv_renaming_cancel_r_list subst_atm_mset_lists_comp_substs subst_atm_mset_lists_id_subst)
+ by (metis inv_renaming_cancel_r_list subst_atm_mset_lists_comp_substs
+ subst_atm_mset_lists_id_subst)
lemma is_renaming_inv_renaming_cancel_lit[simp]: "is_renaming \<rho> \<Longrightarrow> (L \<cdot>l \<rho>) \<cdot>l inv_renaming \<rho> = L"
by (metis inv_renaming_cancel_r subst_lit_comp_subst subst_lit_id_subst)
lemma is_renaming_inv_renaming_cancel_cls[simp]: "is_renaming \<rho> \<Longrightarrow> C \<cdot> \<rho> \<cdot> inv_renaming \<rho> = C"
by (metis inv_renaming_cancel_r subst_cls_comp_subst subst_cls_id_subst)
-lemma is_renaming_inv_renaming_cancel_clss[simp]: "is_renaming \<rho> \<Longrightarrow> CC \<cdot>cs \<rho> \<cdot>cs inv_renaming \<rho> = CC"
+lemma is_renaming_inv_renaming_cancel_clss[simp]:
+ "is_renaming \<rho> \<Longrightarrow> CC \<cdot>cs \<rho> \<cdot>cs inv_renaming \<rho> = CC"
by (metis inv_renaming_cancel_r subst_clss_id_subst subst_clsscomp_subst)
-lemma is_renaming_inv_renaming_cancel_cls_list[simp]: "is_renaming \<rho> \<Longrightarrow> Cs \<cdot>cl \<rho> \<cdot>cl inv_renaming \<rho> = Cs"
+lemma is_renaming_inv_renaming_cancel_cls_list[simp]:
+ "is_renaming \<rho> \<Longrightarrow> Cs \<cdot>cl \<rho> \<cdot>cl inv_renaming \<rho> = Cs"
by (metis inv_renaming_cancel_r subst_cls_list_comp_subst subst_cls_list_id_subst)
lemma is_renaming_list_inv_renaming_cancel_cls_list[simp]:
"length Cs = length \<rho>s \<Longrightarrow> is_renaming_list \<rho>s \<Longrightarrow> Cs \<cdot>\<cdot>cl \<rho>s \<cdot>\<cdot>cl map inv_renaming \<rho>s = Cs"
by (metis inv_renaming_cancel_r_list subst_cls_lists_comp_substs subst_cls_lists_id_subst)
-lemma is_renaming_inv_renaming_cancel_cls_mset[simp]: "is_renaming \<rho> \<Longrightarrow> CC \<cdot>cm \<rho> \<cdot>cm inv_renaming \<rho> = CC"
+lemma is_renaming_inv_renaming_cancel_cls_mset[simp]:
+ "is_renaming \<rho> \<Longrightarrow> CC \<cdot>cm \<rho> \<cdot>cm inv_renaming \<rho> = CC"
by (metis inv_renaming_cancel_r subst_cls_mset_comp_subst subst_cls_mset_id_subst)
subsubsection \<open>Monotonicity\<close>
lemma subst_cls_mono: "set_mset C \<subseteq> set_mset D \<Longrightarrow> set_mset (C \<cdot> \<sigma>) \<subseteq> set_mset (D \<cdot> \<sigma>)"
by force
lemma subst_cls_mono_mset: "C \<subseteq># D \<Longrightarrow> C \<cdot> \<sigma> \<subseteq># D \<cdot> \<sigma>"
unfolding subst_clss_def by (metis mset_subset_eq_exists_conv subst_cls_union)
lemma subst_subset_mono: "D \<subset># C \<Longrightarrow> D \<cdot> \<sigma> \<subset># C \<cdot> \<sigma>"
unfolding subst_cls_def by (simp add: image_mset_subset_mono)
subsubsection \<open>Size after Substitution\<close>
lemma size_subst[simp]: "size (D \<cdot> \<sigma>) = size D"
unfolding subst_cls_def by auto
lemma subst_atm_list_length[simp]: "length (As \<cdot>al \<sigma>) = length As"
unfolding subst_atm_list_def by auto
lemma length_subst_atm_mset_list[simp]: "length (AAs \<cdot>aml \<eta>) = length AAs"
unfolding subst_atm_mset_list_def by auto
lemma subst_atm_mset_lists_length[simp]: "length (AAs \<cdot>\<cdot>aml \<sigma>s) = min (length AAs) (length \<sigma>s)"
unfolding subst_atm_mset_lists_def by auto
lemma subst_cls_list_length[simp]: "length (Cs \<cdot>cl \<sigma>) = length Cs"
unfolding subst_cls_list_def by auto
lemma comp_substs_length[simp]: "length (\<tau>s \<odot>s \<sigma>s) = min (length \<tau>s) (length \<sigma>s)"
unfolding comp_substs_def by auto
lemma subst_cls_lists_length[simp]: "length (Cs \<cdot>\<cdot>cl \<sigma>s) = min (length Cs) (length \<sigma>s)"
unfolding subst_cls_lists_def by auto
subsubsection \<open>Variable Disjointness\<close>
lemma var_disjoint_clauses:
assumes "var_disjoint Cs"
shows "\<forall>\<sigma>s. length \<sigma>s = length Cs \<longrightarrow> (\<exists>\<tau>. Cs \<cdot>\<cdot>cl \<sigma>s = Cs \<cdot>cl \<tau>)"
proof clarify
fix \<sigma>s :: "'s list"
assume a: "length \<sigma>s = length Cs"
then obtain \<tau> where "\<forall>i < length Cs. \<forall>S. S \<subseteq># Cs ! i \<longrightarrow> S \<cdot> \<sigma>s ! i = S \<cdot> \<tau>"
using assms unfolding var_disjoint_def by blast
then have "\<forall>i < length Cs. (Cs ! i) \<cdot> \<sigma>s ! i = (Cs ! i) \<cdot> \<tau>"
by auto
then have "Cs \<cdot>\<cdot>cl \<sigma>s = Cs \<cdot>cl \<tau>"
using a by (auto intro: nth_equalityI)
then show "\<exists>\<tau>. Cs \<cdot>\<cdot>cl \<sigma>s = Cs \<cdot>cl \<tau>"
by auto
qed
subsubsection \<open>Ground Expressions and Substitutions\<close>
lemma ex_ground_subst: "\<exists>\<sigma>. is_ground_subst \<sigma>"
using make_ground_subst[of "{#}"]
by (simp add: is_ground_cls_def)
lemma is_ground_cls_list_Cons[simp]:
"is_ground_cls_list (C # Cs) = (is_ground_cls C \<and> is_ground_cls_list Cs)"
unfolding is_ground_cls_list_def by auto
paragraph \<open>Ground union\<close>
lemma is_ground_atms_union[simp]: "is_ground_atms (AA \<union> BB) \<longleftrightarrow> is_ground_atms AA \<and> is_ground_atms BB"
unfolding is_ground_atms_def by auto
lemma is_ground_atm_mset_union[simp]:
"is_ground_atm_mset (AA + BB) \<longleftrightarrow> is_ground_atm_mset AA \<and> is_ground_atm_mset BB"
unfolding is_ground_atm_mset_def by auto
lemma is_ground_cls_union[simp]: "is_ground_cls (C + D) \<longleftrightarrow> is_ground_cls C \<and> is_ground_cls D"
unfolding is_ground_cls_def by auto
lemma is_ground_clss_union[simp]:
"is_ground_clss (CC \<union> DD) \<longleftrightarrow> is_ground_clss CC \<and> is_ground_clss DD"
unfolding is_ground_clss_def by auto
lemma is_ground_cls_list_is_ground_cls_sum_list[simp]:
"is_ground_cls_list Cs \<Longrightarrow> is_ground_cls (sum_list Cs)"
by (meson in_mset_sum_list2 is_ground_cls_def is_ground_cls_list_def)
-paragraph \<open>Ground mono\<close>
+paragraph \<open>Grounding monotonicity\<close>
lemma is_ground_cls_mono: "C \<subseteq># D \<Longrightarrow> is_ground_cls D \<Longrightarrow> is_ground_cls C"
unfolding is_ground_cls_def by (metis set_mset_mono subsetD)
lemma is_ground_clss_mono: "CC \<subseteq> DD \<Longrightarrow> is_ground_clss DD \<Longrightarrow> is_ground_clss CC"
unfolding is_ground_clss_def by blast
lemma grounding_of_clss_mono: "CC \<subseteq> DD \<Longrightarrow> grounding_of_clss CC \<subseteq> grounding_of_clss DD"
using grounding_of_clss_def by auto
lemma sum_list_subseteq_mset_is_ground_cls_list[simp]:
"sum_list Cs \<subseteq># sum_list Ds \<Longrightarrow> is_ground_cls_list Ds \<Longrightarrow> is_ground_cls_list Cs"
by (meson in_mset_sum_list is_ground_cls_def is_ground_cls_list_is_ground_cls_sum_list
is_ground_cls_mono is_ground_cls_list_def)
paragraph \<open>Substituting on ground expression preserves ground\<close>
lemma is_ground_comp_subst[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_subst (\<tau> \<odot> \<sigma>)"
unfolding is_ground_subst_def is_ground_atm_def by auto
lemma ground_subst_ground_atm[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_atm (A \<cdot>a \<sigma>)"
by (simp add: is_ground_subst_def)
lemma ground_subst_ground_lit[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_lit (L \<cdot>l \<sigma>)"
unfolding is_ground_lit_def subst_lit_def by (cases L) auto
lemma ground_subst_ground_cls[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_cls (C \<cdot> \<sigma>)"
unfolding is_ground_cls_def by auto
lemma ground_subst_ground_clss[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_clss (CC \<cdot>cs \<sigma>)"
unfolding is_ground_clss_def subst_clss_def by auto
lemma ground_subst_ground_cls_list[simp]: "is_ground_subst \<sigma> \<Longrightarrow> is_ground_cls_list (Cs \<cdot>cl \<sigma>)"
unfolding is_ground_cls_list_def subst_cls_list_def by auto
lemma ground_subst_ground_cls_lists[simp]:
"\<forall>\<sigma> \<in> set \<sigma>s. is_ground_subst \<sigma> \<Longrightarrow> is_ground_cls_list (Cs \<cdot>\<cdot>cl \<sigma>s)"
unfolding is_ground_cls_list_def subst_cls_lists_def by (auto simp: set_zip)
+lemma subst_cls_eq_grounding_of_cls_subset_eq:
+ assumes "D \<cdot> \<sigma> = C"
+ shows "grounding_of_cls C \<subseteq> grounding_of_cls D"
+proof
+ fix C\<sigma>'
+ assume "C\<sigma>' \<in> grounding_of_cls C"
+ then obtain \<sigma>' where
+ C\<sigma>': "C \<cdot> \<sigma>' = C\<sigma>'" "is_ground_subst \<sigma>'"
+ unfolding grounding_of_cls_def by auto
+ then have "C \<cdot> \<sigma>' = D \<cdot> \<sigma> \<cdot> \<sigma>' \<and> is_ground_subst (\<sigma> \<odot> \<sigma>')"
+ using assms by auto
+ then show "C\<sigma>' \<in> grounding_of_cls D"
+ unfolding grounding_of_cls_def using C\<sigma>'(1) by force
+qed
paragraph \<open>Substituting on ground expression has no effect\<close>
lemma is_ground_subst_atm[simp]: "is_ground_atm A \<Longrightarrow> A \<cdot>a \<sigma> = A"
unfolding is_ground_atm_def by simp
lemma is_ground_subst_atms[simp]: "is_ground_atms AA \<Longrightarrow> AA \<cdot>as \<sigma> = AA"
unfolding is_ground_atms_def subst_atms_def image_def by auto
lemma is_ground_subst_atm_mset[simp]: "is_ground_atm_mset AA \<Longrightarrow> AA \<cdot>am \<sigma> = AA"
unfolding is_ground_atm_mset_def subst_atm_mset_def by auto
lemma is_ground_subst_atm_list[simp]: "is_ground_atm_list As \<Longrightarrow> As \<cdot>al \<sigma> = As"
unfolding is_ground_atm_list_def subst_atm_list_def by (auto intro: nth_equalityI)
lemma is_ground_subst_atm_list_member[simp]:
"is_ground_atm_list As \<Longrightarrow> i < length As \<Longrightarrow> As ! i \<cdot>a \<sigma> = As ! i"
unfolding is_ground_atm_list_def by auto
lemma is_ground_subst_lit[simp]: "is_ground_lit L \<Longrightarrow> L \<cdot>l \<sigma> = L"
unfolding is_ground_lit_def subst_lit_def by (cases L) simp_all
lemma is_ground_subst_cls[simp]: "is_ground_cls C \<Longrightarrow> C \<cdot> \<sigma> = C"
unfolding is_ground_cls_def subst_cls_def by simp
lemma is_ground_subst_clss[simp]: "is_ground_clss CC \<Longrightarrow> CC \<cdot>cs \<sigma> = CC"
unfolding is_ground_clss_def subst_clss_def image_def by auto
lemma is_ground_subst_cls_lists[simp]:
assumes "length P = length Cs" and "is_ground_cls_list Cs"
shows "Cs \<cdot>\<cdot>cl P = Cs"
using assms by (metis is_ground_cls_list_def is_ground_subst_cls min.idem nth_equalityI nth_mem
subst_cls_lists_nth subst_cls_lists_length)
lemma is_ground_subst_lit_iff: "is_ground_lit L \<longleftrightarrow> (\<forall>\<sigma>. L = L \<cdot>l \<sigma>)"
using is_ground_atm_def is_ground_lit_def subst_lit_def by (cases L) auto
lemma is_ground_subst_cls_iff: "is_ground_cls C \<longleftrightarrow> (\<forall>\<sigma>. C = C \<cdot> \<sigma>)"
by (metis ex_ground_subst ground_subst_ground_cls is_ground_subst_cls)
paragraph \<open>Members of ground expressions are ground\<close>
lemma is_ground_cls_as_atms: "is_ground_cls C \<longleftrightarrow> (\<forall>A \<in> atms_of C. is_ground_atm A)"
by (auto simp: atms_of_def is_ground_cls_def is_ground_lit_def)
lemma is_ground_cls_imp_is_ground_lit: "L \<in># C \<Longrightarrow> is_ground_cls C \<Longrightarrow> is_ground_lit L"
by (simp add: is_ground_cls_def)
lemma is_ground_cls_imp_is_ground_atm: "A \<in> atms_of C \<Longrightarrow> is_ground_cls C \<Longrightarrow> is_ground_atm A"
by (simp add: is_ground_cls_as_atms)
lemma is_ground_cls_is_ground_atms_atms_of[simp]: "is_ground_cls C \<Longrightarrow> is_ground_atms (atms_of C)"
by (simp add: is_ground_cls_imp_is_ground_atm is_ground_atms_def)
lemma grounding_ground: "C \<in> grounding_of_clss M \<Longrightarrow> is_ground_cls C"
unfolding grounding_of_clss_def grounding_of_cls_def by auto
lemma in_subset_eq_grounding_of_clss_is_ground_cls[simp]:
"C \<in> CC \<Longrightarrow> CC \<subseteq> grounding_of_clss DD \<Longrightarrow> is_ground_cls C"
unfolding grounding_of_clss_def grounding_of_cls_def by auto
lemma is_ground_cls_empty[simp]: "is_ground_cls {#}"
unfolding is_ground_cls_def by simp
lemma grounding_of_cls_ground: "is_ground_cls C \<Longrightarrow> grounding_of_cls C = {C}"
unfolding grounding_of_cls_def by (simp add: ex_ground_subst)
lemma grounding_of_cls_empty[simp]: "grounding_of_cls {#} = {{#}}"
by (simp add: grounding_of_cls_ground)
+lemma union_grounding_of_cls_ground: "is_ground_clss (\<Union> (grounding_of_cls ` N))"
+ by (simp add: grounding_ground grounding_of_clss_def is_ground_clss_def)
+
+
+paragraph \<open>Grounding idempotence\<close>
+
+lemma grounding_of_grounding_of_cls: "E \<in> grounding_of_cls D \<Longrightarrow> D \<in> grounding_of_cls C \<Longrightarrow> E = D"
+ using grounding_of_cls_def by auto
+
subsubsection \<open>Subsumption\<close>
lemma subsumes_empty_left[simp]: "subsumes {#} C"
unfolding subsumes_def subst_cls_def by simp
lemma strictly_subsumes_empty_left[simp]: "strictly_subsumes {#} C \<longleftrightarrow> C \<noteq> {#}"
unfolding strictly_subsumes_def subsumes_def subst_cls_def by simp
subsubsection \<open>Unifiers\<close>
lemma card_le_one_alt: "finite X \<Longrightarrow> card X \<le> 1 \<longleftrightarrow> X = {} \<or> (\<exists>x. X = {x})"
by (induct rule: finite_induct) auto
lemma is_unifier_subst_atm_eqI:
assumes "finite AA"
shows "is_unifier \<sigma> AA \<Longrightarrow> A \<in> AA \<Longrightarrow> B \<in> AA \<Longrightarrow> A \<cdot>a \<sigma> = B \<cdot>a \<sigma>"
unfolding is_unifier_def subst_atms_def card_le_one_alt[OF finite_imageI[OF assms]]
by (metis equals0D imageI insert_iff)
lemma is_unifier_alt:
assumes "finite AA"
shows "is_unifier \<sigma> AA \<longleftrightarrow> (\<forall>A \<in> AA. \<forall>B \<in> AA. A \<cdot>a \<sigma> = B \<cdot>a \<sigma>)"
unfolding is_unifier_def subst_atms_def card_le_one_alt[OF finite_imageI[OF assms(1)]]
by (rule iffI, metis empty_iff insert_iff insert_image, blast)
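(* Editor's note, not part of the original theory text: is_unifier \<sigma> AA appears
   to be phrased as "card (AA \<cdot>as \<sigma>) \<le> 1", i.e. applying \<sigma> collapses the atom
   set AA to at most one atom. For finite AA, the lemma above recasts this in
   the more familiar pointwise form: any two atoms of AA become equal under \<sigma>.
   The finiteness assumption matters because card is 0 on infinite sets. *)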
lemma is_unifiers_subst_atm_eqI:
assumes "finite AA" "is_unifiers \<sigma> AAA" "AA \<in> AAA" "A \<in> AA" "B \<in> AA"
shows "A \<cdot>a \<sigma> = B \<cdot>a \<sigma>"
by (metis assms is_unifiers_def is_unifier_subst_atm_eqI)
theorem is_unifiers_comp:
"is_unifiers \<sigma> (set_mset ` set (map2 add_mset As Bs) \<cdot>ass \<eta>) \<longleftrightarrow>
is_unifiers (\<eta> \<odot> \<sigma>) (set_mset ` set (map2 add_mset As Bs))"
unfolding is_unifiers_def is_unifier_def subst_atmss_def by auto
subsubsection \<open>Most General Unifier\<close>
lemma is_mgu_is_unifiers: "is_mgu \<sigma> AAA \<Longrightarrow> is_unifiers \<sigma> AAA"
using is_mgu_def by blast
lemma is_mgu_is_most_general: "is_mgu \<sigma> AAA \<Longrightarrow> is_unifiers \<tau> AAA \<Longrightarrow> \<exists>\<gamma>. \<tau> = \<sigma> \<odot> \<gamma>"
using is_mgu_def by blast
lemma is_unifiers_is_unifier: "is_unifiers \<sigma> AAA \<Longrightarrow> AA \<in> AAA \<Longrightarrow> is_unifier \<sigma> AA"
using is_unifiers_def by simp
subsubsection \<open>Generalization and Subsumption\<close>
+lemma variants_sym: "variants D D' \<longleftrightarrow> variants D' D"
+ unfolding variants_def by auto
+
lemma variants_iff_subsumes: "variants C D \<longleftrightarrow> subsumes C D \<and> subsumes D C"
proof
assume "variants C D"
then show "subsumes C D \<and> subsumes D C"
- unfolding variants_def generalizes_cls_def subsumes_def by (metis subset_mset.order.refl)
+ unfolding variants_def generalizes_def subsumes_def by (metis subset_mset.order.refl)
next
assume sub: "subsumes C D \<and> subsumes D C"
then have "size C = size D"
unfolding subsumes_def by (metis antisym size_mset_mono size_subst)
then show "variants C D"
- using sub unfolding subsumes_def variants_def generalizes_cls_def
+ using sub unfolding subsumes_def variants_def generalizes_def
by (metis leD mset_subset_size size_mset_mono size_subst
subset_mset.order.not_eq_order_implies_strict)
qed
-lemma wf_strictly_generalizes_cls: "wfP strictly_generalizes_cls"
+lemma wf_strictly_generalizes: "wfP strictly_generalizes"
proof -
{
- assume "\<exists>C_at. \<forall>i. strictly_generalizes_cls (C_at (Suc i)) (C_at i)"
+ assume "\<exists>C_at. \<forall>i. strictly_generalizes (C_at (Suc i)) (C_at i)"
then obtain C_at :: "nat \<Rightarrow> 'a clause" where
- sg_C: "\<And>i. strictly_generalizes_cls (C_at (Suc i)) (C_at i)"
+ sg_C: "\<And>i. strictly_generalizes (C_at (Suc i)) (C_at i)"
by blast
define n :: nat where
"n = size (C_at 0)"
have sz_C: "size (C_at i) = n" for i
proof (induct i)
case (Suc i)
then show ?case
- using sg_C[of i] unfolding strictly_generalizes_cls_def generalizes_cls_def subst_cls_def
+ using sg_C[of i] unfolding strictly_generalizes_def generalizes_def subst_cls_def
by (metis size_image_mset)
qed (simp add: n_def)
obtain \<sigma>_at :: "nat \<Rightarrow> 's" where
C_\<sigma>: "\<And>i. image_mset (\<lambda>L. L \<cdot>l \<sigma>_at i) (C_at (Suc i)) = C_at i"
- using sg_C[unfolded strictly_generalizes_cls_def generalizes_cls_def subst_cls_def] by metis
+ using sg_C[unfolded strictly_generalizes_def generalizes_def subst_cls_def] by metis
define Ls_at :: "nat \<Rightarrow> 'a literal list" where
"Ls_at = rec_nat (SOME Ls. mset Ls = C_at 0)
(\<lambda>i Lsi. SOME Ls. mset Ls = C_at (Suc i) \<and> map (\<lambda>L. L \<cdot>l \<sigma>_at i) Ls = Lsi)"
have
Ls_at_0: "Ls_at 0 = (SOME Ls. mset Ls = C_at 0)" and
Ls_at_Suc: "\<And>i. Ls_at (Suc i) =
(SOME Ls. mset Ls = C_at (Suc i) \<and> map (\<lambda>L. L \<cdot>l \<sigma>_at i) Ls = Ls_at i)"
unfolding Ls_at_def by simp+
have mset_Lt_at_0: "mset (Ls_at 0) = C_at 0"
unfolding Ls_at_0 by (rule someI_ex) (metis list_of_mset_exi)
have "mset (Ls_at (Suc i)) = C_at (Suc i) \<and> map (\<lambda>L. L \<cdot>l \<sigma>_at i) (Ls_at (Suc i)) = Ls_at i"
for i
proof (induct i)
case 0
then show ?case
by (simp add: Ls_at_Suc, rule someI_ex,
metis C_\<sigma> image_mset_of_subset_list mset_Lt_at_0)
next
case Suc
then show ?case
by (subst (1 2) Ls_at_Suc) (rule someI_ex, metis C_\<sigma> image_mset_of_subset_list)
qed
note mset_Ls = this[THEN conjunct1] and Ls_\<sigma> = this[THEN conjunct2]
have len_Ls: "\<And>i. length (Ls_at i) = n"
by (metis mset_Ls mset_Lt_at_0 not0_implies_Suc size_mset sz_C)
have is_pos_Ls: "\<And>i j. j < n \<Longrightarrow> is_pos (Ls_at (Suc i) ! j) \<longleftrightarrow> is_pos (Ls_at i ! j)"
using Ls_\<sigma> len_Ls by (metis literal.map_disc_iff nth_map subst_lit_def)
have Ls_\<tau>_strict_lit: "\<And>i \<tau>. map (\<lambda>L. L \<cdot>l \<tau>) (Ls_at i) \<noteq> Ls_at (Suc i)"
- by (metis C_\<sigma> mset_Ls Ls_\<sigma> mset_map sg_C generalizes_cls_def strictly_generalizes_cls_def
+ by (metis C_\<sigma> mset_Ls Ls_\<sigma> mset_map sg_C generalizes_def strictly_generalizes_def
subst_cls_def)
have Ls_\<tau>_strict_tm:
"map ((\<lambda>t. t \<cdot>a \<tau>) \<circ> atm_of) (Ls_at i) \<noteq> map atm_of (Ls_at (Suc i))" for i \<tau>
proof -
obtain j :: nat where
j_lt: "j < n" and
j_\<tau>: "Ls_at i ! j \<cdot>l \<tau> \<noteq> Ls_at (Suc i) ! j"
using Ls_\<tau>_strict_lit[of \<tau> i] len_Ls
by (metis (no_types, lifting) length_map list_eq_iff_nth_eq nth_map)
have "atm_of (Ls_at i ! j) \<cdot>a \<tau> \<noteq> atm_of (Ls_at (Suc i) ! j)"
using j_\<tau> is_pos_Ls[OF j_lt]
by (metis (mono_guards) literal.expand literal.map_disc_iff literal.map_sel subst_lit_def)
then show ?thesis
using j_lt len_Ls by (metis nth_map o_apply)
qed
define tm_at :: "nat \<Rightarrow> 'a" where
"\<And>i. tm_at i = atm_of_atms (map atm_of (Ls_at i))"
have "\<And>i. generalizes_atm (tm_at (Suc i)) (tm_at i)"
unfolding tm_at_def generalizes_atm_def atm_of_atms_subst
using Ls_\<sigma>[THEN arg_cong, of "map atm_of"] by (auto simp: comp_def)
moreover have "\<And>i. \<not> generalizes_atm (tm_at i) (tm_at (Suc i))"
unfolding tm_at_def generalizes_atm_def atm_of_atms_subst by (simp add: Ls_\<tau>_strict_tm)
ultimately have "\<And>i. strictly_generalizes_atm (tm_at (Suc i)) (tm_at i)"
unfolding strictly_generalizes_atm_def by blast
then have False
using wf_strictly_generalizes_atm[unfolded wfP_def wf_iff_no_infinite_down_chain] by blast
}
- then show "wfP (strictly_generalizes_cls :: 'a clause \<Rightarrow> _ \<Rightarrow> _)"
+ then show "wfP (strictly_generalizes :: 'a clause \<Rightarrow> _ \<Rightarrow> _)"
unfolding wfP_def by (blast intro: wf_iff_no_infinite_down_chain[THEN iffD2])
qed
-lemma strict_subset_subst_strictly_subsumes:
- assumes c\<eta>_sub: "C \<cdot> \<eta> \<subset># D"
- shows "strictly_subsumes C D"
- by (metis c\<eta>_sub leD mset_subset_size size_mset_mono size_subst strictly_subsumes_def
+lemma strict_subset_subst_strictly_subsumes: "C \<cdot> \<eta> \<subset># D \<Longrightarrow> strictly_subsumes C D"
+ by (metis leD mset_subset_size size_mset_mono size_subst strictly_subsumes_def
subset_mset.dual_order.strict_implies_order substitution_ops.subsumes_def)
+lemma generalizes_refl: "generalizes C C"
+ unfolding generalizes_def by (rule exI[of _ id_subst]) auto
+
+lemma generalizes_trans: "generalizes C D \<Longrightarrow> generalizes D E \<Longrightarrow> generalizes C E"
+ unfolding generalizes_def using subst_cls_comp_subst by blast
+
+lemma subsumes_refl: "subsumes C C"
+ unfolding subsumes_def by (rule exI[of _ id_subst]) auto
+
lemma subsumes_trans: "subsumes C D \<Longrightarrow> subsumes D E \<Longrightarrow> subsumes C E"
unfolding subsumes_def
by (metis (no_types) subset_mset.order.trans subst_cls_comp_subst subst_cls_mono_mset)
+lemma strictly_generalizes_irrefl: "\<not> strictly_generalizes C C"
+ unfolding strictly_generalizes_def by blast
+
+lemma strictly_generalizes_antisym: "strictly_generalizes C D \<Longrightarrow> \<not> strictly_generalizes D C"
+ unfolding strictly_generalizes_def by blast
+
+lemma strictly_generalizes_trans:
+ "strictly_generalizes C D \<Longrightarrow> strictly_generalizes D E \<Longrightarrow> strictly_generalizes C E"
+ unfolding strictly_generalizes_def using generalizes_trans by blast
+
+lemma strictly_subsumes_irrefl: "\<not> strictly_subsumes C C"
+ unfolding strictly_subsumes_def by blast
+
+lemma strictly_subsumes_antisym: "strictly_subsumes C D \<Longrightarrow> \<not> strictly_subsumes D C"
+ unfolding strictly_subsumes_def by blast
+
+lemma strictly_subsumes_trans:
+ "strictly_subsumes C D \<Longrightarrow> strictly_subsumes D E \<Longrightarrow> strictly_subsumes C E"
+ unfolding strictly_subsumes_def using subsumes_trans by blast
+
lemma subset_strictly_subsumes: "C \<subset># D \<Longrightarrow> strictly_subsumes C D"
using strict_subset_subst_strictly_subsumes[of C id_subst] by auto
+lemma strictly_generalizes_neq: "strictly_generalizes D' D \<Longrightarrow> D' \<noteq> D \<cdot> \<sigma>"
+ unfolding strictly_generalizes_def generalizes_def by blast
+
lemma strictly_subsumes_neq: "strictly_subsumes D' D \<Longrightarrow> D' \<noteq> D \<cdot> \<sigma>"
unfolding strictly_subsumes_def subsumes_def by blast
lemma strictly_subsumes_has_minimum:
assumes "CC \<noteq> {}"
shows "\<exists>C \<in> CC. \<forall>D \<in> CC. \<not> strictly_subsumes D C"
proof (rule ccontr)
assume "\<not> (\<exists>C \<in> CC. \<forall>D\<in>CC. \<not> strictly_subsumes D C)"
then have "\<forall>C \<in> CC. \<exists>D \<in> CC. strictly_subsumes D C"
by blast
then obtain f where
f_p: "\<forall>C \<in> CC. f C \<in> CC \<and> strictly_subsumes (f C) C"
by metis
from assms obtain C where
C_p: "C \<in> CC"
by auto
define c :: "nat \<Rightarrow> 'a clause" where
"\<And>n. c n = (f ^^ n) C"
have incc: "c i \<in> CC" for i
by (induction i) (auto simp: c_def f_p C_p)
have ps: "\<forall>i. strictly_subsumes (c (Suc i)) (c i)"
using incc f_p unfolding c_def by auto
have "\<forall>i. size (c i) \<ge> size (c (Suc i))"
using ps unfolding strictly_subsumes_def subsumes_def by (metis size_mset_mono size_subst)
then have lte: "\<forall>i. (size \<circ> c) i \<ge> (size \<circ> c) (Suc i)"
unfolding comp_def .
then have "\<exists>l. \<forall>l' \<ge> l. size (c l') = size (c (Suc l'))"
using f_Suc_decr_eventually_const comp_def by auto
then obtain l where
l_p: "\<forall>l' \<ge> l. size (c l') = size (c (Suc l'))"
by metis
- then have "\<forall>l' \<ge> l. strictly_generalizes_cls (c (Suc l')) (c l')"
- using ps unfolding strictly_generalizes_cls_def generalizes_cls_def
- by (metis size_subst less_irrefl strictly_subsumes_def mset_subset_size
- subset_mset_def subsumes_def strictly_subsumes_neq)
- then have "\<forall>i. strictly_generalizes_cls (c (Suc i + l)) (c (i + l))"
- unfolding strictly_generalizes_cls_def generalizes_cls_def by auto
- then have "\<exists>f. \<forall>i. strictly_generalizes_cls (f (Suc i)) (f i)"
+ then have "\<forall>l' \<ge> l. strictly_generalizes (c (Suc l')) (c l')"
+ using ps unfolding strictly_generalizes_def generalizes_def
+ by (metis size_subst less_irrefl strictly_subsumes_def mset_subset_size subset_mset_def
+ subsumes_def strictly_subsumes_neq)
+ then have "\<forall>i. strictly_generalizes (c (Suc i + l)) (c (i + l))"
+ unfolding strictly_generalizes_def generalizes_def by auto
+ then have "\<exists>f. \<forall>i. strictly_generalizes (f (Suc i)) (f i)"
by (rule exI[of _ "\<lambda>x. c (x + l)"])
then show False
- using wf_strictly_generalizes_cls
- wf_iff_no_infinite_down_chain[of "{(x, y). strictly_generalizes_cls x y}"]
+ using wf_strictly_generalizes
+ wf_iff_no_infinite_down_chain[of "{(x, y). strictly_generalizes x y}"]
unfolding wfP_def by auto
qed
+lemma wf_strictly_subsumes: "wfP strictly_subsumes"
+ using strictly_subsumes_has_minimum by (metis equals0D wfP_eq_minimal)
+
+lemma variants_imp_exists_substitution: "variants D D' \<Longrightarrow> \<exists>\<sigma>. D \<cdot> \<sigma> = D'"
+ unfolding variants_iff_subsumes subsumes_def
+ by (meson strictly_subsumes_def subset_mset_def strict_subset_subst_strictly_subsumes subsumes_def)
+
+lemma strictly_subsumes_variants:
+ assumes "strictly_subsumes E D" and "variants D D'"
+ shows "strictly_subsumes E D'"
+proof -
+ from assms obtain \<sigma> \<sigma>' where
+ \<sigma>_\<sigma>'_p: "D \<cdot> \<sigma> = D' \<and> D' \<cdot> \<sigma>' = D"
+ using variants_imp_exists_substitution variants_sym by metis
+
+ from assms obtain \<sigma>'' where
+ "E \<cdot> \<sigma>'' \<subseteq># D"
+ unfolding strictly_subsumes_def subsumes_def by auto
+ then have "E \<cdot> \<sigma>'' \<cdot> \<sigma> \<subseteq># D \<cdot> \<sigma>"
+ using subst_cls_mono_mset by blast
+ then have "E \<cdot> (\<sigma>'' \<odot> \<sigma>) \<subseteq># D'"
+ using \<sigma>_\<sigma>'_p by auto
+ moreover from assms have n: "\<nexists>\<sigma>. D \<cdot> \<sigma> \<subseteq># E"
+ unfolding strictly_subsumes_def subsumes_def by auto
+ have "\<nexists>\<sigma>. D' \<cdot> \<sigma> \<subseteq># E"
+ proof
+ assume "\<exists>\<sigma>'''. D' \<cdot> \<sigma>''' \<subseteq># E"
+ then obtain \<sigma>''' where
+ "D' \<cdot> \<sigma>''' \<subseteq># E"
+ by auto
+ then have "D \<cdot> (\<sigma> \<odot> \<sigma>''') \<subseteq># E"
+ using \<sigma>_\<sigma>'_p by auto
+ then show False
+ using n by metis
+ qed
+ ultimately show ?thesis
+ unfolding strictly_subsumes_def subsumes_def by metis
+qed
+
+lemma neg_strictly_subsumes_variants:
+ assumes "\<not> strictly_subsumes E D" and "variants D D'"
+ shows "\<not> strictly_subsumes E D'"
+ using assms strictly_subsumes_variants variants_sym by auto
+
end
subsection \<open>Most General Unifiers\<close>
locale mgu = substitution subst_atm id_subst comp_subst renamings_apart atm_of_atms
for
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: 's and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" and
atm_of_atms :: "'a list \<Rightarrow> 'a" and
renamings_apart :: "'a literal multiset list \<Rightarrow> 's list" +
fixes
mgu :: "'a set set \<Rightarrow> 's option"
assumes
mgu_sound: "finite AAA \<Longrightarrow> (\<forall>AA \<in> AAA. finite AA) \<Longrightarrow> mgu AAA = Some \<sigma> \<Longrightarrow> is_mgu \<sigma> AAA" and
mgu_complete:
"finite AAA \<Longrightarrow> (\<forall>AA \<in> AAA. finite AA) \<Longrightarrow> is_unifiers \<sigma> AAA \<Longrightarrow> \<exists>\<tau>. mgu AAA = Some \<tau>"
begin
lemmas is_unifiers_mgu = mgu_sound[unfolded is_mgu_def, THEN conjunct1]
lemmas is_mgu_most_general = mgu_sound[unfolded is_mgu_def, THEN conjunct2]
lemma mgu_unifier:
assumes
aslen: "length As = n" and
aaslen: "length AAs = n" and
mgu: "Some \<sigma> = mgu (set_mset ` set (map2 add_mset As AAs))" and
i_lt: "i < n" and
a_in: "A \<in># AAs ! i"
shows "A \<cdot>a \<sigma> = As ! i \<cdot>a \<sigma>"
proof -
from mgu have "is_mgu \<sigma> (set_mset ` set (map2 add_mset As AAs))"
using mgu_sound by auto
then have "is_unifiers \<sigma> (set_mset ` set (map2 add_mset As AAs))"
using is_mgu_is_unifiers by auto
then have "is_unifier \<sigma> (set_mset (add_mset (As ! i) (AAs ! i)))"
using i_lt aslen aaslen unfolding is_unifiers_def is_unifier_def
by simp (metis length_zip min.idem nth_mem nth_zip prod.case set_mset_add_mset_insert)
then show ?thesis
using aslen aaslen a_in is_unifier_subst_atm_eqI
by (metis finite_set_mset insertCI set_mset_add_mset_insert)
qed
end
end
diff --git a/thys/Ordered_Resolution_Prover/FO_Ordered_Resolution.thy b/thys/Ordered_Resolution_Prover/FO_Ordered_Resolution.thy
--- a/thys/Ordered_Resolution_Prover/FO_Ordered_Resolution.thy
+++ b/thys/Ordered_Resolution_Prover/FO_Ordered_Resolution.thy
@@ -1,1296 +1,1412 @@
(* Title: First-Order Ordered Resolution Calculus with Selection
Author: Anders Schlichtkrull <andschl at dtu.dk>, 2016, 2017
Author: Jasmin Blanchette <j.c.blanchette at vu.nl>, 2014, 2017
Author: Dmitriy Traytel <traytel at inf.ethz.ch>, 2014
Author: Sophie Tourret <stourret at mpi-inf.mpg.de>, 2020
Maintainer: Anders Schlichtkrull <andschl at dtu.dk>
*)
section \<open>First-Order Ordered Resolution Calculus with Selection\<close>
theory FO_Ordered_Resolution
imports Abstract_Substitution Ordered_Ground_Resolution Standard_Redundancy
begin
text \<open>
This material is based on Section 4.3 (``A Simple Resolution Prover for First-Order Clauses'') of
Bachmair and Ganzinger's chapter. Specifically, it formalizes the ordered resolution calculus for
first-order standard clauses presented in Figure 4 and its related lemmas and theorems, including
soundness and Lemma 4.12 (the lifting lemma).
The following corresponds to pages 41--42 of Section 4.3, until Figure 5 and its explanation.
\<close>
locale FO_resolution = mgu subst_atm id_subst comp_subst atm_of_atms renamings_apart mgu
for
subst_atm :: "'a :: wellorder \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: "'s" and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" and
renamings_apart :: "'a literal multiset list \<Rightarrow> 's list" and
atm_of_atms :: "'a list \<Rightarrow> 'a" and
mgu :: "'a set set \<Rightarrow> 's option" +
fixes
less_atm :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
assumes
- less_atm_stable: "less_atm A B \<Longrightarrow> less_atm (A \<cdot>a \<sigma>) (B \<cdot>a \<sigma>)"
+ less_atm_stable: "less_atm A B \<Longrightarrow> less_atm (A \<cdot>a \<sigma>) (B \<cdot>a \<sigma>)" and
+ less_atm_ground: "is_ground_atm A \<Longrightarrow> is_ground_atm B \<Longrightarrow> less_atm A B \<Longrightarrow> A < B"
begin
subsection \<open>Library\<close>
lemma Bex_cartesian_product: "(\<exists>xy \<in> A \<times> B. P xy) \<equiv> (\<exists>x \<in> A. \<exists>y \<in> B. P (x, y))"
by simp
lemma eql_map_neg_lit_eql_atm:
assumes "map (\<lambda>L. L \<cdot>l \<eta>) (map Neg As') = map Neg As"
shows "As' \<cdot>al \<eta> = As"
using assms by (induction As' arbitrary: As) auto
lemma instance_list:
assumes "negs (mset As) = SDA' \<cdot> \<eta>"
shows "\<exists>As'. negs (mset As') = SDA' \<and> As' \<cdot>al \<eta> = As"
proof -
from assms have negL: "\<forall>L \<in># SDA'. is_neg L"
using Melem_subst_cls subst_lit_in_negs_is_neg by metis
from assms have "{#L \<cdot>l \<eta>. L \<in># SDA'#} = mset (map Neg As)"
using subst_cls_def by auto
then have "\<exists>NAs'. map (\<lambda>L. L \<cdot>l \<eta>) NAs' = map Neg As \<and> mset NAs' = SDA'"
using image_mset_of_subset_list[of "\<lambda>L. L \<cdot>l \<eta>" SDA' "map Neg As"] by auto
then obtain As' where As'_p:
"map (\<lambda>L. L \<cdot>l \<eta>) (map Neg As') = map Neg As \<and> mset (map Neg As') = SDA'"
by (metis (no_types, lifting) Neg_atm_of_iff negL ex_map_conv set_mset_mset)
have "negs (mset As') = SDA'"
using As'_p by auto
moreover have "map (\<lambda>L. L \<cdot>l \<eta>) (map Neg As') = map Neg As"
using As'_p by auto
then have "As' \<cdot>al \<eta> = As"
using eql_map_neg_lit_eql_atm by auto
ultimately show ?thesis
by blast
qed
+lemma map2_add_mset_map:
+ assumes "length AAs' = n" and "length As' = n"
+ shows "map2 add_mset (As' \<cdot>al \<eta>) (AAs' \<cdot>aml \<eta>) = map2 add_mset As' AAs' \<cdot>aml \<eta>"
+ using assms
+proof (induction n arbitrary: AAs' As')
+ case (Suc n)
+ then have "map2 add_mset (tl (As' \<cdot>al \<eta>)) (tl (AAs' \<cdot>aml \<eta>)) = map2 add_mset (tl As') (tl AAs') \<cdot>aml \<eta>"
+ by simp
+ moreover have Succ: "length (As' \<cdot>al \<eta>) = Suc n" "length (AAs' \<cdot>aml \<eta>) = Suc n"
+ using Suc(3) Suc(2) by auto
+ then have "length (tl (As' \<cdot>al \<eta>)) = n" "length (tl (AAs' \<cdot>aml \<eta>)) = n"
+ by auto
+ then have "length (map2 add_mset (tl (As' \<cdot>al \<eta>)) (tl (AAs' \<cdot>aml \<eta>))) = n"
+ "length (map2 add_mset (tl As') (tl AAs') \<cdot>aml \<eta>) = n"
+ using Suc(2,3) by auto
+ ultimately have "\<forall>i < n. tl (map2 add_mset ( (As' \<cdot>al \<eta>)) ((AAs' \<cdot>aml \<eta>))) ! i =
+ tl (map2 add_mset (As') (AAs') \<cdot>aml \<eta>) ! i"
+ using Suc(2,3) Succ by (simp add: map2_tl map_tl subst_atm_mset_list_def del: subst_atm_list_tl)
+ moreover have nn: "length (map2 add_mset ((As' \<cdot>al \<eta>)) ((AAs' \<cdot>aml \<eta>))) = Suc n"
+ "length (map2 add_mset (As') (AAs') \<cdot>aml \<eta>) = Suc n"
+ using Succ Suc by auto
+ ultimately have "\<forall>i. i < Suc n \<longrightarrow> i > 0 \<longrightarrow>
+ map2 add_mset (As' \<cdot>al \<eta>) (AAs' \<cdot>aml \<eta>) ! i = (map2 add_mset As' AAs' \<cdot>aml \<eta>) ! i"
+ by (auto simp: subst_atm_mset_list_def gr0_conv_Suc subst_atm_mset_def)
+ moreover have "add_mset (hd As' \<cdot>a \<eta>) (hd AAs' \<cdot>am \<eta>) = add_mset (hd As') (hd AAs') \<cdot>am \<eta>"
+ unfolding subst_atm_mset_def by auto
+ then have "(map2 add_mset (As' \<cdot>al \<eta>) (AAs' \<cdot>aml \<eta>)) ! 0 = (map2 add_mset (As') (AAs') \<cdot>aml \<eta>) ! 0"
+ using Suc by (simp add: Succ(2) subst_atm_mset_def)
+ ultimately have "\<forall>i < Suc n. (map2 add_mset (As' \<cdot>al \<eta>) (AAs' \<cdot>aml \<eta>)) ! i =
+ (map2 add_mset (As') (AAs') \<cdot>aml \<eta>) ! i"
+ using Suc by auto
+ then show ?case
+ using nn list_eq_iff_nth_eq by metis
+qed auto
context
fixes S :: "'a clause \<Rightarrow> 'a clause"
begin
subsection \<open>Calculus\<close>
text \<open>
The following corresponds to Figure 4.
\<close>
definition maximal_wrt :: "'a \<Rightarrow> 'a literal multiset \<Rightarrow> bool" where
"maximal_wrt A C \<longleftrightarrow> (\<forall>B \<in> atms_of C. \<not> less_atm A B)"
definition strictly_maximal_wrt :: "'a \<Rightarrow> 'a literal multiset \<Rightarrow> bool" where
"strictly_maximal_wrt A C \<equiv> \<forall>B \<in> atms_of C. A \<noteq> B \<and> \<not> less_atm A B"
lemma strictly_maximal_wrt_maximal_wrt: "strictly_maximal_wrt A C \<Longrightarrow> maximal_wrt A C"
unfolding maximal_wrt_def strictly_maximal_wrt_def by auto
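(* Editor's sketch, not part of the original theory text: the two notions differ
   only on atoms equal to A. "maximal_wrt A C" forbids atoms of C strictly
   greater than A, so further occurrences of A itself are allowed, while
   "strictly_maximal_wrt A C" additionally forbids A from occurring in C at all.
   Under a hypothetical instantiation with natural-number atoms and
   less_atm = (<), one would have
     maximal_wrt 2 {#Pos 2, Neg 2#}           (holds, since \<not> 2 < 2)
     strictly_maximal_wrt 2 {#Pos 2, Neg 2#}  (fails, since 2 \<in> atms_of C). *)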
+lemma maximal_wrt_subst: "maximal_wrt (A \<cdot>a \<sigma>) (C \<cdot> \<sigma>) \<Longrightarrow> maximal_wrt A C"
+ unfolding maximal_wrt_def using in_atms_of_subst less_atm_stable by blast
+
+lemma strictly_maximal_wrt_subst:
+ "strictly_maximal_wrt (A \<cdot>a \<sigma>) (C \<cdot> \<sigma>) \<Longrightarrow> strictly_maximal_wrt A C"
+ unfolding strictly_maximal_wrt_def using in_atms_of_subst less_atm_stable by blast
+
inductive eligible :: "'s \<Rightarrow> 'a list \<Rightarrow> 'a clause \<Rightarrow> bool" where
eligible:
"S DA = negs (mset As) \<or> S DA = {#} \<and> length As = 1 \<and> maximal_wrt (As ! 0 \<cdot>a \<sigma>) (DA \<cdot> \<sigma>) \<Longrightarrow>
eligible \<sigma> As DA"
inductive
ord_resolve
:: "'a clause list \<Rightarrow> 'a clause \<Rightarrow> 'a multiset list \<Rightarrow> 'a list \<Rightarrow> 's \<Rightarrow> 'a clause \<Rightarrow> bool"
where
ord_resolve:
"length CAs = n \<Longrightarrow>
length Cs = n \<Longrightarrow>
length AAs = n \<Longrightarrow>
length As = n \<Longrightarrow>
n \<noteq> 0 \<Longrightarrow>
(\<forall>i < n. CAs ! i = Cs ! i + poss (AAs ! i)) \<Longrightarrow>
(\<forall>i < n. AAs ! i \<noteq> {#}) \<Longrightarrow>
Some \<sigma> = mgu (set_mset ` set (map2 add_mset As AAs)) \<Longrightarrow>
eligible \<sigma> As (D + negs (mset As)) \<Longrightarrow>
(\<forall>i < n. strictly_maximal_wrt (As ! i \<cdot>a \<sigma>) (Cs ! i \<cdot> \<sigma>)) \<Longrightarrow>
(\<forall>i < n. S (CAs ! i) = {#}) \<Longrightarrow>
ord_resolve CAs (D + negs (mset As)) AAs As \<sigma> ((\<Union># (mset Cs) + D) \<cdot> \<sigma>)"
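(* Editor's sketch, not part of the original theory text: for n = 1 the rule
   reduces to binary ordered resolution. There is one side premise
   CA = C + poss AA, the main premise is D + negs {#A#}, \<sigma> is an MGU of the
   atom set set_mset (add_mset A AA), and the conclusion is (C + D) \<cdot> \<sigma>.
   The remaining premises (eligibility, strict maximality of A \<cdot>a \<sigma> in C \<cdot> \<sigma>,
   empty selection on CA) restrict which literals may be resolved upon. *)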
inductive
ord_resolve_rename
:: "'a clause list \<Rightarrow> 'a clause \<Rightarrow> 'a multiset list \<Rightarrow> 'a list \<Rightarrow> 's \<Rightarrow> 'a clause \<Rightarrow> bool"
where
ord_resolve_rename:
"length CAs = n \<Longrightarrow>
length AAs = n \<Longrightarrow>
length As = n \<Longrightarrow>
(\<forall>i < n. poss (AAs ! i) \<subseteq># CAs ! i) \<Longrightarrow>
negs (mset As) \<subseteq># DA \<Longrightarrow>
\<rho> = hd (renamings_apart (DA # CAs)) \<Longrightarrow>
\<rho>s = tl (renamings_apart (DA # CAs)) \<Longrightarrow>
ord_resolve (CAs \<cdot>\<cdot>cl \<rho>s) (DA \<cdot> \<rho>) (AAs \<cdot>\<cdot>aml \<rho>s) (As \<cdot>al \<rho>) \<sigma> E \<Longrightarrow>
ord_resolve_rename CAs DA AAs As \<sigma> E"
lemma ord_resolve_empty_main_prem: "\<not> ord_resolve Cs {#} AAs As \<sigma> E"
by (simp add: ord_resolve.simps)
lemma ord_resolve_rename_empty_main_prem: "\<not> ord_resolve_rename Cs {#} AAs As \<sigma> E"
by (simp add: ord_resolve_empty_main_prem ord_resolve_rename.simps)
subsection \<open>Soundness\<close>
text \<open>
Soundness is not discussed in the chapter, but it is an important property.
\<close>
lemma ord_resolve_ground_inst_sound:
assumes
res_e: "ord_resolve CAs DA AAs As \<sigma> E" and
cc_inst_true: "I \<Turnstile>m mset CAs \<cdot>cm \<sigma> \<cdot>cm \<eta>" and
d_inst_true: "I \<Turnstile> DA \<cdot> \<sigma> \<cdot> \<eta>" and
ground_subst_\<eta>: "is_ground_subst \<eta>"
shows "I \<Turnstile> E \<cdot> \<eta>"
using res_e
proof (cases rule: ord_resolve.cases)
case (ord_resolve n Cs D)
note da = this(1) and e = this(2) and cas_len = this(3) and cs_len = this(4) and
aas_len = this(5) and as_len = this(6) and cas = this(8) and mgu = this(10) and
len = this(1)
have len: "length CAs = length As"
using as_len cas_len by auto
have "is_ground_subst (\<sigma> \<odot> \<eta>)"
using ground_subst_\<eta> by (rule is_ground_comp_subst)
then have cc_true: "I \<Turnstile>m mset CAs \<cdot>cm \<sigma> \<cdot>cm \<eta>" and d_true: "I \<Turnstile> DA \<cdot> \<sigma> \<cdot> \<eta>"
using cc_inst_true d_inst_true by auto
from mgu have unif: "\<forall>i < n. \<forall>A\<in>#AAs ! i. A \<cdot>a \<sigma> = As ! i \<cdot>a \<sigma>"
using mgu_unifier as_len aas_len by blast
show "I \<Turnstile> E \<cdot> \<eta>"
proof (cases "\<forall>A \<in> set As. A \<cdot>a \<sigma> \<cdot>a \<eta> \<in> I")
case True
then have "\<not> I \<Turnstile> negs (mset As) \<cdot> \<sigma> \<cdot> \<eta>"
unfolding true_cls_def[of I] by auto
then have "I \<Turnstile> D \<cdot> \<sigma> \<cdot> \<eta>"
using d_true da by auto
then show ?thesis
unfolding e by auto
next
case False
then obtain i where a_in_aa: "i < length CAs" and a_false: "(As ! i) \<cdot>a \<sigma> \<cdot>a \<eta> \<notin> I"
using da len by (metis in_set_conv_nth)
define C where "C \<equiv> Cs ! i"
define BB where "BB \<equiv> AAs ! i"
have c_cf': "C \<subseteq># \<Union># (mset CAs)"
unfolding C_def using a_in_aa cas cas_len
by (metis less_subset_eq_Union_mset mset_subset_eq_add_left subset_mset.order.trans)
have c_in_cc: "C + poss BB \<in># mset CAs"
using C_def BB_def a_in_aa cas_len in_set_conv_nth cas by fastforce
{
fix B
assume "B \<in># BB"
then have "B \<cdot>a \<sigma> = (As ! i) \<cdot>a \<sigma>"
using unif a_in_aa cas_len unfolding BB_def by auto
}
then have "\<not> I \<Turnstile> poss BB \<cdot> \<sigma> \<cdot> \<eta>"
using a_false by (auto simp: true_cls_def)
moreover have "I \<Turnstile> (C + poss BB) \<cdot> \<sigma> \<cdot> \<eta>"
using c_in_cc cc_true true_cls_mset_true_cls[of I "mset CAs \<cdot>cm \<sigma> \<cdot>cm \<eta>"] by force
ultimately have "I \<Turnstile> C \<cdot> \<sigma> \<cdot> \<eta>"
by simp
then show ?thesis
unfolding e subst_cls_union using c_cf' C_def a_in_aa cas_len cs_len
by (metis (no_types, lifting) mset_subset_eq_add_left nth_mem_mset set_mset_mono sum_mset.remove true_cls_mono subst_cls_mono)
qed
qed
text \<open>
The previous lemma is used not only to prove soundness but also to prove the following lemma,
which in turn is used to prove Lemma 4.10.
\<close>
lemma ord_resolve_rename_ground_inst_sound:
assumes
"ord_resolve_rename CAs DA AAs As \<sigma> E" and
"\<rho>s = tl (renamings_apart (DA # CAs))" and
"\<rho> = hd (renamings_apart (DA # CAs))" and
"I \<Turnstile>m (mset (CAs \<cdot>\<cdot>cl \<rho>s)) \<cdot>cm \<sigma> \<cdot>cm \<eta>" and
"I \<Turnstile> DA \<cdot> \<rho> \<cdot> \<sigma> \<cdot> \<eta>" and
"is_ground_subst \<eta>"
shows "I \<Turnstile> E \<cdot> \<eta>"
using assms by (cases rule: ord_resolve_rename.cases) (fast intro: ord_resolve_ground_inst_sound)
text \<open>
Here follows the soundness theorem for the resolution rule.
\<close>
theorem ord_resolve_sound:
assumes
res_e: "ord_resolve CAs DA AAs As \<sigma> E" and
cc_d_true: "\<And>\<sigma>. is_ground_subst \<sigma> \<Longrightarrow> I \<Turnstile>m (mset CAs + {#DA#}) \<cdot>cm \<sigma>" and
ground_subst_\<eta>: "is_ground_subst \<eta>"
shows "I \<Turnstile> E \<cdot> \<eta>"
proof (use res_e in \<open>cases rule: ord_resolve.cases\<close>)
case (ord_resolve n Cs D)
note da = this(1) and e = this(2) and cas_len = this(3) and cs_len = this(4)
and aas_len = this(5) and as_len = this(6) and cas = this(8) and mgu = this(10)
have ground_subst_\<sigma>_\<eta>: "is_ground_subst (\<sigma> \<odot> \<eta>)"
using ground_subst_\<eta> by (rule is_ground_comp_subst)
have cas_true: "I \<Turnstile>m mset CAs \<cdot>cm \<sigma> \<cdot>cm \<eta>"
using cc_d_true ground_subst_\<sigma>_\<eta> by fastforce
have da_true: "I \<Turnstile> DA \<cdot> \<sigma> \<cdot> \<eta>"
using cc_d_true ground_subst_\<sigma>_\<eta> by fastforce
show "I \<Turnstile> E \<cdot> \<eta>"
using ord_resolve_ground_inst_sound[OF res_e cas_true da_true] ground_subst_\<eta> by auto
qed
lemma subst_sound:
assumes
- "\<And>\<sigma>. is_ground_subst \<sigma> \<Longrightarrow> I \<Turnstile> (C \<cdot> \<sigma>)" and
+ "\<And>\<sigma>. is_ground_subst \<sigma> \<Longrightarrow> I \<Turnstile> C \<cdot> \<sigma>" and
"is_ground_subst \<eta>"
- shows "I \<Turnstile> (C \<cdot> \<rho>) \<cdot> \<eta>"
+ shows "I \<Turnstile> C \<cdot> \<rho> \<cdot> \<eta>"
using assms is_ground_comp_subst subst_cls_comp_subst by metis
lemma subst_sound_scl:
assumes
len: "length P = length CAs" and
- true_cas: "\<And>\<sigma>. is_ground_subst \<sigma> \<Longrightarrow> I \<Turnstile>m (mset CAs) \<cdot>cm \<sigma>" and
+ true_cas: "\<And>\<sigma>. is_ground_subst \<sigma> \<Longrightarrow> I \<Turnstile>m mset CAs \<cdot>cm \<sigma>" and
ground_subst_\<eta>: "is_ground_subst \<eta>"
shows "I \<Turnstile>m mset (CAs \<cdot>\<cdot>cl P) \<cdot>cm \<eta>"
proof -
from true_cas have "\<And>CA. CA\<in># mset CAs \<Longrightarrow> (\<And>\<sigma>. is_ground_subst \<sigma> \<Longrightarrow> I \<Turnstile> CA \<cdot> \<sigma>)"
unfolding true_cls_mset_def by force
then have "\<forall>i < length CAs. \<forall>\<sigma>. is_ground_subst \<sigma> \<longrightarrow> (I \<Turnstile> CAs ! i \<cdot> \<sigma>)"
using in_set_conv_nth by auto
then have true_cp: "\<forall>i < length CAs. \<forall>\<sigma>. is_ground_subst \<sigma> \<longrightarrow> I \<Turnstile> CAs ! i \<cdot> P ! i \<cdot> \<sigma>"
using subst_sound len by auto
{
fix CA
assume "CA \<in># mset (CAs \<cdot>\<cdot>cl P)"
then obtain i where
i_x: "i < length (CAs \<cdot>\<cdot>cl P)" "CA = (CAs \<cdot>\<cdot>cl P) ! i"
by (metis in_mset_conv_nth)
then have "\<forall>\<sigma>. is_ground_subst \<sigma> \<longrightarrow> I \<Turnstile> CA \<cdot> \<sigma>"
using true_cp unfolding subst_cls_lists_def by (simp add: len)
}
then show ?thesis
using assms unfolding true_cls_mset_def by auto
qed
text \<open>
Here follows the soundness theorem for the resolution rule with renaming.
\<close>
lemma ord_resolve_rename_sound:
assumes
res_e: "ord_resolve_rename CAs DA AAs As \<sigma> E" and
cc_d_true: "\<And>\<sigma>. is_ground_subst \<sigma> \<Longrightarrow> I \<Turnstile>m ((mset CAs) + {#DA#}) \<cdot>cm \<sigma>" and
ground_subst_\<eta>: "is_ground_subst \<eta>"
shows "I \<Turnstile> E \<cdot> \<eta>"
using res_e
proof (cases rule: ord_resolve_rename.cases)
case (ord_resolve_rename n \<rho> \<rho>s)
note \<rho>s = this(7) and res = this(8)
have len: "length \<rho>s = length CAs"
using \<rho>s renamings_apart_length by auto
have "\<And>\<sigma>. is_ground_subst \<sigma> \<Longrightarrow> I \<Turnstile>m (mset (CAs \<cdot>\<cdot>cl \<rho>s) + {#DA \<cdot> \<rho>#}) \<cdot>cm \<sigma>"
using subst_sound_scl[OF len, of I] subst_sound cc_d_true by auto
then show "I \<Turnstile> E \<cdot> \<eta>"
using ground_subst_\<eta> ord_resolve_sound[OF res] by simp
qed
subsection \<open>Other Basic Properties\<close>
lemma ord_resolve_unique:
assumes
"ord_resolve CAs DA AAs As \<sigma> E" and
"ord_resolve CAs DA AAs As \<sigma>' E'"
shows "\<sigma> = \<sigma>' \<and> E = E'"
using assms
proof (cases rule: ord_resolve.cases[case_product ord_resolve.cases], intro conjI)
case (ord_resolve_ord_resolve CAs n Cs AAs As \<sigma>'' DA CAs' n' Cs' AAs' As' \<sigma>''' DA')
note res = this(1-17) and res' = this(18-34)
show \<sigma>: "\<sigma> = \<sigma>'"
using res(3-5,14) res'(3-5,14) by (metis option.inject)
have "Cs = Cs'"
using res(1,3,7,8,12) res'(1,3,7,8,12) by (metis add_right_imp_eq nth_equalityI)
moreover have "DA = DA'"
using res(2,4) res'(2,4) by fastforce
ultimately show "E = E'"
using res(5,6) res'(5,6) \<sigma> by blast
qed
lemma ord_resolve_rename_unique:
assumes
"ord_resolve_rename CAs DA AAs As \<sigma> E" and
"ord_resolve_rename CAs DA AAs As \<sigma>' E'"
shows "\<sigma> = \<sigma>' \<and> E = E'"
using assms unfolding ord_resolve_rename.simps using ord_resolve_unique by meson
lemma ord_resolve_max_side_prems: "ord_resolve CAs DA AAs As \<sigma> E \<Longrightarrow> length CAs \<le> size DA"
by (auto elim!: ord_resolve.cases)
lemma ord_resolve_rename_max_side_prems:
"ord_resolve_rename CAs DA AAs As \<sigma> E \<Longrightarrow> length CAs \<le> size DA"
by (elim ord_resolve_rename.cases, drule ord_resolve_max_side_prems, simp add: renamings_apart_length)
subsection \<open>Inference System\<close>
definition ord_FO_\<Gamma> :: "'a inference set" where
"ord_FO_\<Gamma> = {Infer (mset CAs) DA E | CAs DA AAs As \<sigma> E. ord_resolve_rename CAs DA AAs As \<sigma> E}"
interpretation ord_FO_resolution: inference_system ord_FO_\<Gamma> .
-lemma exists_compose: "\<exists>x. P (f x) \<Longrightarrow> \<exists>y. P y"
- by meson
-
lemma finite_ord_FO_resolution_inferences_between:
assumes fin_cc: "finite CC"
shows "finite (ord_FO_resolution.inferences_between CC C)"
proof -
let ?CCC = "CC \<union> {C}"
define all_AA where "all_AA = (\<Union>D \<in> ?CCC. atms_of D)"
define max_ary where "max_ary = Max (size ` ?CCC)"
define CAS where "CAS = {CAs. CAs \<in> lists ?CCC \<and> length CAs \<le> max_ary}"
define AS where "AS = {As. As \<in> lists all_AA \<and> length As \<le> max_ary}"
define AAS where "AAS = {AAs. AAs \<in> lists (mset ` AS) \<and> length AAs \<le> max_ary}"
note defs = all_AA_def max_ary_def CAS_def AS_def AAS_def
let ?infer_of =
"\<lambda>CAs DA AAs As. Infer (mset CAs) DA (THE E. \<exists>\<sigma>. ord_resolve_rename CAs DA AAs As \<sigma> E)"
let ?Z = "{\<gamma> | CAs DA AAs As \<sigma> E \<gamma>. \<gamma> = Infer (mset CAs) DA E
\<and> ord_resolve_rename CAs DA AAs As \<sigma> E \<and> infer_from ?CCC \<gamma> \<and> C \<in># prems_of \<gamma>}"
let ?Y = "{Infer (mset CAs) DA E | CAs DA AAs As \<sigma> E.
ord_resolve_rename CAs DA AAs As \<sigma> E \<and> set CAs \<union> {DA} \<subseteq> ?CCC}"
let ?X = "{?infer_of CAs DA AAs As | CAs DA AAs As. CAs \<in> CAS \<and> DA \<in> ?CCC \<and> AAs \<in> AAS \<and> As \<in> AS}"
let ?W = "CAS \<times> ?CCC \<times> AAS \<times> AS"
have fin_w: "finite ?W"
unfolding defs using fin_cc by (simp add: finite_lists_length_le lists_eq_set)
have "?Z \<subseteq> ?Y"
by (force simp: infer_from_def)
also have "\<dots> \<subseteq> ?X"
proof -
{
fix CAs DA AAs As \<sigma> E
assume
res_e: "ord_resolve_rename CAs DA AAs As \<sigma> E" and
da_in: "DA \<in> ?CCC" and
cas_sub: "set CAs \<subseteq> ?CCC"
have "E = (THE E. \<exists>\<sigma>. ord_resolve_rename CAs DA AAs As \<sigma> E)
\<and> CAs \<in> CAS \<and> AAs \<in> AAS \<and> As \<in> AS" (is "?e \<and> ?cas \<and> ?aas \<and> ?as")
proof (intro conjI)
show ?e
using res_e ord_resolve_rename_unique by (blast intro: the_equality[symmetric])
next
show ?cas
unfolding CAS_def max_ary_def using cas_sub
ord_resolve_rename_max_side_prems[OF res_e] da_in fin_cc
by (auto simp add: Max_ge_iff)
next
show ?aas
using res_e
proof (cases rule: ord_resolve_rename.cases)
case (ord_resolve_rename n \<rho> \<rho>s)
note len_cas = this(1) and len_aas = this(2) and len_as = this(3) and
aas_sub = this(4) and as_sub = this(5) and res_e' = this(8)
show ?thesis
unfolding AAS_def
proof (clarify, intro conjI)
show "AAs \<in> lists (mset ` AS)"
unfolding AS_def image_def
proof clarsimp
fix AA
assume "AA \<in> set AAs"
then obtain i where
i_lt: "i < n" and
aa: "AA = AAs ! i"
by (metis in_set_conv_nth len_aas)
have casi_in: "CAs ! i \<in> ?CCC"
using i_lt len_cas cas_sub nth_mem by blast
have pos_aa_sub: "poss AA \<subseteq># CAs ! i"
using aa aas_sub i_lt by blast
then have "set_mset AA \<subseteq> atms_of (CAs ! i)"
by (metis atms_of_poss lits_subseteq_imp_atms_subseteq set_mset_mono)
also have aa_sub: "\<dots> \<subseteq> all_AA"
unfolding all_AA_def using casi_in by force
finally have aa_sub: "set_mset AA \<subseteq> all_AA"
.
have "size AA = size (poss AA)"
by simp
also have "\<dots> \<le> size (CAs ! i)"
by (rule size_mset_mono[OF pos_aa_sub])
also have "\<dots> \<le> max_ary"
unfolding max_ary_def using fin_cc casi_in by auto
finally have sz_aa: "size AA \<le> max_ary"
.
let ?As' = "sorted_list_of_multiset AA"
have "?As' \<in> lists all_AA"
using aa_sub by auto
moreover have "length ?As' \<le> max_ary"
using sz_aa by simp
moreover have "AA = mset ?As'"
by simp
ultimately show "\<exists>xa. xa \<in> lists all_AA \<and> length xa \<le> max_ary \<and> AA = mset xa"
by blast
qed
next
have "length AAs = length As"
unfolding len_aas len_as ..
also have "\<dots> \<le> size DA"
using as_sub size_mset_mono by fastforce
also have "\<dots> \<le> max_ary"
unfolding max_ary_def using fin_cc da_in by auto
finally show "length AAs \<le> max_ary"
.
qed
qed
next
show ?as
unfolding AS_def
proof (clarify, intro conjI)
have "set As \<subseteq> atms_of DA"
using res_e[simplified ord_resolve_rename.simps]
by (metis atms_of_negs lits_subseteq_imp_atms_subseteq set_mset_mono set_mset_mset)
also have "\<dots> \<subseteq> all_AA"
unfolding all_AA_def using da_in by blast
finally show "As \<in> lists all_AA"
unfolding lists_eq_set by simp
next
have "length As \<le> size DA"
using res_e[simplified ord_resolve_rename.simps]
ord_resolve_rename_max_side_prems[OF res_e] by auto
also have "size DA \<le> max_ary"
unfolding max_ary_def using fin_cc da_in by auto
finally show "length As \<le> max_ary"
.
qed
qed
}
then show ?thesis
by simp fast
qed
also have "\<dots> \<subseteq> (\<lambda>(CAs, DA, AAs, As). ?infer_of CAs DA AAs As) ` ?W"
unfolding image_def Bex_cartesian_product by fast
finally show ?thesis
unfolding inference_system.inferences_between_def ord_FO_\<Gamma>_def mem_Collect_eq
by (fast intro: rev_finite_subset[OF finite_imageI[OF fin_w]])
qed
lemma ord_FO_resolution_inferences_between_empty_empty:
"ord_FO_resolution.inferences_between {} {#} = {}"
unfolding ord_FO_resolution.inferences_between_def inference_system.inferences_between_def
infer_from_def ord_FO_\<Gamma>_def
using ord_resolve_rename_empty_main_prem by auto
subsection \<open>Lifting\<close>
text \<open>
The following corresponds to the passage between Lemmas 4.11 and 4.12.
\<close>
context
fixes M :: "'a clause set"
assumes select: "selection S"
begin
interpretation selection
by (rule select)
definition S_M :: "'a literal multiset \<Rightarrow> 'a literal multiset" where
"S_M C =
(if C \<in> grounding_of_clss M then
(SOME C'. \<exists>D \<sigma>. D \<in> M \<and> C = D \<cdot> \<sigma> \<and> C' = S D \<cdot> \<sigma> \<and> is_ground_subst \<sigma>)
else
S C)"
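(* Editor's note, not part of the original theory text: S_M adapts the selection
   function S to the ground instances of M. For C in grounding_of_clss M it
   returns, via Hilbert choice, S D \<cdot> \<sigma> for some D \<in> M and ground \<sigma> with
   C = D \<cdot> \<sigma>; the lemma S_M_grounding_of_clss directly below exposes this
   choice. On all other clauses S_M coincides with S. *)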
lemma S_M_grounding_of_clss:
assumes "C \<in> grounding_of_clss M"
obtains D \<sigma> where
"D \<in> M \<and> C = D \<cdot> \<sigma> \<and> S_M C = S D \<cdot> \<sigma> \<and> is_ground_subst \<sigma>"
proof (atomize_elim, unfold S_M_def eqTrueI[OF assms] if_True, rule someI_ex)
from assms show "\<exists>C' D \<sigma>. D \<in> M \<and> C = D \<cdot> \<sigma> \<and> C' = S D \<cdot> \<sigma> \<and> is_ground_subst \<sigma>"
by (auto simp: grounding_of_clss_def grounding_of_cls_def)
qed
lemma S_M_not_grounding_of_clss: "C \<notin> grounding_of_clss M \<Longrightarrow> S_M C = S C"
unfolding S_M_def by simp
lemma S_M_selects_subseteq: "S_M C \<subseteq># C"
by (metis S_M_grounding_of_clss S_M_not_grounding_of_clss S_selects_subseteq subst_cls_mono_mset)
lemma S_M_selects_neg_lits: "L \<in># S_M C \<Longrightarrow> is_neg L"
by (metis Melem_subst_cls S_M_grounding_of_clss S_M_not_grounding_of_clss S_selects_neg_lits
subst_lit_is_neg)
end
end
text \<open>
The following corresponds to Lemma 4.12:
\<close>
-lemma map2_add_mset_map:
- assumes "length AAs' = n" and "length As' = n"
- shows "map2 add_mset (As' \<cdot>al \<eta>) (AAs' \<cdot>aml \<eta>) = map2 add_mset As' AAs' \<cdot>aml \<eta>"
- using assms
-proof (induction n arbitrary: AAs' As')
- case (Suc n)
- then have "map2 add_mset (tl (As' \<cdot>al \<eta>)) (tl (AAs' \<cdot>aml \<eta>)) = map2 add_mset (tl As') (tl AAs') \<cdot>aml \<eta>"
- by simp
- moreover have Succ: "length (As' \<cdot>al \<eta>) = Suc n" "length (AAs' \<cdot>aml \<eta>) = Suc n"
- using Suc(3) Suc(2) by auto
- then have "length (tl (As' \<cdot>al \<eta>)) = n" "length (tl (AAs' \<cdot>aml \<eta>)) = n"
- by auto
- then have "length (map2 add_mset (tl (As' \<cdot>al \<eta>)) (tl (AAs' \<cdot>aml \<eta>))) = n"
- "length (map2 add_mset (tl As') (tl AAs') \<cdot>aml \<eta>) = n"
- using Suc(2,3) by auto
- ultimately have "\<forall>i < n. tl (map2 add_mset ( (As' \<cdot>al \<eta>)) ((AAs' \<cdot>aml \<eta>))) ! i =
- tl (map2 add_mset (As') (AAs') \<cdot>aml \<eta>) ! i"
- using Suc(2,3) Succ by (simp add: map2_tl map_tl subst_atm_mset_list_def del: subst_atm_list_tl)
- moreover have nn: "length (map2 add_mset ((As' \<cdot>al \<eta>)) ((AAs' \<cdot>aml \<eta>))) = Suc n"
- "length (map2 add_mset (As') (AAs') \<cdot>aml \<eta>) = Suc n"
- using Succ Suc by auto
- ultimately have "\<forall>i. i < Suc n \<longrightarrow> i > 0 \<longrightarrow>
- map2 add_mset (As' \<cdot>al \<eta>) (AAs' \<cdot>aml \<eta>) ! i = (map2 add_mset As' AAs' \<cdot>aml \<eta>) ! i"
- by (auto simp: subst_atm_mset_list_def gr0_conv_Suc subst_atm_mset_def)
- moreover have "add_mset (hd As' \<cdot>a \<eta>) (hd AAs' \<cdot>am \<eta>) = add_mset (hd As') (hd AAs') \<cdot>am \<eta>"
- unfolding subst_atm_mset_def by auto
- then have "(map2 add_mset (As' \<cdot>al \<eta>) (AAs' \<cdot>aml \<eta>)) ! 0 = (map2 add_mset (As') (AAs') \<cdot>aml \<eta>) ! 0"
- using Suc by (simp add: Succ(2) subst_atm_mset_def)
- ultimately have "\<forall>i < Suc n. (map2 add_mset (As' \<cdot>al \<eta>) (AAs' \<cdot>aml \<eta>)) ! i =
- (map2 add_mset (As') (AAs') \<cdot>aml \<eta>) ! i"
- using Suc by auto
- then show ?case
- using nn list_eq_iff_nth_eq by metis
-qed auto
-
-lemma maximal_wrt_subst: "maximal_wrt (A \<cdot>a \<sigma>) (C \<cdot> \<sigma>) \<Longrightarrow> maximal_wrt A C"
- unfolding maximal_wrt_def using in_atms_of_subst less_atm_stable by blast
-
-lemma strictly_maximal_wrt_subst: "strictly_maximal_wrt (A \<cdot>a \<sigma>) (C \<cdot> \<sigma>) \<Longrightarrow> strictly_maximal_wrt A C"
- unfolding strictly_maximal_wrt_def using in_atms_of_subst less_atm_stable by blast
-
lemma ground_resolvent_subset:
assumes
gr_cas: "is_ground_cls_list CAs" and
gr_da: "is_ground_cls DA" and
res_e: "ord_resolve S CAs DA AAs As \<sigma> E"
shows "E \<subseteq># \<Union># (mset CAs) + DA"
using res_e
proof (cases rule: ord_resolve.cases)
case (ord_resolve n Cs D)
note da = this(1) and e = this(2) and cas_len = this(3) and cs_len = this(4)
and aas_len = this(5) and as_len = this(6) and cas = this(8) and mgu = this(10)
then have cs_sub_cas: "\<Union># (mset Cs) \<subseteq># \<Union># (mset CAs)"
using subseteq_list_Union_mset cas_len cs_len by force
then have gr_cs: "is_ground_cls_list Cs"
using gr_cas by simp
have d_sub_da: "D \<subseteq># DA"
by (simp add: da)
then have gr_d: "is_ground_cls D"
using gr_da is_ground_cls_mono by auto
have "is_ground_cls (\<Union># (mset Cs) + D)"
using gr_cs gr_d by auto
with e have "E = \<Union># (mset Cs) + D"
by auto
then show ?thesis
using cs_sub_cas d_sub_da by (auto simp: subset_mset.add_mono)
qed
lemma ord_resolve_obtain_clauses:
assumes
res_e: "ord_resolve (S_M S M) CAs DA AAs As \<sigma> E" and
select: "selection S" and
grounding: "{DA} \<union> set CAs \<subseteq> grounding_of_clss M" and
n: "length CAs = n" and
d: "DA = D + negs (mset As)" and
c: "(\<forall>i < n. CAs ! i = Cs ! i + poss (AAs ! i))" "length Cs = n" "length AAs = n"
obtains DA0 \<eta>0 CAs0 \<eta>s0 As0 AAs0 D0 Cs0 where
"length CAs0 = n"
"length \<eta>s0 = n"
"DA0 \<in> M"
"DA0 \<cdot> \<eta>0 = DA"
"S DA0 \<cdot> \<eta>0 = S_M S M DA"
"\<forall>CA0 \<in> set CAs0. CA0 \<in> M"
"CAs0 \<cdot>\<cdot>cl \<eta>s0 = CAs"
"map S CAs0 \<cdot>\<cdot>cl \<eta>s0 = map (S_M S M) CAs"
"is_ground_subst \<eta>0"
"is_ground_subst_list \<eta>s0"
"As0 \<cdot>al \<eta>0 = As"
"AAs0 \<cdot>\<cdot>aml \<eta>s0 = AAs"
"length As0 = n"
"D0 \<cdot> \<eta>0 = D"
"DA0 = D0 + (negs (mset As0))"
"S_M S M (D + negs (mset As)) \<noteq> {#} \<Longrightarrow> negs (mset As0) = S DA0"
"length Cs0 = n"
"Cs0 \<cdot>\<cdot>cl \<eta>s0 = Cs"
"\<forall>i < n. CAs0 ! i = Cs0 ! i + poss (AAs0 ! i)"
"length AAs0 = n"
using res_e
proof (cases rule: ord_resolve.cases)
case (ord_resolve n_twin Cs_twins D_twin)
note da = this(1) and e = this(2) and cas = this(8) and mgu = this(10) and eligible = this(11)
from ord_resolve have "n_twin = n" "D_twin = D"
using n d by auto
moreover have "Cs_twins = Cs"
using c cas n calculation(1) \<open>length Cs_twins = n_twin\<close> by (auto simp add: nth_equalityI)
ultimately
have nz: "n \<noteq> 0" and cs_len: "length Cs = n" and aas_len: "length AAs = n" and as_len: "length As = n"
and da: "DA = D + negs (mset As)" and eligible: "eligible (S_M S M) \<sigma> As (D + negs (mset As))"
and cas: "\<forall>i<n. CAs ! i = Cs ! i + poss (AAs ! i)"
using ord_resolve by force+
note n = \<open>n \<noteq> 0\<close> \<open>length CAs = n\<close> \<open>length Cs = n\<close> \<open>length AAs = n\<close> \<open>length As = n\<close>
interpret S: selection S by (rule select)
\<comment> \<open>Obtain FO side premises\<close>
have "\<forall>CA \<in> set CAs. \<exists>CA0 \<eta>c0. CA0 \<in> M \<and> CA0 \<cdot> \<eta>c0 = CA \<and> S CA0 \<cdot> \<eta>c0 = S_M S M CA \<and> is_ground_subst \<eta>c0"
using grounding S_M_grounding_of_clss select by (metis (no_types) le_supE subset_iff)
then have "\<forall>i < n. \<exists>CA0 \<eta>c0. CA0 \<in> M \<and> CA0 \<cdot> \<eta>c0 = (CAs ! i) \<and> S CA0 \<cdot> \<eta>c0 = S_M S M (CAs ! i) \<and> is_ground_subst \<eta>c0"
using n by force
then obtain \<eta>s0f CAs0f where f_p:
"\<forall>i < n. CAs0f i \<in> M"
"\<forall>i < n. (CAs0f i) \<cdot> (\<eta>s0f i) = (CAs ! i)"
"\<forall>i < n. S (CAs0f i) \<cdot> (\<eta>s0f i) = S_M S M (CAs ! i)"
"\<forall>i < n. is_ground_subst (\<eta>s0f i)"
using n by (metis (no_types))
define \<eta>s0 where
"\<eta>s0 = map \<eta>s0f [0 ..<n]"
define CAs0 where
"CAs0 = map CAs0f [0 ..<n]"
have "length \<eta>s0 = n" "length CAs0 = n"
unfolding \<eta>s0_def CAs0_def by auto
note n = \<open>length \<eta>s0 = n\<close> \<open>length CAs0 = n\<close> n
\<comment> \<open>The properties we need of the FO side premises\<close>
have CAs0_in_M: "\<forall>CA0 \<in> set CAs0. CA0 \<in> M"
unfolding CAs0_def using f_p(1) by auto
have CAs0_to_CAs: "CAs0 \<cdot>\<cdot>cl \<eta>s0 = CAs"
unfolding CAs0_def \<eta>s0_def using f_p(2) by (auto simp: n intro: nth_equalityI)
have SCAs0_to_SMCAs: "(map S CAs0) \<cdot>\<cdot>cl \<eta>s0 = map (S_M S M) CAs"
unfolding CAs0_def \<eta>s0_def using f_p(3) n by (force intro: nth_equalityI)
have sub_ground: "\<forall>\<eta>c0 \<in> set \<eta>s0. is_ground_subst \<eta>c0"
unfolding \<eta>s0_def using f_p n by force
then have "is_ground_subst_list \<eta>s0"
using n unfolding is_ground_subst_list_def by auto
\<comment> \<open>Split side premises CAs0 into Cs0 and AAs0\<close>
obtain AAs0 Cs0 where AAs0_Cs0_p:
"AAs0 \<cdot>\<cdot>aml \<eta>s0 = AAs" "length Cs0 = n" "Cs0 \<cdot>\<cdot>cl \<eta>s0 = Cs"
"\<forall>i < n. CAs0 ! i = Cs0 ! i + poss (AAs0 ! i)" "length AAs0 = n"
proof -
have "\<forall>i < n. \<exists>AA0. AA0 \<cdot>am \<eta>s0 ! i = AAs ! i \<and> poss AA0 \<subseteq># CAs0 ! i"
proof (rule, rule)
fix i
assume "i < n"
have "CAs0 ! i \<cdot> \<eta>s0 ! i = CAs ! i"
using \<open>i < n\<close> \<open>CAs0 \<cdot>\<cdot>cl \<eta>s0 = CAs\<close> n by force
moreover have "poss (AAs ! i) \<subseteq># CAs !i"
using \<open>i < n\<close> cas by auto
ultimately obtain poss_AA0 where
nn: "poss_AA0 \<cdot> \<eta>s0 ! i = poss (AAs ! i) \<and> poss_AA0 \<subseteq># CAs0 ! i"
using cas image_mset_of_subset unfolding subst_cls_def by metis
then have l: "\<forall>L \<in># poss_AA0. is_pos L"
unfolding subst_cls_def by (metis Melem_subst_cls imageE literal.disc(1)
literal.map_disc_iff set_image_mset subst_cls_def subst_lit_def)
define AA0 where
"AA0 = image_mset atm_of poss_AA0"
have na: "poss AA0 = poss_AA0"
using l unfolding AA0_def by auto
then have "AA0 \<cdot>am \<eta>s0 ! i = AAs ! i"
using nn by (metis (mono_tags) literal.inject(1) multiset.inj_map_strong subst_cls_poss)
moreover have "poss AA0 \<subseteq># CAs0 ! i"
using na nn by auto
ultimately show "\<exists>AA0. AA0 \<cdot>am \<eta>s0 ! i = AAs ! i \<and> poss AA0 \<subseteq># CAs0 ! i"
by blast
qed
then obtain AAs0f where
AAs0f_p: "\<forall>i < n. AAs0f i \<cdot>am \<eta>s0 ! i = AAs ! i \<and> (poss (AAs0f i)) \<subseteq># CAs0 ! i"
by metis
define AAs0 where "AAs0 = map AAs0f [0 ..<n]"
then have "length AAs0 = n"
by auto
note n = n \<open>length AAs0 = n\<close>
from AAs0_def have "\<forall>i < n. AAs0 ! i \<cdot>am \<eta>s0 ! i = AAs ! i"
using AAs0f_p by auto
then have AAs0_AAs: "AAs0 \<cdot>\<cdot>aml \<eta>s0 = AAs"
using n by (auto intro: nth_equalityI)
from AAs0_def have AAs0_in_CAs0: "\<forall>i < n. poss (AAs0 ! i) \<subseteq># CAs0 ! i"
using AAs0f_p by auto
define Cs0 where
"Cs0 = map2 (-) CAs0 (map poss AAs0)"
have "length Cs0 = n"
using Cs0_def n by auto
note n = n \<open>length Cs0 = n\<close>
have "\<forall>i < n. CAs0 ! i = Cs0 ! i + poss (AAs0 ! i)"
using AAs0_in_CAs0 Cs0_def n by auto
then have "Cs0 \<cdot>\<cdot>cl \<eta>s0 = Cs"
using \<open>CAs0 \<cdot>\<cdot>cl \<eta>s0 = CAs\<close> AAs0_AAs cas n by (auto intro: nth_equalityI)
show ?thesis
using that
\<open>AAs0 \<cdot>\<cdot>aml \<eta>s0 = AAs\<close> \<open>Cs0 \<cdot>\<cdot>cl \<eta>s0 = Cs\<close> \<open>\<forall>i < n. CAs0 ! i = Cs0 ! i + poss (AAs0 ! i)\<close>
\<open>length AAs0 = n\<close> \<open>length Cs0 = n\<close>
by blast
qed
\<comment> \<open>Obtain FO main premise\<close>
have "\<exists>DA0 \<eta>0. DA0 \<in> M \<and> DA = DA0 \<cdot> \<eta>0 \<and> S DA0 \<cdot> \<eta>0 = S_M S M DA \<and> is_ground_subst \<eta>0"
using grounding S_M_grounding_of_clss select by (metis le_supE singletonI subsetCE)
then obtain DA0 \<eta>0 where
DA0_\<eta>0_p: "DA0 \<in> M \<and> DA = DA0 \<cdot> \<eta>0 \<and> S DA0 \<cdot> \<eta>0 = S_M S M DA \<and> is_ground_subst \<eta>0"
by auto
\<comment> \<open>The properties we need of the FO main premise\<close>
have DA0_in_M: "DA0 \<in> M"
using DA0_\<eta>0_p by auto
have DA0_to_DA: "DA0 \<cdot> \<eta>0 = DA"
using DA0_\<eta>0_p by auto
have SDA0_to_SMDA: "S DA0 \<cdot> \<eta>0 = S_M S M DA"
using DA0_\<eta>0_p by auto
have "is_ground_subst \<eta>0"
using DA0_\<eta>0_p by auto
\<comment> \<open>Split main premise DA0 into D0 and As0\<close>
obtain D0 As0 where D0As0_p:
"As0 \<cdot>al \<eta>0 = As" "length As0 = n" "D0 \<cdot> \<eta>0 = D" "DA0 = D0 + (negs (mset As0))"
"S_M S M (D + negs (mset As)) \<noteq> {#} \<Longrightarrow> negs (mset As0) = S DA0"
proof -
{
assume a: "S_M S M (D + negs (mset As)) = {#} \<and> length As = (Suc 0)
\<and> maximal_wrt (As ! 0 \<cdot>a \<sigma>) ((D + negs (mset As)) \<cdot> \<sigma>)"
then have as: "mset As = {#As ! 0#}"
by (auto intro: nth_equalityI)
then have "negs (mset As) = {#Neg (As ! 0)#}"
by (simp add: \<open>mset As = {#As ! 0#}\<close>)
then have "DA = D + {#Neg (As ! 0)#}"
using da by auto
then obtain L where "L \<in># DA0 \<and> L \<cdot>l \<eta>0 = Neg (As ! 0)"
using DA0_to_DA by (metis Melem_subst_cls mset_subset_eq_add_right single_subset_iff)
then have "Neg (atm_of L) \<in># DA0 \<and> Neg (atm_of L) \<cdot>l \<eta>0 = Neg (As ! 0)"
by (metis Neg_atm_of_iff literal.sel(2) subst_lit_is_pos)
then have "[atm_of L] \<cdot>al \<eta>0 = As \<and> negs (mset [atm_of L]) \<subseteq># DA0"
using as subst_lit_def by auto
then have "\<exists>As0. As0 \<cdot>al \<eta>0 = As \<and> negs (mset As0) \<subseteq># DA0
\<and> (S_M S M (D + negs (mset As)) \<noteq> {#} \<longrightarrow> negs (mset As0) = S DA0)"
using a by blast
}
moreover
{
assume "S_M S M (D + negs (mset As)) = negs (mset As)"
then have "negs (mset As) = S DA0 \<cdot> \<eta>0"
using da \<open>S DA0 \<cdot> \<eta>0 = S_M S M DA\<close> by auto
then have "\<exists>As0. negs (mset As0) = S DA0 \<and> As0 \<cdot>al \<eta>0 = As"
using instance_list[of As "S DA0" \<eta>0] S.S_selects_neg_lits by auto
then have "\<exists>As0. As0 \<cdot>al \<eta>0 = As \<and> negs (mset As0) \<subseteq># DA0
\<and> (S_M S M (D + negs (mset As)) \<noteq> {#} \<longrightarrow> negs (mset As0) = S DA0)"
using S.S_selects_subseteq by auto
}
ultimately have "\<exists>As0. As0 \<cdot>al \<eta>0 = As \<and> (negs (mset As0)) \<subseteq># DA0
\<and> (S_M S M (D + negs (mset As)) \<noteq> {#} \<longrightarrow> negs (mset As0) = S DA0)"
using eligible unfolding eligible.simps by auto
then obtain As0 where
As0_p: "As0 \<cdot>al \<eta>0 = As \<and> negs (mset As0) \<subseteq># DA0
\<and> (S_M S M (D + negs (mset As)) \<noteq> {#} \<longrightarrow> negs (mset As0) = S DA0)"
by blast
then have "length As0 = n"
using as_len by auto
note n = n this
have "As0 \<cdot>al \<eta>0 = As"
using As0_p by auto
define D0 where
"D0 = DA0 - negs (mset As0)"
then have "DA0 = D0 + negs (mset As0)"
using As0_p by auto
then have "D0 \<cdot> \<eta>0 = D"
using DA0_to_DA da As0_p by auto
have "S_M S M (D + negs (mset As)) \<noteq> {#} \<Longrightarrow> negs (mset As0) = S DA0"
using As0_p by blast
then show ?thesis
using that \<open>As0 \<cdot>al \<eta>0 = As\<close> \<open>D0 \<cdot> \<eta>0= D\<close> \<open>DA0 = D0 + (negs (mset As0))\<close> \<open>length As0 = n\<close>
by metis
qed
show ?thesis
using that[OF n(2,1) DA0_in_M DA0_to_DA SDA0_to_SMDA CAs0_in_M CAs0_to_CAs SCAs0_to_SMCAs
\<open>is_ground_subst \<eta>0\<close> \<open>is_ground_subst_list \<eta>s0\<close> \<open>As0 \<cdot>al \<eta>0 = As\<close>
\<open>AAs0 \<cdot>\<cdot>aml \<eta>s0 = AAs\<close>
\<open>length As0 = n\<close>
\<open>D0 \<cdot> \<eta>0 = D\<close>
\<open>DA0 = D0 + (negs (mset As0))\<close>
\<open>S_M S M (D + negs (mset As)) \<noteq> {#} \<Longrightarrow> negs (mset As0) = S DA0\<close>
\<open>length Cs0 = n\<close>
\<open>Cs0 \<cdot>\<cdot>cl \<eta>s0 = Cs\<close>
\<open>\<forall>i < n. CAs0 ! i = Cs0 ! i + poss (AAs0 ! i)\<close>
\<open>length AAs0 = n\<close>]
by auto
qed
lemma ord_resolve_rename_lifting:
assumes
sel_stable: "\<And>\<rho> C. is_renaming \<rho> \<Longrightarrow> S (C \<cdot> \<rho>) = S C \<cdot> \<rho>" and
res_e: "ord_resolve (S_M S M) CAs DA AAs As \<sigma> E" and
select: "selection S" and
grounding: "{DA} \<union> set CAs \<subseteq> grounding_of_clss M"
obtains \<eta>s \<eta> \<eta>2 CAs0 DA0 AAs0 As0 E0 \<tau> where
"is_ground_subst \<eta>"
"is_ground_subst_list \<eta>s"
"is_ground_subst \<eta>2"
"ord_resolve_rename S CAs0 DA0 AAs0 As0 \<tau> E0"
"CAs0 \<cdot>\<cdot>cl \<eta>s = CAs" "DA0 \<cdot> \<eta> = DA" "E0 \<cdot> \<eta>2 = E"
"{DA0} \<union> set CAs0 \<subseteq> M"
"length CAs0 = length CAs"
+ "length \<eta>s = length CAs"
using res_e
proof (cases rule: ord_resolve.cases)
case (ord_resolve n Cs D)
note da = this(1) and e = this(2) and cas_len = this(3) and cs_len = this(4) and
aas_len = this(5) and as_len = this(6) and nz = this(7) and cas = this(8) and
aas_not_empt = this(9) and mgu = this(10) and eligible = this(11) and str_max = this(12) and
sel_empt = this(13)
have sel_ren_list_inv:
"\<And>\<rho>s Cs. length \<rho>s = length Cs \<Longrightarrow> is_renaming_list \<rho>s \<Longrightarrow> map S (Cs \<cdot>\<cdot>cl \<rho>s) = map S Cs \<cdot>\<cdot>cl \<rho>s"
using sel_stable unfolding is_renaming_list_def by (auto intro: nth_equalityI)
note n = \<open>n \<noteq> 0\<close> \<open>length CAs = n\<close> \<open>length Cs = n\<close> \<open>length AAs = n\<close> \<open>length As = n\<close>
interpret S: selection S by (rule select)
obtain DA0 \<eta>0 CAs0 \<eta>s0 As0 AAs0 D0 Cs0 where as0:
"length CAs0 = n"
"length \<eta>s0 = n"
"DA0 \<in> M"
"DA0 \<cdot> \<eta>0 = DA"
"S DA0 \<cdot> \<eta>0 = S_M S M DA"
"\<forall>CA0 \<in> set CAs0. CA0 \<in> M"
"CAs0 \<cdot>\<cdot>cl \<eta>s0 = CAs"
"map S CAs0 \<cdot>\<cdot>cl \<eta>s0 = map (S_M S M) CAs"
"is_ground_subst \<eta>0"
"is_ground_subst_list \<eta>s0"
"As0 \<cdot>al \<eta>0 = As"
"AAs0 \<cdot>\<cdot>aml \<eta>s0 = AAs"
"length As0 = n"
"D0 \<cdot> \<eta>0 = D"
"DA0 = D0 + (negs (mset As0))"
"S_M S M (D + negs (mset As)) \<noteq> {#} \<Longrightarrow> negs (mset As0) = S DA0"
"length Cs0 = n"
"Cs0 \<cdot>\<cdot>cl \<eta>s0 = Cs"
"\<forall>i < n. CAs0 ! i = Cs0 ! i + poss (AAs0 ! i)"
"length AAs0 = n"
using ord_resolve_obtain_clauses[of S M CAs DA, OF res_e select grounding n(2) \<open>DA = D + negs (mset As)\<close>
\<open>\<forall>i<n. CAs ! i = Cs ! i + poss (AAs ! i)\<close> \<open>length Cs = n\<close> \<open>length AAs = n\<close>, of thesis] by blast
note n = \<open>length CAs0 = n\<close> \<open>length \<eta>s0 = n\<close> \<open>length As0 = n\<close> \<open>length AAs0 = n\<close> \<open>length Cs0 = n\<close> n
have "length (renamings_apart (DA0 # CAs0)) = Suc n"
using n renamings_apart_length by auto
note n = this n
define \<rho> where
"\<rho> = hd (renamings_apart (DA0 # CAs0))"
define \<rho>s where
"\<rho>s = tl (renamings_apart (DA0 # CAs0))"
define DA0' where
"DA0' = DA0 \<cdot> \<rho>"
define D0' where
"D0' = D0 \<cdot> \<rho>"
define As0' where
"As0' = As0 \<cdot>al \<rho>"
define CAs0' where
"CAs0' = CAs0 \<cdot>\<cdot>cl \<rho>s"
define Cs0' where
"Cs0' = Cs0 \<cdot>\<cdot>cl \<rho>s"
define AAs0' where
"AAs0' = AAs0 \<cdot>\<cdot>aml \<rho>s"
define \<eta>0' where
"\<eta>0' = inv_renaming \<rho> \<odot> \<eta>0"
define \<eta>s0' where
"\<eta>s0' = map inv_renaming \<rho>s \<odot>s \<eta>s0"
have renames_DA0: "is_renaming \<rho>"
using renamings_apart_length renamings_apart_renaming unfolding \<rho>_def
by (metis length_greater_0_conv list.exhaust_sel list.set_intros(1) list.simps(3))
have renames_CAs0: "is_renaming_list \<rho>s"
using renamings_apart_length renamings_apart_renaming unfolding \<rho>s_def
by (metis is_renaming_list_def length_greater_0_conv list.set_sel(2) list.simps(3))
have "length \<rho>s = n"
unfolding \<rho>s_def using n by auto
note n = n \<open>length \<rho>s = n\<close>
have "length As0' = n"
unfolding As0'_def using n by auto
have "length CAs0' = n"
using as0(1) n unfolding CAs0'_def by auto
have "length Cs0' = n"
unfolding Cs0'_def using n by auto
have "length AAs0' = n"
unfolding AAs0'_def using n by auto
have "length \<eta>s0' = n"
using as0(2) n unfolding \<eta>s0'_def by auto
note n = \<open>length CAs0' = n\<close> \<open>length \<eta>s0' = n\<close> \<open>length As0' = n\<close> \<open>length AAs0' = n\<close> \<open>length Cs0' = n\<close> n
have DA0'_DA: "DA0' \<cdot> \<eta>0' = DA"
using as0(4) unfolding \<eta>0'_def DA0'_def using renames_DA0 by simp
have D0'_D: "D0' \<cdot> \<eta>0' = D"
using as0(14) unfolding \<eta>0'_def D0'_def using renames_DA0 by simp
have As0'_As: "As0' \<cdot>al \<eta>0' = As"
using as0(11) unfolding \<eta>0'_def As0'_def using renames_DA0 by auto
have "S DA0' \<cdot> \<eta>0' = S_M S M DA"
using as0(5) unfolding \<eta>0'_def DA0'_def using renames_DA0 sel_stable by auto
have CAs0'_CAs: "CAs0' \<cdot>\<cdot>cl \<eta>s0' = CAs"
using as0(7) unfolding CAs0'_def \<eta>s0'_def using renames_CAs0 n by auto
have Cs0'_Cs: "Cs0' \<cdot>\<cdot>cl \<eta>s0' = Cs"
using as0(18) unfolding Cs0'_def \<eta>s0'_def using renames_CAs0 n by auto
have AAs0'_AAs: "AAs0' \<cdot>\<cdot>aml \<eta>s0' = AAs"
using as0(12) unfolding \<eta>s0'_def AAs0'_def using renames_CAs0 using n by auto
have "map S CAs0' \<cdot>\<cdot>cl \<eta>s0' = map (S_M S M) CAs"
unfolding CAs0'_def \<eta>s0'_def using as0(8) n renames_CAs0 sel_ren_list_inv by auto
have DA0'_split: "DA0' = D0' + negs (mset As0')"
using as0(15) DA0'_def D0'_def As0'_def by auto
then have D0'_subset_DA0': "D0' \<subseteq># DA0'"
by auto
from DA0'_split have negs_As0'_subset_DA0': "negs (mset As0') \<subseteq># DA0'"
by auto
have CAs0'_split: "\<forall>i<n. CAs0' ! i = Cs0' ! i + poss (AAs0' ! i)"
using as0(19) CAs0'_def Cs0'_def AAs0'_def n by auto
then have "\<forall>i<n. Cs0' ! i \<subseteq># CAs0' ! i"
by auto
from CAs0'_split have poss_AAs0'_subset_CAs0': "\<forall>i<n. poss (AAs0' ! i) \<subseteq># CAs0' ! i"
by auto
then have AAs0'_in_atms_of_CAs0': "\<forall>i < n. \<forall>A\<in>#AAs0' ! i. A \<in> atms_of (CAs0' ! i)"
by (auto simp add: atm_iff_pos_or_neg_lit)
have as0':
"S_M S M (D + negs (mset As)) \<noteq> {#} \<Longrightarrow> negs (mset As0') = S DA0'"
proof -
assume a: "S_M S M (D + negs (mset As)) \<noteq> {#}"
then have "negs (mset As0) \<cdot> \<rho> = S DA0 \<cdot> \<rho>"
using as0(16) unfolding \<rho>_def by metis
then show "negs (mset As0') = S DA0'"
using As0'_def DA0'_def using sel_stable[of \<rho> DA0] renames_DA0 by auto
qed
have vd: "var_disjoint (DA0' # CAs0')"
unfolding DA0'_def CAs0'_def using renamings_apart_var_disjoint
unfolding \<rho>_def \<rho>s_def
by (metis length_greater_0_conv list.exhaust_sel n(6) substitution.subst_cls_lists_Cons
substitution_axioms zero_less_Suc)
\<comment> \<open>Introduce ground substitution\<close>
from vd DA0'_DA CAs0'_CAs have "\<exists>\<eta>. \<forall>i < Suc n. \<forall>S. S \<subseteq># (DA0' # CAs0') ! i \<longrightarrow> S \<cdot> (\<eta>0'#\<eta>s0') ! i = S \<cdot> \<eta>"
unfolding var_disjoint_def using n by auto
then obtain \<eta> where \<eta>_p: "\<forall>i < Suc n. \<forall>S. S \<subseteq># (DA0' # CAs0') ! i \<longrightarrow> S \<cdot> (\<eta>0'#\<eta>s0') ! i = S \<cdot> \<eta>"
by auto
have \<eta>_p_lit: "\<forall>i < Suc n. \<forall>L. L \<in># (DA0' # CAs0') ! i \<longrightarrow> L \<cdot>l (\<eta>0'#\<eta>s0') ! i = L \<cdot>l \<eta>"
proof (rule, rule, rule, rule)
fix i :: "nat" and L :: "'a literal"
assume a:
"i < Suc n"
"L \<in># (DA0' # CAs0') ! i"
then have "\<forall>S. S \<subseteq># (DA0' # CAs0') ! i \<longrightarrow> S \<cdot> (\<eta>0' # \<eta>s0') ! i = S \<cdot> \<eta>"
using \<eta>_p by auto
then have "{# L #} \<cdot> (\<eta>0' # \<eta>s0') ! i = {# L #} \<cdot> \<eta>"
using a by (meson single_subset_iff)
then show "L \<cdot>l (\<eta>0' # \<eta>s0') ! i = L \<cdot>l \<eta>" by auto
qed
have \<eta>_p_atm: "\<forall>i < Suc n. \<forall>A. A \<in> atms_of ((DA0' # CAs0') ! i) \<longrightarrow> A \<cdot>a (\<eta>0'#\<eta>s0') ! i = A \<cdot>a \<eta>"
proof (rule, rule, rule, rule)
fix i :: "nat" and A :: "'a"
assume a:
"i < Suc n"
"A \<in> atms_of ((DA0' # CAs0') ! i)"
then obtain L where L_p: "atm_of L = A \<and> L \<in># (DA0' # CAs0') ! i"
unfolding atms_of_def by auto
then have "L \<cdot>l (\<eta>0'#\<eta>s0') ! i = L \<cdot>l \<eta>"
using \<eta>_p_lit a by auto
then show "A \<cdot>a (\<eta>0' # \<eta>s0') ! i = A \<cdot>a \<eta>"
using L_p unfolding subst_lit_def by (cases L) auto
qed
have DA0'_DA: "DA0' \<cdot> \<eta> = DA"
using DA0'_DA \<eta>_p by auto
have "D0' \<cdot> \<eta> = D" using \<eta>_p D0'_D n D0'_subset_DA0' by auto
have "As0' \<cdot>al \<eta> = As"
proof (rule nth_equalityI)
show "length (As0' \<cdot>al \<eta>) = length As"
using n by auto
next
fix i
show "i<length (As0' \<cdot>al \<eta>) \<Longrightarrow> (As0' \<cdot>al \<eta>) ! i = As ! i"
proof -
assume a: "i < length (As0' \<cdot>al \<eta>)"
have A_eq: "\<forall>A. A \<in> atms_of DA0' \<longrightarrow> A \<cdot>a \<eta>0' = A \<cdot>a \<eta>"
using \<eta>_p_atm n by force
have "As0' ! i \<in> atms_of DA0'"
using negs_As0'_subset_DA0' unfolding atms_of_def
using a n by force
then have "As0' ! i \<cdot>a \<eta>0' = As0' ! i \<cdot>a \<eta>"
using A_eq by simp
then show "(As0' \<cdot>al \<eta>) ! i = As ! i"
using As0'_As \<open>length As0' = n\<close> a by auto
qed
qed
interpret selection
by (rule select)
have "S DA0' \<cdot> \<eta> = S_M S M DA"
using \<open>S DA0' \<cdot> \<eta>0' = S_M S M DA\<close> \<eta>_p S.S_selects_subseteq by auto
from \<eta>_p have \<eta>_p_CAs0': "\<forall>i < n. (CAs0' ! i) \<cdot> (\<eta>s0' ! i) = (CAs0'! i) \<cdot> \<eta>"
using n by auto
then have "CAs0' \<cdot>\<cdot>cl \<eta>s0' = CAs0' \<cdot>cl \<eta>"
using n by (auto intro: nth_equalityI)
then have CAs0'_\<eta>_fo_CAs: "CAs0' \<cdot>cl \<eta> = CAs"
using CAs0'_CAs \<eta>_p n by auto
from \<eta>_p have "\<forall>i < n. S (CAs0' ! i) \<cdot> \<eta>s0' ! i = S (CAs0' ! i) \<cdot> \<eta>"
using S.S_selects_subseteq n by auto
then have "map S CAs0' \<cdot>\<cdot>cl \<eta>s0' = map S CAs0' \<cdot>cl \<eta>"
using n by (auto intro: nth_equalityI)
then have SCAs0'_\<eta>_fo_SMCAs: "map S CAs0' \<cdot>cl \<eta> = map (S_M S M) CAs"
using \<open>map S CAs0' \<cdot>\<cdot>cl \<eta>s0' = map (S_M S M) CAs\<close> by auto
have "Cs0' \<cdot>cl \<eta> = Cs"
proof (rule nth_equalityI)
show "length (Cs0' \<cdot>cl \<eta>) = length Cs"
using n by auto
next
fix i
show "i<length (Cs0' \<cdot>cl \<eta>) \<Longrightarrow> (Cs0' \<cdot>cl \<eta>) ! i = Cs ! i"
proof -
assume "i < length (Cs0' \<cdot>cl \<eta>)"
then have a: "i < n"
using n by force
have "(Cs0' \<cdot>\<cdot>cl \<eta>s0') ! i = Cs ! i"
using Cs0'_Cs a n by force
moreover
have \<eta>_p_CAs0': "\<forall>S. S \<subseteq># CAs0' ! i \<longrightarrow> S \<cdot> \<eta>s0' ! i = S \<cdot> \<eta>"
using \<eta>_p a by force
have "Cs0' ! i \<cdot> \<eta>s0' ! i = (Cs0' \<cdot>cl \<eta>) ! i"
using \<eta>_p_CAs0' \<open>\<forall>i<n. Cs0' ! i \<subseteq># CAs0' ! i\<close> a n by force
then have "(Cs0' \<cdot>\<cdot>cl \<eta>s0') ! i = (Cs0' \<cdot>cl \<eta>) ! i "
using a n by force
ultimately show "(Cs0' \<cdot>cl \<eta>) ! i = Cs ! i"
by auto
qed
qed
have AAs0'_AAs: "AAs0' \<cdot>aml \<eta> = AAs"
proof (rule nth_equalityI)
show "length (AAs0' \<cdot>aml \<eta>) = length AAs"
using n by auto
next
fix i
show "i<length (AAs0' \<cdot>aml \<eta>) \<Longrightarrow> (AAs0' \<cdot>aml \<eta>) ! i = AAs ! i"
proof -
assume a: "i < length (AAs0' \<cdot>aml \<eta>)"
then have "i < n"
using n by force
then have "\<forall>A. A \<in> atms_of ((DA0' # CAs0') ! Suc i) \<longrightarrow> A \<cdot>a (\<eta>0' # \<eta>s0') ! Suc i = A \<cdot>a \<eta>"
using \<eta>_p_atm n by force
then have A_eq: "\<forall>A. A \<in> atms_of (CAs0' ! i) \<longrightarrow> A \<cdot>a \<eta>s0' ! i = A \<cdot>a \<eta>"
by auto
have AAs_CAs0': "\<forall>A \<in># AAs0' ! i. A \<in> atms_of (CAs0' ! i)"
using AAs0'_in_atms_of_CAs0' unfolding atms_of_def
using a n by force
then have "AAs0' ! i \<cdot>am \<eta>s0' ! i = AAs0' ! i \<cdot>am \<eta>"
unfolding subst_atm_mset_def using A_eq unfolding subst_atm_mset_def by auto
then show "(AAs0' \<cdot>aml \<eta>) ! i = AAs ! i"
using AAs0'_AAs \<open>length AAs0' = n\<close> \<open>length \<eta>s0' = n\<close> a by auto
qed
qed
\<comment> \<open>Obtain MGU and substitution\<close>
obtain \<tau> \<phi> where \<tau>\<phi>:
"Some \<tau> = mgu (set_mset ` set (map2 add_mset As0' AAs0'))"
"\<tau> \<odot> \<phi> = \<eta> \<odot> \<sigma>"
proof -
have uu: "is_unifiers \<sigma> (set_mset ` set (map2 add_mset (As0' \<cdot>al \<eta>) (AAs0' \<cdot>aml \<eta>)))"
using mgu mgu_sound is_mgu_def unfolding \<open>AAs0' \<cdot>aml \<eta> = AAs\<close> using \<open>As0' \<cdot>al \<eta> = As\<close> by auto
have \<eta>\<sigma>uni: "is_unifiers (\<eta> \<odot> \<sigma>) (set_mset ` set (map2 add_mset As0' AAs0'))"
proof -
have "set_mset ` set (map2 add_mset As0' AAs0' \<cdot>aml \<eta>) =
set_mset ` set (map2 add_mset As0' AAs0') \<cdot>ass \<eta>"
unfolding subst_atmss_def subst_atm_mset_list_def using subst_atm_mset_def subst_atms_def
by (simp add: image_image subst_atm_mset_def subst_atms_def)
then have "is_unifiers \<sigma> (set_mset ` set (map2 add_mset As0' AAs0') \<cdot>ass \<eta>)"
using uu by (auto simp: n map2_add_mset_map)
then show ?thesis
using is_unifiers_comp by auto
qed
then obtain \<tau> where
\<tau>_p: "Some \<tau> = mgu (set_mset ` set (map2 add_mset As0' AAs0'))"
using mgu_complete
by (metis (mono_tags, hide_lams) List.finite_set finite_imageI finite_set_mset image_iff)
moreover then obtain \<phi> where \<phi>_p: "\<tau> \<odot> \<phi> = \<eta> \<odot> \<sigma>"
by (metis (mono_tags, hide_lams) finite_set \<eta>\<sigma>uni finite_imageI finite_set_mset image_iff
mgu_sound set_mset_mset substitution_ops.is_mgu_def) (* should be simpler *)
ultimately show thesis
using that by auto
qed
\<comment> \<open>Lifting eligibility\<close>
have eligible0': "eligible S \<tau> As0' (D0' + negs (mset As0'))"
proof -
have "S_M S M (D + negs (mset As)) = negs (mset As) \<or> S_M S M (D + negs (mset As)) = {#} \<and>
length As = 1 \<and> maximal_wrt (As ! 0 \<cdot>a \<sigma>) ((D + negs (mset As)) \<cdot> \<sigma>)"
using eligible unfolding eligible.simps by auto
then show ?thesis
proof
assume "S_M S M (D + negs (mset As)) = negs (mset As)"
then have "S_M S M (D + negs (mset As)) \<noteq> {#}"
using n by force
then have "S (D0' + negs (mset As0')) = negs (mset As0')"
using as0' DA0'_split by auto
then show ?thesis
unfolding eligible.simps[simplified] by auto
next
assume asm: "S_M S M (D + negs (mset As)) = {#} \<and> length As = 1 \<and>
maximal_wrt (As ! 0 \<cdot>a \<sigma>) ((D + negs (mset As)) \<cdot> \<sigma>)"
then have "S (D0' + negs (mset As0')) = {#}"
using \<open>D0' \<cdot> \<eta> = D\<close>[symmetric] \<open>As0' \<cdot>al \<eta> = As\<close>[symmetric] \<open>S (DA0') \<cdot> \<eta> = S_M S M (DA)\<close>
da DA0'_split subst_cls_empty_iff by metis
moreover from asm have l: "length As0' = 1"
using \<open>As0' \<cdot>al \<eta> = As\<close> by auto
moreover from asm have "maximal_wrt (As0' ! 0 \<cdot>a (\<tau> \<odot> \<phi>)) ((D0' + negs (mset As0')) \<cdot> (\<tau> \<odot> \<phi>))"
using \<open>As0' \<cdot>al \<eta> = As\<close> \<open>D0' \<cdot> \<eta> = D\<close> using l \<tau>\<phi> by auto
then have "maximal_wrt (As0' ! 0 \<cdot>a \<tau> \<cdot>a \<phi>) ((D0' + negs (mset As0')) \<cdot> \<tau> \<cdot> \<phi>)"
by auto
then have "maximal_wrt (As0' ! 0 \<cdot>a \<tau>) ((D0' + negs (mset As0')) \<cdot> \<tau>)"
using maximal_wrt_subst by blast
ultimately show ?thesis
unfolding eligible.simps[simplified] by auto
qed
qed
\<comment> \<open>Lifting maximality\<close>
have maximality: "\<forall>i < n. strictly_maximal_wrt (As0' ! i \<cdot>a \<tau>) (Cs0' ! i \<cdot> \<tau>)"
(* Reformulate in list notation? *)
proof -
from str_max have "\<forall>i < n. strictly_maximal_wrt ((As0' \<cdot>al \<eta>) ! i \<cdot>a \<sigma>) ((Cs0' \<cdot>cl \<eta>) ! i \<cdot> \<sigma>)"
using \<open>As0' \<cdot>al \<eta> = As\<close> \<open>Cs0' \<cdot>cl \<eta> = Cs\<close> by simp
then have "\<forall>i < n. strictly_maximal_wrt (As0' ! i \<cdot>a (\<tau> \<odot> \<phi>)) (Cs0' ! i \<cdot> (\<tau> \<odot> \<phi>))"
using n \<tau>\<phi> by simp
then have "\<forall>i < n. strictly_maximal_wrt (As0' ! i \<cdot>a \<tau> \<cdot>a \<phi>) (Cs0' ! i \<cdot> \<tau> \<cdot> \<phi>)"
by auto
then show "\<forall>i < n. strictly_maximal_wrt (As0' ! i \<cdot>a \<tau>) (Cs0' ! i \<cdot> \<tau>)"
using strictly_maximal_wrt_subst \<tau>\<phi> by blast
qed
\<comment> \<open>Lifting nothing being selected\<close>
have nothing_selected: "\<forall>i < n. S (CAs0' ! i) = {#}"
proof -
have "\<forall>i < n. (map S CAs0' \<cdot>cl \<eta>) ! i = map (S_M S M) CAs ! i"
by (simp add: \<open>map S CAs0' \<cdot>cl \<eta> = map (S_M S M) CAs\<close>)
then have "\<forall>i < n. S (CAs0' ! i) \<cdot> \<eta> = S_M S M (CAs ! i)"
using n by auto
then have "\<forall>i < n. S (CAs0' ! i) \<cdot> \<eta> = {#}"
using sel_empt \<open>\<forall>i < n. S (CAs0' ! i) \<cdot> \<eta> = S_M S M (CAs ! i)\<close> by auto
then show "\<forall>i < n. S (CAs0' ! i) = {#}"
using subst_cls_empty_iff by blast
qed
\<comment> \<open>Lifting AAs0's non-emptiness\<close>
have "\<forall>i < n. AAs0' ! i \<noteq> {#}"
using n aas_not_empt \<open>AAs0' \<cdot>aml \<eta> = AAs\<close> by auto
\<comment> \<open>Resolve the lifted clauses\<close>
define E0' where
"E0' = ((\<Union># (mset Cs0')) + D0') \<cdot> \<tau>"
have res_e0': "ord_resolve S CAs0' DA0' AAs0' As0' \<tau> E0'"
using ord_resolve.intros[of CAs0' n Cs0' AAs0' As0' \<tau> S D0',
OF _ _ _ _ _ _ \<open>\<forall>i < n. AAs0' ! i \<noteq> {#}\<close> \<tau>\<phi>(1) eligible0'
\<open>\<forall>i < n. strictly_maximal_wrt (As0' ! i \<cdot>a \<tau>) (Cs0' ! i \<cdot> \<tau>)\<close> \<open>\<forall>i < n. S (CAs0' ! i) = {#}\<close>]
unfolding E0'_def using DA0'_split n \<open>\<forall>i<n. CAs0' ! i = Cs0' ! i + poss (AAs0' ! i)\<close> by blast
\<comment> \<open>Prove resolvent instantiates to ground resolvent\<close>
have e0'\<phi>e: "E0' \<cdot> \<phi> = E"
proof -
have "E0' \<cdot> \<phi> = ((\<Union># (mset Cs0')) + D0') \<cdot> (\<tau> \<odot> \<phi>)"
unfolding E0'_def by auto
also have "\<dots> = (\<Union># (mset Cs0') + D0') \<cdot> (\<eta> \<odot> \<sigma>)"
using \<tau>\<phi> by auto
also have "\<dots> = (\<Union># (mset Cs) + D) \<cdot> \<sigma>"
using \<open>Cs0' \<cdot>cl \<eta> = Cs\<close> \<open>D0' \<cdot> \<eta> = D\<close> by auto
also have "\<dots> = E"
using e by auto
finally show e0'\<phi>e: "E0' \<cdot> \<phi> = E"
.
qed
\<comment> \<open>Replace @{term \<phi>} with a true ground substitution\<close>
obtain \<eta>2 where
ground_\<eta>2: "is_ground_subst \<eta>2" "E0' \<cdot> \<eta>2 = E"
proof -
have "is_ground_cls_list CAs" "is_ground_cls DA"
using grounding grounding_ground unfolding is_ground_cls_list_def by auto
then have "is_ground_cls E"
using res_e ground_resolvent_subset by (force intro: is_ground_cls_mono)
then show thesis
using that e0'\<phi>e make_ground_subst by auto
qed
- have \<open>length CAs0 = length CAs\<close> using n by simp
+ have \<open>length CAs0 = length CAs\<close>
+ using n by simp
+
+ have \<open>length \<eta>s0 = length CAs\<close>
+ using n by simp
\<comment> \<open>Wrap up the proof\<close>
have "ord_resolve S (CAs0 \<cdot>\<cdot>cl \<rho>s) (DA0 \<cdot> \<rho>) (AAs0 \<cdot>\<cdot>aml \<rho>s) (As0 \<cdot>al \<rho>) \<tau> E0'"
using res_e0' As0'_def \<rho>_def AAs0'_def \<rho>s_def DA0'_def \<rho>_def CAs0'_def \<rho>s_def by simp
moreover have "\<forall>i<n. poss (AAs0 ! i) \<subseteq># CAs0 ! i"
using as0(19) by auto
moreover have "negs (mset As0) \<subseteq># DA0"
using local.as0(15) by auto
ultimately have "ord_resolve_rename S CAs0 DA0 AAs0 As0 \<tau> E0'"
using ord_resolve_rename[of CAs0 n AAs0 As0 DA0 \<rho> \<rho>s S \<tau> E0'] \<rho>_def \<rho>s_def n by auto
then show thesis
using that[of \<eta>0 \<eta>s0 \<eta>2 CAs0 DA0] \<open>is_ground_subst \<eta>0\<close> \<open>is_ground_subst_list \<eta>s0\<close>
\<open>is_ground_subst \<eta>2\<close> \<open>CAs0 \<cdot>\<cdot>cl \<eta>s0 = CAs\<close> \<open>DA0 \<cdot> \<eta>0 = DA\<close> \<open>E0' \<cdot> \<eta>2 = E\<close> \<open>DA0 \<in> M\<close>
- \<open>\<forall>CA \<in> set CAs0. CA \<in> M\<close> \<open>length CAs0 = length CAs\<close>
- by blast
+ \<open>\<forall>CA \<in> set CAs0. CA \<in> M\<close> \<open>length CAs0 = length CAs\<close> \<open>length \<eta>s0 = length CAs\<close>
+ by blast
+qed
+
+lemma ground_ord_resolve_ground:
+ assumes
+ select: "selection S" and
+ CAs_p: "ground_resolution_with_selection.ord_resolve S CAs DA AAs As E" and
+ ground_cas: "is_ground_cls_list CAs" and
+ ground_da: "is_ground_cls DA"
+ shows "is_ground_cls E"
+proof -
+ have a1: "atms_of E \<subseteq> (\<Union>CA \<in> set CAs. atms_of CA) \<union> atms_of DA"
+ using ground_resolution_with_selection.ord_resolve_atms_of_concl_subset[OF _ CAs_p]
+ ground_resolution_with_selection.intro[OF select] by blast
+ {
+ fix L :: "'a literal"
+ assume "L \<in># E"
+ then have "atm_of L \<in> atms_of E"
+ by (meson atm_of_lit_in_atms_of)
+ then have "is_ground_atm (atm_of L)"
+ using a1 ground_cas ground_da is_ground_cls_imp_is_ground_atm is_ground_cls_list_def
+ by auto
+ }
+ then show ?thesis
+ unfolding is_ground_cls_def is_ground_lit_def by simp
+qed
+
+lemma ground_ord_resolve_imp_ord_resolve:
+ assumes
+ ground_da: \<open>is_ground_cls DA\<close> and
+ ground_cas: \<open>is_ground_cls_list CAs\<close> and
+ gr: "ground_resolution_with_selection S_G" and
+ gr_res: \<open>ground_resolution_with_selection.ord_resolve S_G CAs DA AAs As E\<close>
+ shows \<open>\<exists>\<sigma>. ord_resolve S_G CAs DA AAs As \<sigma> E\<close>
+proof (cases rule: ground_resolution_with_selection.ord_resolve.cases[OF gr gr_res])
+ case (1 CAs n Cs AAs As D)
+ note cas = this(1) and da = this(2) and aas = this(3) and as = this(4) and e = this(5) and
+ cas_len = this(6) and cs_len = this(7) and aas_len = this(8) and as_len = this(9) and
+ nz = this(10) and casi = this(11) and aas_not_empt = this(12) and as_aas = this(13) and
+ eligibility = this(14) and str_max = this(15) and sel_empt = this(16)
+
+ have len_aas_len_as: "length AAs = length As"
+ using aas_len as_len by auto
+
+ from as_aas have "\<forall>i < n. \<forall>A \<in># add_mset (As ! i) (AAs ! i). A = As ! i"
+ by simp
+ then have "\<forall>i < n. card (set_mset (add_mset (As ! i) (AAs ! i))) \<le> Suc 0"
+ using all_the_same by metis
+ then have "\<forall>i < length AAs. card (set_mset (add_mset (As ! i) (AAs ! i))) \<le> Suc 0"
+ using aas_len by auto
+ then have "\<forall>AA \<in> set (map2 add_mset As AAs). card (set_mset AA) \<le> Suc 0"
+ using set_map2_ex[of AAs As add_mset, OF len_aas_len_as] by auto
+ then have "is_unifiers id_subst (set_mset ` set (map2 add_mset As AAs))"
+ unfolding is_unifiers_def is_unifier_def by auto
+ moreover have "finite (set_mset ` set (map2 add_mset As AAs))"
+ by auto
+ moreover have "\<forall>AA \<in> set_mset ` set (map2 add_mset As AAs). finite AA"
+ by auto
+ ultimately obtain \<sigma> where
+ \<sigma>_p: "Some \<sigma> = mgu (set_mset ` set (map2 add_mset As AAs))"
+ using mgu_complete by metis
+
+ have ground_elig: "ground_resolution_with_selection.eligible S_G As (D + negs (mset As))"
+ using eligibility by simp
+ have ground_cs: "\<forall>i < n. is_ground_cls (Cs ! i)"
+ using cas cas_len cs_len casi ground_cas nth_mem unfolding is_ground_cls_list_def by force
+ have ground_set_as: "is_ground_atms (set As)"
+ using da ground_da by (metis atms_of_negs is_ground_cls_is_ground_atms_atms_of
+ is_ground_cls_union set_mset_mset)
+ then have ground_mset_as: "is_ground_atm_mset (mset As)"
+ unfolding is_ground_atm_mset_def is_ground_atms_def by auto
+ have ground_as: "is_ground_atm_list As"
+ using ground_set_as is_ground_atm_list_def is_ground_atms_def by auto
+ have ground_d: "is_ground_cls D"
+ using ground_da da by simp
+
+ from as_len nz have atms:
+ "atms_of D \<union> set As \<noteq> {}"
+ "finite (atms_of D \<union> set As)"
+ by auto
+ then have "Max (atms_of D \<union> set As) \<in> atms_of D \<union> set As"
+ using Max_in by metis
+ then have is_ground_Max: "is_ground_atm (Max (atms_of D \<union> set As))"
+ using ground_d ground_mset_as is_ground_cls_imp_is_ground_atm
+ unfolding is_ground_atm_mset_def by auto
+
+ have "maximal_wrt (Max (atms_of D \<union> set As)) (D + negs (mset As))"
+ unfolding maximal_wrt_def
+ by clarsimp (metis atms Max_less_iff UnCI ground_d ground_set_as infinite_growing
+ is_ground_Max is_ground_atms_def is_ground_cls_imp_is_ground_atm less_atm_ground)
+ moreover have
+ "Max (atms_of D \<union> set As) \<cdot>a \<sigma> = Max (atms_of D \<union> set As)" and
+ "D \<cdot> \<sigma> + negs (mset As \<cdot>am \<sigma>) = D + negs (mset As)"
+ using ground_elig is_ground_Max ground_mset_as ground_d by auto
+ ultimately have fo_elig: "eligible S_G \<sigma> As (D + negs (mset As))"
+ using ground_elig unfolding ground_resolution_with_selection.eligible.simps[OF gr]
+ ground_resolution_with_selection.maximal_wrt_def[OF gr] eligible.simps
+ by auto
+
+ have "\<forall>i < n. strictly_maximal_wrt (As ! i) (Cs ! i)"
+ using str_max[unfolded ground_resolution_with_selection.strictly_maximal_wrt_def[OF gr]]
+ ground_as[unfolded is_ground_atm_list_def] ground_cs as_len less_atm_ground
+ unfolding strictly_maximal_wrt_def by clarsimp (fastforce simp: is_ground_cls_as_atms)+
+ then have ll: "\<forall>i < n. strictly_maximal_wrt (As ! i \<cdot>a \<sigma>) (Cs ! i \<cdot> \<sigma>)"
+ by (simp add: ground_as ground_cs as_len)
+
+ have ground_e: "is_ground_cls E"
+ using ground_d ground_cs cs_len unfolding e is_ground_cls_def
+ by simp (metis in_mset_sum_list2 in_set_conv_nth)
+
+ show ?thesis
+ using cas da aas as e ground_e ord_resolve.intros[OF cas_len cs_len aas_len as_len nz casi
+ aas_not_empt \<sigma>_p fo_elig ll sel_empt]
+ by auto
qed
end
end
diff --git a/thys/Ordered_Resolution_Prover/FO_Ordered_Resolution_Prover.thy b/thys/Ordered_Resolution_Prover/FO_Ordered_Resolution_Prover.thy
--- a/thys/Ordered_Resolution_Prover/FO_Ordered_Resolution_Prover.thy
+++ b/thys/Ordered_Resolution_Prover/FO_Ordered_Resolution_Prover.thy
@@ -1,1544 +1,1359 @@
(* Title: An Ordered Resolution Prover for First-Order Clauses
Author: Anders Schlichtkrull <andschl at dtu.dk>, 2016, 2017
Author: Jasmin Blanchette <j.c.blanchette at vu.nl>, 2014, 2017
Author: Dmitriy Traytel <traytel at inf.ethz.ch>, 2014
Maintainer: Anders Schlichtkrull <andschl at dtu.dk>
*)
section \<open>An Ordered Resolution Prover for First-Order Clauses\<close>
theory FO_Ordered_Resolution_Prover
imports FO_Ordered_Resolution
begin
text \<open>
This material is based on Section 4.3 (``A Simple Resolution Prover for First-Order Clauses'') of
Bachmair and Ganzinger's chapter. Specifically, it formalizes the RP prover defined in Figure 5 and
its related lemmas and theorems, including Lemmas 4.10 and 4.11 and Theorem 4.13 (completeness).
\<close>
definition is_least :: "(nat \<Rightarrow> bool) \<Rightarrow> nat \<Rightarrow> bool" where
"is_least P n \<longleftrightarrow> P n \<and> (\<forall>n' < n. \<not> P n')"
lemma least_exists: "P n \<Longrightarrow> \<exists>n. is_least P n"
using exists_least_iff unfolding is_least_def by auto
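text \<open>
A small sanity check, added here for illustration only (it is not part of the original theory):
for the predicate \<open>\<lambda>n. 3 \<le> n\<close>, the least witness is \<open>3\<close>.
\<close>
lemma "is_least (\<lambda>n. 3 \<le> n) 3"
  unfolding is_least_def by auto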
text \<open>
The following corresponds to pages 42 and 43 of Section 4.3, from the explanation of RP to
Lemma 4.10.
\<close>
type_synonym 'a state = "'a clause set \<times> 'a clause set \<times> 'a clause set"
locale FO_resolution_prover =
FO_resolution subst_atm id_subst comp_subst renamings_apart atm_of_atms mgu less_atm +
selection S
for
S :: "('a :: wellorder) clause \<Rightarrow> 'a clause" and
subst_atm :: "'a \<Rightarrow> 's \<Rightarrow> 'a" and
id_subst :: "'s" and
comp_subst :: "'s \<Rightarrow> 's \<Rightarrow> 's" and
renamings_apart :: "'a clause list \<Rightarrow> 's list" and
atm_of_atms :: "'a list \<Rightarrow> 'a" and
mgu :: "'a set set \<Rightarrow> 's option" and
less_atm :: "'a \<Rightarrow> 'a \<Rightarrow> bool" +
assumes
- sel_stable: "\<And>\<rho> C. is_renaming \<rho> \<Longrightarrow> S (C \<cdot> \<rho>) = S C \<cdot> \<rho>" and
- less_atm_ground: "is_ground_atm A \<Longrightarrow> is_ground_atm B \<Longrightarrow> less_atm A B \<Longrightarrow> A < B"
+ sel_stable: "\<And>\<rho> C. is_renaming \<rho> \<Longrightarrow> S (C \<cdot> \<rho>) = S C \<cdot> \<rho>"
begin
fun N_of_state :: "'a state \<Rightarrow> 'a clause set" where
"N_of_state (N, P, Q) = N"
fun P_of_state :: "'a state \<Rightarrow> 'a clause set" where
"P_of_state (N, P, Q) = P"
text \<open>
\<open>O\<close> denotes relation composition in Isabelle, so the formalization uses \<open>Q\<close> instead.
\<close>
fun Q_of_state :: "'a state \<Rightarrow> 'a clause set" where
"Q_of_state (N, P, Q) = Q"
abbreviation clss_of_state :: "'a state \<Rightarrow> 'a clause set" where
"clss_of_state St \<equiv> N_of_state St \<union> P_of_state St \<union> Q_of_state St"
abbreviation grounding_of_state :: "'a state \<Rightarrow> 'a clause set" where
"grounding_of_state St \<equiv> grounding_of_clss (clss_of_state St)"
interpretation ord_FO_resolution: inference_system "ord_FO_\<Gamma> S" .
text \<open>
The following inductive predicate formalizes the resolution prover in Figure 5.
\<close>
inductive RP :: "'a state \<Rightarrow> 'a state \<Rightarrow> bool" (infix "\<leadsto>" 50) where
tautology_deletion: "Neg A \<in># C \<Longrightarrow> Pos A \<in># C \<Longrightarrow> (N \<union> {C}, P, Q) \<leadsto> (N, P, Q)"
| forward_subsumption: "D \<in> P \<union> Q \<Longrightarrow> subsumes D C \<Longrightarrow> (N \<union> {C}, P, Q) \<leadsto> (N, P, Q)"
| backward_subsumption_P: "D \<in> N \<Longrightarrow> strictly_subsumes D C \<Longrightarrow> (N, P \<union> {C}, Q) \<leadsto> (N, P, Q)"
| backward_subsumption_Q: "D \<in> N \<Longrightarrow> strictly_subsumes D C \<Longrightarrow> (N, P, Q \<union> {C}) \<leadsto> (N, P, Q)"
| forward_reduction: "D + {#L'#} \<in> P \<union> Q \<Longrightarrow> - L = L' \<cdot>l \<sigma> \<Longrightarrow> D \<cdot> \<sigma> \<subseteq># C \<Longrightarrow>
(N \<union> {C + {#L#}}, P, Q) \<leadsto> (N \<union> {C}, P, Q)"
| backward_reduction_P: "D + {#L'#} \<in> N \<Longrightarrow> - L = L' \<cdot>l \<sigma> \<Longrightarrow> D \<cdot> \<sigma> \<subseteq># C \<Longrightarrow>
(N, P \<union> {C + {#L#}}, Q) \<leadsto> (N, P \<union> {C}, Q)"
| backward_reduction_Q: "D + {#L'#} \<in> N \<Longrightarrow> - L = L' \<cdot>l \<sigma> \<Longrightarrow> D \<cdot> \<sigma> \<subseteq># C \<Longrightarrow>
(N, P, Q \<union> {C + {#L#}}) \<leadsto> (N, P \<union> {C}, Q)"
| clause_processing: "(N \<union> {C}, P, Q) \<leadsto> (N, P \<union> {C}, Q)"
| inference_computation: "N = concls_of (ord_FO_resolution.inferences_between Q C) \<Longrightarrow>
({}, P \<union> {C}, Q) \<leadsto> (N, P, Q \<union> {C})"
lemma final_RP: "\<not> ({}, {}, Q) \<leadsto> St"
by (auto elim: RP.cases)
definition Sup_state :: "'a state llist \<Rightarrow> 'a state" where
"Sup_state Sts =
(Sup_llist (lmap N_of_state Sts), Sup_llist (lmap P_of_state Sts),
Sup_llist (lmap Q_of_state Sts))"
definition Liminf_state :: "'a state llist \<Rightarrow> 'a state" where
"Liminf_state Sts =
(Liminf_llist (lmap N_of_state Sts), Liminf_llist (lmap P_of_state Sts),
Liminf_llist (lmap Q_of_state Sts))"
context
fixes Sts Sts' :: "'a state llist"
assumes Sts: "lfinite Sts" "lfinite Sts'" "\<not> lnull Sts" "\<not> lnull Sts'" "llast Sts' = llast Sts"
begin
lemma
N_of_Liminf_state_fin: "N_of_state (Liminf_state Sts') = N_of_state (Liminf_state Sts)" and
P_of_Liminf_state_fin: "P_of_state (Liminf_state Sts') = P_of_state (Liminf_state Sts)" and
Q_of_Liminf_state_fin: "Q_of_state (Liminf_state Sts') = Q_of_state (Liminf_state Sts)"
using Sts by (simp_all add: Liminf_state_def lfinite_Liminf_llist llast_lmap)
lemma Liminf_state_fin: "Liminf_state Sts' = Liminf_state Sts"
using N_of_Liminf_state_fin P_of_Liminf_state_fin Q_of_Liminf_state_fin
by (simp add: Liminf_state_def)
end
context
fixes Sts Sts' :: "'a state llist"
assumes Sts: "\<not> lfinite Sts" "emb Sts Sts'"
begin
lemma
N_of_Liminf_state_inf: "N_of_state (Liminf_state Sts') \<subseteq> N_of_state (Liminf_state Sts)" and
P_of_Liminf_state_inf: "P_of_state (Liminf_state Sts') \<subseteq> P_of_state (Liminf_state Sts)" and
Q_of_Liminf_state_inf: "Q_of_state (Liminf_state Sts') \<subseteq> Q_of_state (Liminf_state Sts)"
using Sts by (simp_all add: Liminf_state_def emb_Liminf_llist_infinite emb_lmap)
lemma clss_of_Liminf_state_inf:
"clss_of_state (Liminf_state Sts') \<subseteq> clss_of_state (Liminf_state Sts)"
using N_of_Liminf_state_inf P_of_Liminf_state_inf Q_of_Liminf_state_inf by blast
end
definition fair_state_seq :: "'a state llist \<Rightarrow> bool" where
"fair_state_seq Sts \<longleftrightarrow> N_of_state (Liminf_state Sts) = {} \<and> P_of_state (Liminf_state Sts) = {}"
text \<open>
The following formalizes Lemma 4.10.
\<close>
context
fixes Sts :: "'a state llist"
- assumes deriv: "chain (\<leadsto>) Sts"
begin
-lemmas lhd_lmap_Sts = llist.map_sel(1)[OF chain_not_lnull[OF deriv]]
-
definition S_Q :: "'a clause \<Rightarrow> 'a clause" where
"S_Q = S_M S (Q_of_state (Liminf_state Sts))"
interpretation sq: selection S_Q
unfolding S_Q_def using S_M_selects_subseteq S_M_selects_neg_lits selection_axioms
by unfold_locales auto
interpretation gr: ground_resolution_with_selection S_Q
by unfold_locales
interpretation sr: standard_redundancy_criterion_reductive gr.ord_\<Gamma>
by unfold_locales
interpretation sr: standard_redundancy_criterion_counterex_reducing gr.ord_\<Gamma>
"ground_resolution_with_selection.INTERP S_Q"
by unfold_locales
text \<open>
The extension of ordered resolution mentioned in Lemma 4.10. We let it consist of all sound rules.
\<close>
definition ground_sound_\<Gamma>:: "'a inference set" where
"ground_sound_\<Gamma> = {Infer CC D E | CC D E. (\<forall>I. I \<Turnstile>m CC \<longrightarrow> I \<Turnstile> D \<longrightarrow> I \<Turnstile> E)}"
text \<open>
We prove that we indeed defined an extension.
\<close>
lemma gd_ord_\<Gamma>_ngd_ord_\<Gamma>: "gr.ord_\<Gamma> \<subseteq> ground_sound_\<Gamma>"
unfolding ground_sound_\<Gamma>_def using gr.ord_\<Gamma>_def gr.ord_resolve_sound by fastforce
lemma sound_ground_sound_\<Gamma>: "sound_inference_system ground_sound_\<Gamma>"
unfolding sound_inference_system_def ground_sound_\<Gamma>_def by auto
lemma sat_preserving_ground_sound_\<Gamma>: "sat_preserving_inference_system ground_sound_\<Gamma>"
using sound_ground_sound_\<Gamma> sat_preserving_inference_system.intro
sound_inference_system.\<Gamma>_sat_preserving by blast
definition sr_ext_Ri :: "'a clause set \<Rightarrow> 'a inference set" where
"sr_ext_Ri N = sr.Ri N \<union> (ground_sound_\<Gamma> - gr.ord_\<Gamma>)"
interpretation sr_ext:
- sat_preserving_redundancy_criterion "ground_sound_\<Gamma>" "sr.Rf" "sr_ext_Ri"
+ sat_preserving_redundancy_criterion ground_sound_\<Gamma> sr.Rf sr_ext_Ri
unfolding sat_preserving_redundancy_criterion_def sr_ext_Ri_def
using sat_preserving_ground_sound_\<Gamma> redundancy_criterion_standard_extension gd_ord_\<Gamma>_ngd_ord_\<Gamma>
sr.redundancy_criterion_axioms by auto
lemma strict_subset_subsumption_redundant_clause:
assumes
sub: "D \<cdot> \<sigma> \<subset># C" and
ground_\<sigma>: "is_ground_subst \<sigma>"
shows "C \<in> sr.Rf (grounding_of_cls D)"
proof -
from sub have "\<forall>I. I \<Turnstile> D \<cdot> \<sigma> \<longrightarrow> I \<Turnstile> C"
unfolding true_cls_def by blast
moreover have "C > D \<cdot> \<sigma>"
using sub by (simp add: subset_imp_less_mset)
moreover have "D \<cdot> \<sigma> \<in> grounding_of_cls D"
using ground_\<sigma> by (metis (mono_tags) mem_Collect_eq substitution_ops.grounding_of_cls_def)
ultimately have "set_mset {#D \<cdot> \<sigma>#} \<subseteq> grounding_of_cls D"
"(\<forall>I. I \<Turnstile>m {#D \<cdot> \<sigma>#} \<longrightarrow> I \<Turnstile> C)"
"(\<forall>D'. D' \<in># {#D \<cdot> \<sigma>#} \<longrightarrow> D' < C)"
by auto
then show ?thesis
using sr.Rf_def by blast
qed
lemma strict_subset_subsumption_redundant_clss:
assumes
"D \<cdot> \<sigma> \<subset># C" and
"is_ground_subst \<sigma>" and
"D \<in> CC"
shows "C \<in> sr.Rf (grounding_of_clss CC)"
using assms
proof -
have "C \<in> sr.Rf (grounding_of_cls D)"
using strict_subset_subsumption_redundant_clause assms by auto
then show ?thesis
using assms unfolding grounding_of_clss_def
by (metis (no_types) sr.Rf_mono sup_ge1 SUP_absorb contra_subsetD)
qed
lemma strict_subset_subsumption_grounding_redundant_clss:
assumes
D\<sigma>_subset_C: "D \<cdot> \<sigma> \<subset># C" and
D_in_St: "D \<in> CC"
shows "grounding_of_cls C \<subseteq> sr.Rf (grounding_of_clss CC)"
proof
fix C\<mu>
assume "C\<mu> \<in> grounding_of_cls C"
then obtain \<mu> where
\<mu>_p: "C\<mu> = C \<cdot> \<mu> \<and> is_ground_subst \<mu>"
unfolding grounding_of_cls_def by auto
have D\<sigma>\<mu>C\<mu>: "D \<cdot> \<sigma> \<cdot> \<mu> \<subset># C \<cdot> \<mu>"
using D\<sigma>_subset_C subst_subset_mono by auto
then show "C\<mu> \<in> sr.Rf (grounding_of_clss CC)"
using \<mu>_p strict_subset_subsumption_redundant_clss[of D "\<sigma> \<odot> \<mu>" "C \<cdot> \<mu>"] D_in_St by auto
qed
-lemma subst_cls_eq_grounding_of_cls_subset_eq:
- assumes "D \<cdot> \<sigma> = C"
- shows "grounding_of_cls C \<subseteq> grounding_of_cls D"
-proof
- fix C\<sigma>'
- assume "C\<sigma>' \<in> grounding_of_cls C"
- then obtain \<sigma>' where
- C\<sigma>': "C \<cdot> \<sigma>' = C\<sigma>'" "is_ground_subst \<sigma>'"
- unfolding grounding_of_cls_def by auto
- then have "C \<cdot> \<sigma>' = D \<cdot> \<sigma> \<cdot> \<sigma>' \<and> is_ground_subst (\<sigma> \<odot> \<sigma>')"
- using assms by auto
- then show "C\<sigma>' \<in> grounding_of_cls D"
- unfolding grounding_of_cls_def using C\<sigma>'(1) by force
-qed
-
lemma derive_if_remove_subsumed:
assumes
"D \<in> clss_of_state St" and
"subsumes D C"
shows "sr_ext.derive (grounding_of_state St \<union> grounding_of_cls C) (grounding_of_state St)"
proof -
from assms obtain \<sigma> where
"D \<cdot> \<sigma> = C \<or> D \<cdot> \<sigma> \<subset># C"
by (auto simp: subsumes_def subset_mset_def)
then have "D \<cdot> \<sigma> = C \<or> D \<cdot> \<sigma> \<subset># C"
by (simp add: subset_mset_def)
then show ?thesis
proof
assume "D \<cdot> \<sigma> = C"
then have "grounding_of_cls C \<subseteq> grounding_of_cls D"
using subst_cls_eq_grounding_of_cls_subset_eq by simp
then have "(grounding_of_state St \<union> grounding_of_cls C) = grounding_of_state St"
using assms unfolding grounding_of_clss_def by auto
then show ?thesis
by (auto intro: sr_ext.derive.intros)
next
assume a: "D \<cdot> \<sigma> \<subset># C"
then have "grounding_of_cls C \<subseteq> sr.Rf (grounding_of_state St)"
using strict_subset_subsumption_grounding_redundant_clss assms by auto
then show ?thesis
unfolding grounding_of_clss_def by (force intro: sr_ext.derive.intros)
qed
qed
lemma reduction_in_concls_of:
assumes
"C\<mu> \<in> grounding_of_cls C" and
"D + {#L'#} \<in> CC" and
"- L = L' \<cdot>l \<sigma>" and
"D \<cdot> \<sigma> \<subseteq># C"
shows "C\<mu> \<in> concls_of (sr_ext.inferences_from (grounding_of_clss (CC \<union> {C + {#L#}})))"
proof -
from assms
obtain \<mu> where
\<mu>_p: "C\<mu> = C \<cdot> \<mu> \<and> is_ground_subst \<mu>"
unfolding grounding_of_cls_def by auto
define \<gamma> where
- "\<gamma> = Infer {#(C + {#L#})\<cdot> \<mu>#} ((D + {#L'#}) \<cdot> \<sigma> \<cdot> \<mu>) (C \<cdot> \<mu>)"
+ "\<gamma> = Infer {#(C + {#L#}) \<cdot> \<mu>#} ((D + {#L'#}) \<cdot> \<sigma> \<cdot> \<mu>) (C \<cdot> \<mu>)"
have "(D + {#L'#}) \<cdot> \<sigma> \<cdot> \<mu> \<in> grounding_of_clss (CC \<union> {C + {#L#}})"
unfolding grounding_of_clss_def grounding_of_cls_def
by (rule UN_I[of "D + {#L'#}"], use assms(2) in simp,
metis (mono_tags, lifting) \<mu>_p is_ground_comp_subst mem_Collect_eq subst_cls_comp_subst)
moreover have "(C + {#L#}) \<cdot> \<mu> \<in> grounding_of_clss (CC \<union> {C + {#L#}})"
using \<mu>_p unfolding grounding_of_clss_def grounding_of_cls_def by auto
moreover have
"\<forall>I. I \<Turnstile> D \<cdot> \<sigma> \<cdot> \<mu> + {#- (L \<cdot>l \<mu>)#} \<longrightarrow> I \<Turnstile> C \<cdot> \<mu> + {#L \<cdot>l \<mu>#} \<longrightarrow> I \<Turnstile> D \<cdot> \<sigma> \<cdot> \<mu> + C \<cdot> \<mu>"
by auto
then have "\<forall>I. I \<Turnstile> (D + {#L'#}) \<cdot> \<sigma> \<cdot> \<mu> \<longrightarrow> I \<Turnstile> (C + {#L#}) \<cdot> \<mu> \<longrightarrow> I \<Turnstile> D \<cdot> \<sigma> \<cdot> \<mu> + C \<cdot> \<mu>"
using assms
by (metis add_mset_add_single subst_cls_add_mset subst_cls_union subst_minus)
then have "\<forall>I. I \<Turnstile> (D + {#L'#}) \<cdot> \<sigma> \<cdot> \<mu> \<longrightarrow> I \<Turnstile> (C + {#L#}) \<cdot> \<mu> \<longrightarrow> I \<Turnstile> C \<cdot> \<mu>"
using assms by (metis (no_types, lifting) subset_mset.le_iff_add subst_cls_union true_cls_union)
then have "\<forall>I. I \<Turnstile>m {#(D + {#L'#}) \<cdot> \<sigma> \<cdot> \<mu>#} \<longrightarrow> I \<Turnstile> (C + {#L#}) \<cdot> \<mu> \<longrightarrow> I \<Turnstile> C \<cdot> \<mu>"
by (meson true_cls_mset_singleton)
ultimately have "\<gamma> \<in> sr_ext.inferences_from (grounding_of_clss (CC \<union> {C + {#L#}}))"
unfolding sr_ext.inferences_from_def unfolding ground_sound_\<Gamma>_def infer_from_def \<gamma>_def by auto
then have "C \<cdot> \<mu> \<in> concls_of (sr_ext.inferences_from (grounding_of_clss (CC \<union> {C + {#L#}})))"
using image_iff unfolding \<gamma>_def by fastforce
then show "C\<mu> \<in> concls_of (sr_ext.inferences_from (grounding_of_clss (CC \<union> {C + {#L#}})))"
using \<mu>_p by auto
qed
lemma reduction_derivable:
assumes
"D + {#L'#} \<in> CC" and
"- L = L' \<cdot>l \<sigma>" and
"D \<cdot> \<sigma> \<subseteq># C"
shows "sr_ext.derive (grounding_of_clss (CC \<union> {C + {#L#}})) (grounding_of_clss (CC \<union> {C}))"
proof -
from assms have "grounding_of_clss (CC \<union> {C}) - grounding_of_clss (CC \<union> {C + {#L#}})
\<subseteq> concls_of (sr_ext.inferences_from (grounding_of_clss (CC \<union> {C + {#L#}})))"
using reduction_in_concls_of unfolding grounding_of_clss_def by auto
moreover
have "grounding_of_cls (C + {#L#}) \<subseteq> sr.Rf (grounding_of_clss (CC \<union> {C}))"
using strict_subset_subsumption_grounding_redundant_clss[of C "id_subst"]
by auto
then have "grounding_of_clss (CC \<union> {C + {#L#}}) - grounding_of_clss (CC \<union> {C})
\<subseteq> sr.Rf (grounding_of_clss (CC \<union> {C}))"
unfolding grounding_of_clss_def by auto
ultimately show
"sr_ext.derive (grounding_of_clss (CC \<union> {C + {#L#}})) (grounding_of_clss (CC \<union> {C}))"
using sr_ext.derive.intros[of "grounding_of_clss (CC \<union> {C})"
"grounding_of_clss (CC \<union> {C + {#L#}})"]
by auto
qed
text \<open>
The following corresponds to the part of Lemma 4.10 that states we have a theorem proving process:
\<close>
lemma RP_ground_derive:
"St \<leadsto> St' \<Longrightarrow> sr_ext.derive (grounding_of_state St) (grounding_of_state St')"
proof (induction rule: RP.induct)
case (tautology_deletion A C N P Q)
{
fix C\<sigma>
assume "C\<sigma> \<in> grounding_of_cls C"
then obtain \<sigma> where
"C\<sigma> = C \<cdot> \<sigma>"
unfolding grounding_of_cls_def by auto
then have "Neg (A \<cdot>a \<sigma>) \<in># C\<sigma> \<and> Pos (A \<cdot>a \<sigma>) \<in># C\<sigma>"
using tautology_deletion Neg_Melem_subst_atm_subst_cls Pos_Melem_subst_atm_subst_cls by auto
then have "C\<sigma> \<in> sr.Rf (grounding_of_state (N, P, Q))"
using sr.tautology_Rf by auto
}
then have "grounding_of_state (N \<union> {C}, P, Q) - grounding_of_state (N, P, Q)
\<subseteq> sr.Rf (grounding_of_state (N, P, Q))"
unfolding grounding_of_clss_def by auto
moreover have "grounding_of_state (N, P, Q) - grounding_of_state (N \<union> {C}, P, Q) = {}"
unfolding grounding_of_clss_def by auto
ultimately show ?case
using sr_ext.derive.intros[of "grounding_of_state (N, P, Q)"
"grounding_of_state (N \<union> {C}, P, Q)"]
by auto
next
case (forward_subsumption D P Q C N)
then show ?case
using derive_if_remove_subsumed[of D "(N, P, Q)" C] unfolding grounding_of_clss_def
by (simp add: sup_commute sup_left_commute)
next
case (backward_subsumption_P D N C P Q)
then show ?case
using derive_if_remove_subsumed[of D "(N, P, Q)" C] strictly_subsumes_def
unfolding grounding_of_clss_def by (simp add: sup_commute sup_left_commute)
next
case (backward_subsumption_Q D N C P Q)
then show ?case
using derive_if_remove_subsumed[of D "(N, P, Q)" C] strictly_subsumes_def
unfolding grounding_of_clss_def by (simp add: sup_commute sup_left_commute)
next
case (forward_reduction D L' P Q L \<sigma> C N)
then show ?case
using reduction_derivable[of _ _ "N \<union> P \<union> Q"] by force
next
case (backward_reduction_P D L' N L \<sigma> C P Q)
then show ?case
using reduction_derivable[of _ _ "N \<union> P \<union> Q"] by force
next
case (backward_reduction_Q D L' N L \<sigma> C P Q)
then show ?case
using reduction_derivable[of _ _ "N \<union> P \<union> Q"] by force
next
case (clause_processing N C P Q)
then show ?case
using sr_ext.derive.intros by auto
next
case (inference_computation N Q C P)
{
fix E\<mu>
assume "E\<mu> \<in> grounding_of_clss N"
then obtain \<mu> E where
E_\<mu>_p: "E\<mu> = E \<cdot> \<mu> \<and> E \<in> N \<and> is_ground_subst \<mu>"
unfolding grounding_of_clss_def grounding_of_cls_def by auto
then have E_concl: "E \<in> concls_of (ord_FO_resolution.inferences_between Q C)"
using inference_computation by auto
then obtain \<gamma> where
\<gamma>_p: "\<gamma> \<in> ord_FO_\<Gamma> S \<and> infer_from (Q \<union> {C}) \<gamma> \<and> C \<in># prems_of \<gamma> \<and> concl_of \<gamma> = E"
unfolding ord_FO_resolution.inferences_between_def by auto
then obtain CC CAs D AAs As \<sigma> where
\<gamma>_p2: "\<gamma> = Infer CC D E \<and> ord_resolve_rename S CAs D AAs As \<sigma> E \<and> mset CAs = CC"
unfolding ord_FO_\<Gamma>_def by auto
define \<rho> where
"\<rho> = hd (renamings_apart (D # CAs))"
define \<rho>s where
"\<rho>s = tl (renamings_apart (D # CAs))"
define \<gamma>_ground where
"\<gamma>_ground = Infer (mset (CAs \<cdot>\<cdot>cl \<rho>s) \<cdot>cm \<sigma> \<cdot>cm \<mu>) (D \<cdot> \<rho> \<cdot> \<sigma> \<cdot> \<mu>) (E \<cdot> \<mu>)"
have "\<forall>I. I \<Turnstile>m mset (CAs \<cdot>\<cdot>cl \<rho>s) \<cdot>cm \<sigma> \<cdot>cm \<mu> \<longrightarrow> I \<Turnstile> D \<cdot> \<rho> \<cdot> \<sigma> \<cdot> \<mu> \<longrightarrow> I \<Turnstile> E \<cdot> \<mu>"
using ord_resolve_rename_ground_inst_sound[of _ _ _ _ _ _ _ _ _ _ \<mu>] \<rho>_def \<rho>s_def E_\<mu>_p \<gamma>_p2
by auto
then have "\<gamma>_ground \<in> {Infer cc d e | cc d e. \<forall>I. I \<Turnstile>m cc \<longrightarrow> I \<Turnstile> d \<longrightarrow> I \<Turnstile> e}"
unfolding \<gamma>_ground_def by auto
moreover have "set_mset (prems_of \<gamma>_ground) \<subseteq> grounding_of_state ({}, P \<union> {C}, Q)"
proof -
have "D = C \<or> D \<in> Q"
unfolding \<gamma>_ground_def using E_\<mu>_p \<gamma>_p2 \<gamma>_p unfolding infer_from_def
unfolding grounding_of_clss_def grounding_of_cls_def by simp
then have "D \<cdot> \<rho> \<cdot> \<sigma> \<cdot> \<mu> \<in> grounding_of_cls C \<or> (\<exists>x \<in> Q. D \<cdot> \<rho> \<cdot> \<sigma> \<cdot> \<mu> \<in> grounding_of_cls x)"
using E_\<mu>_p
unfolding grounding_of_cls_def
by (metis (mono_tags, lifting) is_ground_comp_subst mem_Collect_eq subst_cls_comp_subst)
then have "(D \<cdot> \<rho> \<cdot> \<sigma> \<cdot> \<mu> \<in> grounding_of_cls C \<or>
(\<exists>x \<in> P. D \<cdot> \<rho> \<cdot> \<sigma> \<cdot> \<mu> \<in> grounding_of_cls x) \<or>
(\<exists>x \<in> Q. D \<cdot> \<rho> \<cdot> \<sigma> \<cdot> \<mu> \<in> grounding_of_cls x))"
by metis
- moreover have "\<forall>i < length (CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>). ((CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>) ! i) \<in>
+ moreover have "\<forall>i < length (CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>). (CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>) ! i \<in>
{C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>} \<union>
((\<Union>C \<in> P. {C \<cdot> \<sigma> | \<sigma>. is_ground_subst \<sigma>}) \<union> (\<Union>C\<in>Q. {C \<cdot> \<sigma> | \<sigma>. is_ground_subst \<sigma>}))"
proof (rule, rule)
fix i
assume "i < length (CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>)"
then have a: "i < length CAs \<and> i < length \<rho>s"
by simp
moreover from a have "CAs ! i \<in> {C} \<union> Q"
using \<gamma>_p2 \<gamma>_p unfolding infer_from_def
by (metis (no_types, lifting) Un_subset_iff inference.sel(1) set_mset_union
sup_commute nth_mem_mset subsetCE)
ultimately have "(CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>) ! i \<in>
{C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>} \<or>
((CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>) ! i \<in> (\<Union>C\<in>P. {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>}) \<or>
(CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>) ! i \<in> (\<Union>C \<in> Q. {C \<cdot> \<sigma> | \<sigma>. is_ground_subst \<sigma>}))"
using E_\<mu>_p \<gamma>_p2 \<gamma>_p
unfolding \<gamma>_ground_def infer_from_def grounding_of_clss_def grounding_of_cls_def
apply -
apply (cases "CAs ! i = C")
subgoal
apply (rule disjI1)
apply (rule Set.CollectI)
apply (rule_tac x = "(\<rho>s ! i) \<odot> \<sigma> \<odot> \<mu>" in exI)
using \<rho>s_def using renamings_apart_length by (auto; fail)
subgoal
apply (rule disjI2)
apply (rule disjI2)
apply (rule_tac a = "CAs ! i" in UN_I)
subgoal by blast
subgoal
apply (rule Set.CollectI)
apply (rule_tac x = "(\<rho>s ! i) \<odot> \<sigma> \<odot> \<mu>" in exI)
using \<rho>s_def using renamings_apart_length by (auto; fail)
done
done
then show "(CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>) ! i \<in> {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>} \<union>
((\<Union>C \<in> P. {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>}) \<union> (\<Union>C \<in> Q. {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>}))"
by blast
qed
then have "\<forall>x \<in># mset (CAs \<cdot>\<cdot>cl \<rho>s \<cdot>cl \<sigma> \<cdot>cl \<mu>). x \<in> {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>} \<union>
((\<Union>C \<in> P. {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>}) \<union> (\<Union>C \<in> Q. {C \<cdot> \<sigma> |\<sigma>. is_ground_subst \<sigma>}))"
by (metis (lifting) in_set_conv_nth set_mset_mset)
then have "set_mset (mset (CAs \<cdot>\<cdot>cl \<rho>s) \<cdot>cm \<sigma> \<cdot>cm \<mu>) \<subseteq>
grounding_of_cls C \<union> grounding_of_clss P \<union> grounding_of_clss Q"
unfolding grounding_of_cls_def grounding_of_clss_def
using mset_subst_cls_list_subst_cls_mset by auto
ultimately show ?thesis
unfolding \<gamma>_ground_def grounding_of_clss_def by auto
qed
ultimately have
"E \<cdot> \<mu> \<in> concls_of (sr_ext.inferences_from (grounding_of_state ({}, P \<union> {C}, Q)))"
unfolding sr_ext.inferences_from_def inference_system.inferences_from_def ground_sound_\<Gamma>_def
infer_from_def
- using \<gamma>_ground_def by (metis (no_types, lifting) imageI inference.sel(3) mem_Collect_eq)
+ using \<gamma>_ground_def by (metis (mono_tags, lifting) image_eqI inference.sel(3) mem_Collect_eq)
then have "E\<mu> \<in> concls_of (sr_ext.inferences_from (grounding_of_state ({}, P \<union> {C}, Q)))"
using E_\<mu>_p by auto
}
then have "grounding_of_state (N, P, Q \<union> {C}) - grounding_of_state ({}, P \<union> {C}, Q)
\<subseteq> concls_of (sr_ext.inferences_from (grounding_of_state ({}, P \<union> {C}, Q)))"
unfolding grounding_of_clss_def by auto
moreover have "grounding_of_state ({}, P \<union> {C}, Q) - grounding_of_state (N, P, Q \<union> {C}) = {}"
unfolding grounding_of_clss_def by auto
ultimately show ?case
using sr_ext.derive.intros[of "(grounding_of_state (N, P, Q \<union> {C}))"
"(grounding_of_state ({}, P \<union> {C}, Q))"] by auto
qed
text \<open>
A useful consequence:
\<close>
-theorem RP_model:
- "St \<leadsto> St' \<Longrightarrow> I \<Turnstile>s grounding_of_state St' \<longleftrightarrow> I \<Turnstile>s grounding_of_state St"
+theorem RP_model: "St \<leadsto> St' \<Longrightarrow> I \<Turnstile>s grounding_of_state St' \<longleftrightarrow> I \<Turnstile>s grounding_of_state St"
proof (drule RP_ground_derive, erule sr_ext.derive.cases, hypsubst)
let
?gSt = "grounding_of_state St" and
?gSt' = "grounding_of_state St'"
assume
deduct: "?gSt' - ?gSt \<subseteq> concls_of (sr_ext.inferences_from ?gSt)" (is "_ \<subseteq> ?concls") and
delete: "?gSt - ?gSt' \<subseteq> sr.Rf ?gSt'"
show "I \<Turnstile>s ?gSt' \<longleftrightarrow> I \<Turnstile>s ?gSt"
proof
assume bef: "I \<Turnstile>s ?gSt"
then have "I \<Turnstile>s ?concls"
unfolding ground_sound_\<Gamma>_def inference_system.inferences_from_def true_clss_def
true_cls_mset_def
by (auto simp add: image_def infer_from_def dest!: spec[of _ I])
then have diff: "I \<Turnstile>s ?gSt' - ?gSt"
using deduct by (blast intro: true_clss_mono)
then show "I \<Turnstile>s ?gSt'"
using bef unfolding true_clss_def by blast
next
assume aft: "I \<Turnstile>s ?gSt'"
have "I \<Turnstile>s ?gSt' \<union> sr.Rf ?gSt'"
by (rule sr.Rf_model) (smt Diff_eq_empty_iff Diff_subset Un_Diff aft
standard_redundancy_criterion.Rf_mono sup_bot.right_neutral sup_ge1 true_clss_mono)
then have "I \<Turnstile>s sr.Rf ?gSt'"
using true_clss_union by blast
then have diff: "I \<Turnstile>s ?gSt - ?gSt'"
using delete by (blast intro: true_clss_mono)
then show "I \<Turnstile>s ?gSt"
using aft unfolding true_clss_def by blast
qed
qed
text \<open>
Another formulation of the part of Lemma 4.10 that states we have a theorem proving process:
\<close>
-lemma ground_derive_chain: "chain sr_ext.derive (lmap grounding_of_state Sts)"
- using deriv RP_ground_derive by (simp add: chain_lmap[of "(\<leadsto>)"])
+lemma ground_derive_chain: "chain (\<leadsto>) Sts \<Longrightarrow> chain sr_ext.derive (lmap grounding_of_state Sts)"
+ using RP_ground_derive by (simp add: chain_lmap[of "(\<leadsto>)"])
text \<open>
The following is used to prove Lemma 4.11:
\<close>
-lemma in_Sup_llist_in_nth: "C \<in> Sup_llist Gs \<Longrightarrow> \<exists>j. enat j < llength Gs \<and> C \<in> lnth Gs j"
- unfolding Sup_llist_def by auto
- \<comment> \<open>Note: Gs is called Ns in the chapter\<close>
-
lemma Sup_llist_grounding_of_state_ground:
assumes "C \<in> Sup_llist (lmap grounding_of_state Sts)"
shows "is_ground_cls C"
proof -
have "\<exists>j. enat j < llength (lmap grounding_of_state Sts)
\<and> C \<in> lnth (lmap grounding_of_state Sts) j"
- using assms in_Sup_llist_in_nth by fast
- then obtain j where
- "enat j < llength (lmap grounding_of_state Sts)"
- "C \<in> lnth (lmap grounding_of_state Sts) j"
- by blast
+ using assms Sup_llist_imp_exists_index by fast
then show ?thesis
unfolding grounding_of_clss_def grounding_of_cls_def by auto
qed
lemma Liminf_grounding_of_state_ground:
"C \<in> Liminf_llist (lmap grounding_of_state Sts) \<Longrightarrow> is_ground_cls C"
using Liminf_llist_subset_Sup_llist[of "lmap grounding_of_state Sts"]
Sup_llist_grounding_of_state_ground
by blast
lemma in_Sup_llist_in_Sup_state:
assumes "C \<in> Sup_llist (lmap grounding_of_state Sts)"
shows "\<exists>D \<sigma>. D \<in> clss_of_state (Sup_state Sts) \<and> D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
proof -
from assms obtain i where
i_p: "enat i < llength Sts \<and> C \<in> lnth (lmap grounding_of_state Sts) i"
- using in_Sup_llist_in_nth by fastforce
+ using Sup_llist_imp_exists_index by fastforce
then obtain D \<sigma> where
"D \<in> clss_of_state (lnth Sts i) \<and> D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
using assms unfolding grounding_of_clss_def grounding_of_cls_def by fastforce
then have "D \<in> clss_of_state (Sup_state Sts) \<and> D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
using i_p unfolding Sup_state_def
by (metis (no_types, lifting) UnCI UnE contra_subsetD N_of_state.simps P_of_state.simps
Q_of_state.simps llength_lmap lnth_lmap lnth_subset_Sup_llist)
then show ?thesis
by auto
qed
lemma
N_of_state_Liminf: "N_of_state (Liminf_state Sts) = Liminf_llist (lmap N_of_state Sts)" and
P_of_state_Liminf: "P_of_state (Liminf_state Sts) = Liminf_llist (lmap P_of_state Sts)"
unfolding Liminf_state_def by auto
lemma eventually_removed_from_N:
assumes
d_in: "D \<in> N_of_state (lnth Sts i)" and
fair: "fair_state_seq Sts" and
i_Sts: "enat i < llength Sts"
shows "\<exists>l. D \<in> N_of_state (lnth Sts l) \<and> D \<notin> N_of_state (lnth Sts (Suc l)) \<and> i \<le> l
\<and> enat (Suc l) < llength Sts"
proof (rule ccontr)
assume a: "\<not> ?thesis"
have "i \<le> l \<Longrightarrow> enat l < llength Sts \<Longrightarrow> D \<in> N_of_state (lnth Sts l)" for l
using d_in by (induction l, blast, metis a Suc_ile_eq le_SucE less_imp_le)
then have "D \<in> Liminf_llist (lmap N_of_state Sts)"
unfolding Liminf_llist_def using i_Sts by auto
then show False
using fair unfolding fair_state_seq_def by (simp add: N_of_state_Liminf)
qed
lemma eventually_removed_from_P:
assumes
d_in: "D \<in> P_of_state (lnth Sts i)" and
fair: "fair_state_seq Sts" and
i_Sts: "enat i < llength Sts"
shows "\<exists>l. D \<in> P_of_state (lnth Sts l) \<and> D \<notin> P_of_state (lnth Sts (Suc l)) \<and> i \<le> l
\<and> enat (Suc l) < llength Sts"
proof (rule ccontr)
assume a: "\<not> ?thesis"
have "i \<le> l \<Longrightarrow> enat l < llength Sts \<Longrightarrow> D \<in> P_of_state (lnth Sts l)" for l
using d_in by (induction l, blast, metis a Suc_ile_eq le_SucE less_imp_le)
then have "D \<in> Liminf_llist (lmap P_of_state Sts)"
unfolding Liminf_llist_def using i_Sts by auto
then show False
using fair unfolding fair_state_seq_def by (simp add: P_of_state_Liminf)
qed
lemma instance_if_subsumed_and_in_limit:
assumes
+ deriv: "chain (\<leadsto>) Sts" and
ns: "Gs = lmap grounding_of_state Sts" and
c: "C \<in> Liminf_llist Gs - sr.Rf (Liminf_llist Gs)" and
d: "D \<in> clss_of_state (lnth Sts i)" "enat i < llength Sts" "subsumes D C"
shows "\<exists>\<sigma>. D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
proof -
let ?Ps = "\<lambda>i. P_of_state (lnth Sts i)"
let ?Qs = "\<lambda>i. Q_of_state (lnth Sts i)"
have ground_C: "is_ground_cls C"
using c using Liminf_grounding_of_state_ground ns by auto
have derivns: "chain sr_ext.derive Gs"
using ground_derive_chain deriv ns by auto
have "\<exists>\<sigma>. D \<cdot> \<sigma> = C"
proof (rule ccontr)
assume "\<nexists>\<sigma>. D \<cdot> \<sigma> = C"
moreover from d(3) obtain \<tau>_proto where
"D \<cdot> \<tau>_proto \<subseteq># C" unfolding subsumes_def
by blast
then obtain \<tau> where
\<tau>_p: "D \<cdot> \<tau> \<subseteq># C \<and> is_ground_subst \<tau>"
using ground_C by (metis is_ground_cls_mono make_ground_subst subset_mset.order_refl)
ultimately have subsub: "D \<cdot> \<tau> \<subset># C"
using subset_mset.le_imp_less_or_eq by auto
moreover have "is_ground_subst \<tau>"
using \<tau>_p by auto
moreover have "D \<in> clss_of_state (lnth Sts i)"
using d by auto
ultimately have "C \<in> sr.Rf (grounding_of_state (lnth Sts i))"
using strict_subset_subsumption_redundant_clss by auto
then have "C \<in> sr.Rf (Sup_llist Gs)"
using d ns by (smt contra_subsetD llength_lmap lnth_lmap lnth_subset_Sup_llist sr.Rf_mono)
then have "C \<in> sr.Rf (Liminf_llist Gs)"
unfolding ns using local.sr_ext.Rf_limit_Sup derivns ns by auto
then show False
using c by auto
qed
then obtain \<sigma> where
"D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
using ground_C by (metis make_ground_subst)
then show ?thesis
by auto
qed
lemma from_Q_to_Q_inf:
assumes
+ deriv: "chain (\<leadsto>) Sts" and
fair: "fair_state_seq Sts" and
ns: "Gs = lmap grounding_of_state Sts" and
c: "C \<in> Liminf_llist Gs - sr.Rf (Liminf_llist Gs)" and
d: "D \<in> Q_of_state (lnth Sts i)" "enat i < llength Sts" "subsumes D C" and
- d_least: "\<forall>E \<in> {E. E \<in> (clss_of_state (Sup_state Sts)) \<and> subsumes E C}. \<not> strictly_subsumes E D"
+ d_least: "\<forall>E \<in> {E. E \<in> (clss_of_state (Sup_state Sts)) \<and> subsumes E C}.
+ \<not> strictly_subsumes E D"
shows "D \<in> Q_of_state (Liminf_state Sts)"
proof -
let ?Ps = "\<lambda>i. P_of_state (lnth Sts i)"
let ?Qs = "\<lambda>i. Q_of_state (lnth Sts i)"
have ground_C: "is_ground_cls C"
using c using Liminf_grounding_of_state_ground ns by auto
have derivns: "chain sr_ext.derive Gs"
using ground_derive_chain deriv ns by auto
have "\<exists>\<sigma>. D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
- using instance_if_subsumed_and_in_limit c d unfolding ns by blast
+ using instance_if_subsumed_and_in_limit[OF deriv] c d unfolding ns by blast
then obtain \<sigma> where
\<sigma>: "D \<cdot> \<sigma> = C" "is_ground_subst \<sigma>"
by auto
have in_Sts_in_Sts_Suc:
"\<forall>l \<ge> i. enat (Suc l) < llength Sts \<longrightarrow> D \<in> Q_of_state (lnth Sts l) \<longrightarrow>
D \<in> Q_of_state (lnth Sts (Suc l))"
proof (rule, rule, rule, rule)
fix l
assume
len: "i \<le> l" and
llen: "enat (Suc l) < llength Sts" and
d_in_q: "D \<in> Q_of_state (lnth Sts l)"
have "lnth Sts l \<leadsto> lnth Sts (Suc l)"
using llen deriv chain_lnth_rel by blast
then show "D \<in> Q_of_state (lnth Sts (Suc l))"
proof (cases rule: RP.cases)
case (backward_subsumption_Q D' N D_removed P Q)
moreover
{
assume "D_removed = D"
then obtain D_subsumes where
D_subsumes_p: "D_subsumes \<in> N \<and> strictly_subsumes D_subsumes D"
using backward_subsumption_Q by auto
moreover from D_subsumes_p have "subsumes D_subsumes C"
using d subsumes_trans unfolding strictly_subsumes_def by blast
moreover from backward_subsumption_Q have "D_subsumes \<in> clss_of_state (Sup_state Sts)"
using D_subsumes_p llen
by (metis (no_types) UnI1 N_of_state.simps llength_lmap lnth_lmap lnth_subset_Sup_llist
rev_subsetD Sup_state_def)
ultimately have False
using d_least unfolding subsumes_def by auto
}
ultimately show ?thesis
using d_in_q by auto
next
case (backward_reduction_Q E L' N L \<sigma> D' P Q)
{
assume "D' + {#L#} = D"
then have D'_p: "strictly_subsumes D' D \<and> D' \<in> ?Ps (Suc l)"
using subset_strictly_subsumes[of D' D] backward_reduction_Q by auto
then have subc: "subsumes D' C"
using d(3) subsumes_trans unfolding strictly_subsumes_def by auto
from D'_p have "D' \<in> clss_of_state (Sup_state Sts)"
using llen by (metis (no_types) UnI1 P_of_state.simps llength_lmap lnth_lmap
lnth_subset_Sup_llist subsetCE sup_ge2 Sup_state_def)
then have False
using d_least D'_p subc by auto
}
then show ?thesis
using backward_reduction_Q d_in_q by auto
qed (use d_in_q in auto)
qed
have D_in_Sts: "D \<in> Q_of_state (lnth Sts l)" and D_in_Sts_Suc: "D \<in> Q_of_state (lnth Sts (Suc l))"
if l_i: "l \<ge> i" and enat: "enat (Suc l) < llength Sts" for l
proof -
show "D \<in> Q_of_state (lnth Sts l)"
using l_i enat
apply (induction "l - i" arbitrary: l)
subgoal using d by auto
subgoal using d(1) in_Sts_in_Sts_Suc
by (metis (no_types, lifting) Suc_ile_eq add_Suc_right add_diff_cancel_left' le_SucE
le_Suc_ex less_imp_le)
done
then show "D \<in> Q_of_state (lnth Sts (Suc l))"
using l_i enat in_Sts_in_Sts_Suc by blast
qed
have "i \<le> x \<Longrightarrow> enat x < llength Sts \<Longrightarrow> D \<in> Q_of_state (lnth Sts x)" for x
apply (cases x)
subgoal using d(1) by (auto intro!: exI[of _ i] simp: less_Suc_eq)
subgoal for x'
using d(1) D_in_Sts_Suc[of x'] by (cases \<open>i \<le> x'\<close>) (auto simp: not_less_eq_eq)
done
then have "D \<in> Liminf_llist (lmap Q_of_state Sts)"
unfolding Liminf_llist_def by (auto intro!: exI[of _ i] simp: d)
then show ?thesis
unfolding Liminf_state_def by auto
qed
lemma from_P_to_Q:
assumes
+ deriv: "chain (\<leadsto>) Sts" and
fair: "fair_state_seq Sts" and
ns: "Gs = lmap grounding_of_state Sts" and
c: "C \<in> Liminf_llist Gs - sr.Rf (Liminf_llist Gs)" and
d: "D \<in> P_of_state (lnth Sts i)" "enat i < llength Sts" "subsumes D C" and
d_least: "\<forall>E \<in> {E. E \<in> (clss_of_state (Sup_state Sts)) \<and> subsumes E C}.
\<not> strictly_subsumes E D"
shows "\<exists>l. D \<in> Q_of_state (lnth Sts l) \<and> enat l < llength Sts"
proof -
let ?Ns = "\<lambda>i. N_of_state (lnth Sts i)"
let ?Ps = "\<lambda>i. P_of_state (lnth Sts i)"
let ?Qs = "\<lambda>i. Q_of_state (lnth Sts i)"
have ground_C: "is_ground_cls C"
using c using Liminf_grounding_of_state_ground ns by auto
have derivns: "chain sr_ext.derive Gs"
using ground_derive_chain deriv ns by auto
have "\<exists>\<sigma>. D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
- using instance_if_subsumed_and_in_limit ns c d by blast
+ using instance_if_subsumed_and_in_limit[OF deriv] ns c d by blast
then obtain \<sigma> where
\<sigma>: "D \<cdot> \<sigma> = C" "is_ground_subst \<sigma>"
by auto
obtain l where
l_p: "D \<in> P_of_state (lnth Sts l) \<and> D \<notin> P_of_state (lnth Sts (Suc l)) \<and> i \<le> l
\<and> enat (Suc l) < llength Sts"
using fair using eventually_removed_from_P d unfolding ns by auto
then have l_Gs: "enat (Suc l) < llength Gs"
using ns by auto
from l_p have "lnth Sts l \<leadsto> lnth Sts (Suc l)"
using deriv using chain_lnth_rel by auto
then show ?thesis
proof (cases rule: RP.cases)
case (backward_subsumption_P D' N D_twin P Q)
note lrhs = this(1,2) and D'_p = this(3,4)
then have twins: "D_twin = D" "?Ns (Suc l) = N" "?Ns l = N" "?Ps (Suc l) = P"
"?Ps l = P \<union> {D_twin}" "?Qs (Suc l) = Q" "?Qs l = Q"
using l_p by auto
note D'_p = D'_p[unfolded twins(1)]
then have subc: "subsumes D' C"
unfolding strictly_subsumes_def subsumes_def using \<sigma>
by (metis subst_cls_comp_subst subst_cls_mono_mset)
from D'_p have "D' \<in> clss_of_state (Sup_state Sts)"
unfolding twins(2)[symmetric] using l_p
by (metis (no_types) UnI1 N_of_state.simps llength_lmap lnth_lmap lnth_subset_Sup_llist
subsetCE Sup_state_def)
then have False
using d_least D'_p subc by auto
then show ?thesis
by auto
next
case (backward_reduction_P E L' N L \<sigma> D' P Q)
then have twins: "D' + {#L#} = D" "?Ns (Suc l) = N" "?Ns l = N" "?Ps (Suc l) = P \<union> {D'}"
"?Ps l = P \<union> {D' + {#L#}}" "?Qs (Suc l) = Q" "?Qs l = Q"
using l_p by auto
then have D'_p: "strictly_subsumes D' D \<and> D' \<in> ?Ps (Suc l)"
using subset_strictly_subsumes[of D' D] by auto
then have subc: "subsumes D' C"
using d(3) subsumes_trans unfolding strictly_subsumes_def by auto
from D'_p have "D' \<in> clss_of_state (Sup_state Sts)"
using l_p by (metis (no_types) UnI1 P_of_state.simps llength_lmap lnth_lmap
lnth_subset_Sup_llist subsetCE sup_ge2 Sup_state_def)
then have False
using d_least D'_p subc by auto
then show ?thesis
by auto
next
case (inference_computation N Q D_twin P)
then have twins: "D_twin = D" "?Ps (Suc l) = P" "?Ps l = P \<union> {D_twin}"
"?Qs (Suc l) = Q \<union> {D_twin}" "?Qs l = Q"
using l_p by auto
then show ?thesis
using d \<sigma> l_p by auto
qed (use l_p in auto)
qed
-lemma variants_sym: "variants D D' \<longleftrightarrow> variants D' D"
- unfolding variants_def by auto
-
-lemma variants_imp_exists_subtitution: "variants D D' \<Longrightarrow> \<exists>\<sigma>. D \<cdot> \<sigma> = D'"
- unfolding variants_iff_subsumes subsumes_def
- by (meson strictly_subsumes_def subset_mset_def strict_subset_subst_strictly_subsumes subsumes_def)
-
-lemma properly_subsume_variants:
- assumes "strictly_subsumes E D" and "variants D D'"
- shows "strictly_subsumes E D'"
-proof -
- from assms obtain \<sigma> \<sigma>' where
- \<sigma>_\<sigma>'_p: "D \<cdot> \<sigma> = D' \<and> D' \<cdot> \<sigma>' = D"
- using variants_imp_exists_subtitution variants_sym by metis
-
- from assms obtain \<sigma>'' where
- "E \<cdot> \<sigma>'' \<subseteq># D"
- unfolding strictly_subsumes_def subsumes_def by auto
- then have "E \<cdot> \<sigma>'' \<cdot> \<sigma> \<subseteq># D \<cdot> \<sigma>"
- using subst_cls_mono_mset by blast
- then have "E \<cdot> (\<sigma>'' \<odot> \<sigma>) \<subseteq># D'"
- using \<sigma>_\<sigma>'_p by auto
- moreover from assms have n: "(\<nexists>\<sigma>. D \<cdot> \<sigma> \<subseteq># E)"
- unfolding strictly_subsumes_def subsumes_def by auto
- have "\<nexists>\<sigma>. D' \<cdot> \<sigma> \<subseteq># E"
- proof
- assume "\<exists>\<sigma>'''. D' \<cdot> \<sigma>''' \<subseteq># E"
- then obtain \<sigma>''' where
- "D' \<cdot> \<sigma>''' \<subseteq># E"
- by auto
- then have "D \<cdot> (\<sigma> \<odot> \<sigma>''') \<subseteq># E"
- using \<sigma>_\<sigma>'_p by auto
- then show False
- using n by metis
- qed
- ultimately show ?thesis
- unfolding strictly_subsumes_def subsumes_def by metis
-qed
-
-lemma neg_properly_subsume_variants:
- assumes "\<not> strictly_subsumes E D" and "variants D D'"
- shows "\<not> strictly_subsumes E D'"
- using assms properly_subsume_variants variants_sym by auto
-
lemma from_N_to_P_or_Q:
assumes
+ deriv: "chain (\<leadsto>) Sts" and
fair: "fair_state_seq Sts" and
ns: "Gs = lmap grounding_of_state Sts" and
c: "C \<in> Liminf_llist Gs - sr.Rf (Liminf_llist Gs)" and
d: "D \<in> N_of_state (lnth Sts i)" "enat i < llength Sts" "subsumes D C" and
d_least: "\<forall>E \<in> {E. E \<in> (clss_of_state (Sup_state Sts)) \<and> subsumes E C}. \<not> strictly_subsumes E D"
shows "\<exists>l D' \<sigma>'. D' \<in> P_of_state (lnth Sts l) \<union> Q_of_state (lnth Sts l) \<and>
enat l < llength Sts \<and>
(\<forall>E \<in> {E. E \<in> (clss_of_state (Sup_state Sts)) \<and> subsumes E C}. \<not> strictly_subsumes E D') \<and>
D' \<cdot> \<sigma>' = C \<and> is_ground_subst \<sigma>' \<and> subsumes D' C"
proof -
let ?Ns = "\<lambda>i. N_of_state (lnth Sts i)"
let ?Ps = "\<lambda>i. P_of_state (lnth Sts i)"
let ?Qs = "\<lambda>i. Q_of_state (lnth Sts i)"
have ground_C: "is_ground_cls C"
using c using Liminf_grounding_of_state_ground ns by auto
have derivns: "chain sr_ext.derive Gs"
using ground_derive_chain deriv ns by auto
have "\<exists>\<sigma>. D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
- using instance_if_subsumed_and_in_limit ns c d by blast
+ using instance_if_subsumed_and_in_limit[OF deriv] ns c d by blast
then obtain \<sigma> where
\<sigma>: "D \<cdot> \<sigma> = C" "is_ground_subst \<sigma>"
by auto
from c have no_taut: "\<not> (\<exists>A. Pos A \<in># C \<and> Neg A \<in># C)"
using sr.tautology_Rf by auto
have "\<exists>l. D \<in> N_of_state (lnth Sts l)
\<and> D \<notin> N_of_state (lnth Sts (Suc l)) \<and> i \<le> l \<and> enat (Suc l) < llength Sts"
using fair using eventually_removed_from_N d unfolding ns by auto
then obtain l where
l_p: "D \<in> N_of_state (lnth Sts l) \<and> D \<notin> N_of_state (lnth Sts (Suc l)) \<and> i \<le> l
\<and> enat (Suc l) < llength Sts"
by auto
then have l_Gs: "enat (Suc l) < llength Gs"
using ns by auto
from l_p have "lnth Sts l \<leadsto> lnth Sts (Suc l)"
using deriv using chain_lnth_rel by auto
then show ?thesis
proof (cases rule: RP.cases)
case (tautology_deletion A D_twin N P Q)
then have "D_twin = D"
using l_p by auto
then have "Pos (A \<cdot>a \<sigma>) \<in># C \<and> Neg (A \<cdot>a \<sigma>) \<in># C"
using tautology_deletion(3,4) \<sigma>
by (metis Melem_subst_cls eql_neg_lit_eql_atm eql_pos_lit_eql_atm)
then have False
using no_taut by metis
then show ?thesis
by blast
next
case (forward_subsumption D' P Q D_twin N)
note lrhs = this(1,2) and D'_p = this(3,4)
then have twins: "D_twin = D" "?Ns (Suc l) = N" "?Ns l = N \<union> {D_twin}" "?Ps (Suc l) = P "
"?Ps l = P" "?Qs (Suc l) = Q" "?Qs l = Q"
using l_p by auto
note D'_p = D'_p[unfolded twins(1)]
from D'_p(2) have subs: "subsumes D' C"
using d(3) by (blast intro: subsumes_trans)
moreover have "D' \<in> clss_of_state (Sup_state Sts)"
using twins D'_p l_p unfolding Sup_state_def
by simp (metis (no_types) contra_subsetD llength_lmap lnth_lmap lnth_subset_Sup_llist)
ultimately have "\<not> strictly_subsumes D' D"
using d_least by auto
then have "subsumes D D'"
unfolding strictly_subsumes_def using D'_p by auto
then have v: "variants D D'"
using D'_p unfolding variants_iff_subsumes by auto
then have mini: "\<forall>E \<in> {E \<in> clss_of_state (Sup_state Sts). subsumes E C}.
\<not> strictly_subsumes E D'"
- using d_least D'_p neg_properly_subsume_variants[of _ D D'] by auto
+ using d_least D'_p neg_strictly_subsumes_variants[of _ D D'] by auto
from v have "\<exists>\<sigma>'. D' \<cdot> \<sigma>' = C"
- using \<sigma> variants_imp_exists_subtitution variants_sym by (metis subst_cls_comp_subst)
+ using \<sigma> variants_imp_exists_substitution variants_sym by (metis subst_cls_comp_subst)
then have "\<exists>\<sigma>'. D' \<cdot> \<sigma>' = C \<and> is_ground_subst \<sigma>'"
using ground_C by (meson make_ground_subst refl)
then obtain \<sigma>' where
\<sigma>'_p: "D' \<cdot> \<sigma>' = C \<and> is_ground_subst \<sigma>'"
by metis
show ?thesis
using D'_p twins l_p subs mini \<sigma>'_p by auto
next
case (forward_reduction E L' P Q L \<sigma> D' N)
then have twins: "D' + {#L#} = D" "?Ns (Suc l) = N \<union> {D'}" "?Ns l = N \<union> {D' + {#L#}}"
"?Ps (Suc l) = P " "?Ps l = P" "?Qs (Suc l) = Q" "?Qs l = Q"
using l_p by auto
then have D'_p: "strictly_subsumes D' D \<and> D' \<in> ?Ns (Suc l)"
using subset_strictly_subsumes[of D' D] by auto
then have subc: "subsumes D' C"
using d(3) subsumes_trans unfolding strictly_subsumes_def by blast
from D'_p have "D' \<in> clss_of_state (Sup_state Sts)"
using l_p by (metis (no_types) UnI1 N_of_state.simps llength_lmap lnth_lmap
lnth_subset_Sup_llist subsetCE Sup_state_def)
then have False
using d_least D'_p subc by auto
then show ?thesis
by auto
next
case (clause_processing N D_twin P Q)
then have twins: "D_twin = D" "?Ns (Suc l) = N" "?Ns l = N \<union> {D}" "?Ps (Suc l) = P \<union> {D}"
"?Ps l = P" "?Qs (Suc l) = Q" "?Qs l = Q"
using l_p by auto
then show ?thesis
using d \<sigma> l_p d_least by blast
qed (use l_p in auto)
qed
lemma eventually_in_Qinf:
assumes
+ deriv: "chain (\<leadsto>) Sts" and
D_p: "D \<in> clss_of_state (Sup_state Sts)"
"subsumes D C" "\<forall>E \<in> {E. E \<in> (clss_of_state (Sup_state Sts)) \<and> subsumes E C}.
\<not> strictly_subsumes E D" and
fair: "fair_state_seq Sts" and
- (* We could also, we guess, in this proof obtain a D with property D_p(3) from one with only
- properties D_p(2,3). *)
ns: "Gs = lmap grounding_of_state Sts" and
c: "C \<in> Liminf_llist Gs - sr.Rf (Liminf_llist Gs)" and
ground_C: "is_ground_cls C"
shows "\<exists>D' \<sigma>'. D' \<in> Q_of_state (Liminf_state Sts) \<and> D' \<cdot> \<sigma>' = C \<and> is_ground_subst \<sigma>'"
proof -
let ?Ns = "\<lambda>i. N_of_state (lnth Sts i)"
let ?Ps = "\<lambda>i. P_of_state (lnth Sts i)"
let ?Qs = "\<lambda>i. Q_of_state (lnth Sts i)"
from D_p obtain i where
i_p: "i < llength Sts" "D \<in> ?Ns i \<or> D \<in> ?Ps i \<or> D \<in> ?Qs i"
unfolding Sup_state_def
- by simp_all (metis (no_types) in_Sup_llist_in_nth llength_lmap lnth_lmap)
+ by simp_all (metis (no_types) Sup_llist_imp_exists_index llength_lmap lnth_lmap)
have derivns: "chain sr_ext.derive Gs"
using ground_derive_chain deriv ns by auto
have "\<exists>\<sigma>. D \<cdot> \<sigma> = C \<and> is_ground_subst \<sigma>"
- using instance_if_subsumed_and_in_limit[OF ns c] D_p i_p by blast
+ using instance_if_subsumed_and_in_limit[OF deriv ns c] D_p i_p by blast
then obtain \<sigma> where
\<sigma>: "D \<cdot> \<sigma> = C" "is_ground_subst \<sigma>"
by blast
{
assume a: "D \<in> ?Ns i"
then obtain D' \<sigma>' l where D'_p:
"D' \<in> ?Ps l \<union> ?Qs l"
"D' \<cdot> \<sigma>' = C"
"enat l < llength Sts"
"is_ground_subst \<sigma>'"
"\<forall>E \<in> {E. E \<in> (clss_of_state (Sup_state Sts)) \<and> subsumes E C}. \<not> strictly_subsumes E D'"
"subsumes D' C"
using from_N_to_P_or_Q deriv fair ns c i_p(1) D_p(2) D_p(3) by blast
then obtain l' where
l'_p: "D' \<in> ?Qs l'" "l' < llength Sts"
- using from_P_to_Q[OF fair ns c _ D'_p(3) D'_p(6) D'_p(5)] by blast
+ using from_P_to_Q[OF deriv fair ns c _ D'_p(3) D'_p(6) D'_p(5)] by blast
then have "D' \<in> Q_of_state (Liminf_state Sts)"
- using from_Q_to_Q_inf[OF fair ns c _ l'_p(2)] D'_p by auto
+ using from_Q_to_Q_inf[OF deriv fair ns c _ l'_p(2)] D'_p by auto
then have ?thesis
using D'_p by auto
}
moreover
{
assume a: "D \<in> ?Ps i"
then obtain l' where
l'_p: "D \<in> ?Qs l'" "l' < llength Sts"
- using from_P_to_Q[OF fair ns c a i_p(1) D_p(2) D_p(3)] by auto
+ using from_P_to_Q[OF deriv fair ns c a i_p(1) D_p(2) D_p(3)] by auto
then have "D \<in> Q_of_state (Liminf_state Sts)"
- using from_Q_to_Q_inf[OF fair ns c l'_p(1) l'_p(2)] D_p(3) \<sigma>(1) \<sigma>(2) D_p(2) by auto
+ using from_Q_to_Q_inf[OF deriv fair ns c l'_p(1) l'_p(2)] D_p(3) \<sigma>(1) \<sigma>(2) D_p(2) by auto
then have ?thesis
using D_p \<sigma> by auto
}
moreover
{
assume a: "D \<in> ?Qs i"
then have "D \<in> Q_of_state (Liminf_state Sts)"
- using from_Q_to_Q_inf[OF fair ns c a i_p(1)] \<sigma> D_p(2,3) by auto
+ using from_Q_to_Q_inf[OF deriv fair ns c a i_p(1)] \<sigma> D_p(2,3) by auto
then have ?thesis
using D_p \<sigma> by auto
}
ultimately show ?thesis
using i_p by auto
qed
text \<open>
The following corresponds to Lemma 4.11:
\<close>
lemma fair_imp_Liminf_minus_Rf_subset_ground_Liminf_state:
assumes
+ deriv: "chain (\<leadsto>) Sts" and
fair: "fair_state_seq Sts" and
ns: "Gs = lmap grounding_of_state Sts"
- shows "Liminf_llist Gs - sr.Rf (Liminf_llist Gs) \<subseteq> grounding_of_clss (Q_of_state (Liminf_state Sts))"
+ shows "Liminf_llist Gs - sr.Rf (Liminf_llist Gs)
+ \<subseteq> grounding_of_clss (Q_of_state (Liminf_state Sts))"
proof
let ?Ns = "\<lambda>i. N_of_state (lnth Sts i)"
let ?Ps = "\<lambda>i. P_of_state (lnth Sts i)"
let ?Qs = "\<lambda>i. Q_of_state (lnth Sts i)"
have SQinf: "clss_of_state (Liminf_state Sts) = Liminf_llist (lmap Q_of_state Sts)"
using fair unfolding fair_state_seq_def Liminf_state_def by auto
fix C
assume C_p: "C \<in> Liminf_llist Gs - sr.Rf (Liminf_llist Gs)"
then have "C \<in> Sup_llist Gs"
using Liminf_llist_subset_Sup_llist[of Gs] by blast
then obtain D_proto where
"D_proto \<in> clss_of_state (Sup_state Sts) \<and> subsumes D_proto C"
using in_Sup_llist_in_Sup_state unfolding ns subsumes_def by blast
then obtain D where
D_p: "D \<in> clss_of_state (Sup_state Sts)"
"subsumes D C"
"\<forall>E \<in> {E. E \<in> clss_of_state (Sup_state Sts) \<and> subsumes E C}. \<not> strictly_subsumes E D"
using strictly_subsumes_has_minimum[of "{E. E \<in> clss_of_state (Sup_state Sts) \<and> subsumes E C}"]
by auto
have ground_C: "is_ground_cls C"
using C_p using Liminf_grounding_of_state_ground ns by auto
have "\<exists>D' \<sigma>'. D' \<in> Q_of_state (Liminf_state Sts) \<and> D' \<cdot> \<sigma>' = C \<and> is_ground_subst \<sigma>'"
- using eventually_in_Qinf[of D C Gs] using D_p(1-3) fair ns C_p ground_C by auto
+ using eventually_in_Qinf[of D C Gs] using D_p(1-3) deriv fair ns C_p ground_C by auto
then obtain D' \<sigma>' where
D'_p: "D' \<in> Q_of_state (Liminf_state Sts) \<and> D' \<cdot> \<sigma>' = C \<and> is_ground_subst \<sigma>'"
by blast
then have "D' \<in> clss_of_state (Liminf_state Sts)"
by simp
then have "C \<in> grounding_of_state (Liminf_state Sts)"
unfolding grounding_of_clss_def grounding_of_cls_def using D'_p by auto
then show "C \<in> grounding_of_clss (Q_of_state (Liminf_state Sts))"
using SQinf fair fair_state_seq_def by auto
qed
text \<open>
The following corresponds to (one direction of) Theorem 4.13:
\<close>
-lemma ground_subclauses:
- assumes
- "\<forall>i < length CAs. CAs ! i = Cs ! i + poss (AAs ! i)" and
- "length Cs = length CAs" and
- "is_ground_cls_list CAs"
- shows "is_ground_cls_list Cs"
- unfolding is_ground_cls_list_def
- by (metis assms in_set_conv_nth is_ground_cls_list_def is_ground_cls_union)
-
lemma subseteq_Liminf_state_eventually_always:
fixes CC
assumes
"finite CC" and
"CC \<noteq> {}" and
"CC \<subseteq> Q_of_state (Liminf_state Sts)"
shows "\<exists>j. enat j < llength Sts \<and> (\<forall>j' \<ge> enat j. j' < llength Sts \<longrightarrow> CC \<subseteq> Q_of_state (lnth Sts j'))"
proof -
from assms(3) have "\<forall>C \<in> CC. \<exists>j. enat j < llength Sts \<and>
(\<forall>j' \<ge> enat j. j' < llength Sts \<longrightarrow> C \<in> Q_of_state (lnth Sts j'))"
unfolding Liminf_state_def Liminf_llist_def by force
then obtain f where
f_p: "\<forall>C \<in> CC. f C < llength Sts \<and> (\<forall>j' \<ge> enat (f C). j' < llength Sts \<longrightarrow> C \<in> Q_of_state (lnth Sts j'))"
by moura
define j :: nat where
"j = Max (f ` CC)"
have "enat j < llength Sts"
unfolding j_def using f_p assms(1)
by (metis (mono_tags) Max_in assms(2) finite_imageI imageE image_is_empty)
moreover have "\<forall>C j'. C \<in> CC \<longrightarrow> enat j \<le> j' \<longrightarrow> j' < llength Sts \<longrightarrow> C \<in> Q_of_state (lnth Sts j')"
proof (intro allI impI)
fix C :: "'a clause" and j' :: nat
assume a: "C \<in> CC" "enat j \<le> enat j'" "enat j' < llength Sts"
then have "f C \<le> j'"
unfolding j_def using assms(1) Max.bounded_iff by auto
then show "C \<in> Q_of_state (lnth Sts j')"
using f_p a by auto
qed
ultimately show ?thesis
by auto
qed
lemma empty_clause_in_Q_of_Liminf_state:
assumes
- empty_in: "{#} \<in> Liminf_llist (lmap grounding_of_state Sts)" and
- fair: "fair_state_seq Sts"
+ deriv: "chain (\<leadsto>) Sts" and
+ fair: "fair_state_seq Sts" and
+ empty_in: "{#} \<in> Liminf_llist (lmap grounding_of_state Sts)"
shows "{#} \<in> Q_of_state (Liminf_state Sts)"
proof -
define Gs :: "'a clause set llist" where
ns: "Gs = lmap grounding_of_state Sts"
from empty_in have in_Liminf_not_Rf: "{#} \<in> Liminf_llist Gs - sr.Rf (Liminf_llist Gs)"
unfolding ns sr.Rf_def by auto
then have "{#} \<in> grounding_of_clss (Q_of_state (Liminf_state Sts))"
- using fair_imp_Liminf_minus_Rf_subset_ground_Liminf_state[OF fair ns] by auto
+ using fair_imp_Liminf_minus_Rf_subset_ground_Liminf_state[OF deriv fair ns] by auto
then show ?thesis
unfolding grounding_of_clss_def grounding_of_cls_def by auto
qed
lemma grounding_of_state_Liminf_state_subseteq:
"grounding_of_state (Liminf_state Sts) \<subseteq> Liminf_llist (lmap grounding_of_state Sts)"
proof
fix C :: "'a clause"
assume "C \<in> grounding_of_state (Liminf_state Sts)"
then obtain D \<sigma> where
D_\<sigma>_p: "D \<in> clss_of_state (Liminf_state Sts)" "D \<cdot> \<sigma> = C" "is_ground_subst \<sigma>"
unfolding grounding_of_clss_def grounding_of_cls_def by auto
then have ii: "D \<in> Liminf_llist (lmap N_of_state Sts)
\<or> D \<in> Liminf_llist (lmap P_of_state Sts) \<or> D \<in> Liminf_llist (lmap Q_of_state Sts)"
unfolding Liminf_state_def by simp
then have "C \<in> Liminf_llist (lmap grounding_of_clss (lmap N_of_state Sts))
\<or> C \<in> Liminf_llist (lmap grounding_of_clss (lmap P_of_state Sts))
\<or> C \<in> Liminf_llist (lmap grounding_of_clss (lmap Q_of_state Sts))"
unfolding Liminf_llist_def grounding_of_clss_def grounding_of_cls_def
+ using D_\<sigma>_p
apply -
apply (erule disjE)
subgoal
apply (rule disjI1)
using D_\<sigma>_p by auto
subgoal
- apply (erule HOL.disjE)
+ apply (erule disjE)
subgoal
apply (rule disjI2)
apply (rule disjI1)
using D_\<sigma>_p by auto
subgoal
apply (rule disjI2)
apply (rule disjI2)
using D_\<sigma>_p by auto
done
done
then show "C \<in> Liminf_llist (lmap grounding_of_state Sts)"
unfolding Liminf_llist_def grounding_of_clss_def by auto
qed
theorem RP_sound:
- assumes "{#} \<in> clss_of_state (Liminf_state Sts)"
+ assumes
+ deriv: "chain (\<leadsto>) Sts" and
+ "{#} \<in> clss_of_state (Liminf_state Sts)"
shows "\<not> satisfiable (grounding_of_state (lhd Sts))"
proof -
from assms have "{#} \<in> grounding_of_state (Liminf_state Sts)"
unfolding grounding_of_clss_def by (force intro: ex_ground_subst)
then have "{#} \<in> Liminf_llist (lmap grounding_of_state Sts)"
using grounding_of_state_Liminf_state_subseteq by auto
then have "\<not> satisfiable (Liminf_llist (lmap grounding_of_state Sts))"
using true_clss_def by auto
then have "\<not> satisfiable (lhd (lmap grounding_of_state Sts))"
- using sr_ext.sat_limit_iff ground_derive_chain by blast
+ using sr_ext.sat_limit_iff ground_derive_chain deriv by blast
then show ?thesis
- unfolding lhd_lmap_Sts .
-qed
-
-lemma ground_ord_resolve_ground:
- assumes
- CAs_p: "gr.ord_resolve CAs DA AAs As E" and
- ground_cas: "is_ground_cls_list CAs" and
- ground_da: "is_ground_cls DA"
- shows "is_ground_cls E"
-proof -
- have a1: "atms_of E \<subseteq> (\<Union>CA \<in> set CAs. atms_of CA) \<union> atms_of DA"
- using gr.ord_resolve_atms_of_concl_subset[of CAs DA _ _ E] CAs_p by auto
- {
- fix L :: "'a literal"
- assume "L \<in># E"
- then have "atm_of L \<in> atms_of E"
- by (meson atm_of_lit_in_atms_of)
- then have "is_ground_atm (atm_of L)"
- using a1 ground_cas ground_da is_ground_cls_imp_is_ground_atm is_ground_cls_list_def
- by auto
- }
- then show ?thesis
- unfolding is_ground_cls_def is_ground_lit_def by simp
+ using chain_not_lnull deriv by fastforce
qed
theorem RP_saturated_if_fair:
assumes
+ deriv: "chain (\<leadsto>) Sts" and
fair: "fair_state_seq Sts" and
empty_Q0: "Q_of_state (lhd Sts) = {}"
shows "sr.saturated_upto (Liminf_llist (lmap grounding_of_state Sts))"
proof -
define Gs :: "'a clause set llist" where
ns: "Gs = lmap grounding_of_state Sts"
let ?N = "\<lambda>i. grounding_of_state (lnth Sts i)"
let ?Ns = "\<lambda>i. N_of_state (lnth Sts i)"
let ?Ps = "\<lambda>i. P_of_state (lnth Sts i)"
let ?Qs = "\<lambda>i. Q_of_state (lnth Sts i)"
have ground_ns_in_ground_limit_st:
"Liminf_llist Gs - sr.Rf (Liminf_llist Gs) \<subseteq> grounding_of_clss (Q_of_state (Liminf_state Sts))"
using fair deriv fair_imp_Liminf_minus_Rf_subset_ground_Liminf_state ns by blast
have derivns: "chain sr_ext.derive Gs"
using ground_derive_chain deriv ns by auto
{
fix \<gamma> :: "'a inference"
assume \<gamma>_p: "\<gamma> \<in> gr.ord_\<Gamma>"
let ?CC = "side_prems_of \<gamma>"
let ?DA = "main_prem_of \<gamma>"
let ?E = "concl_of \<gamma>"
assume a: "set_mset ?CC \<union> {?DA}
\<subseteq> Liminf_llist (lmap grounding_of_state Sts)
- sr.Rf (Liminf_llist (lmap grounding_of_state Sts))"
have ground_ground_Liminf: "is_ground_clss (Liminf_llist (lmap grounding_of_state Sts))"
using Liminf_grounding_of_state_ground unfolding is_ground_clss_def by auto
have ground_cc: "is_ground_clss (set_mset ?CC)"
using a ground_ground_Liminf is_ground_clss_def by auto
have ground_da: "is_ground_cls ?DA"
using a grounding_ground singletonI ground_ground_Liminf
by (simp add: Liminf_grounding_of_state_ground)
from \<gamma>_p obtain CAs AAs As where
CAs_p: "gr.ord_resolve CAs ?DA AAs As ?E \<and> mset CAs = ?CC"
unfolding gr.ord_\<Gamma>_def by auto
have DA_CAs_in_ground_Liminf:
"{?DA} \<union> set CAs \<subseteq> grounding_of_clss (Q_of_state (Liminf_state Sts))"
using a CAs_p fair unfolding fair_state_seq_def
by (metis (no_types, lifting) Un_empty_left ground_ns_in_ground_limit_st a ns set_mset_mset
subset_trans sup_commute)
then have ground_cas: "is_ground_cls_list CAs"
using CAs_p unfolding is_ground_cls_list_def by auto
- then have ground_e: "is_ground_cls ?E"
- using ground_ord_resolve_ground CAs_p ground_da by auto
-
- have "\<exists>AAs As \<sigma>. ord_resolve (S_M S (Q_of_state (Liminf_state Sts))) CAs ?DA AAs As \<sigma> ?E"
- using CAs_p[THEN conjunct1]
- proof (cases rule: gr.ord_resolve.cases)
- case (ord_resolve n Cs D)
- note DA = this(1) and e = this(2) and cas_len = this(3) and cs_len = this(4) and
- aas_len = this(5) and as_len = this(6) and nz = this(7) and cas = this(8) and
- aas_not_empt = this(9) and as_aas = this(10) and eligibility = this(11) and
- str_max = this(12) and sel_empt = this(13)
-
- have len_aas_len_as: "length AAs = length As"
- using aas_len as_len by auto
-
- from as_aas have "\<forall>i<n. \<forall>A \<in># add_mset (As ! i) (AAs ! i). A = As ! i"
- using ord_resolve by simp
- then have "\<forall>i < n. card (set_mset (add_mset (As ! i) (AAs ! i))) \<le> Suc 0"
- using all_the_same by metis
- then have "\<forall>i < length AAs. card (set_mset (add_mset (As ! i) (AAs ! i))) \<le> Suc 0"
- using aas_len by auto
- then have "\<forall>AA \<in> set (map2 add_mset As AAs). card (set_mset AA) \<le> Suc 0"
- using set_map2_ex[of AAs As add_mset, OF len_aas_len_as] by auto
- then have "is_unifiers id_subst (set_mset ` set (map2 add_mset As AAs))"
- unfolding is_unifiers_def is_unifier_def by auto
- moreover have "finite (set_mset ` set (map2 add_mset As AAs))"
- by auto
- moreover have "\<forall>AA \<in> set_mset ` set (map2 add_mset As AAs). finite AA"
- by auto
- ultimately obtain \<sigma> where
- \<sigma>_p: "Some \<sigma> = mgu (set_mset ` set (map2 add_mset As AAs))"
- using mgu_complete by metis
-
- have ground_elig: "gr.eligible As (D + negs (mset As))"
- using ord_resolve by simp
- have ground_cs: "\<forall>i < n. is_ground_cls (Cs ! i)"
- using ord_resolve(8) ord_resolve(3,4) ground_cas
- using ground_subclauses[of CAs Cs AAs] unfolding is_ground_cls_list_def by auto
- have ground_set_as: "is_ground_atms (set As)"
- using ord_resolve(1) ground_da
- by (metis atms_of_negs is_ground_cls_union set_mset_mset
- is_ground_cls_is_ground_atms_atms_of)
- then have ground_mset_as: "is_ground_atm_mset (mset As)"
- unfolding is_ground_atm_mset_def is_ground_atms_def by auto
- have ground_as: "is_ground_atm_list As"
- using ground_set_as is_ground_atm_list_def is_ground_atms_def by auto
- have ground_d: "is_ground_cls D"
- using ground_da ord_resolve by simp
-
- from as_len nz have "atms_of D \<union> set As \<noteq> {}" "finite (atms_of D \<union> set As)"
- by auto
- then have "Max (atms_of D \<union> set As) \<in> atms_of D \<union> set As"
- using Max_in by metis
- then have is_ground_Max: "is_ground_atm (Max (atms_of D \<union> set As))"
- using ground_d ground_mset_as is_ground_cls_imp_is_ground_atm
- unfolding is_ground_atm_mset_def by auto
- then have Max\<sigma>_is_Max: "\<forall>\<sigma>. Max (atms_of D \<union> set As) \<cdot>a \<sigma> = Max (atms_of D \<union> set As)"
- by auto
-
- have ann1: "maximal_wrt (Max (atms_of D \<union> set As)) (D + negs (mset As))"
- unfolding maximal_wrt_def
- by clarsimp (metis Max_less_iff UnCI \<open>atms_of D \<union> set As \<noteq> {}\<close>
- \<open>finite (atms_of D \<union> set As)\<close> ground_d ground_set_as infinite_growing is_ground_Max
- is_ground_atms_def is_ground_cls_imp_is_ground_atm less_atm_ground)
-
- from ground_elig have ann2:
- "Max (atms_of D \<union> set As) \<cdot>a \<sigma> = Max (atms_of D \<union> set As)"
- "D \<cdot> \<sigma> + negs (mset As \<cdot>am \<sigma>) = D + negs (mset As)"
- using is_ground_Max ground_mset_as ground_d by auto
-
- from ground_elig have fo_elig:
- "eligible (S_M S (Q_of_state (Liminf_state Sts))) \<sigma> As (D + negs (mset As))"
- unfolding gr.eligible.simps eligible.simps gr.maximal_wrt_def using ann1 ann2
- by (auto simp: S_Q_def)
-
- have l: "\<forall>i < n. gr.strictly_maximal_wrt (As ! i) (Cs ! i)"
- using ord_resolve by simp
- then have "\<forall>i < n. strictly_maximal_wrt (As ! i) (Cs ! i)"
- unfolding gr.strictly_maximal_wrt_def strictly_maximal_wrt_def
- using ground_as[unfolded is_ground_atm_list_def] ground_cs as_len less_atm_ground
- by clarsimp (fastforce simp: is_ground_cls_as_atms)+
-
- then have ll: "\<forall>i < n. strictly_maximal_wrt (As ! i \<cdot>a \<sigma>) (Cs ! i \<cdot> \<sigma>)"
- by (simp add: ground_as ground_cs as_len)
-
- have m: "\<forall>i < n. S_Q (CAs ! i) = {#}"
- using ord_resolve by simp
-
- have ground_e: "is_ground_cls (\<Union># (mset Cs) + D)"
- using ground_d ground_cs ground_e e by simp
- show ?thesis
- using ord_resolve.intros
- [OF cas_len cs_len aas_len as_len nz cas aas_not_empt \<sigma>_p fo_elig ll] m DA e ground_e
- unfolding S_Q_def by auto
- qed
- then obtain AAs As \<sigma> where
- \<sigma>_p: "ord_resolve (S_M S (Q_of_state (Liminf_state Sts))) CAs ?DA AAs As \<sigma> ?E"
+ have "\<exists>\<sigma>. ord_resolve S_Q CAs ?DA AAs As \<sigma> ?E"
+ by (rule ground_ord_resolve_imp_ord_resolve[OF ground_da ground_cas
+ gr.ground_resolution_with_selection_axioms CAs_p[THEN conjunct1]])
+ then obtain \<sigma> where
+ \<sigma>_p: "ord_resolve S_Q CAs ?DA AAs As \<sigma> ?E"
by auto
then obtain \<eta>s' \<eta>' \<eta>2' CAs' DA' AAs' As' \<tau>' E' where s_p:
"is_ground_subst \<eta>'"
"is_ground_subst_list \<eta>s'"
"is_ground_subst \<eta>2'"
"ord_resolve_rename S CAs' DA' AAs' As' \<tau>' E'"
"CAs' \<cdot>\<cdot>cl \<eta>s' = CAs"
"DA' \<cdot> \<eta>' = ?DA"
"E' \<cdot> \<eta>2' = ?E"
"{DA'} \<union> set CAs' \<subseteq> Q_of_state (Liminf_state Sts)"
using ord_resolve_rename_lifting[OF sel_stable, of "Q_of_state (Liminf_state Sts)" CAs ?DA]
- \<sigma>_p selection_axioms DA_CAs_in_ground_Liminf by metis
+ \<sigma>_p[unfolded S_Q_def] selection_axioms DA_CAs_in_ground_Liminf by metis
from this(8) have "\<exists>j. enat j < llength Sts \<and> (set CAs' \<union> {DA'} \<subseteq> ?Qs j)"
unfolding Liminf_llist_def
using subseteq_Liminf_state_eventually_always[of "{DA'} \<union> set CAs'"] by auto
then obtain j where
j_p: "is_least (\<lambda>j. enat j < llength Sts \<and> set CAs' \<union> {DA'} \<subseteq> ?Qs j) j"
using least_exists[of "\<lambda>j. enat j < llength Sts \<and> set CAs' \<union> {DA'} \<subseteq> ?Qs j"] by force
then have j_p': "enat j < llength Sts" "set CAs' \<union> {DA'} \<subseteq> ?Qs j"
unfolding is_least_def by auto
then have jn0: "j \<noteq> 0"
using empty_Q0 by (metis bot_eq_sup_iff gr_implies_not_zero insert_not_empty llength_lnull
lnth_0_conv_lhd sup.orderE)
then have j_adds_CAs': "\<not> set CAs' \<union> {DA'} \<subseteq> ?Qs (j - 1)" "set CAs' \<union> {DA'} \<subseteq> ?Qs j"
using j_p unfolding is_least_def
apply (metis (no_types) One_nat_def Suc_diff_Suc Suc_ile_eq diff_diff_cancel diff_zero
less_imp_le less_one neq0_conv zero_less_diff)
using j_p'(2) by blast
have "lnth Sts (j - 1) \<leadsto> lnth Sts j"
using j_p'(1) jn0 deriv chain_lnth_rel[of _ _ "j - 1"] by force
then obtain C' where C'_p:
"?Ns (j - 1) = {}"
"?Ps (j - 1) = ?Ps j \<union> {C'}"
"?Qs j = ?Qs (j - 1) \<union> {C'}"
"?Ns j = concls_of (ord_FO_resolution.inferences_between (?Qs (j - 1)) C')"
"C' \<in> set CAs' \<union> {DA'}"
"C' \<notin> ?Qs (j - 1)"
using j_adds_CAs' by (induction rule: RP.cases) auto
have "E' \<in> ?Ns j"
proof -
have "E' \<in> concls_of (ord_FO_resolution.inferences_between (Q_of_state (lnth Sts (j - 1))) C')"
unfolding infer_from_def ord_FO_\<Gamma>_def inference_system.inferences_between_def
apply (rule_tac x = "Infer (mset CAs') DA' E'" in image_eqI)
subgoal by auto
subgoal
unfolding infer_from_def
- by (rule ord_resolve_rename.cases[OF s_p(4)])
- (use s_p(4) C'_p(3) C'_p(5) j_p'(2) in force)
+ by (rule ord_resolve_rename.cases[OF s_p(4)]) (use s_p(4) C'_p(3,5) j_p'(2) in force)
done
then show ?thesis
using C'_p(4) by auto
qed
then have "E' \<in> clss_of_state (lnth Sts j)"
using j_p' by auto
then have "?E \<in> grounding_of_state (lnth Sts j)"
using s_p(7) s_p(3) unfolding grounding_of_clss_def grounding_of_cls_def by force
then have "\<gamma> \<in> sr.Ri (grounding_of_state (lnth Sts j))"
using sr.Ri_effective \<gamma>_p by auto
then have "\<gamma> \<in> sr_ext_Ri (?N j)"
unfolding sr_ext_Ri_def by auto
then have "\<gamma> \<in> sr_ext_Ri (Sup_llist (lmap grounding_of_state Sts))"
using j_p' contra_subsetD llength_lmap lnth_lmap lnth_subset_Sup_llist sr_ext.Ri_mono by smt
then have "\<gamma> \<in> sr_ext_Ri (Liminf_llist (lmap grounding_of_state Sts))"
using sr_ext.Ri_limit_Sup[of Gs] derivns ns by blast
}
then have "sr_ext.saturated_upto (Liminf_llist (lmap grounding_of_state Sts))"
unfolding sr_ext.saturated_upto_def sr_ext.inferences_from_def infer_from_def sr_ext_Ri_def
by auto
then show ?thesis
using gd_ord_\<Gamma>_ngd_ord_\<Gamma> sr.redundancy_criterion_axioms
redundancy_criterion_standard_extension_saturated_upto_iff[of gr.ord_\<Gamma>]
unfolding sr_ext_Ri_def by auto
qed
corollary RP_complete_if_fair:
assumes
+ deriv: "chain (\<leadsto>) Sts" and
fair: "fair_state_seq Sts" and
empty_Q0: "Q_of_state (lhd Sts) = {}" and
unsat: "\<not> satisfiable (grounding_of_state (lhd Sts))"
shows "{#} \<in> Q_of_state (Liminf_state Sts)"
proof -
have "\<not> satisfiable (Liminf_llist (lmap grounding_of_state Sts))"
- unfolding sr_ext.sat_limit_iff[OF ground_derive_chain]
- by (rule unsat[folded lhd_lmap_Sts[of grounding_of_state]])
+ using unsat sr_ext.sat_limit_iff[OF ground_derive_chain] chain_not_lnull deriv by fastforce
moreover have "sr.saturated_upto (Liminf_llist (lmap grounding_of_state Sts))"
- by (rule RP_saturated_if_fair[OF fair empty_Q0, simplified])
+ by (rule RP_saturated_if_fair[OF deriv fair empty_Q0, simplified])
ultimately have "{#} \<in> Liminf_llist (lmap grounding_of_state Sts)"
using sr.saturated_upto_complete_if by auto
then show ?thesis
- using empty_clause_in_Q_of_Liminf_state fair by auto
+ using empty_clause_in_Q_of_Liminf_state[OF deriv fair] by auto
qed
end
end
end
diff --git a/thys/Ordered_Resolution_Prover/Herbrand_Interpretation.thy b/thys/Ordered_Resolution_Prover/Herbrand_Interpretation.thy
--- a/thys/Ordered_Resolution_Prover/Herbrand_Interpretation.thy
+++ b/thys/Ordered_Resolution_Prover/Herbrand_Interpretation.thy
@@ -1,119 +1,143 @@
(* Title: Herbrand Interpretation
Author: Jasmin Blanchette <j.c.blanchette at vu.nl>, 2014, 2017
Author: Dmitriy Traytel <traytel at inf.ethz.ch>, 2014
Maintainer: Jasmin Blanchette <j.c.blanchette at vu.nl>
*)
section \<open>Herbrand Interpretation\<close>
theory Herbrand_Interpretation
imports Clausal_Logic
begin
text \<open>
The material formalized here corresponds roughly to Section 2.2 (``Herbrand
Interpretations'') of Bachmair and Ganzinger, excluding the formula and term
syntax.
A Herbrand interpretation is a set of ground atoms that are to be considered true.
\<close>
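(* Editorial illustration, not part of the original theory: a tiny worked example of the
   notions defined below, assuming atoms of type nat. The interpretation {0} satisfies the
   clause {#Pos 0, Neg 1#}, because its literal Pos 0 is true in {0}. Once the definitions
   below are in scope, the corresponding statement

     lemma "{0::nat} \<Turnstile> {#Pos 0, Neg 1#}"
       by simp

   should go through, whereas the empty clause {#} is false in every interpretation. *)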
type_synonym 'a interp = "'a set"
definition true_lit :: "'a interp \<Rightarrow> 'a literal \<Rightarrow> bool" (infix "\<Turnstile>l" 50) where
"I \<Turnstile>l L \<longleftrightarrow> (if is_pos L then (\<lambda>P. P) else Not) (atm_of L \<in> I)"
lemma true_lit_simps[simp]:
"I \<Turnstile>l Pos A \<longleftrightarrow> A \<in> I"
"I \<Turnstile>l Neg A \<longleftrightarrow> A \<notin> I"
unfolding true_lit_def by auto
lemma true_lit_iff[iff]: "I \<Turnstile>l L \<longleftrightarrow> (\<exists>A. L = Pos A \<and> A \<in> I \<or> L = Neg A \<and> A \<notin> I)"
by (cases L) simp+
definition true_cls :: "'a interp \<Rightarrow> 'a clause \<Rightarrow> bool" (infix "\<Turnstile>" 50) where
"I \<Turnstile> C \<longleftrightarrow> (\<exists>L \<in># C. I \<Turnstile>l L)"
lemma true_cls_empty[iff]: "\<not> I \<Turnstile> {#}"
unfolding true_cls_def by simp
lemma true_cls_singleton[iff]: "I \<Turnstile> {#L#} \<longleftrightarrow> I \<Turnstile>l L"
unfolding true_cls_def by simp
lemma true_cls_add_mset[iff]: "I \<Turnstile> add_mset C D \<longleftrightarrow> I \<Turnstile>l C \<or> I \<Turnstile> D"
unfolding true_cls_def by auto
lemma true_cls_union[iff]: "I \<Turnstile> C + D \<longleftrightarrow> I \<Turnstile> C \<or> I \<Turnstile> D"
unfolding true_cls_def by auto
lemma true_cls_mono: "set_mset C \<subseteq> set_mset D \<Longrightarrow> I \<Turnstile> C \<Longrightarrow> I \<Turnstile> D"
unfolding true_cls_def subset_eq by metis
lemma
assumes "I \<subseteq> J"
shows
false_to_true_imp_ex_pos: "\<not> I \<Turnstile> C \<Longrightarrow> J \<Turnstile> C \<Longrightarrow> \<exists>A \<in> J. Pos A \<in># C" and
true_to_false_imp_ex_neg: "I \<Turnstile> C \<Longrightarrow> \<not> J \<Turnstile> C \<Longrightarrow> \<exists>A \<in> J. Neg A \<in># C"
using assms unfolding subset_iff true_cls_def by (metis literal.collapse true_lit_simps)+
lemma true_cls_replicate_mset[iff]: "I \<Turnstile> replicate_mset n L \<longleftrightarrow> n \<noteq> 0 \<and> I \<Turnstile>l L"
by (simp add: true_cls_def)
lemma pos_literal_in_imp_true_cls[intro]: "Pos A \<in># C \<Longrightarrow> A \<in> I \<Longrightarrow> I \<Turnstile> C"
using true_cls_def by blast
lemma neg_literal_notin_imp_true_cls[intro]: "Neg A \<in># C \<Longrightarrow> A \<notin> I \<Longrightarrow> I \<Turnstile> C"
using true_cls_def by blast
lemma pos_neg_in_imp_true: "Pos A \<in># C \<Longrightarrow> Neg A \<in># C \<Longrightarrow> I \<Turnstile> C"
using true_cls_def by blast
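text \<open>
As a small illustrative check (added here for exposition, not part of the original development):
a clause is true in an interpretation as soon as one of its literals is.
\<close>
lemma "({a} :: 'a interp) \<Turnstile> {#Pos a, Neg b#}"
  by simp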
definition true_clss :: "'a interp \<Rightarrow> 'a clause set \<Rightarrow> bool" (infix "\<Turnstile>s" 50) where
"I \<Turnstile>s CC \<longleftrightarrow> (\<forall>C \<in> CC. I \<Turnstile> C)"
lemma true_clss_empty[iff]: "I \<Turnstile>s {}"
by (simp add: true_clss_def)
lemma true_clss_singleton[iff]: "I \<Turnstile>s {C} \<longleftrightarrow> I \<Turnstile> C"
unfolding true_clss_def by blast
lemma true_clss_insert[iff]: "I \<Turnstile>s insert C DD \<longleftrightarrow> I \<Turnstile> C \<and> I \<Turnstile>s DD"
unfolding true_clss_def by blast
lemma true_clss_union[iff]: "I \<Turnstile>s CC \<union> DD \<longleftrightarrow> I \<Turnstile>s CC \<and> I \<Turnstile>s DD"
unfolding true_clss_def by blast
+lemma true_clss_Union[iff]: "I \<Turnstile>s \<Union> CCC \<longleftrightarrow> (\<forall>CC \<in> CCC. I \<Turnstile>s CC)"
+ unfolding true_clss_def by simp
+
lemma true_clss_mono: "DD \<subseteq> CC \<Longrightarrow> I \<Turnstile>s CC \<Longrightarrow> I \<Turnstile>s DD"
by (simp add: subsetD true_clss_def)
+lemma true_clss_mono_strong: "(\<forall>D \<in> DD. \<exists>C \<in> CC. C \<subseteq># D) \<Longrightarrow> I \<Turnstile>s CC \<Longrightarrow> I \<Turnstile>s DD"
+ unfolding true_clss_def true_cls_def true_lit_def by (meson mset_subset_eqD)
+
+lemma true_clss_subclause: "C \<subseteq># D \<Longrightarrow> I \<Turnstile>s {C} \<Longrightarrow> I \<Turnstile>s {D}"
+ by (rule true_clss_mono_strong[of _ "{C}"]) auto
+
abbreviation satisfiable :: "'a clause set \<Rightarrow> bool" where
"satisfiable CC \<equiv> \<exists>I. I \<Turnstile>s CC"
+lemma satisfiable_antimono: "CC \<subseteq> DD \<Longrightarrow> satisfiable DD \<Longrightarrow> satisfiable CC"
+ using true_clss_mono by blast
+
+lemma unsatisfiable_mono: "CC \<subseteq> DD \<Longrightarrow> \<not> satisfiable CC \<Longrightarrow> \<not> satisfiable DD"
+ using satisfiable_antimono by blast
+
definition true_cls_mset :: "'a interp \<Rightarrow> 'a clause multiset \<Rightarrow> bool" (infix "\<Turnstile>m" 50) where
"I \<Turnstile>m CC \<longleftrightarrow> (\<forall>C \<in># CC. I \<Turnstile> C)"
lemma true_cls_mset_empty[iff]: "I \<Turnstile>m {#}"
unfolding true_cls_mset_def by auto
lemma true_cls_mset_singleton[iff]: "I \<Turnstile>m {#C#} \<longleftrightarrow> I \<Turnstile> C"
by (simp add: true_cls_mset_def)
lemma true_cls_mset_union[iff]: "I \<Turnstile>m CC + DD \<longleftrightarrow> I \<Turnstile>m CC \<and> I \<Turnstile>m DD"
unfolding true_cls_mset_def by auto
+lemma true_cls_mset_Union[iff]: "I \<Turnstile>m \<Union># CCC \<longleftrightarrow> (\<forall>CC \<in># CCC. I \<Turnstile>m CC)"
+ unfolding true_cls_mset_def by simp
+
lemma true_cls_mset_add_mset[iff]: "I \<Turnstile>m add_mset C CC \<longleftrightarrow> I \<Turnstile> C \<and> I \<Turnstile>m CC"
unfolding true_cls_mset_def by auto
lemma true_cls_mset_image_mset[iff]: "I \<Turnstile>m image_mset f A \<longleftrightarrow> (\<forall>x \<in># A. I \<Turnstile> f x)"
unfolding true_cls_mset_def by auto
lemma true_cls_mset_mono: "set_mset DD \<subseteq> set_mset CC \<Longrightarrow> I \<Turnstile>m CC \<Longrightarrow> I \<Turnstile>m DD"
unfolding true_cls_mset_def subset_iff by auto
+lemma true_cls_mset_mono_strong: "(\<forall>D \<in># DD. \<exists>C \<in># CC. C \<subseteq># D) \<Longrightarrow> I \<Turnstile>m CC \<Longrightarrow> I \<Turnstile>m DD"
+ unfolding true_cls_mset_def true_cls_def true_lit_def by (meson mset_subset_eqD)
+
lemma true_clss_set_mset[iff]: "I \<Turnstile>s set_mset CC \<longleftrightarrow> I \<Turnstile>m CC"
unfolding true_clss_def true_cls_mset_def by auto
+lemma true_clss_mset_set[simp]: "finite CC \<Longrightarrow> I \<Turnstile>m mset_set CC \<longleftrightarrow> I \<Turnstile>s CC"
+ unfolding true_clss_def true_cls_mset_def by auto
+
lemma true_cls_mset_true_cls: "I \<Turnstile>m CC \<Longrightarrow> C \<in># CC \<Longrightarrow> I \<Turnstile> C"
using true_cls_mset_def by auto
end
diff --git a/thys/Ordered_Resolution_Prover/Lazy_List_Liminf.thy b/thys/Ordered_Resolution_Prover/Lazy_List_Liminf.thy
--- a/thys/Ordered_Resolution_Prover/Lazy_List_Liminf.thy
+++ b/thys/Ordered_Resolution_Prover/Lazy_List_Liminf.thy
@@ -1,147 +1,283 @@
(* Title: Liminf of Lazy Lists
Author: Jasmin Blanchette <j.c.blanchette at vu.nl>, 2014, 2017
Author: Dmitriy Traytel <traytel at inf.ethz.ch>, 2014
Maintainer: Jasmin Blanchette <j.c.blanchette at vu.nl>
*)
section \<open>Liminf of Lazy Lists\<close>
theory Lazy_List_Liminf
imports Coinductive.Coinductive_List
begin
text \<open>
Lazy lists, as defined in the \emph{Archive of Formal Proofs}, provide finite and infinite lists in
one type, defined coinductively. The present theory introduces the concept of the union of all
elements of a lazy list of sets and the limit of such a lazy list. The definitions are stated more
generally in terms of lattices. The basis for this theory is Section 4.1 (``Theorem Proving
Processes'') of Bachmair and Ganzinger's chapter.
\<close>
definition Sup_llist :: "'a set llist \<Rightarrow> 'a set" where
"Sup_llist Xs = (\<Union>i \<in> {i. enat i < llength Xs}. lnth Xs i)"
-lemma lnth_subset_Sup_llist: "enat i < llength xs \<Longrightarrow> lnth xs i \<subseteq> Sup_llist xs"
+lemma lnth_subset_Sup_llist: "enat i < llength Xs \<Longrightarrow> lnth Xs i \<subseteq> Sup_llist Xs"
+ unfolding Sup_llist_def by auto
+
+lemma Sup_llist_imp_exists_index: "x \<in> Sup_llist Xs \<Longrightarrow> \<exists>i. enat i < llength Xs \<and> x \<in> lnth Xs i"
+ unfolding Sup_llist_def by auto
+
+lemma exists_index_imp_Sup_llist: "enat i < llength Xs \<Longrightarrow> x \<in> lnth Xs i \<Longrightarrow> x \<in> Sup_llist Xs"
unfolding Sup_llist_def by auto
lemma Sup_llist_LNil[simp]: "Sup_llist LNil = {}"
unfolding Sup_llist_def by auto
lemma Sup_llist_LCons[simp]: "Sup_llist (LCons X Xs) = X \<union> Sup_llist Xs"
unfolding Sup_llist_def
proof (intro subset_antisym subsetI)
fix x
assume "x \<in> (\<Union>i \<in> {i. enat i < llength (LCons X Xs)}. lnth (LCons X Xs) i)"
then obtain i where len: "enat i < llength (LCons X Xs)" and nth: "x \<in> lnth (LCons X Xs) i"
by blast
from nth have "x \<in> X \<or> i > 0 \<and> x \<in> lnth Xs (i - 1)"
by (metis lnth_LCons' neq0_conv)
then have "x \<in> X \<or> (\<exists>i. enat i < llength Xs \<and> x \<in> lnth Xs i)"
by (metis len Suc_pred' eSuc_enat iless_Suc_eq less_irrefl llength_LCons not_less order_trans)
then show "x \<in> X \<union> (\<Union>i \<in> {i. enat i < llength Xs}. lnth Xs i)"
by blast
qed ((auto)[], metis i0_lb lnth_0 zero_enat_def, metis Suc_ile_eq lnth_Suc_LCons)
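text \<open>
An illustrative consequence of the two equations above (added for exposition, not part of the
original development): for a two-element lazy list, the supremum is simply the union of its
elements.
\<close>
lemma "Sup_llist (LCons X (LCons Y LNil)) = X \<union> Y"
  by simp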
lemma lhd_subset_Sup_llist: "\<not> lnull Xs \<Longrightarrow> lhd Xs \<subseteq> Sup_llist Xs"
by (cases Xs) simp_all
definition Sup_upto_llist :: "'a set llist \<Rightarrow> nat \<Rightarrow> 'a set" where
"Sup_upto_llist Xs j = (\<Union>i \<in> {i. enat i < llength Xs \<and> i \<le> j}. lnth Xs i)"
+lemma Sup_upto_llist_0[simp]: "Sup_upto_llist Xs 0 = (if 0 < llength Xs then lnth Xs 0 else {})"
+ unfolding Sup_upto_llist_def image_def by (simp add: enat_0)
+
+lemma Sup_upto_llist_Suc[simp]:
+ "Sup_upto_llist Xs (Suc j) =
+ Sup_upto_llist Xs j \<union> (if enat (Suc j) < llength Xs then lnth Xs (Suc j) else {})"
+ unfolding Sup_upto_llist_def image_def by (auto intro: le_SucI elim: le_SucE)
+
lemma Sup_upto_llist_mono: "j \<le> k \<Longrightarrow> Sup_upto_llist Xs j \<subseteq> Sup_upto_llist Xs k"
unfolding Sup_upto_llist_def by auto
lemma Sup_upto_llist_subset_Sup_llist: "j \<le> k \<Longrightarrow> Sup_upto_llist Xs j \<subseteq> Sup_llist Xs"
unfolding Sup_llist_def Sup_upto_llist_def by auto
lemma elem_Sup_llist_imp_Sup_upto_llist:
"x \<in> Sup_llist Xs \<Longrightarrow> \<exists>j < llength Xs. x \<in> Sup_upto_llist Xs j"
unfolding Sup_llist_def Sup_upto_llist_def by blast
+lemma lnth_subset_Sup_upto_llist: "enat j < llength Xs \<Longrightarrow> lnth Xs j \<subseteq> Sup_upto_llist Xs j"
+ unfolding Sup_upto_llist_def by auto
+
lemma finite_Sup_llist_imp_Sup_upto_llist:
assumes "finite X" and "X \<subseteq> Sup_llist Xs"
shows "\<exists>k. X \<subseteq> Sup_upto_llist Xs k"
using assms
proof induct
case (insert x X)
then have x: "x \<in> Sup_llist Xs" and X: "X \<subseteq> Sup_llist Xs"
by simp+
from x obtain k where k: "x \<in> Sup_upto_llist Xs k"
using elem_Sup_llist_imp_Sup_upto_llist by fast
from X obtain k' where k': "X \<subseteq> Sup_upto_llist Xs k'"
using insert.hyps(3) by fast
have "insert x X \<subseteq> Sup_upto_llist Xs (max k k')"
using k k'
by (metis insert_absorb insert_subset Sup_upto_llist_mono max.cobounded2 max.commute
order.trans)
then show ?case
by fast
qed simp
definition Liminf_llist :: "'a set llist \<Rightarrow> 'a set" where
"Liminf_llist Xs =
(\<Union>i \<in> {i. enat i < llength Xs}. \<Inter>j \<in> {j. i \<le> j \<and> enat j < llength Xs}. lnth Xs j)"
-lemma Liminf_llist_subset_Sup_llist: "Liminf_llist Xs \<subseteq> Sup_llist Xs"
- unfolding Liminf_llist_def Sup_llist_def by fast
-
lemma Liminf_llist_LNil[simp]: "Liminf_llist LNil = {}"
unfolding Liminf_llist_def by simp
lemma Liminf_llist_LCons:
"Liminf_llist (LCons X Xs) = (if lnull Xs then X else Liminf_llist Xs)" (is "?lhs = ?rhs")
proof (cases "lnull Xs")
case nnull: False
show ?thesis
proof
{
fix x
assume "\<exists>i. enat i \<le> llength Xs
\<and> (\<forall>j. i \<le> j \<and> enat j \<le> llength Xs \<longrightarrow> x \<in> lnth (LCons X Xs) j)"
then have "\<exists>i. enat (Suc i) \<le> llength Xs
\<and> (\<forall>j. Suc i \<le> j \<and> enat j \<le> llength Xs \<longrightarrow> x \<in> lnth (LCons X Xs) j)"
by (cases "llength Xs",
metis not_lnull_conv[THEN iffD1, OF nnull] Suc_le_D eSuc_enat eSuc_ile_mono
llength_LCons not_less_eq_eq zero_enat_def zero_le,
metis Suc_leD enat_ord_code(3))
then have "\<exists>i. enat i < llength Xs \<and> (\<forall>j. i \<le> j \<and> enat j < llength Xs \<longrightarrow> x \<in> lnth Xs j)"
by (metis Suc_ile_eq Suc_n_not_le_n lift_Suc_mono_le lnth_Suc_LCons nat_le_linear)
}
then show "?lhs \<subseteq> ?rhs"
by (simp add: Liminf_llist_def nnull) (rule subsetI, simp)
{
fix x
assume "\<exists>i. enat i < llength Xs \<and> (\<forall>j. i \<le> j \<and> enat j < llength Xs \<longrightarrow> x \<in> lnth Xs j)"
then obtain i where
i: "enat i < llength Xs" and
j: "\<forall>j. i \<le> j \<and> enat j < llength Xs \<longrightarrow> x \<in> lnth Xs j"
by blast
have "enat (Suc i) \<le> llength Xs"
using i by (simp add: Suc_ile_eq)
moreover have "\<forall>j. Suc i \<le> j \<and> enat j \<le> llength Xs \<longrightarrow> x \<in> lnth (LCons X Xs) j"
using Suc_ile_eq Suc_le_D j by force
ultimately have "\<exists>i. enat i \<le> llength Xs \<and> (\<forall>j. i \<le> j \<and> enat j \<le> llength Xs \<longrightarrow>
x \<in> lnth (LCons X Xs) j)"
by blast
}
then show "?rhs \<subseteq> ?lhs"
by (simp add: Liminf_llist_def nnull) (rule subsetI, simp)
qed
qed (simp add: Liminf_llist_def enat_0_iff(1))
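text \<open>
For illustration (added for exposition, not part of the original development), the limit of a
two-element lazy list is its last element, since the first element is eventually "forgotten":
\<close>
lemma "Liminf_llist (LCons X (LCons Y LNil)) = Y"
  by (simp add: Liminf_llist_LCons)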
lemma lfinite_Liminf_llist: "lfinite Xs \<Longrightarrow> Liminf_llist Xs = (if lnull Xs then {} else llast Xs)"
proof (induction rule: lfinite_induct)
case (LCons xs)
then obtain y ys where
xs: "xs = LCons y ys"
by (meson not_lnull_conv)
show ?case
unfolding xs by (simp add: Liminf_llist_LCons LCons.IH[unfolded xs, simplified] llast_LCons)
qed (simp add: Liminf_llist_def)
lemma Liminf_llist_ltl: "\<not> lnull (ltl Xs) \<Longrightarrow> Liminf_llist Xs = Liminf_llist (ltl Xs)"
by (metis Liminf_llist_LCons lhd_LCons_ltl lnull_ltlI)
+lemma Liminf_llist_subset_Sup_llist: "Liminf_llist Xs \<subseteq> Sup_llist Xs"
+ unfolding Liminf_llist_def Sup_llist_def by fast
+
+lemma image_Liminf_llist_subset: "f ` Liminf_llist Ns \<subseteq> Liminf_llist (lmap ((`) f) Ns)"
+ unfolding Liminf_llist_def by auto
+
+lemma Liminf_llist_imp_exists_index:
+ "x \<in> Liminf_llist Xs \<Longrightarrow> \<exists>i. enat i < llength Xs \<and> x \<in> lnth Xs i"
+ unfolding Liminf_llist_def by auto
+
+lemma not_Liminf_llist_imp_exists_index:
+ "\<not> lnull Xs \<Longrightarrow> x \<notin> Liminf_llist Xs \<Longrightarrow> enat i < llength Xs \<Longrightarrow>
+ (\<exists>j. i \<le> j \<and> enat j < llength Xs \<and> x \<notin> lnth Xs j)"
+ unfolding Liminf_llist_def by auto
+
+lemma finite_subset_Liminf_llist_imp_exists_index:
+ assumes
+ nnil: "\<not> lnull Xs" and
+ fin: "finite X" and
+ in_lim: "X \<subseteq> Liminf_llist Xs"
+ shows "\<exists>i. enat i < llength Xs \<and> X \<subseteq> \<Inter> (lnth Xs ` {j. i \<le> j \<and> enat j < llength Xs})"
+proof -
+ show ?thesis
+ proof (cases "X = {}")
+ case True
+ then show ?thesis
+ using nnil by (auto intro: exI[of _ 0] simp: zero_enat_def[symmetric])
+ next
+ case nemp: False
+
+ have in_lim':
+ "\<forall>x \<in> X. \<exists>i. enat i < llength Xs \<and> x \<in> \<Inter> (lnth Xs ` {j. i \<le> j \<and> enat j < llength Xs})"
+ using in_lim[unfolded Liminf_llist_def] in_mono by fastforce
+ obtain i_of where
+ i_of_lt: "\<forall>x \<in> X. enat (i_of x) < llength Xs" and
+ in_inter: "\<forall>x \<in> X. x \<in> \<Inter> (lnth Xs ` {j. i_of x \<le> j \<and> enat j < llength Xs})"
+ using bchoice[OF in_lim'] by blast
+
+ define i_max where
+ "i_max = Max (i_of ` X)"
+
+ have "i_max \<in> i_of ` X"
+ by (simp add: fin i_max_def nemp)
+ then obtain x_max where
+ x_max_in: "x_max \<in> X" and
+ i_max_is: "i_max = i_of x_max"
+ unfolding i_max_def by blast
+ have le_i_max: "\<forall>x \<in> X. i_of x \<le> i_max"
+ unfolding i_max_def by (simp add: fin)
+ have "enat i_max < llength Xs"
+ using i_of_lt x_max_in i_max_is by auto
+ moreover have "X \<subseteq> \<Inter> (lnth Xs ` {j. i_max \<le> j \<and> enat j < llength Xs})"
+ proof
+ fix x
+ assume x_in: "x \<in> X"
+ then have x_in_inter: "x \<in> \<Inter> (lnth Xs ` {j. i_of x \<le> j \<and> enat j < llength Xs})"
+ using in_inter by auto
+ moreover have "{j. i_max \<le> j \<and> enat j < llength Xs}
+ \<subseteq> {j. i_of x \<le> j \<and> enat j < llength Xs}"
+ using x_in le_i_max by auto
+ ultimately show "x \<in> \<Inter> (lnth Xs ` {j. i_max \<le> j \<and> enat j < llength Xs})"
+ by auto
+ qed
+ ultimately show ?thesis
+ by auto
+ qed
+qed
+
+lemma Liminf_llist_lmap_image:
+ assumes f_inj: "inj_on f (Sup_llist (lmap g xs))"
+ shows "Liminf_llist (lmap (\<lambda>x. f ` g x) xs) = f ` Liminf_llist (lmap g xs)" (is "?lhs = ?rhs")
+proof
+ show "?lhs \<subseteq> ?rhs"
+ proof
+ fix x
+ assume "x \<in> Liminf_llist (lmap (\<lambda>x. f ` g x) xs)"
+ then obtain i where
+ i_lt: "enat i < llength xs" and
+ x_in_fgj: "\<forall>j. i \<le> j \<longrightarrow> enat j < llength xs \<longrightarrow> x \<in> f ` g (lnth xs j)"
+ unfolding Liminf_llist_def by auto
+
+ have ex_in_gi: "\<exists>y. y \<in> g (lnth xs i) \<and> x = f y"
+ using f_inj i_lt x_in_fgj unfolding inj_on_def Sup_llist_def by auto
+ have "\<exists>y. \<forall>j. i \<le> j \<longrightarrow> enat j < llength xs \<longrightarrow> y \<in> g (lnth xs j) \<and> x = f y"
+ apply (rule exI[of _ "SOME y. y \<in> g (lnth xs i) \<and> x = f y"])
+ using someI_ex[OF ex_in_gi] x_in_fgj f_inj i_lt x_in_fgj unfolding inj_on_def Sup_llist_def
+ by simp (metis (no_types, lifting) imageE)
+ then show "x \<in> f ` Liminf_llist (lmap g xs)"
+ using i_lt unfolding Liminf_llist_def by auto
+ qed
+next
+ show "?rhs \<subseteq> ?lhs"
+ using image_Liminf_llist_subset[of f "lmap g xs", unfolded llist.map_comp] by auto
+qed
+
+lemma Liminf_llist_lmap_union:
+ assumes "\<forall>x \<in> lset xs. \<forall>Y \<in> lset xs. g x \<inter> h Y = {}"
+ shows "Liminf_llist (lmap (\<lambda>x. g x \<union> h x) xs) =
+ Liminf_llist (lmap g xs) \<union> Liminf_llist (lmap h xs)" (is "?lhs = ?rhs")
+proof (intro equalityI subsetI)
+ fix x
+ assume x_in: "x \<in> ?lhs"
+ then obtain i where
+ i_lt: "enat i < llength xs" and
+ j: "\<forall>j. i \<le> j \<and> enat j < llength xs \<longrightarrow> x \<in> g (lnth xs j) \<or> x \<in> h (lnth xs j)"
+ using x_in[unfolded Liminf_llist_def, simplified] by blast
+
+ then have "(\<exists>i'. enat i' < llength xs \<and> (\<forall>j. i' \<le> j \<and> enat j < llength xs \<longrightarrow> x \<in> g (lnth xs j)))
+ \<or> (\<exists>i'. enat i' < llength xs \<and> (\<forall>j. i' \<le> j \<and> enat j < llength xs \<longrightarrow> x \<in> h (lnth xs j)))"
+ using assms[unfolded disjoint_iff_not_equal] by (metis in_lset_conv_lnth)
+ then show "x \<in> ?rhs"
+ unfolding Liminf_llist_def by simp
+next
+ fix x
+ show "x \<in> ?rhs \<Longrightarrow> x \<in> ?lhs"
+ using assms unfolding Liminf_llist_def by auto
+qed
+
+lemma Liminf_set_filter_commute:
+ "Liminf_llist (lmap (\<lambda>X. {x \<in> X. p x}) Xs) = {x \<in> Liminf_llist Xs. p x}"
+ unfolding Liminf_llist_def by force
+
end
diff --git a/thys/Ordered_Resolution_Prover/Ordered_Ground_Resolution.thy b/thys/Ordered_Resolution_Prover/Ordered_Ground_Resolution.thy
--- a/thys/Ordered_Resolution_Prover/Ordered_Ground_Resolution.thy
+++ b/thys/Ordered_Resolution_Prover/Ordered_Ground_Resolution.thy
@@ -1,468 +1,468 @@
(* Title: Ground Ordered Resolution Calculus with Selection
Author: Anders Schlichtkrull <andschl at dtu.dk>, 2016, 2017
Author: Jasmin Blanchette <j.c.blanchette at vu.nl>, 2014, 2017
Author: Dmitriy Traytel <traytel at inf.ethz.ch>, 2014
Maintainer: Anders Schlichtkrull <andschl at dtu.dk>
*)
section \<open>Ground Ordered Resolution Calculus with Selection\<close>
theory Ordered_Ground_Resolution
imports Inference_System Ground_Resolution_Model
begin
text \<open>
Ordered ground resolution with selection is the second inference system studied in Section~3
(``Standard Resolution'') of Bachmair and Ganzinger's chapter.
\<close>
subsection \<open>Inference Rule\<close>
text \<open>
Ordered ground resolution consists of a single rule, called \<open>ord_resolve\<close> below. Like
\<open>unord_resolve\<close>, the rule is sound and counterexample-reducing. In addition, it is reductive.
\<close>
context ground_resolution_with_selection
begin
text \<open>
The following inductive definition corresponds to Figure 2.
\<close>
definition maximal_wrt :: "'a \<Rightarrow> 'a literal multiset \<Rightarrow> bool" where
"maximal_wrt A DA \<longleftrightarrow> DA = {#} \<or> A = Max (atms_of DA)"
definition strictly_maximal_wrt :: "'a \<Rightarrow> 'a literal multiset \<Rightarrow> bool" where
"strictly_maximal_wrt A CA \<longleftrightarrow> (\<forall>B \<in> atms_of CA. B < A)"
inductive eligible :: "'a list \<Rightarrow> 'a clause \<Rightarrow> bool" where
eligible: "(S DA = negs (mset As)) \<or> (S DA = {#} \<and> length As = 1 \<and> maximal_wrt (As ! 0) DA) \<Longrightarrow>
eligible As DA"
lemma "(S DA = negs (mset As) \<or> S DA = {#} \<and> length As = 1 \<and> maximal_wrt (As ! 0) DA) \<longleftrightarrow>
eligible As DA"
using eligible.intros ground_resolution_with_selection.eligible.cases ground_resolution_with_selection_axioms by blast
inductive
ord_resolve :: "'a clause list \<Rightarrow> 'a clause \<Rightarrow> 'a multiset list \<Rightarrow> 'a list \<Rightarrow> 'a clause \<Rightarrow> bool"
where
ord_resolve:
"length CAs = n \<Longrightarrow>
length Cs = n \<Longrightarrow>
length AAs = n \<Longrightarrow>
length As = n \<Longrightarrow>
n \<noteq> 0 \<Longrightarrow>
(\<forall>i < n. CAs ! i = Cs ! i + poss (AAs ! i)) \<Longrightarrow>
(\<forall>i < n. AAs ! i \<noteq> {#}) \<Longrightarrow>
(\<forall>i < n. \<forall>A \<in># AAs ! i. A = As ! i) \<Longrightarrow>
eligible As (D + negs (mset As)) \<Longrightarrow>
(\<forall>i < n. strictly_maximal_wrt (As ! i) (Cs ! i)) \<Longrightarrow>
(\<forall>i < n. S (CAs ! i) = {#}) \<Longrightarrow>
ord_resolve CAs (D + negs (mset As)) AAs As (\<Union># (mset Cs) + D)"
lemma ord_resolve_sound:
assumes
res_e: "ord_resolve CAs DA AAs As E" and
cc_true: "I \<Turnstile>m mset CAs" and
d_true: "I \<Turnstile> DA"
shows "I \<Turnstile> E"
using res_e
proof (cases rule: ord_resolve.cases)
case (ord_resolve n Cs D)
note DA = this(1) and e = this(2) and cas_len = this(3) and cs_len = this(4) and
as_len = this(6) and cas = this(8) and aas_ne = this(9) and a_eq = this(10)
show ?thesis
proof (cases "\<forall>A \<in> set As. A \<in> I")
case True
then have "\<not> I \<Turnstile> negs (mset As)"
unfolding true_cls_def by fastforce
then have "I \<Turnstile> D"
using d_true DA by fast
then show ?thesis
unfolding e by blast
next
case False
then obtain i where
a_in_aa: "i < n" and
a_false: "As ! i \<notin> I"
using cas_len as_len by (metis in_set_conv_nth)
have "\<not> I \<Turnstile> poss (AAs ! i)"
using a_false a_eq aas_ne a_in_aa unfolding true_cls_def by auto
moreover have "I \<Turnstile> CAs ! i"
using a_in_aa cc_true unfolding true_cls_mset_def using cas_len by auto
ultimately have "I \<Turnstile> Cs ! i"
using cas a_in_aa by auto
then show ?thesis
using a_in_aa cs_len unfolding e true_cls_def
by (meson in_Union_mset_iff nth_mem_mset union_iff)
qed
qed
lemma filter_neg_atm_of_S: "{#Neg (atm_of L). L \<in># S C#} = S C"
by (simp add: S_selects_neg_lits)
text \<open>
This corresponds to Lemma 3.13:
\<close>
lemma ord_resolve_reductive:
assumes "ord_resolve CAs DA AAs As E"
shows "E < DA"
using assms
proof (cases rule: ord_resolve.cases)
case (ord_resolve n Cs D)
note DA = this(1) and e = this(2) and cas_len = this(3) and cs_len = this(4) and
ai_len = this(6) and nz = this(7) and cas = this(8) and maxim = this(12)
show ?thesis
proof (cases "\<Union># (mset Cs) = {#}")
case True
have "negs (mset As) \<noteq> {#}"
using nz ai_len by auto
then show ?thesis
unfolding True e DA by auto
next
case False
define max_A_of_Cs where
"max_A_of_Cs = Max (atms_of (\<Union># (mset Cs)))"
have
mc_in: "max_A_of_Cs \<in> atms_of (\<Union># (mset Cs))" and
mc_max: "\<And>B. B \<in> atms_of (\<Union># (mset Cs)) \<Longrightarrow> B \<le> max_A_of_Cs"
using max_A_of_Cs_def False by auto
then have "\<exists>C_max \<in> set Cs. max_A_of_Cs \<in> atms_of (C_max)"
by (metis atm_imp_pos_or_neg_lit in_Union_mset_iff neg_lit_in_atms_of pos_lit_in_atms_of
set_mset_mset)
then obtain max_i where
cm_in_cas: "max_i < length CAs" and
mc_in_cm: "max_A_of_Cs \<in> atms_of (Cs ! max_i)"
using in_set_conv_nth[of _ CAs] by (metis cas_len cs_len in_set_conv_nth)
define CA_max where "CA_max = CAs ! max_i"
define A_max where "A_max = As ! max_i"
define C_max where "C_max = Cs ! max_i"
have mc_lt_ma: "max_A_of_Cs < A_max"
using maxim cm_in_cas mc_in_cm cas_len unfolding strictly_maximal_wrt_def A_max_def by auto
then have ucas_ne_neg_aa: "\<Union># (mset Cs) \<noteq> negs (mset As)"
using mc_in mc_max mc_lt_ma cm_in_cas cas_len ai_len unfolding A_max_def
by (metis atms_of_negs nth_mem set_mset_mset leD)
moreover have ucas_lt_ma: "\<forall>B \<in> atms_of (\<Union># (mset Cs)). B < A_max"
using mc_max mc_lt_ma by fastforce
moreover have "\<not> Neg A_max \<in># \<Union># (mset Cs)"
using ucas_lt_ma neg_lit_in_atms_of[of A_max "\<Union># (mset Cs)"] by auto
moreover have "Neg A_max \<in># negs (mset As)"
using cm_in_cas cas_len ai_len A_max_def by auto
ultimately have "\<Union># (mset Cs) < negs (mset As)"
unfolding less_multiset\<^sub>H\<^sub>O
by (metis (no_types) atms_less_eq_imp_lit_less_eq_neg count_greater_zero_iff
count_inI le_imp_less_or_eq less_imp_not_less not_le)
then show ?thesis
unfolding e DA by auto
qed
qed
text \<open>
This corresponds to Theorem 3.15:
\<close>
theorem ord_resolve_counterex_reducing:
assumes
ec_ni_n: "{#} \<notin> N" and
d_in_n: "DA \<in> N" and
d_cex: "\<not> INTERP N \<Turnstile> DA" and
d_min: "\<And>C. C \<in> N \<Longrightarrow> \<not> INTERP N \<Turnstile> C \<Longrightarrow> DA \<le> C"
obtains CAs AAs As E where
"set CAs \<subseteq> N"
"INTERP N \<Turnstile>m mset CAs"
"\<And>CA. CA \<in> set CAs \<Longrightarrow> productive N CA"
"ord_resolve CAs DA AAs As E"
"\<not> INTERP N \<Turnstile> E"
"E < DA"
proof -
have d_ne: "DA \<noteq> {#}"
using d_in_n ec_ni_n by blast
have "\<exists>As. As \<noteq> [] \<and> negs (mset As) \<le># DA \<and> eligible As DA"
proof (cases "S DA = {#}")
assume s_d_e: "S DA = {#}"
define A where "A = Max (atms_of DA)"
define As where "As = [A]"
define D where "D = DA-{#Neg A #}"
have na_in_d: "Neg A \<in># DA"
unfolding A_def using s_d_e d_ne d_in_n d_cex d_min
by (metis Max_in_lits Max_lit_eq_pos_or_neg_Max_atm max_pos_imp_Interp Interp_imp_INTERP)
then have das: "DA = D + negs (mset As)"
unfolding D_def As_def by auto
moreover from na_in_d have "negs (mset As) \<subseteq># DA"
by (simp add: As_def)
moreover have hd: "As ! 0 = Max (atms_of (D + negs (mset As)))"
using A_def As_def das by auto
then have "eligible As DA"
using eligible s_d_e As_def das maximal_wrt_def by auto
ultimately show ?thesis
using As_def by blast
next
assume s_d_e: "S DA \<noteq> {#}"
define As :: "'a list" where
"As = list_of_mset {#atm_of L. L \<in># S DA#}"
define D :: "'a clause" where
"D = DA - negs {#atm_of L. L \<in># S DA#}"
have "As \<noteq> []" unfolding As_def using s_d_e
by (metis image_mset_is_empty_iff list_of_mset_empty)
moreover have da_sub_as: "negs {#atm_of L. L \<in># S DA#} \<subseteq># DA"
using S_selects_subseteq by (auto simp: filter_neg_atm_of_S)
then have "negs (mset As) \<subseteq># DA"
unfolding As_def by auto
moreover have das: "DA = D + negs (mset As)"
using da_sub_as unfolding D_def As_def by auto
moreover have "S DA = negs {#atm_of L. L \<in># S DA#}"
by (auto simp: filter_neg_atm_of_S)
then have "S DA = negs (mset As)"
unfolding As_def by auto
then have "eligible As DA"
unfolding das using eligible by auto
ultimately show ?thesis
by blast
qed
then obtain As :: "'a list" where
as_ne: "As \<noteq> []" and
negs_as_le_d: "negs (mset As) \<le># DA" and
s_d: "eligible As DA"
by blast
define D :: "'a clause" where
"D = DA - negs (mset As)"
have "set As \<subseteq> INTERP N"
using d_cex negs_as_le_d by force
then have prod_ex: "\<forall>A \<in> set As. \<exists>D. produces N D A"
unfolding INTERP_def
by (metis (no_types, lifting) INTERP_def subsetCE UN_E not_produces_imp_notin_production)
then have "\<And>A. \<exists>D. produces N D A \<longrightarrow> A \<in> set As"
using ec_ni_n by (auto intro: productive_in_N)
then have "\<And>A. \<exists>D. produces N D A \<longleftrightarrow> A \<in> set As"
using prod_ex by blast
then obtain CA_of where c_of0: "\<And>A. produces N (CA_of A) A \<longleftrightarrow> A \<in> set As"
by metis
then have prod_c0: "\<forall>A \<in> set As. produces N (CA_of A) A"
by blast
define C_of where
"\<And>A. C_of A = {#L \<in># CA_of A. L \<noteq> Pos A#}"
define Aj_of where
"\<And>A. Aj_of A = image_mset atm_of {#L \<in># CA_of A. L = Pos A#}"
have pospos: "\<And>LL A. {#Pos (atm_of x). x \<in># {#L \<in># LL. L = Pos A#}#} = {#L \<in># LL. L = Pos A#}"
by (metis (mono_tags, lifting) image_filter_cong literal.sel(1) multiset.map_ident)
have ca_of_c_of_aj_of: "\<And>A. CA_of A = C_of A + poss (Aj_of A)"
using pospos[of _ "CA_of _"] by (simp add: C_of_def Aj_of_def)
define n :: nat where
"n = length As"
define Cs :: "'a clause list" where
"Cs = map C_of As"
define AAs :: "'a multiset list" where
"AAs = map Aj_of As"
define CAs :: "'a literal multiset list" where
"CAs = map CA_of As"
have m_nz: "\<And>A. A \<in> set As \<Longrightarrow> Aj_of A \<noteq> {#}"
unfolding Aj_of_def using prod_c0 produces_imp_Pos_in_lits
by (metis (full_types) filter_mset_empty_conv image_mset_is_empty_iff)
have prod_c: "productive N CA" if ca_in: "CA \<in> set CAs" for CA
proof -
obtain i where i_p: "i < length CAs" "CAs ! i = CA"
using ca_in by (meson in_set_conv_nth)
have "production N (CA_of (As ! i)) = {As ! i}"
using i_p CAs_def prod_c0 by auto
then show "productive N CA"
using i_p CAs_def by auto
qed
then have cs_subs_n: "set CAs \<subseteq> N"
using productive_in_N by auto
have cs_true: "INTERP N \<Turnstile>m mset CAs"
unfolding true_cls_mset_def using prod_c productive_imp_INTERP by auto
have "\<And>A. A \<in> set As \<Longrightarrow> \<not> Neg A \<in># CA_of A"
using prod_c0 produces_imp_neg_notin_lits by auto
then have a_ni_c': "\<And>A. A \<in> set As \<Longrightarrow> A \<notin> atms_of (C_of A)"
unfolding C_of_def using atm_imp_pos_or_neg_lit by force
have c'_le_c: "\<And>A. C_of A \<le> CA_of A"
unfolding C_of_def by (auto intro: subset_eq_imp_le_multiset)
have a_max_c: "\<And>A. A \<in> set As \<Longrightarrow> A = Max (atms_of (CA_of A))"
using prod_c0 productive_imp_produces_Max_atom[of N] by auto
then have "\<And>A. A \<in> set As \<Longrightarrow> C_of A \<noteq> {#} \<Longrightarrow> Max (atms_of (C_of A)) \<le> A"
using c'_le_c by (metis less_eq_Max_atms_of)
moreover have "\<And>A. A \<in> set As \<Longrightarrow> C_of A \<noteq> {#} \<Longrightarrow> Max (atms_of (C_of A)) \<noteq> A"
using a_ni_c' Max_in by (metis (no_types) atms_empty_iff_empty finite_atms_of)
ultimately have max_c'_lt_a: "\<And>A. A \<in> set As \<Longrightarrow> C_of A \<noteq> {#} \<Longrightarrow> Max (atms_of (C_of A)) < A"
by (metis order.strict_iff_order)
have le_cs_as: "length CAs = length As"
unfolding CAs_def by simp
have "length CAs = n"
by (simp add: le_cs_as n_def)
moreover have "length Cs = n"
by (simp add: Cs_def n_def)
moreover have "length AAs = n"
by (simp add: AAs_def n_def)
moreover have "length As = n"
using n_def by auto
moreover have "n \<noteq> 0"
by (simp add: as_ne n_def)
moreover have " \<forall>i. i < length AAs \<longrightarrow> (\<forall>A \<in># AAs ! i. A = As ! i)"
using AAs_def Aj_of_def by auto
have "\<And>x B. production N (CA_of x) = {x} \<Longrightarrow> B \<in># CA_of x \<Longrightarrow> B \<noteq> Pos x \<Longrightarrow> atm_of B < x"
by (metis atm_of_lit_in_atms_of insert_not_empty le_imp_less_or_eq Pos_atm_of_iff
Neg_atm_of_iff pos_neg_in_imp_true produces_imp_Pos_in_lits produces_imp_atms_leq
productive_imp_not_interp)
then have "\<And>B A. A\<in>set As \<Longrightarrow> B \<in># CA_of A \<Longrightarrow> B \<noteq> Pos A \<Longrightarrow> atm_of B < A"
using prod_c0 by auto
have "\<forall>i. i < length AAs \<longrightarrow> AAs ! i \<noteq> {#}"
unfolding AAs_def using m_nz by simp
have "\<forall>i < n. CAs ! i = Cs ! i + poss (AAs ! i)"
unfolding CAs_def Cs_def AAs_def using ca_of_c_of_aj_of by (simp add: n_def)
moreover have "\<forall>i < n. AAs ! i \<noteq> {#}"
using \<open>\<forall>i < length AAs. AAs ! i \<noteq> {#}\<close> calculation(3) by blast
moreover have "\<forall>i < n. \<forall>A \<in># AAs ! i. A = As ! i"
by (simp add: \<open>\<forall>i < length AAs. \<forall>A \<in># AAs ! i. A = As ! i\<close> calculation(3))
moreover have "eligible As DA"
using s_d by auto
then have "eligible As (D + negs (mset As))"
using D_def negs_as_le_d by auto
moreover have "\<And>i. i < length AAs \<Longrightarrow> strictly_maximal_wrt (As ! i) ((Cs ! i))"
by (simp add: C_of_def Cs_def \<open>\<And>x B. \<lbrakk>production N (CA_of x) = {x}; B \<in># CA_of x; B \<noteq> Pos x\<rbrakk> \<Longrightarrow> atm_of B < x\<close> atms_of_def calculation(3) n_def prod_c0 strictly_maximal_wrt_def)
have "\<forall>i < n. strictly_maximal_wrt (As ! i) (Cs ! i)"
by (simp add: \<open>\<And>i. i < length AAs \<Longrightarrow> strictly_maximal_wrt (As ! i) (Cs ! i)\<close> calculation(3))
moreover have "\<forall>CA \<in> set CAs. S CA = {#}"
using prod_c producesD productive_imp_produces_Max_literal by blast
have "\<forall>CA\<in>set CAs. S CA = {#}"
using \<open>\<forall>CA\<in>set CAs. S CA = {#}\<close> by simp
then have "\<forall>i < n. S (CAs ! i) = {#}"
using \<open>length CAs = n\<close> nth_mem by blast
ultimately have res_e: "ord_resolve CAs (D + negs (mset As)) AAs As (\<Union># (mset Cs) + D)"
using ord_resolve by auto
have "\<And>A. A \<in> set As \<Longrightarrow> \<not> interp N (CA_of A) \<Turnstile> CA_of A"
by (simp add: prod_c0 producesD)
then have "\<And>A. A \<in> set As \<Longrightarrow> \<not> Interp N (CA_of A) \<Turnstile> C_of A"
unfolding prod_c0 C_of_def Interp_def true_cls_def using true_lit_def not_gr_zero prod_c0
by auto
then have c'_at_n: "\<And>A. A \<in> set As \<Longrightarrow> \<not> INTERP N \<Turnstile> C_of A"
using a_max_c c'_le_c max_c'_lt_a not_Interp_imp_not_INTERP unfolding true_cls_def
by (metis true_cls_def true_cls_empty)
have "\<not> INTERP N \<Turnstile> \<Union># (mset Cs)"
unfolding Cs_def true_cls_def using c'_at_n by fastforce
moreover have "\<not> INTERP N \<Turnstile> D"
using d_cex by (metis D_def add_diff_cancel_right' negs_as_le_d subset_mset.add_diff_assoc2
true_cls_def union_iff)
ultimately have e_cex: "\<not> INTERP N \<Turnstile> \<Union># (mset Cs) + D"
by simp
have "set CAs \<subseteq> N"
by (simp add: cs_subs_n)
moreover have "INTERP N \<Turnstile>m mset CAs"
by (simp add: cs_true)
moreover have "\<And>CA. CA \<in> set CAs \<Longrightarrow> productive N CA"
by (simp add: prod_c)
moreover have "ord_resolve CAs DA AAs As (\<Union># (mset Cs) + D)"
using D_def negs_as_le_d res_e by auto
moreover have "\<not> INTERP N \<Turnstile> \<Union># (mset Cs) + D"
using e_cex by simp
moreover have "\<Union># (mset Cs) + D < DA"
using calculation(4) ord_resolve_reductive by auto
ultimately show thesis
..
qed
lemma ord_resolve_atms_of_concl_subset:
assumes "ord_resolve CAs DA AAs As E"
shows "atms_of E \<subseteq> (\<Union>C \<in> set CAs. atms_of C) \<union> atms_of DA"
using assms
proof (cases rule: ord_resolve.cases)
case (ord_resolve n Cs D)
note DA = this(1) and e = this(2) and cas_len = this(3) and cs_len = this(4) and cas = this(8)
have "\<forall>i < n. set_mset (Cs ! i) \<subseteq> set_mset (CAs ! i)"
using cas by auto
then have "\<forall>i < n. Cs ! i \<subseteq># \<Union># (mset CAs)"
by (metis cas cas_len mset_subset_eq_add_left nth_mem_mset sum_mset.remove union_assoc)
then have "\<forall>C \<in> set Cs. C \<subseteq># \<Union># (mset CAs)"
using cs_len in_set_conv_nth[of _ Cs] by auto
then have "set_mset (\<Union># (mset Cs)) \<subseteq> set_mset (\<Union># (mset CAs))"
by auto (meson in_mset_sum_list2 mset_subset_eqD)
then have "atms_of (\<Union># (mset Cs)) \<subseteq> atms_of (\<Union># (mset CAs))"
by (meson lits_subseteq_imp_atms_subseteq mset_subset_eqD subsetI)
moreover have "atms_of (\<Union># (mset CAs)) = (\<Union>CA \<in> set CAs. atms_of CA)"
by (intro set_eqI iffI, simp_all,
metis in_mset_sum_list2 atm_imp_pos_or_neg_lit neg_lit_in_atms_of pos_lit_in_atms_of,
metis in_mset_sum_list atm_imp_pos_or_neg_lit neg_lit_in_atms_of pos_lit_in_atms_of)
ultimately have "atms_of (\<Union># (mset Cs)) \<subseteq> (\<Union>CA \<in> set CAs. atms_of CA)"
by auto
moreover have "atms_of D \<subseteq> atms_of DA"
using DA by auto
ultimately show ?thesis
unfolding e by auto
qed
subsection \<open>Inference System\<close>
text \<open>
Theorem 3.16 is subsumed in the counterexample-reducing inference system framework, which is
instantiated below. Unlike its unordered cousin, ordered resolution is additionally a reductive
inference system.
\<close>
definition ord_\<Gamma> :: "'a inference set" where
"ord_\<Gamma> = {Infer (mset CAs) DA E | CAs DA AAs As E. ord_resolve CAs DA AAs As E}"
sublocale ord_\<Gamma>_sound_counterex_reducing?:
sound_counterex_reducing_inference_system "ground_resolution_with_selection.ord_\<Gamma> S"
"ground_resolution_with_selection.INTERP S" +
reductive_inference_system "ground_resolution_with_selection.ord_\<Gamma> S"
proof unfold_locales
- fix DA :: "'a clause" and N :: "'a clause set"
+ fix N :: "'a clause set" and DA :: "'a clause"
assume "{#} \<notin> N" and "DA \<in> N" and "\<not> INTERP N \<Turnstile> DA" and "\<And>C. C \<in> N \<Longrightarrow> \<not> INTERP N \<Turnstile> C \<Longrightarrow> DA \<le> C"
then obtain CAs AAs As E where
dd_sset_n: "set CAs \<subseteq> N" and
dd_true: "INTERP N \<Turnstile>m mset CAs" and
res_e: "ord_resolve CAs DA AAs As E" and
e_cex: "\<not> INTERP N \<Turnstile> E" and
e_lt_c: "E < DA"
using ord_resolve_counterex_reducing[of N DA thesis] by auto
have "Infer (mset CAs) DA E \<in> ord_\<Gamma>"
using res_e unfolding ord_\<Gamma>_def by (metis (mono_tags, lifting) mem_Collect_eq)
then show "\<exists>CC E. set_mset CC \<subseteq> N \<and> INTERP N \<Turnstile>m CC \<and> Infer CC DA E \<in> ord_\<Gamma>
\<and> \<not> INTERP N \<Turnstile> E \<and> E < DA"
using dd_sset_n dd_true e_cex e_lt_c by (metis set_mset_mset)
qed (auto simp: ord_\<Gamma>_def intro: ord_resolve_sound ord_resolve_reductive)
lemmas clausal_logic_compact = ord_\<Gamma>_sound_counterex_reducing.clausal_logic_compact
end
text \<open>
A second proof of Theorem 3.12, compactness of clausal logic:
\<close>
lemmas clausal_logic_compact = ground_resolution_with_selection.clausal_logic_compact
end
diff --git a/thys/Password_Authentication_Protocol/Propaedeutics.thy b/thys/Password_Authentication_Protocol/Propaedeutics.thy
--- a/thys/Password_Authentication_Protocol/Propaedeutics.thy
+++ b/thys/Password_Authentication_Protocol/Propaedeutics.thy
@@ -1,1958 +1,1957 @@
(* Title: Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method
Author: Pasquale Noce
Security Certification Specialist at Arjo Systems, Italy
pasquale dot noce dot lavoro at gmail dot com
pasquale dot noce at arjosystems dot com
*)
section "Propaedeutic definitions and lemmas"
theory Propaedeutics
imports Complex_Main "HOL-Library.Countable"
begin
declare [[goals_limit = 20]]
text \<open>
\null
\emph{This paper is an achievement of the whole OS Development and Certification team of the Arjo
Systems site at Arzano, Italy, because it would never have been born without the contributions of my
colleagues, the discussions we had, the ideas they shared with me. Particularly, the intuition that
the use of Chip Authentication Mapping makes the secrecy of the PACE authentication key unnecessary
is not mine. I am very grateful to all the team members for these essential contributions, and even
more for these unforgettable years of work together.}
\<close>
subsection "Introduction"
text \<open>
Password-based authentication in an insecure environment -- such as password-based authentication
between a user and a smart card, which is the subject of this paper -- requires that the password be
exchanged on a secure channel, so as to prevent it from falling into the hands of an eavesdropper.
A possible method to establish such a channel is Password Authenticated Connection Establishment
(PACE), which itself is a password-based Diffie-Hellman key agreement protocol, specified in the
form of a smart card protocol in \cite{R4}. Thus, in addition to the user's password, another
password is needed if PACE is used, namely the one from which the PACE authentication key is
derived.
A simple way to reduce the number of passwords that the user has to manage would be
to employ the same password both as key derivation password, verified implicitly by means of the
PACE protocol, and as direct use password, verified explicitly by comparison. However, this approach
has the following shortcomings:
\begin{itemize}
\item
A usual countermeasure against trial-and-error attacks aimed at disclosing the user's password
consists of blocking its use after a number of consecutive verification failures exceeding a given
threshold. If the PACE authentication key is derived from the user's password, such key has to be
blocked as well. Thus, an additional PACE authentication key would be needed for any user operation
that need not be preceded by the verification of the user's password but only needs to be performed
on a secure channel, such as the verification of a Personal Unblocking Code (PUC) by means
of command RESET RETRY COUNTER \cite{R5} to unblock the password. On the contrary, a single PACE
authentication key is sufficient for all user's operations provided it is independent of the user's
password, which leads to a simpler system.
\item
The user is typically allowed to change her password, e.g. by means of command CHANGE REFERENCE DATA
\cite{R5}. If the PACE authentication key is derived from the user's password, such key has to be
changed as well. This gives rise to additional functional requirements which can be nontrivial to
meet, particularly in the case of a preexisting implementation having to be adapted. For instance,
if the key itself is stored on the smart card rather than being derived at run time from the user's
password, which improves performance and prevents side channel attacks, the update of the password
and the key must be performed as an atomic operation to ensure their consistency. On the contrary,
the PACE authentication key can remain unchanged provided it is independent of the user's password,
which leads to a simpler system.
\end{itemize}
Therefore, a PACE password distinct from the user's password seems to be preferable. As the user's
password is a secret known by the user only, the derivation of the PACE authentication key from the
user's password would guarantee the secrecy of the key as well. If the PACE authentication key is
rather derived from an independent password, then a new question arises: is this key required to be
secret?
In order to find the answer, it is useful to schematize the protocol by applying the informal notation
used in \cite{R1}. If Generic Mapping is employed as mapping method (cf. \cite{R4}), the protocol
takes the following form, where agents $U$ and $C$ stand for a given user and her own smart card,
step C$n$ for the $n$th command APDU, and step R$n$ for the $n$th response APDU (for further
information, cf. \cite{R4} and \cite{R5}).
\null
\qquad R1. $C \rightarrow U : \{s\}_K$
\qquad C2. $U \rightarrow C : PK_{Map,PCD}$
\qquad R2. $C \rightarrow U : PK_{Map,IC}$
\qquad C3. $U \rightarrow C : PK_{DH,PCD}$
\qquad R3. $C \rightarrow U : PK_{DH,IC}$
\qquad C4. $U \rightarrow C : \{PK_{DH,IC}\}_{KS}$
\qquad R4. $C \rightarrow U : \{PK_{DH,PCD}\}_{KS}$
\qquad C5. $U \rightarrow C : \{$\emph{User's password}$\}_{KS}$
\qquad R5. $C \rightarrow U : \{$\emph{Success code}$\}_{KS}$
\null
Being irrelevant for the security analysis of the protocol, the initial MANAGE SECURITY ENVIRONMENT:
SET AT command/response pair, as well as the first GENERAL AUTHENTICATE command requesting nonce
$s$, are not included in the scheme.
In the response to the first GENERAL AUTHENTICATE command (step R1), the card returns nonce $s$
encrypted with the PACE authentication key $K$.
In the second GENERAL AUTHENTICATE command/response pair (steps C2 and R2), the user and the card
exchange the respective ephemeral public keys $PK_{Map,PCD} = [SK_{Map,PCD}]G$ and $PK_{Map,IC} =
[SK_{Map,IC}]G$, where $G$ is the static cryptographic group generator (the notation used in
\cite{R6} is applied). Then, both parties compute the ephemeral generator $G' = [s + SK_{Map,PCD}
\times SK_{Map,IC}]G$.
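(Both parties can indeed compute $G'$ without exchanging any secret, a point worth spelling out here
although it is left implicit in the scheme: the user computes $[s]G + [SK_{Map,PCD}]PK_{Map,IC}$, the
card computes $[s]G + [SK_{Map,IC}]PK_{Map,PCD}$, and both expressions equal
$[s + SK_{Map,PCD} \times SK_{Map,IC}]G$.)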
In the third GENERAL AUTHENTICATE command/response pair (steps C3 and R3), the user and the card
exchange another pair of ephemeral public keys $PK_{DH,PCD} = [SK_{DH,PCD}]G'$ and $PK_{DH,IC} =
[SK_{DH,IC}]G'$, and then compute the shared secret $[SK_{DH,PCD} \times SK_{DH,IC}]G'$, from which
session keys $KS_{Enc}$ and $KS_{MAC}$ are derived. In order to abstract from unnecessary details,
the above scheme considers a single session key $KS$.
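(As a quick check, again left implicit in the scheme, that both parties obtain the same session keys:
$[SK_{DH,PCD}]PK_{DH,IC} = [SK_{DH,PCD} \times SK_{DH,IC}]G' = [SK_{DH,IC}]PK_{DH,PCD}$, so the user
and the card derive $KS$ from the same group element.)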
In the last GENERAL AUTHENTICATE command/response pair (steps C4 and R4), the user and the card
exchange the respective authentication tokens, obtained by computing a Message Authentication Code
(MAC) of the ephemeral public keys $PK_{DH,IC}$ and $PK_{DH,PCD}$ with session key $KS_{MAC}$. In
order to abstract from unnecessary details, the above scheme represents these MACs as cryptograms
generated using the single session key $KS$.
Finally, in steps C5 and R5, the user sends her password to the card on the secure messaging channel
established by session keys $KS_{Enc}$ and $KS_{MAC}$, e.g. via command VERIFY \cite{R5}, and the
card returns the success status word 0x9000 \cite{R5} over the same channel. In order to abstract
from unnecessary details, the above scheme represents both messages as cryptograms generated using
the single session key $KS$.
So, what if the PACE authentication key $K$ were stolen by an attacker -- henceforth called
\emph{spy} as done in \cite{R1}? In this case, even if the user's terminal were protected from
attacks, the spy could get hold of the user's password by replacing the user's smart card with a
fake one capable of performing a remote data transmission, so as to pull off a \emph{grandmaster
chess attack} \cite{R2}. In this way, the following scenario would occur, where agents $F$ and $S$
stand for the fake card and the spy.
\null
\qquad R1. $F \rightarrow U : \{s\}_K$
\qquad C2. $U \rightarrow F : PK_{Map,PCD}$
\qquad R2. $F \rightarrow U : PK_{Map,IC}$
\qquad C3. $U \rightarrow F : PK_{DH,PCD}$
\qquad R3. $F \rightarrow U : PK_{DH,IC}$
\qquad C4. $U \rightarrow F : \{PK_{DH,IC}\}_{KS}$
\qquad R4. $F \rightarrow U : \{PK_{DH,PCD}\}_{KS}$
\qquad C5. $U \rightarrow F : \{$\emph{User's password}$\}_{KS}$
\qquad C5'. $F \rightarrow S : $ \emph{User's password}
\null
Since the spy has stored key $K$ in its memory, the fake card can encrypt nonce $s$ with $K$, so
that it computes the same session keys as the user in step R3. As a result, the user receives a
correct authentication token in step R4, and then agrees to send her password to the fake card in
step C5. At this point, in order to accomplish the attack, the fake card has to do nothing but
decrypt the user's password and send it to the spy on a remote communication channel, which is what
happens in the final step C5'.
This argument demonstrates that the answer to the pending question is affirmative, namely the PACE
authentication key is indeed required to be secret, if Generic Mapping is used. Moreover, the same
conclusion can be drawn on the basis of a similar argument in case the mapping method being used is
Integrated Mapping (cf. \cite{R4}). Therefore, the PACE password from which the key is derived must
be secret as well.
This requirement has a significant impact on both the security and the usability of the system. In
fact, the only way to prevent the user from having to input the PACE password in addition to the
direct use one is providing such password to the user's terminal by other means. In the case of a
stand-alone application, this implies that either the PACE password itself or data allowing its
computation must be stored somewhere in the user's terminal, which gives rise to a risk of leakage.
The alternative is to have the PACE password typed in by the user, which lengthens the overall
credentials that the user is in charge of managing securely. Furthermore, any operation having to be
performed on a secure messaging channel before the user types in her password -- such as identifying
the user in case the smart card is endowed with an identity application compliant with \cite{R3} and
\cite{R4} -- would require an additional PACE password independent of the user's one. Hence, such
preliminary operations and the subsequent user's password verification would have to be performed on
distinct secure messaging channels, which would cause a deterioration in the system performance.
In case Chip Authentication Mapping is used as mapping method instead (cf. \cite{R4}), the resulting
protocol can be schematized as follows.
\null
\qquad R1. $C \rightarrow U : \{s\}_K$
\qquad C2. $U \rightarrow C : PK_{Map,PCD}$
\qquad R2. $C \rightarrow U : PK_{Map,IC}$
\qquad C3. $U \rightarrow C : PK_{DH,PCD}$
\qquad R3. $C \rightarrow U : PK_{DH,IC}$
\qquad C4. $U \rightarrow C : \{PK_{DH,IC}\}_{KS}$
\qquad R4. $C \rightarrow U : \{PK_{DH,PCD}$, $(SK_{IC})^{-1} \times SK_{Map,IC}$ \emph{mod n},
\qquad \qquad $PK_{IC}$, $PK_{IC}$ \emph{signature}$\}_{KS}$
\qquad C5. $U \rightarrow C : \{$\emph{User's password}$\}_{KS}$
\qquad R5. $C \rightarrow U : \{$\emph{Success code}$\}_{KS}$
\null
In the response to the last GENERAL AUTHENTICATE command (step R4), in addition to the MAC of
$PK_{DH,PCD}$ computed with session key $KS_{MAC}$, the smart card returns also the \emph{Encrypted
Chip Authentication Data} ($A_{IC}$) if Chip Authentication Mapping is used. These data result from
the encryption with session key $KS_{Enc}$ of the \emph{Chip Authentication Data} ($CA_{IC}$), which
consist of the product modulo $n$, where $n$ is the group order, of the inverse modulo $n$ of the
static private key $SK_{IC}$ with the ephemeral private key $SK_{Map,IC}$.
The user can then verify the authenticity of the chip applying the following procedure.
\begin{enumerate}
\item
Read the static public key $PK_{IC} = [SK_{IC}]G$ from a dedicated file of the smart card, named
\emph{EF.CardSecurity}.
\\Because of the read access conditions to be enforced by this file, it must be read over the secure
messaging channel established by session keys $KS_{Enc}$ and $KS_{MAC}$ (cf. \cite{R3}).
\item
Verify the signature contained in file EF.CardSecurity, generated over the contents of the file by a
trusted Certification Authority (CA).
\\To perform this operation, the user's terminal is supposed to be provided by secure means with the
public key corresponding to the private key used by the CA for signature generation.
\item
Decrypt the received $A_{IC}$ to recover $CA_{IC}$ and verify that $[CA_{IC}]PK_{IC} = PK_{Map,IC}$.
\\Since this happens just in case $CA_{IC} = (SK_{IC})^{-1} \times SK_{Map,IC}$ \emph{mod n} (see the
short derivation after this list), the success of such verification proves that the chip knows the
private key $SK_{IC}$ corresponding to the certified public key $PK_{IC}$, and thus is authentic.
\end{enumerate}
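To spell out why the check in step 3 works (a short derivation added for clarity): for an authentic
chip, $[CA_{IC}]PK_{IC} = [(SK_{IC})^{-1} \times SK_{Map,IC} \times SK_{IC}]G = [SK_{Map,IC}]G =
PK_{Map,IC}$, which is exactly the equality verified by the terminal.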
The reading of file EF.CardSecurity is performed next to the last GENERAL AUTHENTICATE command as a
separate operation, by sending one or more READ BINARY commands on the secure messaging channel
established by session keys $KS_{Enc}$ and $KS_{MAC}$ (cf. \cite{R3}, \cite{R4}, and \cite{R5}). The
above scheme represents this operation by inserting the public key $PK_{IC}$ and its signature into
the cryptogram returned by the last GENERAL AUTHENTICATE command, so as to abstract from unnecessary
details once again.
A successful verification of Chip Authentication Data provides the user with a proof of the fact
that the party knowing private key $SK_{Map,IC}$, and then sharing the same session keys $KS_{Enc}$
and $KS_{MAC}$, is an authentic chip. Thus, the protocol ensures that the user accepts to send her
password to an authentic chip only. As a result, the grandmaster chess attack described previously
is not applicable, so that the user's password cannot be stolen by the spy any longer. What is more,
this is true independently of the secrecy of the PACE authentication key. Therefore, this key is no
longer required to be secret, which solves all the problems ensuing from such requirement.
The purpose of this paper is indeed to construct a formal model of the above protocol in the Chip
Authentication Mapping case and prove its security, applying Paulson's Inductive Method as described
in \cite{R1}. In more detail, the formal development is aimed at proving that such protocol enforces
the following security properties.
\begin{itemize}
\item
Secrecy theorem \<open>pr_key_secrecy\<close>: if a user other than the spy sends her password to some
smart card (not necessarily her own one), then the spy cannot disclose the session key used to
encrypt the password. This property ensures that the protocol is successful in establishing
trustworthy secure messaging channels between users and smart cards.
\item
Secrecy theorem \<open>pr_passwd_secrecy\<close>: the spy cannot disclose the passwords of other users.
This property ensures that the protocol is successful in preserving the secrecy of users' passwords.
\item
Authenticity theorem \<open>pr_user_authenticity\<close>: if a smart card receives the password of a user
(not necessarily the cardholder), then the message must have been originally sent by that user. This
property ensures that the protocol enables users to authenticate themselves to their smart cards,
viz. provides an \emph{external authentication} service (cf. \cite{R5}).
\item
Authenticity theorem \<open>pr_card_authenticity\<close>: if a user sends her password to a smart card and
receives a success code as response, then the card is her own one and the response must have been
originally sent by that card. This property ensures that the protocol enables smart cards to
authenticate themselves to their cardholders, viz. provides an \emph{internal authentication}
service (cf. \cite{R5}).
\end{itemize}
Remarkably, none of these theorems turns out to require the secrecy of the PACE authentication key
as an assumption, so that all of them are valid independently of whether this key is secret or not.
The main technical difficulties arising from this formal development are the following ones.
\begin{itemize}
\item
Data such as private keys for Diffie-Hellman key agreement and session keys do not necessarily occur
as components of exchanged messages, viz. they may be computed by some agent without being ever sent
to any other agent. In this case, whichever protocol trace \<open>evs\<close> is given, any such key
\<open>x\<close> will not be contained in either set \<open>analz (spies evs)\<close> or \<open>used evs\<close>, so
that statements such as \<open>x \<in> analz (spies evs)\<close> or \<open>x \<in> used evs\<close> will be vacuously
false. Thus, some way must be found to formalize a state of affairs where \<open>x\<close> is known by the
spy or has already been used in some protocol run.
\item
As private keys for Diffie-Hellman key agreement do not necessarily occur as components of exchanged
messages, some way must be found to record the private keys that each agent has either generated or
accepted from some other agent (possibly implicitly, in the form of the corresponding public keys)
in each protocol run.
\item
The public keys for Diffie-Hellman key agreement being used are comprised of the elements of a
cryptographic cyclic group of prime order $n$, and the private keys are the elements of the finite
field comprised of the integers from 0 to $n$ - 1 (cf. \cite{R4}, \cite{R6}). Hence, the operations
defined in these algebraic structures, as well as the generation of public keys from known private
keys, correspond to additional ways in which the spy can generate fake messages starting from known
ones. A possible option to reflect this in the formal model would be to extend the inductive
definition of set \<open>synth H\<close> with rules enabling new Diffie-Hellman private and public keys
to be obtained from those contained in set \<open>H\<close>, but the result would be an overly complex
definition. Thus, an alternative formalization ought to be found.
\end{itemize}
These difficulties are solved by extending the Inductive Method, with respect to the form specified
in \cite{R1}, as follows.
\begin{itemize}
\item
The protocol is no longer defined as a set of event lists, but rather as a set of 4-tuples
@{term "(evs, S, A, U)"} where \<open>evs\<close> is an event list, \<open>S\<close> is the current protocol
\emph{state} -- viz. a function that maps each agent to the private keys for Diffie-Hellman key
agreement generated or accepted in each protocol run --, \<open>A\<close> is the set of the Diffie-Hellman
private keys and session keys currently known by the spy, and \<open>U\<close> is the set of the
Diffie-Hellman private keys and session keys which have already been used in some protocol run.
\\In this way, the first two difficulties are solved. Particularly, the full set of the messages
currently known by the spy can be formalized as the set \<open>analz (A \<union> spies evs)\<close>.
\item
The inductive definition of the protocol does not contain a single \emph{fake} rule any longer, but
rather one \emph{fake} rule for each protocol step. Each \emph{fake} rule is denoted by adding
letter "F" to the identifier of the corresponding protocol step, e.g. the \emph{fake} rules
associated to steps C2 and R5 are given the names \emph{FC2} and \emph{FR5}, respectively.
\\In this way, the third difficulty is solved, too. In fact, for each protocol step, the related
\emph{fake} rule extends the spy's capabilities to generate fake messages with the operations on
known Diffie-Hellman private and public keys relevant for that step, which makes an augmentation of
set \<open>synth H\<close> with such operations unnecessary.
\end{itemize}
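For instance, given a protocol state @{term "(evs, S, A, U)"}, the secrecy of a session key
\<open>sesK x\<close> is expressed by a statement of the form
\<open>Key (sesK x) \<notin> analz (A \<union> spies evs)\<close> rather than
\<open>Key (sesK x) \<notin> analz (spies evs)\<close>, as would be done in \cite{R1}.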
Throughout this paper, the salient points of definitions and proofs are commented on; for additional
information, cf. the Isabelle documentation, particularly \cite{R7}, \cite{R8}, \cite{R9}, and
\cite{R10}.
Paulson's Inductive Method is described in \cite{R1}, and further information is provided in
\cite{R7} as a case study. The formal developments described in \cite{R1} and \cite{R7} are included
in the Isabelle distribution.
Additional information on the involved cryptography can be found in \cite{R4} and \cite{R6}.
\<close>
subsection "Propaedeutic definitions"
text \<open>
First of all, the data types of encryption/signature keys, Diffie-Hellman private keys, and
Diffie-Hellman public keys are defined. Following \cite{R7}, encryption/signature keys are
identified with natural numbers, whereas Diffie-Hellman private keys and public keys are represented
as rational and integer numbers in order to model the algebraic structures that they form (a field
and a group, respectively; cf. above).
\null
\<close>
type_synonym key = nat
type_synonym pri_agrk = rat
type_synonym pub_agrk = int
text \<open>
\null
Agents comprise infinitely many users and smart cards, plus the Certification Authority (CA), which
signs the public keys $PK_{IC}$. For each \<open>n\<close>, \<open>User n\<close> is the cardholder of
smart card \<open>Card n\<close>.
\null
\<close>
datatype agent = CA | Card nat | User nat
text \<open>
\null
In addition to the kinds of messages considered in \cite{R1}, the data type of messages also
comprises users' passwords, Diffie-Hellman private and public keys, and Chip Authentication Data.
Particularly, for each \<open>n\<close>, \<open>Passwd n\<close> is the password of @{term "User n"}, accepted
as correct by @{term "Card n"}.
\null
\<close>
datatype msg =
Agent agent |
Number nat |
Nonce nat |
Key key |
Hash msg |
Passwd nat |
Pri_AgrK pri_agrk |
Pub_AgrK pub_agrk |
Auth_Data pri_agrk pri_agrk |
Crypt key msg |
MPair msg msg
syntax
"_MTuple" :: "['a, args] \<Rightarrow> 'a * 'b" ("(2\<lbrace>_,/ _\<rbrace>)")
translations
"\<lbrace>x, y, z\<rbrace>" \<rightleftharpoons> "\<lbrace>x, \<lbrace>y, z\<rbrace>\<rbrace>"
"\<lbrace>x, y\<rbrace>" \<rightleftharpoons> "CONST MPair x y"
text \<open>
\null
As regards data type \<open>event\<close>, constructor \<open>Says\<close> is extended with three additional
parameters of type @{typ nat}, respectively identifying the communication channel, the protocol run,
and the protocol step (ranging from 1 to 5) in which the message is exchanged. Communication
channels are associated with smart cards, so that if a user receives an encrypted nonce $s$ on
channel $n$, she will answer by sending her ephemeral public key $PK_{Map,PCD}$ for generator
mapping to smart card @{term "Card n"}.
\null
\<close>
datatype event = Says nat nat nat agent agent msg
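text \<open>
\null
For instance, an event of the form \<open>Says n run 2 (User m) (Card n) X\<close> records that message
\<open>X\<close> is sent by @{term "User m"} to @{term "Card n"} on communication channel \<open>n\<close>,
within protocol run \<open>run\<close>, at protocol step 2.
\null
\<close>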
text \<open>
\null
The record data type \<open>session\<close> is used to store the Diffie-Hellman private keys that each
agent has generated or accepted in each protocol run. In more detail:
\begin{itemize}
\item
Field \<open>NonceS\<close> holds the nonce $s$, if any, generated internally (in the case of a smart
card) or accepted from the external world (in the case of a user).
\item
Field \<open>IntMapK\<close> holds the ephemeral private key for generator mapping, if any, generated
internally.
\item
Field \<open>ExtMapK\<close> holds the ephemeral private key for generator mapping, if any, implicitly
accepted from the external world in the form of the corresponding public key.
\item
Field \<open>IntAgrK\<close> holds the ephemeral private key for key agreement, if any, generated
internally.
\item
Field \<open>ExtAgrK\<close> holds the ephemeral private key for key agreement, if any, implicitly
accepted from the external world in the form of the corresponding public key.
\end{itemize}
\null
\<close>
record session =
NonceS :: "pri_agrk option"
IntMapK :: "pri_agrk option"
ExtMapK :: "pri_agrk option"
IntAgrK :: "pri_agrk option"
ExtAgrK :: "pri_agrk option"
text \<open>
\null
Then, the data type of protocol states is defined as the type of the functions that map any 3-tuple
@{term "(X, n, run)"}, where \<open>X\<close> is an agent, \<open>n\<close> identifies a communication channel,
and \<open>run\<close> identifies a protocol run taking place on that communication channel, to a record of
type @{typ session}.
\null
\<close>
type_synonym state = "agent \<times> nat \<times> nat \<Rightarrow> session"
text \<open>
\null
Set \<open>bad\<close> collects the numerical identifiers of the PACE authentication keys known by the spy,
viz. for each \<open>n\<close>, @{term "n \<in> bad"} just in case the spy knows the PACE authentication key
shared by agents @{term "User n"} and @{term "Card n"}.
\null
\<close>
consts bad :: "nat set"
text \<open>
\null
Function \<open>invK\<close> maps each encryption/signature key to the corresponding inverse key, matching
the original key just in case it is symmetric.
\null
\<close>
consts invK :: "key \<Rightarrow> key"
text \<open>
\null
Function \<open>agrK\<close> maps each Diffie-Hellman private key $x$ to the corresponding public key
$[x]G$, where $G$ is the static cryptographic group generator being used.
\null
\<close>
consts agrK :: "pri_agrk \<Rightarrow> pub_agrk"
text \<open>
\null
Function \<open>sesK\<close> maps each Diffie-Hellman private key $x$ to the session key resulting from
shared secret $[x]G$, where $G$ is the static cryptographic group generator being used.
\null
\<close>
consts sesK :: "pri_agrk \<Rightarrow> key"
text \<open>
\null
Function \<open>symK\<close> maps each natural number \<open>n\<close> to the PACE authentication key shared by
agents @{term "User n"} and @{term "Card n"}.
\null
\<close>
consts symK :: "nat \<Rightarrow> key"
text \<open>
\null
Function \<open>priAK\<close> maps each natural number \<open>n\<close> to the static Diffie-Hellman private key
$SK_{IC}$ assigned to smart card @{term "Card n"} for Chip Authentication.
\null
\<close>
consts priAK :: "nat \<Rightarrow> pri_agrk"
text \<open>
\null
Function \<open>priSK\<close> maps each agent to her own private key for digital signature generation, even
though the only such key actually significant for the model is the Certification Authority's,
i.e. @{term "priSK CA"}.
\null
\<close>
consts priSK :: "agent \<Rightarrow> key"
text \<open>
\null
The spy is modeled as a user, specifically the one identified by number 0, i.e. @{term "User 0"}.
In this way, in addition to the peculiar privilege of being able to generate fake messages, the spy
is endowed with the capability of performing any operation that a generic user can do.
\null
\<close>
abbreviation Spy :: agent where
"Spy \<equiv> User 0"
text \<open>
\null
Functions \<open>pubAK\<close> and \<open>pubSK\<close> are abbreviations that make the formal development more
readable. The former maps each Diffie-Hellman private key \<open>x\<close> to the message comprised of
the corresponding public key @{term "agrK x"}, whereas the latter maps each agent to the
corresponding public key for digital signature verification.
\null
\<close>
abbreviation pubAK :: "pri_agrk \<Rightarrow> msg" where
"pubAK a \<equiv> Pub_AgrK (agrK a)"
abbreviation pubSK :: "agent \<Rightarrow> key" where
"pubSK X \<equiv> invK (priSK X)"
text \<open>
\null
Function \<open>start_S\<close> represents the initial protocol state, i.e. the one in which no ephemeral
Diffie-Hellman private key has been generated or accepted by any agent yet.
\null
\<close>
abbreviation start_S :: state where
"start_S \<equiv> \<lambda>x. \<lparr>NonceS = None, IntMapK = None, ExtMapK = None,
IntAgrK = None, ExtAgrK = None\<rparr>"
text \<open>
\null
Set \<open>start_A\<close> is comprised of the messages initially known by the spy, namely:
\begin{itemize}
\item
her own password as a user,
\item
the compromised PACE authentication keys,
\item
the public keys for digital signature verification, and
\item
the static Diffie-Hellman public keys assigned to smart cards for Chip Authentication.
\end{itemize}
\null
\<close>
abbreviation start_A :: "msg set" where
"start_A \<equiv> insert (Passwd 0) (Key ` symK ` bad \<union> Key ` range pubSK \<union> pubAK ` range priAK)"
text \<open>
\null
Set \<open>start_U\<close> is comprised of the messages which have already been used before the execution
of the protocol starts, namely:
\begin{itemize}
\item
all users' passwords,
\item
all PACE authentication keys,
\item
the private and public keys for digital signature generation/verification, and
\item
the static Diffie-Hellman private and public keys assigned to smart cards for Chip Authentication.
\end{itemize}
\null
\<close>
abbreviation start_U :: "msg set" where
"start_U \<equiv> range Passwd \<union> Key ` range symK \<union> Key ` range priSK \<union> Key ` range pubSK \<union>
Pri_AgrK ` range priAK \<union> pubAK ` range priAK"
text \<open>
\null
As in \cite{R1}, function \<open>spies\<close> models the set of the messages that the spy can see in a
protocol trace. However, it is no longer necessary to identify \<open>spies []\<close> with the initial
knowledge of the spy, since her current knowledge in protocol state @{term "(evs, S, A, U)"} is
represented as set \<open>analz (A \<union> spies evs)\<close>, where @{term "start_A \<subseteq> A"}. Therefore,
this formal development defines \<open>spies []\<close> as the empty set.
\null
\<close>
fun spies :: "event list \<Rightarrow> msg set" where
"spies [] = {}" |
"spies (Says i j k A B X # evs) = insert X (spies evs)"
text \<open>
\null
Here below is the specification of the axioms about the constants defined previously which are used
in the formal proofs. A model of the constants satisfying the axioms is also provided in order to
ensure the consistency of the formal development. In more detail:
\begin{enumerate}
\item
Axiom \<open>agrK_inj\<close> states that function @{term agrK} is injective, and formalizes the fact that
distinct Diffie-Hellman private keys generate distinct public keys.
\\Since the former keys are represented as rational numbers and the latter as integer numbers (cf.
above), a model of function @{term agrK} satisfying the axiom is built by means of the injective
function @{term "inv nat_to_rat_surj"} provided by the Isabelle distribution, which maps rational
numbers to natural numbers.
\item
Axiom \<open>sesK_inj\<close> states that function @{term sesK} is injective, and formalizes the fact that
the key derivation function specified in \cite{R4} for deriving session keys from shared secrets
makes use of robust hash functions, so that collisions are negligible.
\\Since Diffie-Hellman private keys are represented as rational numbers and encryption/signature
keys as natural numbers (cf. above), a model of function @{term sesK} satisfying the axiom is built
by means of the injective function @{term "inv nat_to_rat_surj"}, too.
\item
Axiom \<open>priSK_pubSK\<close> formalizes the fact that every private key for signature generation is
distinct from whichever public key for signature verification. For example, in the case of the RSA
algorithm, small fixed values are typically used as public exponents to make signature verification
more efficient, whereas the corresponding private exponents are of the same order of magnitude as
the modulus.
\item
Axiom \<open>priSK_symK\<close> formalizes the fact that private keys for signature generation are
distinct from PACE authentication keys, which is obviously true since the former keys are asymmetric
whereas the latter are symmetric.
\item
Axiom \<open>pubSK_symK\<close> formalizes the fact that public keys for signature verification are
distinct from PACE authentication keys, which is obviously true since the former keys are asymmetric
whereas the latter are symmetric.
\item
Axiom \<open>invK_sesK\<close> formalizes the fact that session keys are symmetric.
\item
Axiom \<open>invK_symK\<close> formalizes the fact that PACE authentication keys are symmetric.
\item
Axiom \<open>symK_bad\<close> states that set @{term bad} is closed with respect to the identity of PACE
authentication keys, viz. if a compromised user has the same PACE authentication key as another
user, then the latter user is compromised as well.
\end{enumerate}
It is worth remarking that there is no axiom stating that distinct PACE authentication keys are
assigned to distinct users. As a result, the formal development does not depend on the enforcement
of this condition.
\null
\<close>
specification (bad invK agrK sesK symK priSK)
agrK_inj: "inj agrK"
sesK_inj: "inj sesK"
priSK_pubSK: "priSK X \<noteq> pubSK X'"
priSK_symK: "priSK X \<noteq> symK n"
pubSK_symK: "pubSK X \<noteq> symK n"
invK_sesK: "invK (sesK a) = sesK a"
invK_symK: "invK (symK n) = symK n"
symK_bad: "m \<in> bad \<Longrightarrow> symK n = symK m \<Longrightarrow> n \<in> bad"
apply (rule_tac x = "{}" in exI)
apply (rule_tac x = "\<lambda>n. if even n then n else Suc n" in exI)
apply (rule_tac x = "\<lambda>x. int (inv nat_to_rat_surj x)" in exI)
apply (rule_tac x = "\<lambda>x. 2 * inv nat_to_rat_surj x" in exI)
apply (rule_tac x = "\<lambda>n. 0" in exI)
apply (rule_tac x = "\<lambda>X. Suc 0" in exI)
proof (simp add: inj_on_def, (rule allI)+, rule impI)
fix x y
have "surj nat_to_rat_surj"
by (rule surj_nat_to_rat_surj)
hence "inj (inv nat_to_rat_surj)"
by (rule surj_imp_inj_inv)
moreover assume "inv nat_to_rat_surj x = inv nat_to_rat_surj y"
ultimately show "x = y"
by (rule injD)
qed
text \<open>
\null
Here below are the inductive definitions of sets \<open>parts\<close>, \<open>analz\<close>, and \<open>synth\<close>.
With respect to the definitions given in the protocol library included in the Isabelle distribution,
those of \<open>parts\<close> and \<open>analz\<close> are extended with rules extracting Diffie-Hellman private
keys from Chip Authentication Data, whereas the definition of \<open>synth\<close> contains a further rule
that models the inverse operation, i.e. the construction of Chip Authentication Data starting from
private keys. Particularly, the additional \<open>analz\<close> rules formalize the fact that, for any two
private keys $x$ and $y$, if $x \times y \bmod n$ and $x$ are known, where $n$ is the group order,
then $y$ can be obtained by computing $(x \times y) \times x^{-1} \bmod n$, and similarly, $x$ can
be obtained if $y$ is known.
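As an illustration with small numbers, take group order $n = 7$, $x = 3$, and $y = 5$: the known
values are $x \times y \bmod 7 = 1$ and $x = 3$, whose inverse modulo 7 is $x^{-1} = 5$, so that
$y$ is recovered as $1 \times 5 \bmod 7 = 5$.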
An additional set, named \<open>items\<close>, is defined inductively in what follows. This set is a
hybrid of \<open>parts\<close> and \<open>analz\<close>, as it shares with \<open>parts\<close> the rule applying to
cryptograms and with \<open>analz\<close> the rules applying to Chip Authentication Data. Since the former
rule is less strict than the corresponding one in the definition of \<open>analz\<close>, it turns out that
@{term "analz H \<subseteq> items H"} for any message set \<open>H\<close>. As a result, for any message \<open>X\<close>,
@{term "X \<notin> items (A \<union> spies evs)"} implies @{term "X \<notin> analz (A \<union> spies evs)"}. Therefore, set
\<open>items\<close> is useful to prove the secrecy of the Diffie-Hellman private keys utilized to compute
Chip Authentication Data without bothering with case distinctions concerning the secrecy of
encryption keys, as would happen if set \<open>analz\<close> were directly employed instead.
\null
\<close>
inductive_set parts :: "msg set \<Rightarrow> msg set" for H :: "msg set" where
Inj: "X \<in> H \<Longrightarrow> X \<in> parts H" |
Fst: "\<lbrace>X, Y\<rbrace> \<in> parts H \<Longrightarrow> X \<in> parts H" |
Snd: "\<lbrace>X, Y\<rbrace> \<in> parts H \<Longrightarrow> Y \<in> parts H" |
Body: "Crypt K X \<in> parts H \<Longrightarrow> X \<in> parts H" |
Auth_Fst: "Auth_Data x y \<in> parts H \<Longrightarrow> Pri_AgrK x \<in> parts H" |
Auth_Snd: "Auth_Data x y \<in> parts H \<Longrightarrow> Pri_AgrK y \<in> parts H"
inductive_set items :: "msg set \<Rightarrow> msg set" for H :: "msg set" where
Inj: "X \<in> H \<Longrightarrow> X \<in> items H" |
Fst: "\<lbrace>X, Y\<rbrace> \<in> items H \<Longrightarrow> X \<in> items H" |
Snd: "\<lbrace>X, Y\<rbrace> \<in> items H \<Longrightarrow> Y \<in> items H" |
Body: "Crypt K X \<in> items H \<Longrightarrow> X \<in> items H" |
Auth_Fst: "\<lbrakk>Auth_Data x y \<in> items H; Pri_AgrK y \<in> items H\<rbrakk> \<Longrightarrow> Pri_AgrK x \<in> items H" |
Auth_Snd: "\<lbrakk>Auth_Data x y \<in> items H; Pri_AgrK x \<in> items H\<rbrakk> \<Longrightarrow> Pri_AgrK y \<in> items H"
inductive_set analz :: "msg set \<Rightarrow> msg set" for H :: "msg set" where
Inj: "X \<in> H \<Longrightarrow> X \<in> analz H" |
Fst: "\<lbrace>X, Y\<rbrace> \<in> analz H \<Longrightarrow> X \<in> analz H" |
Snd: "\<lbrace>X, Y\<rbrace> \<in> analz H \<Longrightarrow> Y \<in> analz H" |
Decrypt: "\<lbrakk>Crypt K X \<in> analz H; Key (invK K) \<in> analz H\<rbrakk> \<Longrightarrow> X \<in> analz H" |
Auth_Fst: "\<lbrakk>Auth_Data x y \<in> analz H; Pri_AgrK y \<in> analz H\<rbrakk> \<Longrightarrow> Pri_AgrK x \<in> analz H" |
Auth_Snd: "\<lbrakk>Auth_Data x y \<in> analz H; Pri_AgrK x \<in> analz H\<rbrakk> \<Longrightarrow> Pri_AgrK y \<in> analz H"
inductive_set synth :: "msg set \<Rightarrow> msg set" for H :: "msg set" where
Inj: "X \<in> H \<Longrightarrow> X \<in> synth H" |
Agent: "Agent X \<in> synth H" |
Number: "Number n \<in> synth H" |
Hash: "X \<in> synth H \<Longrightarrow> Hash X \<in> synth H" |
MPair: "\<lbrakk>X \<in> synth H; Y \<in> synth H\<rbrakk> \<Longrightarrow> \<lbrace>X, Y\<rbrace> \<in> synth H" |
Crypt: "\<lbrakk>X \<in> synth H; Key K \<in> H\<rbrakk> \<Longrightarrow> Crypt K X \<in> synth H" |
Auth: "\<lbrakk>Pri_AgrK x \<in> H; Pri_AgrK y \<in> H\<rbrakk> \<Longrightarrow> Auth_Data x y \<in> synth H"
subsection "Propaedeutic lemmas"
text \<open>
This section contains the lemmas about sets @{term parts}, @{term items}, @{term analz}, and
@{term synth} required for protocol verification. Since their proofs mainly consist of initial rule
inductions followed by sequences of rule applications and simplifications, \emph{apply}-style is
used.
\null
\<close>
lemma set_spies [rule_format]:
"Says i j k A B X \<in> set evs \<longrightarrow> X \<in> spies evs"
apply (induction evs rule: spies.induct)
apply simp_all
done
lemma parts_subset:
"H \<subseteq> parts H"
by (rule subsetI, rule parts.Inj)
lemma parts_idem:
"parts (parts H) = parts H"
apply (rule equalityI)
apply (rule subsetI)
apply (erule parts.induct)
apply assumption
apply (erule parts.Fst)
apply (erule parts.Snd)
apply (erule parts.Body)
apply (erule parts.Auth_Fst)
apply (erule parts.Auth_Snd)
apply (rule parts_subset)
done
lemma parts_simp:
"H \<subseteq> range Agent \<union>
range Number \<union>
range Nonce \<union>
range Key \<union>
range Hash \<union>
range Passwd \<union>
range Pri_AgrK \<union>
range Pub_AgrK \<Longrightarrow>
parts H = H"
apply (rule equalityI [OF _ parts_subset])
apply (rule subsetI)
apply (erule parts.induct)
apply blast+
done
lemma parts_mono:
"G \<subseteq> H \<Longrightarrow> parts G \<subseteq> parts H"
apply (rule subsetI)
apply (erule parts.induct)
apply (drule subsetD)
apply assumption
apply (erule parts.Inj)
apply (erule parts.Fst)
apply (erule parts.Snd)
apply (erule parts.Body)
apply (erule parts.Auth_Fst)
apply (erule parts.Auth_Snd)
done
lemma parts_insert:
"insert X (parts H) \<subseteq> parts (insert X H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule parts_mono)
apply blast
done
lemma parts_simp_insert:
"X \<in> range Agent \<union>
range Number \<union>
range Nonce \<union>
range Key \<union>
range Hash \<union>
range Passwd \<union>
range Pri_AgrK \<union>
range Pub_AgrK \<Longrightarrow>
parts (insert X H) = insert X (parts H)"
apply (rule equalityI [OF _ parts_insert])
apply (rule subsetI)
apply (erule parts.induct)
apply simp_all
apply (rotate_tac [!])
apply (erule disjE)
apply simp
apply (rule disjI2)
apply (erule parts.Inj)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule parts.Fst)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule parts.Snd)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule parts.Body)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule parts.Auth_Fst)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule parts.Auth_Snd)
done
lemma parts_auth_data_1:
"parts (insert (Auth_Data x y) H) \<subseteq>
{Pri_AgrK x, Pri_AgrK y, Auth_Data x y} \<union> parts H"
apply (rule subsetI)
apply (erule parts.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)+
apply (erule parts.Inj)
apply (erule parts.Fst)
apply (erule parts.Snd)
apply (erule parts.Body)
apply (erule disjE)
apply simp
apply (rule disjI2)+
apply (erule parts.Auth_Fst)
apply (erule disjE)
apply simp
apply (rule disjI2)+
apply (erule parts.Auth_Snd)
done
lemma parts_auth_data_2:
"{Pri_AgrK x, Pri_AgrK y, Auth_Data x y} \<union> parts H \<subseteq>
parts (insert (Auth_Data x y) H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Auth_Fst [of _ y])
apply (rule parts.Inj)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Auth_Snd [of x])
apply (rule parts.Inj)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule parts_mono)
apply blast
done
lemma parts_auth_data:
"parts (insert (Auth_Data x y) H) =
{Pri_AgrK x, Pri_AgrK y, Auth_Data x y} \<union> parts H"
by (rule equalityI, rule parts_auth_data_1, rule parts_auth_data_2)
lemma parts_crypt_1:
"parts (insert (Crypt K X) H) \<subseteq> insert (Crypt K X) (parts (insert X H))"
apply (rule subsetI)
apply (erule parts.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-3] disjI2)
apply (rule parts.Inj)
apply simp
apply (erule parts.Fst)
apply (erule parts.Snd)
apply (erule disjE)
apply simp
- apply (rule disjI2)
apply (rule parts.Inj)
apply simp
apply (rule disjI2)
apply (erule parts.Body)
apply (erule parts.Auth_Fst)
apply (erule parts.Auth_Snd)
done
lemma parts_crypt_2:
"insert (Crypt K X) (parts (insert X H)) \<subseteq> parts (insert (Crypt K X) H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Inj)
apply simp
apply (subst parts_idem [symmetric])
apply (erule rev_subsetD)
apply (rule parts_mono)
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Body [of K])
apply (rule parts.Inj)
apply simp
apply (rule parts.Inj)
apply simp
done
lemma parts_crypt:
"parts (insert (Crypt K X) H) = insert (Crypt K X) (parts (insert X H))"
by (rule equalityI, rule parts_crypt_1, rule parts_crypt_2)
lemma parts_mpair_1:
"parts (insert \<lbrace>X, Y\<rbrace> H) \<subseteq> insert \<lbrace>X, Y\<rbrace> (parts ({X, Y} \<union> H))"
apply (rule subsetI)
apply (erule parts.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (rule parts.Inj)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Inj)
apply simp
apply (erule parts.Fst)
apply (erule disjE)
apply simp
apply (rule parts.Inj)
apply simp
apply (erule parts.Snd)
apply (erule parts.Body)
apply (erule parts.Auth_Fst)
apply (erule parts.Auth_Snd)
done
lemma parts_mpair_2:
"insert \<lbrace>X, Y\<rbrace> (parts ({X, Y} \<union> H)) \<subseteq> parts (insert \<lbrace>X, Y\<rbrace> H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply (rule parts.Inj)
apply simp
apply (subst parts_idem [symmetric])
apply (erule rev_subsetD)
apply (rule parts_mono)
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Fst [of _ Y])
apply (rule parts.Inj)
apply simp
apply (erule disjE)
apply simp
apply (rule parts.Snd [of X])
apply (rule parts.Inj)
apply simp
apply (rule parts.Inj)
apply simp
done
lemma parts_mpair:
"parts (insert \<lbrace>X, Y\<rbrace> H) = insert \<lbrace>X, Y\<rbrace> (parts ({X, Y} \<union> H))"
by (rule equalityI, rule parts_mpair_1, rule parts_mpair_2)
lemma items_subset:
"H \<subseteq> items H"
by (rule subsetI, rule items.Inj)
lemma items_idem:
"items (items H) = items H"
apply (rule equalityI)
apply (rule subsetI)
apply (erule items.induct)
apply assumption
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule items.Auth_Fst)
apply assumption
apply (erule items.Auth_Snd)
apply assumption
apply (rule items_subset)
done
lemma items_parts_subset:
"items H \<subseteq> parts H"
apply (rule subsetI)
apply (erule items.induct)
apply (erule parts.Inj)
apply (erule parts.Fst)
apply (erule parts.Snd)
apply (erule parts.Body)
apply (erule parts.Auth_Fst)
apply (erule parts.Auth_Snd)
done
lemma items_simp:
"H \<subseteq> range Agent \<union>
range Number \<union>
range Nonce \<union>
range Key \<union>
range Hash \<union>
range Passwd \<union>
range Pri_AgrK \<union>
range Pub_AgrK \<Longrightarrow>
items H = H"
apply (rule equalityI)
apply (subst (3) parts_simp [symmetric])
apply assumption
apply (rule items_parts_subset)
apply (rule items_subset)
done
lemma items_mono:
"G \<subseteq> H \<Longrightarrow> items G \<subseteq> items H"
apply (rule subsetI)
apply (erule items.induct)
apply (drule subsetD)
apply assumption
apply (erule items.Inj)
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule items.Auth_Fst)
apply assumption
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_insert:
"insert X (items H) \<subseteq> items (insert X H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule items.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule items_mono)
apply blast
done
lemma items_simp_insert_1:
"X \<in> items H \<Longrightarrow> items (insert X H) = items H"
apply (rule equalityI)
apply (rule subsetI)
apply (erule items.induct [of _ "insert X H"])
apply simp
apply (erule disjE)
apply simp
apply (erule items.Inj)
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule items.Auth_Fst)
apply assumption
apply (erule items.Auth_Snd)
apply assumption
apply (rule items_mono)
apply blast
done
lemma items_simp_insert_2:
"X \<in> range Agent \<union>
range Number \<union>
range Nonce \<union>
range Key \<union>
range Hash \<union>
range Passwd \<union>
range Pub_AgrK \<Longrightarrow>
items (insert X H) = insert X (items H)"
apply (rule equalityI [OF _ items_insert])
apply (rule subsetI)
apply (erule items.induct)
apply simp_all
apply (rotate_tac [!])
apply (erule disjE)
apply simp
apply (rule disjI2)
apply (erule items.Inj)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule items.Fst)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule items.Snd)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule items.Body)
apply (erule disjE)
apply blast
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule items.Auth_Fst)
apply assumption
apply (erule disjE)
apply blast
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_pri_agrk_out:
"Pri_AgrK x \<notin> parts H \<Longrightarrow>
items (insert (Pri_AgrK x) H) = insert (Pri_AgrK x) (items H)"
apply (rule equalityI [OF _ items_insert])
apply (rule subsetI)
apply (erule items.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (erule items.Inj)
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule disjE)
apply simp
apply (drule subsetD [OF items_parts_subset [of H]])
apply (drule parts.Auth_Snd)
apply simp
apply (rule disjI2)
apply (erule items.Auth_Fst)
apply assumption
apply (erule disjE)
apply simp
apply (drule subsetD [OF items_parts_subset [of H]])
apply (drule parts.Auth_Fst)
apply simp
apply (rule disjI2)
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_auth_data_in_1:
"items (insert (Auth_Data x y) H) \<subseteq>
insert (Auth_Data x y) (items ({Pri_AgrK x, Pri_AgrK y} \<union> H))"
apply (rule subsetI)
apply (erule items.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (rule items.Inj)
apply simp
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule disjE)
apply simp
apply (rule items.Inj)
apply simp
apply (erule items.Auth_Fst)
apply assumption
apply (erule disjE)
apply simp
apply (rule items.Inj)
apply simp
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_auth_data_in_2:
"Pri_AgrK x \<in> items H \<or> Pri_AgrK y \<in> items H \<Longrightarrow>
insert (Auth_Data x y) (items ({Pri_AgrK x, Pri_AgrK y} \<union> H)) \<subseteq>
items (insert (Auth_Data x y) H)"
apply (rule subsetI)
apply simp
apply rotate_tac
apply (erule disjE)
apply (rule items.Inj)
apply simp
apply (subst items_idem [symmetric])
apply (erule rev_subsetD)
apply (rule items_mono)
apply (rule subsetI)
apply simp
apply rotate_tac
apply (erule disjE)
apply simp
apply (erule disjE)
apply (erule rev_subsetD)
apply (rule items_mono)
apply blast
apply (rule items.Auth_Fst [of _ y])
apply (rule items.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule items_mono)
apply blast
apply rotate_tac
apply (erule disjE)
apply simp
apply (erule disjE)
apply (rule items.Auth_Snd [of x])
apply (rule items.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule items_mono)
apply blast
apply (erule rev_subsetD)
apply (rule items_mono)
apply blast
apply (rule items.Inj)
apply simp
done
lemma items_auth_data_in:
"Pri_AgrK x \<in> items H \<or> Pri_AgrK y \<in> items H \<Longrightarrow>
items (insert (Auth_Data x y) H) =
insert (Auth_Data x y) (items ({Pri_AgrK x, Pri_AgrK y} \<union> H))"
by (rule equalityI, rule items_auth_data_in_1, rule items_auth_data_in_2)
lemma items_auth_data_out:
"\<lbrakk>Pri_AgrK x \<notin> items H; Pri_AgrK y \<notin> items H\<rbrakk> \<Longrightarrow>
items (insert (Auth_Data x y) H) = insert (Auth_Data x y) (items H)"
apply (rule equalityI [OF _ items_insert])
apply (rule subsetI)
apply (erule items.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (erule items.Inj)
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule disjE)
apply simp
apply (erule items.Auth_Fst)
apply assumption
apply (erule disjE)
apply simp
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_crypt_1:
"items (insert (Crypt K X) H) \<subseteq> insert (Crypt K X) (items (insert X H))"
apply (rule subsetI)
apply (erule items.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (rule items.Inj)
apply simp
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule disjE)
apply simp
apply (rule items.Inj)
apply simp
apply (erule items.Body)
apply (erule items.Auth_Fst)
apply assumption
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_crypt_2:
"insert (Crypt K X) (items (insert X H)) \<subseteq> items (insert (Crypt K X) H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule items.Inj)
apply simp
apply (erule items.induct)
apply simp
apply (erule disjE)
apply simp
apply (rule items.Body [of K])
apply (rule items.Inj)
apply simp
apply (rule items.Inj)
apply simp
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule items.Auth_Fst)
apply assumption
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_crypt:
"items (insert (Crypt K X) H) = insert (Crypt K X) (items (insert X H))"
by (rule equalityI, rule items_crypt_1, rule items_crypt_2)
lemma items_mpair_1:
"items (insert \<lbrace>X, Y\<rbrace> H) \<subseteq> insert \<lbrace>X, Y\<rbrace> (items ({X, Y} \<union> H))"
apply (rule subsetI)
apply (erule items.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (rule items.Inj)
apply simp
apply (erule disjE)
apply simp
apply (rule items.Inj)
apply simp
apply (erule items.Fst)
apply (erule disjE)
apply simp
apply (rule items.Inj)
apply simp
apply (erule items.Snd)
apply (erule items.Body)
apply (erule items.Auth_Fst)
apply assumption
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_mpair_2:
"insert \<lbrace>X, Y\<rbrace> (items ({X, Y} \<union> H)) \<subseteq> items (insert \<lbrace>X, Y\<rbrace> H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply (rule items.Inj)
apply simp
apply (erule items.induct)
apply simp
apply (erule disjE)
apply simp
apply (rule items.Fst [of _ Y])
apply (rule items.Inj)
apply simp
apply (erule disjE)
apply simp
apply (rule items.Snd [of X])
apply (rule items.Inj)
apply simp
apply (rule items.Inj)
apply simp
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule items.Auth_Fst)
apply assumption
apply (erule items.Auth_Snd)
apply assumption
done
lemma items_mpair:
"items (insert \<lbrace>X, Y\<rbrace> H) = insert \<lbrace>X, Y\<rbrace> (items ({X, Y} \<union> H))"
by (rule equalityI, rule items_mpair_1, rule items_mpair_2)
lemma analz_subset:
"H \<subseteq> analz H"
by (rule subsetI, rule analz.Inj)
lemma analz_idem:
"analz (analz H) = analz H"
apply (rule equalityI)
apply (rule subsetI)
apply (erule analz.induct)
apply assumption
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule analz.Decrypt)
apply assumption
apply (erule analz.Auth_Fst)
apply assumption
apply (erule analz.Auth_Snd)
apply assumption
apply (rule analz_subset)
done
lemma analz_parts_subset:
"analz H \<subseteq> parts H"
apply (rule subsetI)
apply (erule analz.induct)
apply (erule parts.Inj)
apply (erule parts.Fst)
apply (erule parts.Snd)
apply (erule parts.Body)
apply (erule parts.Auth_Fst)
apply (erule parts.Auth_Snd)
done
lemma analz_items_subset:
"analz H \<subseteq> items H"
apply (rule subsetI)
apply (erule analz.induct)
apply (erule items.Inj)
apply (erule items.Fst)
apply (erule items.Snd)
apply (erule items.Body)
apply (erule items.Auth_Fst)
apply assumption
apply (erule items.Auth_Snd)
apply assumption
done
lemma analz_simp:
"H \<subseteq> range Agent \<union>
range Number \<union>
range Nonce \<union>
range Key \<union>
range Hash \<union>
range Passwd \<union>
range Pri_AgrK \<union>
range Pub_AgrK \<Longrightarrow>
analz H = H"
apply (rule equalityI)
apply (subst (3) parts_simp [symmetric])
apply assumption
apply (rule analz_parts_subset)
apply (rule analz_subset)
done
lemma analz_mono:
"G \<subseteq> H \<Longrightarrow> analz G \<subseteq> analz H"
apply (rule subsetI)
apply (erule analz.induct)
apply (drule subsetD)
apply assumption
apply (erule analz.Inj)
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule analz.Decrypt)
apply assumption
apply (erule analz.Auth_Fst)
apply assumption
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_insert:
"insert X (analz H) \<subseteq> analz (insert X H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule analz.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule analz_mono)
apply blast
done
lemma analz_simp_insert_1:
"X \<in> analz H \<Longrightarrow> analz (insert X H) = analz H"
apply (rule equalityI)
apply (rule subsetI)
apply (erule analz.induct [of _ "insert X H"])
apply simp
apply (erule disjE)
apply simp
apply (erule analz.Inj)
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule analz.Decrypt)
apply assumption
apply (erule analz.Auth_Fst)
apply assumption
apply (erule analz.Auth_Snd)
apply assumption
apply (rule analz_mono)
apply blast
done
lemma analz_simp_insert_2:
"X \<in> range Agent \<union>
range Number \<union>
range Nonce \<union>
range Hash \<union>
range Passwd \<union>
range Pub_AgrK \<Longrightarrow>
analz (insert X H) = insert X (analz H)"
apply (rule equalityI [OF _ analz_insert])
apply (rule subsetI)
apply (erule analz.induct)
apply simp_all
apply (rotate_tac [!])
apply (erule disjE)
apply simp
apply (rule disjI2)
apply (erule analz.Inj)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule analz.Fst)
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule analz.Snd)
apply (erule disjE)
apply blast
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule analz.Decrypt)
apply assumption
apply (erule disjE)
apply blast
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule analz.Auth_Fst)
apply assumption
apply (erule disjE)
apply blast
apply (erule disjE)
apply blast
apply (rule disjI2)
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_auth_data_in_1:
"analz (insert (Auth_Data x y) H) \<subseteq>
insert (Auth_Data x y) (analz ({Pri_AgrK x, Pri_AgrK y} \<union> H))"
apply (rule subsetI)
apply (erule analz.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (rule analz.Inj)
apply simp
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule analz.Decrypt)
apply assumption
apply (erule disjE)
apply simp
apply (rule analz.Inj)
apply simp
apply (erule analz.Auth_Fst)
apply assumption
apply (erule disjE)
apply simp
apply (rule analz.Inj)
apply simp
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_auth_data_in_2:
"Pri_AgrK x \<in> analz H \<or> Pri_AgrK y \<in> analz H \<Longrightarrow>
insert (Auth_Data x y) (analz ({Pri_AgrK x, Pri_AgrK y} \<union> H)) \<subseteq>
analz (insert (Auth_Data x y) H)"
apply (rule subsetI)
apply simp
apply rotate_tac
apply (erule disjE)
apply (rule analz.Inj)
apply simp
apply (subst analz_idem [symmetric])
apply (erule rev_subsetD)
apply (rule analz_mono)
apply (rule subsetI)
apply simp
apply rotate_tac
apply (erule disjE)
apply simp
apply (erule disjE)
apply (erule rev_subsetD)
apply (rule analz_mono)
apply blast
apply (rule analz.Auth_Fst [of _ y])
apply (rule analz.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule analz_mono)
apply blast
apply rotate_tac
apply (erule disjE)
apply simp
apply (erule disjE)
apply (rule analz.Auth_Snd [of x])
apply (rule analz.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule analz_mono)
apply blast
apply (erule rev_subsetD)
apply (rule analz_mono)
apply blast
apply (rule analz.Inj)
apply simp
done
lemma analz_auth_data_in:
"Pri_AgrK x \<in> analz H \<or> Pri_AgrK y \<in> analz H \<Longrightarrow>
analz (insert (Auth_Data x y) H) =
insert (Auth_Data x y) (analz ({Pri_AgrK x, Pri_AgrK y} \<union> H))"
by (rule equalityI, rule analz_auth_data_in_1, rule analz_auth_data_in_2)
lemma analz_auth_data_out:
"\<lbrakk>Pri_AgrK x \<notin> analz H; Pri_AgrK y \<notin> analz H\<rbrakk> \<Longrightarrow>
analz (insert (Auth_Data x y) H) = insert (Auth_Data x y) (analz H)"
apply (rule equalityI [OF _ analz_insert])
apply (rule subsetI)
apply (erule analz.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (erule analz.Inj)
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule analz.Decrypt)
apply assumption
apply (erule disjE)
apply simp
apply (erule analz.Auth_Fst)
apply assumption
apply (erule disjE)
apply simp
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_crypt_in_1:
"analz (insert (Crypt K X) H) \<subseteq> insert (Crypt K X) (analz (insert X H))"
apply (rule subsetI)
apply (erule analz.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (rule analz.Inj)
apply simp
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule disjE)
apply simp
apply (rule analz.Inj)
apply simp
apply (erule analz.Decrypt)
apply assumption
apply (erule analz.Auth_Fst)
apply assumption
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_crypt_in_2:
"Key (invK K) \<in> analz H \<Longrightarrow>
insert (Crypt K X) (analz (insert X H)) \<subseteq> analz (insert (Crypt K X) H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply simp
apply (rule analz.Inj)
apply simp
apply rotate_tac
apply (erule analz.induct)
apply simp
apply (erule disjE)
apply simp
apply (rule analz.Decrypt [of K])
apply (rule analz.Inj)
apply simp
apply (erule rev_subsetD)
apply (rule analz_mono)
apply blast
apply (rule analz.Inj)
apply simp
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule analz.Decrypt)
apply assumption
apply (erule analz.Auth_Fst)
apply assumption
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_crypt_in:
"Key (invK K) \<in> analz H \<Longrightarrow>
analz (insert (Crypt K X) H) = insert (Crypt K X) (analz (insert X H))"
by (rule equalityI, rule analz_crypt_in_1, rule analz_crypt_in_2)
lemma analz_crypt_out:
"Key (invK K) \<notin> analz H \<Longrightarrow>
analz (insert (Crypt K X) H) = insert (Crypt K X) (analz H)"
apply (rule equalityI [OF _ analz_insert])
apply (rule subsetI)
apply (erule analz.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (erule analz.Inj)
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule disjE)
apply simp
apply (erule analz.Decrypt)
apply assumption
apply (erule analz.Auth_Fst)
apply assumption
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_mpair_1:
"analz (insert \<lbrace>X, Y\<rbrace> H) \<subseteq> insert \<lbrace>X, Y\<rbrace> (analz ({X, Y} \<union> H))"
apply (rule subsetI)
apply (erule analz.induct)
apply simp_all
apply (erule disjE)
apply simp
apply (rule_tac [1-4] disjI2)
apply (rule analz.Inj)
apply simp
apply (erule disjE)
apply simp
apply (rule analz.Inj)
apply simp
apply (erule analz.Fst)
apply (erule disjE)
apply simp
apply (rule analz.Inj)
apply simp
apply (erule analz.Snd)
apply (erule analz.Decrypt)
apply assumption
apply (erule analz.Auth_Fst)
apply assumption
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_mpair_2:
"insert \<lbrace>X, Y\<rbrace> (analz ({X, Y} \<union> H)) \<subseteq> analz (insert \<lbrace>X, Y\<rbrace> H)"
apply (rule subsetI)
apply simp
apply (erule disjE)
apply (rule analz.Inj)
apply simp
apply (erule analz.induct)
apply simp
apply (erule disjE)
apply simp
apply (rule analz.Fst [of _ Y])
apply (rule analz.Inj)
apply simp
apply (erule disjE)
apply simp
apply (rule analz.Snd [of X])
apply (rule analz.Inj)
apply simp
apply (rule analz.Inj)
apply simp
apply (erule analz.Fst)
apply (erule analz.Snd)
apply (erule analz.Decrypt)
apply assumption
apply (erule analz.Auth_Fst)
apply assumption
apply (erule analz.Auth_Snd)
apply assumption
done
lemma analz_mpair:
"analz (insert \<lbrace>X, Y\<rbrace> H) = insert \<lbrace>X, Y\<rbrace> (analz ({X, Y} \<union> H))"
by (rule equalityI, rule analz_mpair_1, rule analz_mpair_2)
lemma synth_simp_intro:
"X \<in> synth H \<Longrightarrow>
X \<in> range Nonce \<union>
range Key \<union>
range Passwd \<union>
range Pri_AgrK \<union>
range Pub_AgrK \<Longrightarrow>
X \<in> H"
by (erule synth.cases, blast+)
lemma synth_auth_data:
"Auth_Data x y \<in> synth H \<Longrightarrow>
Auth_Data x y \<in> H \<or> Pri_AgrK x \<in> H \<and> Pri_AgrK y \<in> H"
by (erule synth.cases, simp_all)
lemma synth_crypt:
"Crypt K X \<in> synth H \<Longrightarrow> Crypt K X \<in> H \<or> X \<in> synth H \<and> Key K \<in> H"
by (erule synth.cases, simp_all)
lemma synth_mpair:
"\<lbrace>X, Y\<rbrace> \<in> synth H \<Longrightarrow> \<lbrace>X, Y\<rbrace> \<in> H \<or> X \<in> synth H \<and> Y \<in> synth H"
by (erule synth.cases, simp_all)
lemma synth_analz_fst:
"\<lbrace>X, Y\<rbrace> \<in> synth (analz H) \<Longrightarrow> X \<in> synth (analz H)"
proof (drule_tac synth_mpair, erule_tac disjE)
qed (drule analz.Fst, erule synth.Inj, erule conjE)
lemma synth_analz_snd:
"\<lbrace>X, Y\<rbrace> \<in> synth (analz H) \<Longrightarrow> Y \<in> synth (analz H)"
proof (drule_tac synth_mpair, erule_tac disjE)
qed (drule analz.Snd, erule synth.Inj, erule conjE)
end
diff --git a/thys/Probabilistic_Prime_Tests/Algebraic_Auxiliaries.thy b/thys/Probabilistic_Prime_Tests/Algebraic_Auxiliaries.thy
--- a/thys/Probabilistic_Prime_Tests/Algebraic_Auxiliaries.thy
+++ b/thys/Probabilistic_Prime_Tests/Algebraic_Auxiliaries.thy
@@ -1,480 +1,480 @@
(*
File: Algebraic_Auxiliaries.thy
Authors: Daniel Stüwe
Miscellaneous facts about algebra and number theory
*)
section \<open>Auxiliary Material\<close>
theory Algebraic_Auxiliaries
imports
"HOL-Algebra.Algebra"
"HOL-Computational_Algebra.Squarefree"
"HOL-Number_Theory.Number_Theory"
begin
hide_const (open) Divisibility.prime
lemma sum_of_bool_eq_card:
assumes "finite S"
shows "(\<Sum>a \<in> S. of_bool (P a)) = real (card {a \<in> S . P a })"
proof -
have "(\<Sum>a \<in> S. of_bool (P a) :: real) = (\<Sum>a \<in> {x\<in>S. P x}. 1)"
using assms by (intro sum.mono_neutral_cong_right) auto
thus ?thesis by simp
qed
lemma mod_natE:
fixes a n b :: nat
assumes "a mod n = b"
shows "\<exists> l. a = n * l + b"
using assms mod_mult_div_eq[of a n] by (metis add.commute)
lemma (in group) r_coset_is_image: "H #> a = (\<lambda> x. x \<otimes> a) ` H"
unfolding r_coset_def image_def
by blast
lemma (in group) FactGroup_order:
assumes "subgroup H G" "finite H"
shows "order G = order (G Mod H) * card H"
using lagrange assms unfolding FactGroup_def order_def by simp
corollary (in group) FactGroup_order_div:
assumes "subgroup H G" "finite H"
shows "order (G Mod H) = order G div card H"
using assms FactGroup_order subgroupE(2)[OF \<open>subgroup H G\<close>] by (auto simp: order_def)
lemma group_hom_imp_group_hom_image:
assumes "group_hom G G h"
shows "group_hom G (G\<lparr>carrier := h ` carrier G\<rparr>) h"
using group_hom.axioms[OF assms] group_hom.img_is_subgroup[OF assms] group.subgroup_imp_group
by(auto intro!: group_hom.intro simp: group_hom_axioms_def hom_def)
theorem homomorphism_thm:
assumes "group_hom G G h"
shows "G Mod kernel G (G\<lparr>carrier := h ` carrier G\<rparr>) h \<cong> G \<lparr>carrier := h ` carrier G\<rparr>"
by (intro group_hom.FactGroup_iso group_hom_imp_group_hom_image assms) simp
lemma is_iso_imp_same_card:
assumes "H \<cong> G "
shows "order H = order G"
proof -
from assms obtain h where "bij_betw h (carrier H) (carrier G)"
unfolding is_iso_def iso_def
by blast
then show ?thesis
unfolding order_def
by (rule bij_betw_same_card)
qed
corollary homomorphism_thm_order:
assumes "group_hom G G h"
shows "order (G\<lparr>carrier := h ` carrier G\<rparr>) * card (kernel G (G\<lparr>carrier := h ` carrier G\<rparr>) h) = order G "
proof -
have "order (G\<lparr>carrier := h ` carrier G\<rparr>) = order (G Mod (kernel G (G\<lparr>carrier := h ` carrier G\<rparr>) h))"
using is_iso_imp_same_card[OF homomorphism_thm] \<open>group_hom G G h\<close>
by fastforce
moreover have "group G" using \<open>group_hom G G h\<close> group_hom.axioms by blast
ultimately show ?thesis
using \<open>group_hom G G h\<close> and group_hom_imp_group_hom_image[OF \<open>group_hom G G h\<close>]
unfolding FactGroup_def
by (simp add: group.lagrange group_hom.subgroup_kernel order_def)
qed
lemma (in group_hom) kernel_subset: "kernel G H h \<subseteq> carrier G"
using subgroup_kernel G.subgroupE(1) by blast
lemma (in group) proper_subgroup_imp_bound_on_card:
assumes "H \<subset> carrier G" "subgroup H G" "finite (carrier G)"
shows "card H \<le> order G div 2"
proof -
from \<open>finite (carrier G)\<close> have "finite (rcosets H)"
by (simp add: RCOSETS_def)
note subgroup.subgroup_in_rcosets[OF \<open>subgroup H G\<close> is_group]
then obtain J where "J \<noteq> H" "J \<in> rcosets H"
using rcosets_part_G[OF \<open>subgroup H G\<close>] and \<open>H \<subset> carrier G\<close>
by (metis Sup_le_iff inf.absorb_iff2 inf.idem inf.strict_order_iff)
then have "2 \<le> card (rcosets H)"
using \<open>H \<in> rcosets H\<close> card_mono[OF \<open>finite (rcosets H)\<close>, of "{H, J}"]
by simp
then show ?thesis
using mult_le_mono[of 2 "card (rcosets H)" "card H" "card H"]
unfolding lagrange[OF \<open>subgroup H G\<close>]
by force
qed
lemma cong_exp_trans[trans]:
"[a ^ b = c] (mod n) \<Longrightarrow> [a = d] (mod n) \<Longrightarrow> [d ^ b = c] (mod n)"
"[c = a ^ b] (mod n) \<Longrightarrow> [a = d] (mod n) \<Longrightarrow> [c = d ^ b] (mod n)"
using cong_pow cong_sym cong_trans by blast+
lemma cong_exp_mod[simp]:
"[(a mod n) ^ b = c] (mod n) \<longleftrightarrow> [a ^ b = c] (mod n)"
"[c = (a mod n) ^ b] (mod n) \<longleftrightarrow> [c = a ^ b] (mod n)"
by (auto simp add: cong_def mod_simps)
lemma cong_mult_mod[simp]:
"[(a mod n) * b = c] (mod n) \<longleftrightarrow> [a * b = c] (mod n)"
"[a * (b mod n) = c] (mod n) \<longleftrightarrow> [a * b = c] (mod n)"
by (auto simp add: cong_def mod_simps)
lemma cong_add_mod[simp]:
"[(a mod n) + b = c] (mod n) \<longleftrightarrow> [a + b = c] (mod n)"
"[a + (b mod n) = c] (mod n) \<longleftrightarrow> [a + b = c] (mod n)"
"[\<Sum>i\<in>A. f i mod n = c] (mod n) \<longleftrightarrow> [\<Sum>i\<in>A. f i = c] (mod n)"
by (auto simp add: cong_def mod_simps)
lemma cong_add_trans[trans]:
"[a = b + x] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [a = b + y] (mod n)"
"[a = x + b] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [a = y + b] (mod n)"
"[b + x = a] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [b + y = a] (mod n)"
"[x + b = a] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [y + b = a] (mod n)"
unfolding cong_def
using mod_simps(1, 2)
by metis+
lemma cong_mult_trans[trans]:
"[a = b * x] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [a = b * y] (mod n)"
"[a = x * b] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [a = y * b] (mod n)"
"[b * x = a] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [b * y = a] (mod n)"
"[x * b = a] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [y * b = a] (mod n)"
unfolding cong_def
using mod_simps(4, 5)
by metis+
lemma cong_diff_trans[trans]:
"[a = b - x] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [a = b - y] (mod n)"
"[a = x - b] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [a = y - b] (mod n)"
"[b - x = a] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [b - y = a] (mod n)"
"[x - b = a] (mod n) \<Longrightarrow> [x = y] (mod n) \<Longrightarrow> [y - b = a] (mod n)"
for a :: "'a :: {unique_euclidean_semiring, euclidean_ring_cancel}"
unfolding cong_def
by (metis mod_diff_eq)+
lemma eq_imp_eq_mod_int: "a = b \<Longrightarrow> [a = b] (mod m)" for a b :: int by simp
lemma eq_imp_eq_mod_nat: "a = b \<Longrightarrow> [a = b] (mod m)" for a b :: nat by simp
lemma cong_pow_I: "a = b \<Longrightarrow> [x^a = x^b](mod n)" by simp
lemma gre1I: "(n = 0 \<Longrightarrow> False) \<Longrightarrow> (1 :: nat) \<le> n"
by presburger
lemma gre1I_nat: "(n = 0 \<Longrightarrow> False) \<Longrightarrow> (Suc 0 :: nat) \<le> n"
by presburger
lemma totient_less_not_prime:
assumes "\<not> prime n" "1 < n"
shows "totient n < n - 1"
using totient_imp_prime totient_less assms
by (metis One_nat_def Suc_pred le_less_trans less_SucE zero_le_one)
lemma power2_diff_nat: "x \<ge> y \<Longrightarrow> (x - y)\<^sup>2 = x\<^sup>2 + y\<^sup>2 - 2 * x * y" for x y :: nat
by (simp add: algebra_simps power2_eq_square mult_2_right)
(meson Nat.diff_diff_right le_add2 le_trans mult_le_mono order_refl)
lemma square_inequality: "1 < n \<Longrightarrow> (n + n) \<le> (n * n)" for n :: nat
by (metis Suc_eq_plus1_left Suc_leI mult_le_mono1 semiring_normalization_rules(4))
lemma square_one_cong_one:
assumes "[x = 1](mod n)"
shows "[x^2 = 1](mod n)"
using assms cong_pow by fastforce
lemma cong_square_alt_int:
"prime p \<Longrightarrow> [a * a = 1] (mod p) \<Longrightarrow> [a = 1] (mod p) \<or> [a = p - 1] (mod p)"
for a p :: "'a :: {normalization_semidom, linordered_idom, unique_euclidean_ring}"
using dvd_add_triv_right_iff[of p "a - (p - 1)"]
by (auto simp add: cong_iff_dvd_diff square_diff_one_factored dest!: prime_dvd_multD)
lemma cong_square_alt:
"prime p \<Longrightarrow> [a * a = 1] (mod p) \<Longrightarrow> [a = 1] (mod p) \<or> [a = p - 1] (mod p)"
for a p :: nat
- using cong_square_alt_int and cong_int_iff prime_nat_int_transfer
- by (metis (mono_tags) int_ops(2) int_ops(7) less_imp_le_nat of_nat_diff prime_gt_1_nat)
+ using cong_square_alt_int[of "int p" "int a"] prime_nat_int_transfer[of p] prime_gt_1_nat[of p]
+ by (simp flip: cong_int_iff add: of_nat_diff)
lemma square_minus_one_cong_one:
fixes n x :: nat
assumes "1 < n" "[x = n - 1](mod n)"
shows "[x^2 = 1](mod n)"
proof -
have "[x^2 = (n - 1) * (n - 1)] (mod n)"
using cong_mult[OF assms(2) assms(2)]
by (simp add: algebra_simps power2_eq_square)
also have "[(n - 1) * (n - 1) = Suc (n * n) - (n + n)] (mod n)"
using power2_diff_nat[of 1 n] \<open>1 < n\<close>
by (simp add: algebra_simps power2_eq_square)
also have "[Suc (n * n) - (n + n) = Suc (n * n)] (mod n)"
proof -
have "n * n + 0 * n = n * n" by linarith
moreover have "n * n - (n + n) + (n + n) = n * n"
using square_inequality[OF \<open>1 < n\<close>] le_add_diff_inverse2 by blast
moreover have "(Suc 0 + 1) * n = n + n"
by simp
ultimately show ?thesis
using square_inequality[OF \<open>1 < n\<close>]
by (metis (no_types) Suc_diff_le add_Suc cong_iff_lin_nat)
qed
also have "[Suc (n * n) = 1] (mod n)"
using cong_to_1'_nat by auto
finally show ?thesis .
qed
lemma odd_prime_gt_2_int:
"2 < p" if "odd p" "prime p" for p :: int
using prime_ge_2_int[OF \<open>prime p\<close>] \<open>odd p\<close>
by (cases "p = 2") auto
lemma odd_prime_gt_2_nat:
"2 < p" if "odd p" "prime p" for p :: nat
using prime_ge_2_nat[OF \<open>prime p\<close>] \<open>odd p\<close>
by (cases "p = 2") auto
lemma gt_one_imp_gt_one_power_if_coprime:
"1 \<le> x \<Longrightarrow> 1 < n \<Longrightarrow> coprime x n \<Longrightarrow> 1 \<le> x ^ (n - 1) mod n"
by (rule gre1I) (auto simp: coprime_commute dest: coprime_absorb_left)
lemma residue_one_dvd: "a mod n = 1 \<Longrightarrow> n dvd a - 1" for a n :: nat
by (fastforce intro!: cong_to_1_nat simp: cong_def)
lemma coprimeI_power_mod:
fixes x r n :: nat
assumes "x ^ r mod n = 1" "r \<noteq> 0" "n \<noteq> 0"
shows "coprime x n"
proof -
have "coprime (x ^ r mod n) n"
using coprime_1_right \<open>x ^ r mod n = 1\<close>
by (simp add: coprime_commute)
thus ?thesis using \<open>r \<noteq> 0\<close> \<open>n \<noteq> 0\<close> by simp
qed
(* MOVE - EXTRA *)
lemma prime_dvd_choose:
assumes "0 < k" "k < p" "prime p"
shows "p dvd (p choose k)"
proof -
have "k \<le> p" using \<open>k < p\<close> by auto
have "p dvd fact p" using \<open>prime p\<close> by (simp add: prime_dvd_fact_iff)
moreover have "\<not> p dvd fact k * fact (p - k)"
unfolding prime_dvd_mult_iff[OF \<open>prime p\<close>] prime_dvd_fact_iff[OF \<open>prime p\<close>]
using assms by simp
ultimately show ?thesis
unfolding binomial_fact_lemma[OF \<open>k \<le> p\<close>, symmetric]
using assms prime_dvd_multD by blast
qed
lemma cong_eq_0_I: "(\<forall>i\<in>A. [f i mod n = 0] (mod n)) \<Longrightarrow> [\<Sum>i\<in>A. f i = 0] (mod n)"
using cong_sum by fastforce
lemma power_mult_cong:
assumes "[x^n = a](mod m)" "[y^n = b](mod m)"
shows "[(x*y)^n = a*b](mod m)"
using assms cong_mult[of "x^n" a m "y^n" b] power_mult_distrib
by metis
lemma
fixes n :: nat
assumes "n > 1"
shows odd_pow_cong: "odd m \<Longrightarrow> [(n - 1) ^ m = n - 1] (mod n)"
and even_pow_cong: "even m \<Longrightarrow> [(n - 1) ^ m = 1] (mod n)"
proof (induction m)
case (Suc m)
case 1
with Suc have IH: "[(n - 1) ^ m = 1] (mod n)" by auto
show ?case using \<open>1 < n\<close> cong_mult[OF cong_refl IH] by simp
next
case (Suc m)
case 2
with Suc have IH: "[(n - 1) ^ m = n - 1] (mod n)" by auto
show ?case
using cong_mult[OF cong_refl IH, of "(n - 1)"] and square_minus_one_cong_one[OF \<open>1 < n\<close>, of "n - 1"]
by (auto simp: power2_eq_square intro: cong_trans)
qed simp_all
lemma cong_mult_uneq':
fixes a :: "'a::{unique_euclidean_ring, ring_gcd}"
assumes "coprime d a"
shows "[b \<noteq> c] (mod a) \<Longrightarrow> [d = e] (mod a) \<Longrightarrow> [b * d \<noteq> c * e] (mod a)"
using cong_mult_rcancel[OF assms]
using cong_trans[of "b*d" "c*e" a "c*d"]
using cong_scalar_left cong_sym by blast
lemma p_coprime_right_nat: "prime p \<Longrightarrow> coprime a p = (\<not> p dvd a)" for p a :: nat
by (meson coprime_absorb_left coprime_commute not_prime_unit prime_imp_coprime_nat)
lemma squarefree_mult_imp_coprime:
assumes "squarefree (a * b :: 'a :: semiring_gcd)"
shows "coprime a b"
proof (rule coprimeI)
fix l assume "l dvd a" "l dvd b"
then obtain a' b' where "a = l * a'" "b = l * b'"
by (auto elim!: dvdE)
with assms have "squarefree (l\<^sup>2 * (a' * b'))"
by (simp add: power2_eq_square mult_ac)
thus "l dvd 1" by (rule squarefreeD) auto
qed
lemma prime_divisor_exists_strong:
fixes m :: int
assumes "m > 1" "\<not>prime m"
shows "\<exists>n k. m = n * k \<and> 1 < n \<and> n < m \<and> 1 < k \<and> k < m"
proof -
from assms obtain n k where nk: "n * k > 1" "n \<ge> 0" "m = n * k" "n \<noteq> 1" "n \<noteq> 0" "k \<noteq> 1"
using assms unfolding prime_int_iff dvd_def by auto
from nk have "n > 1" by linarith
from nk assms have "n * k > 0" by simp
with \<open>n \<ge> 0\<close> have "k > 0"
using zero_less_mult_pos by force
with \<open>k \<noteq> 1\<close> have "k > 1" by linarith
from nk have "n > 1" by linarith
from \<open>k > 1\<close> nk have "n < m" "k < m" by simp_all
with nk \<open>k > 1\<close> \<open>n > 1\<close> show ?thesis by blast
qed
lemma prime_divisor_exists_strong_nat:
fixes m :: nat
assumes "1 < m" "\<not>prime m"
shows "\<exists>p k. m = p * k \<and> 1 < p \<and> p < m \<and> 1 < k \<and> k < m \<and> prime p"
proof -
obtain p where p_def: "prime p" "p dvd m" "p \<noteq> m" "1 < p"
using assms prime_prime_factor and prime_gt_1_nat
by blast
moreover define k where "k = m div p"
with \<open>p dvd m\<close> have "m = p * k" by simp
moreover have "p < m"
using \<open>p \<noteq> m\<close> dvd_imp_le[OF \<open>p dvd m\<close>] and \<open>m > 1\<close>
by simp
moreover have "1 < k" "k < m"
using \<open>1 < m\<close> \<open>1 < p\<close> and \<open>p \<noteq> m\<close>
unfolding \<open>m = p * k\<close>
by (force intro: Suc_lessI Nat.gr0I)+
ultimately show ?thesis using \<open>1 < m\<close> by blast
qed
(* TODO Remove *)
lemma prime_factorization_eqI:
assumes "\<And>p. p \<in># P \<Longrightarrow> prime p" "prod_mset P = n"
shows "prime_factorization n = P"
using prime_factorization_prod_mset_primes[of P] assms by simp
lemma prime_factorization_prime_elem:
assumes "prime_elem p"
shows "prime_factorization p = {#normalize p#}"
proof -
have "prime_factorization p = prime_factorization (normalize p)"
by (metis normalize_idem prime_factorization_cong)
also have "\<dots> = {#normalize p#}"
by (rule prime_factorization_prime) (use assms in auto)
finally show ?thesis .
qed
lemma size_prime_factorization_eq_Suc_0_iff [simp]:
fixes n :: "'a :: factorial_semiring_multiplicative"
shows "size (prime_factorization n) = Suc 0 \<longleftrightarrow> prime_elem n"
proof
assume size: "size (prime_factorization n) = Suc 0"
hence [simp]: "n \<noteq> 0" by auto
from size obtain p where *: "prime_factorization n = {#p#}"
by (auto elim!: size_mset_SucE)
hence p: "p \<in> prime_factors n" by auto
have "prime_elem (normalize p)"
using p by (auto simp: in_prime_factors_iff)
also have "p = prod_mset (prime_factorization n)"
using * by simp
also have "normalize \<dots> = normalize n"
by (rule prod_mset_prime_factorization_weak) auto
finally show "prime_elem n" by simp
qed (auto simp: prime_factorization_prime_elem)
(* END TODO *)
(* TODO Move *)
lemma squarefree_prime_elem [simp, intro]:
fixes p :: "'a :: algebraic_semidom"
assumes "prime_elem p"
shows "squarefree p"
proof (rule squarefreeI)
fix x assume "x\<^sup>2 dvd p"
show "is_unit x"
proof (rule ccontr)
assume "\<not>is_unit x"
hence "\<not>is_unit (x\<^sup>2)"
by (simp add: is_unit_power_iff)
from assms and this and \<open>x\<^sup>2 dvd p\<close> have "prime_elem (x\<^sup>2)"
by (rule prime_elem_mono)
thus False by (simp add: prime_elem_power_iff)
qed
qed
lemma squarefree_prime [simp, intro]: "prime p \<Longrightarrow> squarefree p"
by auto
lemma not_squarefree_primepow:
assumes "primepow n"
shows "squarefree n \<longleftrightarrow> prime n"
using assms by (auto simp: primepow_def squarefree_power_iff prime_power_iff)
lemma prime_factorization_normalize [simp]:
"prime_factorization (normalize n) = prime_factorization n"
by (rule prime_factorization_cong) auto
lemma one_prime_factor_iff_primepow:
fixes n :: "'a :: factorial_semiring_multiplicative"
shows "card (prime_factors n) = Suc 0 \<longleftrightarrow> primepow (normalize n)"
proof
assume "primepow (normalize n)"
then obtain p k where pk: "prime p" "normalize n = p ^ k" "k > 0"
by (auto simp: primepow_def)
hence "card (prime_factors (normalize n)) = Suc 0"
by (subst pk) (simp add: prime_factors_power prime_factorization_prime)
thus "card (prime_factors n) = Suc 0"
by simp
next
assume *: "card (prime_factors n) = Suc 0"
from * have "(\<Prod>p\<in>prime_factors n. p ^ multiplicity p n) = normalize n"
by (intro prod_prime_factors) auto
also from * have "card (prime_factors n) = 1" by simp
then obtain p where p: "prime_factors n = {p}"
by (elim card_1_singletonE)
finally have "normalize n = p ^ multiplicity p n"
by simp
moreover from p have "prime p" "multiplicity p n > 0"
by (auto simp: prime_factors_multiplicity)
ultimately show "primepow (normalize n)"
unfolding primepow_def by blast
qed
lemma squarefree_imp_prod_prime_factors_eq:
fixes x :: "'a :: factorial_semiring_multiplicative"
assumes "squarefree x"
shows "\<Prod>(prime_factors x) = normalize x"
proof -
from assms have [simp]: "x \<noteq> 0" by auto
have "(\<Prod>p\<in>prime_factors x. p ^ multiplicity p x) = normalize x"
by (intro prod_prime_factors) auto
also have "(\<Prod>p\<in>prime_factors x. p ^ multiplicity p x) = (\<Prod>p\<in>prime_factors x. p)"
using assms by (intro prod.cong refl) (auto simp: squarefree_factorial_semiring')
finally show ?thesis by simp
qed
(* END TODO *)
end
\ No newline at end of file
diff --git a/thys/ROOTS b/thys/ROOTS
--- a/thys/ROOTS
+++ b/thys/ROOTS
@@ -1,528 +1,531 @@
+ADS_Functor
AODV
+Attack_Trees
Auto2_HOL
Auto2_Imperative_HOL
AVL-Trees
AWN
Abortable_Linearizable_Modules
Abs_Int_ITP2012
Abstract-Hoare-Logics
Abstract-Rewriting
Abstract_Completeness
Abstract_Soundness
Adaptive_State_Counting
Affine_Arithmetic
Aggregation_Algebras
Akra_Bazzi
Algebraic_Numbers
Algebraic_VCs
Allen_Calculus
Amortized_Complexity
AnselmGod
Applicative_Lifting
Approximation_Algorithms
Architectural_Design_Patterns
Aristotles_Assertoric_Syllogistic
Arith_Prog_Rel_Primes
ArrowImpossibilityGS
AutoFocus-Stream
Automatic_Refinement
AxiomaticCategoryTheory
BDD
BNF_Operations
Bell_Numbers_Spivey
Berlekamp_Zassenhaus
Bernoulli
Bertrands_Postulate
Bicategory
BinarySearchTree
Binding_Syntax_Theory
Binomial-Heaps
Binomial-Queues
BNF_CC
Bondy
Boolean_Expression_Checkers
Bounded_Deducibility_Security
Buchi_Complementation
Budan_Fourier
Buffons_Needle
Buildings
BytecodeLogicJmlTypes
C2KA_DistributedSystems
CAVA_Automata
CAVA_LTL_Modelchecker
CCS
CISC-Kernel
CRDT
CYK
CakeML
CakeML_Codegen
Call_Arity
Card_Equiv_Relations
Card_Multisets
Card_Number_Partitions
Card_Partitions
Cartan_FP
Case_Labeling
Catalan_Numbers
Category
Category2
Category3
Cauchy
Cayley_Hamilton
Certification_Monads
Chord_Segments
Circus
Clean
ClockSynchInst
Closest_Pair_Points
CofGroups
Coinductive
Coinductive_Languages
Collections
Comparison_Sort_Lower_Bound
Compiling-Exceptions-Correctly
Completeness
Complete_Non_Orders
Complex_Geometry
Complx
ComponentDependencies
ConcurrentGC
ConcurrentIMP
Concurrent_Ref_Alg
Concurrent_Revisions
Consensus_Refined
Constructive_Cryptography
Constructor_Funs
Containers
CoreC++
Core_DOM
Count_Complex_Roots
CryptHOL
CryptoBasedCompositionalProperties
DFS_Framework
DPT-SAT-Solver
DataRefinementIBP
Datatype_Order_Generator
Decl_Sem_Fun_PL
Decreasing-Diagrams
Decreasing-Diagrams-II
Deep_Learning
Density_Compiler
Dependent_SIFUM_Refinement
Dependent_SIFUM_Type_Systems
Depth-First-Search
Derangements
Deriving
Descartes_Sign_Rule
Dict_Construction
Differential_Dynamic_Logic
Differential_Game_Logic
Dijkstra_Shortest_Path
Diophantine_Eqns_Lin_Hom
Dirichlet_L
Dirichlet_Series
Discrete_Summation
DiscretePricing
DiskPaxos
DynamicArchitectures
Dynamic_Tables
E_Transcendental
Echelon_Form
EdmondsKarp_Maxflow
Efficient-Mergesort
Elliptic_Curves_Group_Law
Encodability_Process_Calculi
Epistemic_Logic
Ergodic_Theory
Error_Function
Euler_MacLaurin
Euler_Partition
Example-Submission
Factored_Transition_System_Bounding
Farkas
FFT
FLP
FOL-Fitting
FOL_Harrison
FOL_Seq_Calc1
Falling_Factorial_Sum
FeatherweightJava
Featherweight_OCL
Fermat3_4
FileRefinement
FinFun
Finger-Trees
Finite_Automata_HF
First_Order_Terms
First_Welfare_Theorem
Fishburn_Impossibility
Fisher_Yates
Flow_Networks
Floyd_Warshall
Flyspeck-Tame
FocusStreamsCaseStudies
Formal_SSA
Formula_Derivatives
Fourier
Free-Boolean-Algebra
Free-Groups
FunWithFunctions
FunWithTilings
Functional-Automata
Functional_Ordered_Resolution_Prover
Furstenberg_Topology
GPU_Kernel_PL
Gabow_SCC
Game_Based_Crypto
Gauss-Jordan-Elim-Fun
Gauss_Jordan
Gauss_Sums
GenClock
General-Triangle
Generalized_Counting_Sort
Generic_Deriving
Generic_Join
GewirthPGCProof
Girth_Chromatic
GoedelGod
Goodstein_Lambda
GraphMarkingIBP
Graph_Saturation
Graph_Theory
Green
Groebner_Bases
Groebner_Macaulay
Gromov_Hyperbolicity
Group-Ring-Module
HOL-CSP
HOLCF-Prelude
HRB-Slicing
Heard_Of
Hello_World
HereditarilyFinite
Hermite
Hidden_Markov_Models
Higher_Order_Terms
Hoare_Time
HotelKeyCards
Huffman
Hybrid_Logic
Hybrid_Multi_Lane_Spatial_Logic
Hybrid_Systems_VCs
HyperCTL
IEEE_Floating_Point
IMAP-CRDT
IMO2019
IMP2
IMP2_Binary_Heap
IP_Addresses
Imperative_Insertion_Sort
Impossible_Geometry
Incompleteness
Incredible_Proof_Machine
Inductive_Confidentiality
InfPathElimination
InformationFlowSlicing
InformationFlowSlicing_Inter
Integration
Interval_Arithmetic_Word32
Iptables_Semantics
Irrationality_J_Hancl
Isabelle_C
Isabelle_Meta_Model
Jacobson_Basic_Algebra
Jinja
JinjaThreads
JiveDataStoreModel
Jordan_Hoelder
Jordan_Normal_Form
KAD
KAT_and_DRA
KBPs
KD_Tree
Key_Agreement_Strong_Adversaries
Kleene_Algebra
Knot_Theory
Knuth_Morris_Pratt
Koenigsberg_Friendship
Kruskal
Kuratowski_Closure_Complement
LLL_Basis_Reduction
LLL_Factorization
LOFT
LTL
LTL_to_DRA
LTL_to_GBA
LTL_Master_Theorem
Lam-ml-Normalization
LambdaAuth
LambdaMu
Lambda_Free_KBOs
Lambda_Free_RPOs
Landau_Symbols
Laplace_Transform
Latin_Square
LatticeProperties
Lambda_Free_EPO
Launchbury
Lazy-Lists-II
Lazy_Case
Lehmer
Lifting_Definition_Option
LightweightJava
LinearQuantifierElim
Linear_Inequalities
Linear_Programming
Linear_Recurrences
Liouville_Numbers
List-Index
List-Infinite
List_Interleaving
List_Inversions
List_Update
LocalLexing
Localization_Ring
Locally-Nameless-Sigma
Lowe_Ontological_Argument
Lower_Semicontinuous
Lp
+Lucas_Theorem
MFMC_Countable
MSO_Regex_Equivalence
Markov_Models
Marriage
Mason_Stothers
Matrix
Matrix_Tensor
Matroids
Max-Card-Matching
Median_Of_Medians_Selection
Menger
Mersenne_Primes
MFODL_Monitor_Optimized
MFOTL_Monitor
MiniML
Minimal_SSA
Minkowskis_Theorem
Minsky_Machines
Modal_Logics_for_NTS
Modular_Assembly_Kit_Security
Monad_Memo_DP
Monad_Normalisation
MonoBoolTranAlgebra
MonoidalCategory
Monomorphic_Monad
MuchAdoAboutTwo
Multirelations
Multi_Party_Computation
Myhill-Nerode
Name_Carrying_Type_Inference
Nat-Interval-Logic
Native_Word
Nested_Multisets_Ordinals
Network_Security_Policy_Verification
Neumann_Morgenstern_Utility
No_FTL_observers
Nominal2
Noninterference_CSP
Noninterference_Concurrent_Composition
Noninterference_Generic_Unwinding
Noninterference_Inductive_Unwinding
Noninterference_Ipurge_Unwinding
Noninterference_Sequential_Composition
NormByEval
Nullstellensatz
Octonions
Open_Induction
OpSets
Optics
Optimal_BST
Orbit_Stabiliser
Order_Lattice_Props
Ordered_Resolution_Prover
Ordinal
Ordinals_and_Cardinals
Ordinary_Differential_Equations
PCF
PLM
Pell
POPLmark-deBruijn
PSemigroupsConvolution
Pairing_Heap
Paraconsistency
Parity_Game
Partial_Function_MR
Partial_Order_Reduction
Password_Authentication_Protocol
Perfect-Number-Thm
Perron_Frobenius
Pi_Calculus
Pi_Transcendental
Planarity_Certificates
Polynomial_Factorization
Polynomial_Interpolation
Polynomials
Poincare_Bendixson
Poincare_Disc
Pop_Refinement
Posix-Lexing
Possibilistic_Noninterference
Pratt_Certificate
Presburger-Automata
Prim_Dijkstra_Simple
Prime_Distribution_Elementary
Prime_Harmonic_Series
Prime_Number_Theorem
Priority_Queue_Braun
Priority_Search_Trees
Probabilistic_Noninterference
Probabilistic_Prime_Tests
Probabilistic_System_Zoo
Probabilistic_Timed_Automata
Probabilistic_While
Projective_Geometry
Program-Conflict-Analysis
Promela
Proof_Strategy_Language
PropResPI
Propositional_Proof_Systems
Prpu_Maxflow
PseudoHoops
Psi_Calculi
Ptolemys_Theorem
QHLProver
QR_Decomposition
Quantales
Quaternions
Quick_Sort_Cost
RIPEMD-160-SPARK
ROBDD
RSAPSS
Ramsey-Infinite
Random_BSTs
Randomised_BSTs
Random_Graph_Subgraph_Threshold
Randomised_Social_Choice
Rank_Nullity_Theorem
Real_Impl
Recursion-Theory-I
Refine_Imperative_HOL
Refine_Monadic
RefinementReactive
Regex_Equivalence
Regular-Sets
Regular_Algebras
Relation_Algebra
Relational-Incorrectness-Logic
Rep_Fin_Groups
Residuated_Lattices
Resolution_FOL
Rewriting_Z
Ribbon_Proofs
Robbins-Conjecture
Root_Balanced_Tree
Routing
Roy_Floyd_Warshall
SATSolverVerification
SDS_Impossibility
SIFPL
SIFUM_Type_Systems
SPARCv8
Safe_OCL
Saturation_Framework
Secondary_Sylow
Security_Protocol_Refinement
Selection_Heap_Sort
SenSocialChoice
Separata
Separation_Algebra
Separation_Logic_Imperative_HOL
SequentInvertibility
Shivers-CFA
ShortestPath
Show
Sigma_Commit_Crypto
Signature_Groebner
Simpl
Simple_Firewall
Simplex
Skew_Heap
Skip_Lists
Slicing
Sliding_Window_Algorithm
Smooth_Manifolds
Sort_Encodings
Source_Coding_Theorem
Special_Function_Bounds
Splay_Tree
Sqrt_Babylonian
Stable_Matching
Statecharts
Stellar_Quorums
Stern_Brocot
Stewart_Apollonius
Stirling_Formula
Stochastic_Matrices
Stone_Algebras
Stone_Kleene_Relation_Algebras
Stone_Relation_Algebras
Store_Buffer_Reduction
Stream-Fusion
Stream_Fusion_Code
Strong_Security
Sturm_Sequences
Sturm_Tarski
Stuttering_Equivalence
Subresultants
Subset_Boolean_Algebras
SumSquares
SuperCalc
Surprise_Paradox
Symmetric_Polynomials
Szpilrajn
TESL_Language
TLA
Tail_Recursive_Functions
Tarskis_Geometry
Taylor_Models
Timed_Automata
Topology
TortoiseHare
Transcendence_Series_Hancl_Rucki
Transformer_Semantics
Transition_Systems_and_Automata
Transitive-Closure
Transitive-Closure-II
Treaps
Tree-Automata
Tree_Decomposition
Triangle
Trie
Twelvefold_Way
Tycon
Types_Tableaus_and_Goedels_God
Universal_Turing_Machine
UPF
UPF_Firewall
UpDown_Scheme
UTP
Valuation
VectorSpace
VeriComp
Verified-Prover
VerifyThis2018
VerifyThis2019
Vickrey_Clarke_Groves
VolpanoSmith
WHATandWHERE_Security
WebAssembly
Weight_Balanced_Trees
Well_Quasi_Orders
Winding_Number_Eval
WOOT_Strong_Eventual_Consistency
Word_Lib
WorkerWrapper
XML
Zeta_Function
Zeta_3_Irrational
ZFC_in_HOL
pGCL
diff --git a/thys/SPARCv8/SparcModel_MMU/MMU.thy b/thys/SPARCv8/SparcModel_MMU/MMU.thy
--- a/thys/SPARCv8/SparcModel_MMU/MMU.thy
+++ b/thys/SPARCv8/SparcModel_MMU/MMU.thy
@@ -1,312 +1,312 @@
(* Title: MMU.thy
Author: David Sanán, Trinity College Dublin, 2012
Zhe Hou, NTU, 2016.
*)
section \<open>Memory Management Unit (MMU)\<close>
theory MMU
imports Main RegistersOps Sparc_Types
begin
section \<open>MMU Sizing\<close>
text\<open>
We need some citation here for documentation about the MMU.
\<close>
text\<open>The MMU uses the Address Space Identifiers (ASI) to control memory access.
ASI = 8, 10 are for user; ASI = 9, 11 are for supervisor.\<close>
subsection "MMU Types"
type_synonym word_PTE_flags = word8
type_synonym word_length_PTE_flags = word_length8
subsection "MMU length values"
text\<open>Definitions for the lengths of the virtual address, page size,
virtual translation table indexes, virtual address offset and page protection flags.\<close>
definition length_entry_type :: "nat"
where "length_entry_type \<equiv> LENGTH(word_length_entry_type)"
definition length_phys_address:: "nat"
where "length_phys_address \<equiv> LENGTH(word_length_phys_address)"
definition length_virtua_address:: "nat"
where "length_virtua_address \<equiv> LENGTH(word_length_virtua_address)"
definition length_page:: "nat" where "length_page \<equiv> LENGTH(word_length_page)"
definition length_t1:: "nat" where "length_t1 \<equiv> LENGTH(word_length_t1)"
definition length_t2:: "nat" where "length_t2 \<equiv> LENGTH(word_length_t2)"
definition length_t3:: "nat" where "length_t3 \<equiv> LENGTH(word_length_t3)"
definition length_offset:: "nat" where "length_offset \<equiv> LENGTH(word_length_offset)"
definition length_PTE_flags :: "nat" where
"length_PTE_flags \<equiv> LENGTH(word_length_PTE_flags)"
subsection "MMU index values"
definition va_t1_index :: "nat" where "va_t1_index \<equiv> length_virtua_address - length_t1"
definition va_t2_index :: "nat" where "va_t2_index \<equiv> va_t1_index - length_t2"
definition va_t3_index :: "nat" where "va_t3_index \<equiv> va_t2_index - length_t3"
definition va_offset_index :: "nat" where "va_offset_index \<equiv> va_t3_index - length_offset"
definition pa_page_index :: "nat"
where "pa_page_index \<equiv> length_phys_address - length_page"
definition pa_offset_index :: "nat" where
"pa_offset_index \<equiv> pa_page_index -length_page"
section \<open>MMU Definition\<close>
record MMU_state =
registers :: "MMU_context"
(* contexts:: context_table*)
text \<open>The following functions access MMU registers via addresses.
See UT699LEON3FT manual page 35.\<close>
definition mmu_reg_val:: "MMU_state \<Rightarrow> virtua_address \<Rightarrow> machine_word option"
where "mmu_reg_val mmu_state addr \<equiv>
if addr = 0x000 then \<comment> \<open>MMU control register\<close>
Some ((registers mmu_state) CR)
else if addr = 0x100 then \<comment> \<open>Context pointer register\<close>
Some ((registers mmu_state) CTP)
else if addr = 0x200 then \<comment> \<open>Context register\<close>
Some ((registers mmu_state) CNR)
else if addr = 0x300 then \<comment> \<open>Fault status register\<close>
Some ((registers mmu_state) FTSR)
else if addr = 0x400 then \<comment> \<open>Fault address register\<close>
Some ((registers mmu_state) FAR)
else None"
definition mmu_reg_mod:: "MMU_state \<Rightarrow> virtua_address \<Rightarrow> machine_word \<Rightarrow>
MMU_state option" where
"mmu_reg_mod mmu_state addr w \<equiv>
if addr = 0x000 then \<comment> \<open>MMU control register\<close>
Some (mmu_state\<lparr>registers := (registers mmu_state)(CR := w)\<rparr>)
else if addr = 0x100 then \<comment> \<open>Context pointer register\<close>
Some (mmu_state\<lparr>registers := (registers mmu_state)(CTP := w)\<rparr>)
else if addr = 0x200 then \<comment> \<open>Context register\<close>
Some (mmu_state\<lparr>registers := (registers mmu_state)(CNR := w)\<rparr>)
else if addr = 0x300 then \<comment> \<open>Fault status register\<close>
Some (mmu_state\<lparr>registers := (registers mmu_state)(FTSR := w)\<rparr>)
else if addr = 0x400 then \<comment> \<open>Fault address register\<close>
Some (mmu_state\<lparr>registers := (registers mmu_state)(FAR := w)\<rparr>)
else None"
section \<open>Virtual Memory\<close>
subsection \<open>MMU Auxiliary Definitions\<close>
definition getCTPVal:: "MMU_state \<Rightarrow> machine_word"
where "getCTPVal mmu \<equiv> (registers mmu) CTP"
definition getCNRVal::"MMU_state \<Rightarrow> machine_word"
where "getCNRVal mmu \<equiv> (registers mmu) CNR"
text\<open>
The physical context table address is obtained from the Context Pointer register (CTP) and the
Context Register (CNR) MMU registers.
The CTP is shifted to align it with
the physical address (36 bits) and the table index given by CNR is added.
CTP is right shifted by 2 bits, cast to a physical address and left shifted by 6 bits
to be aligned with the context register.
CNR is left shifted by 2 bits for alignment with the context table.
\<close>
definition compose_context_table_addr :: "machine_word \<Rightarrow>machine_word
\<Rightarrow> phys_address"
where
"compose_context_table_addr ctp cnr
\<equiv> ((ucast (ctp >> 2)) << 6) + (ucast cnr << 2)"
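To make the alignment concrete, the same arithmetic as a Python sketch (illustrative only, not part of the theory; the example values are hypothetical):

    # Illustrative sketch only: the arithmetic of compose_context_table_addr,
    # restricted to a 36-bit physical address.
    def compose_context_table_addr(ctp, cnr):
        return (((ctp >> 2) << 6) + (cnr << 2)) & ((1 << 36) - 1)

    # Hypothetical example: ctp = 0x00400000 and cnr = 3 give
    # ((0x00400000 >> 2) << 6) + (3 << 2) = 0x0400000C.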
subsection \<open>Virtual Address Translation\<close>
text\<open>Get the context table phys address from the MMU registers\<close>
definition get_context_table_addr :: "MMU_state \<Rightarrow> phys_address"
where
"get_context_table_addr mmu
\<equiv> compose_context_table_addr (getCTPVal mmu) (getCNRVal mmu)"
definition va_list_index :: "nat list" where
"va_list_index \<equiv> [va_t1_index,va_t2_index,va_t3_index,0]"
definition offset_index :: "nat list" where
"offset_index
\<equiv> [ length_machine_word
, length_machine_word-length_t1
, length_machine_word-length_t1-length_t2
, length_machine_word-length_t1-length_t2-length_t3
]"
definition index_len_table :: "nat list" where "index_len_table \<equiv> [8,6,6,0]"
definition n_context_tables :: "nat" where "n_context_tables \<equiv> 3"
text \<open>The following are basic physical memory read functions.
At this level we do not need memory write functions yet.\<close>
definition mem_context_val:: "asi_type \<Rightarrow> phys_address \<Rightarrow>
mem_context \<Rightarrow> mem_val_type option"
where
"mem_context_val asi add m \<equiv>
let asi8 = word_of_int 8;
r1 = m asi add
in
if r1 = None then
m asi8 add
else r1
"
text \<open>Given an ASI (word8) and an address (word32) addr,
read the 32-bit value from the memory addresses
starting at address addr', where addr' equals addr
except that its last two bits are cleared to 0.
That is, read the data from
addr', addr'+1, addr'+2, addr'+3.\<close>
definition mem_context_val_w32 :: "asi_type \<Rightarrow> phys_address \<Rightarrow>
mem_context \<Rightarrow> word32 option"
where
"mem_context_val_w32 asi addr m \<equiv>
- let addr' = bitAND addr 0b111111111111111111111111111111111100;
- addr0 = bitOR addr' 0b000000000000000000000000000000000000;
- addr1 = bitOR addr' 0b000000000000000000000000000000000001;
- addr2 = bitOR addr' 0b000000000000000000000000000000000010;
- addr3 = bitOR addr' 0b000000000000000000000000000000000011;
+ let addr' = (AND) addr 0b111111111111111111111111111111111100;
+ addr0 = (OR) addr' 0b000000000000000000000000000000000000;
+ addr1 = (OR) addr' 0b000000000000000000000000000000000001;
+ addr2 = (OR) addr' 0b000000000000000000000000000000000010;
+ addr3 = (OR) addr' 0b000000000000000000000000000000000011;
r0 = mem_context_val asi addr0 m;
r1 = mem_context_val asi addr1 m;
r2 = mem_context_val asi addr2 m;
r3 = mem_context_val asi addr3 m
in
if r0 = None \<or> r1 = None \<or> r2 = None \<or> r3 = None then
None
else
let byte0 = case r0 of Some v \<Rightarrow> v;
byte1 = case r1 of Some v \<Rightarrow> v;
byte2 = case r2 of Some v \<Rightarrow> v;
byte3 = case r3 of Some v \<Rightarrow> v
in
- Some (bitOR (bitOR (bitOR ((ucast(byte0)) << 24)
+ Some ((OR) ((OR) ((OR) ((ucast(byte0)) << 24)
((ucast(byte1)) << 16))
((ucast(byte2)) << 8))
(ucast(byte3)))
"
text \<open>
@{term "get_addr_from_table"} browses the page description tables
until it finds a PTE (bits==suc (suc 0).
If it is a PTE it aligns the 24 most significant bits of the entry
with the most significant bits of the phys address and or-ed with the offset,
which will vary depending on the entry level.
In the case we are looking at the last table level (level 3),
the offset is aligned to 0 otherwise it will be 2.
If the table entry is a PTD (bits== Suc 0),
the index is obtained from the virtual address depending on the current level and or-ed with the PTD.
\<close>
function ptd_lookup:: "virtua_address \<Rightarrow> virtua_address \<Rightarrow>
mem_context \<Rightarrow> nat \<Rightarrow> (phys_address \<times> PTE_flags) option"
where "ptd_lookup va pt m lvl = (
if lvl > 3 then None
else
let thislvl_offset = (
if lvl = 1 then (ucast ((ucast (va >> 24))::word8))::word32
else if lvl = 2 then (ucast ((ucast (va >> 18))::word6))::word32
else (ucast ((ucast (va >> 12))::word6))::word32);
- thislvl_addr = bitOR pt thislvl_offset;
+ thislvl_addr = (OR) pt thislvl_offset;
thislvl_data = mem_context_val_w32 (word_of_int 9) (ucast thislvl_addr) m
in
case thislvl_data of
Some v \<Rightarrow> (
- let et_val = bitAND v 0b00000000000000000000000000000011 in
+ let et_val = (AND) v 0b00000000000000000000000000000011 in
if et_val = 0 then \<comment> \<open>Invalid\<close>
None
else if et_val = 1 then \<comment> \<open>Page Table Descriptor\<close>
- let ptp = bitAND v 0b11111111111111111111111111111100 in
+ let ptp = (AND) v 0b11111111111111111111111111111100 in
ptd_lookup va ptp m (lvl+1)
else if et_val = 2 then \<comment> \<open>Page Table Entry\<close>
let ppn = (ucast (v >> 8))::word24;
va_offset = (ucast ((ucast va)::word12))::word36
in
- Some ((bitOR (((ucast ppn)::word36) << 12) va_offset),
+ Some (((OR) (((ucast ppn)::word36) << 12) va_offset),
((ucast v)::word8))
else \<comment> \<open>\<open>et_val = 3\<close>, reserved.\<close>
None
)
|None \<Rightarrow> None)
"
by pat_completeness auto
termination
by (relation "measure (\<lambda> (va, (pt, (m, lvl))). 4 - lvl)") auto
definition get_acc_flag:: "PTE_flags \<Rightarrow> word3" where
"get_acc_flag w8 \<equiv> (ucast (w8 >> 2))::word3"
definition mmu_readable:: "word3 \<Rightarrow> asi_type \<Rightarrow> bool" where
"mmu_readable f asi \<equiv>
if uint asi \<in> {8, 10} then
if uint f \<in> {0,1,2,3,5} then True
else False
else if uint asi \<in> {9, 11} then
if uint f \<in> {0,1,2,3,5,6,7} then True
else False
else False
"
definition mmu_writable:: "word3 \<Rightarrow> asi_type \<Rightarrow> bool" where
"mmu_writable f asi \<equiv>
if uint asi \<in> {8, 10} then
if uint f \<in> {1,3} then True
else False
else if uint asi \<in> {9, 11} then
if uint f \<in> {1,3,5,7} then True
else False
else False
"
definition virt_to_phys :: "virtua_address \<Rightarrow> MMU_state \<Rightarrow> mem_context \<Rightarrow>
(phys_address \<times> PTE_flags) option"
where
"virt_to_phys va mmu m \<equiv>
let ctp_val = mmu_reg_val mmu (0x100);
cnr_val = mmu_reg_val mmu (0x200);
mmu_cr_val = (registers mmu) CR
in
- if bitAND mmu_cr_val 1 \<noteq> 0 then \<comment> \<open>MMU enabled.\<close>
+ if (AND) mmu_cr_val 1 \<noteq> 0 then \<comment> \<open>MMU enabled.\<close>
case (ctp_val,cnr_val) of
(Some v1, Some v2) \<Rightarrow>
- let context_table_entry = bitOR ((v1 >> 11) << 11)
- ((bitAND v2 0b00000000000000000000000111111111) << 2);
+ let context_table_entry = (OR) ((v1 >> 11) << 11)
+ (((AND) v2 0b00000000000000000000000111111111) << 2);
context_table_data = mem_context_val_w32 (word_of_int 9)
(ucast context_table_entry) m
in (
case context_table_data of
Some lvl1_page_table \<Rightarrow>
ptd_lookup va lvl1_page_table m 1
|None \<Rightarrow> None)
|_ \<Rightarrow> None
else Some ((ucast va), ((0b11101111)::word8))
"
text \<open>
\newpage
The function below gives the initial values of the MMU registers.
In particular, the MMU control register CR is 0 because:
we do not know the bits for IMPL, VER, and SC;
the bits for PSO are 0 because we use TSO;
the reserved bits are 0;
we assume the NF bits are 0;
and, most importantly, the E bit is 0 because the MMU is disabled
when the machine starts up.
An initial boot procedure (a bootloader or similar) should
configure the MMU and then enable it if the OS uses the MMU.\<close>
definition MMU_registers_init :: "MMU_context"
where "MMU_registers_init r \<equiv> 0"
definition mmu_setup :: "MMU_state"
where "mmu_setup \<equiv> \<lparr>registers=MMU_registers_init\<rparr>"
end
diff --git a/thys/SPARCv8/SparcModel_MMU/Sparc_Execution.thy b/thys/SPARCv8/SparcModel_MMU/Sparc_Execution.thy
--- a/thys/SPARCv8/SparcModel_MMU/Sparc_Execution.thy
+++ b/thys/SPARCv8/SparcModel_MMU/Sparc_Execution.thy
@@ -1,431 +1,431 @@
(*
* Copyright 2016, NTU
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* Author: Zhe Hou, David Sanan.
*)
theory Sparc_Execution
imports Main Sparc_Instruction Sparc_State Sparc_Types
"HOL-Eisbach.Eisbach_Tools"
begin
primrec sum :: "nat \<Rightarrow> nat" where
"sum 0 = 0" |
"sum (Suc n) = Suc n + sum n"
definition select_trap :: "unit \<Rightarrow> ('a,unit) sparc_state_monad"
where "select_trap _ \<equiv>
do
traps \<leftarrow> gets (\<lambda>s. (get_trap_set s));
rt_val \<leftarrow> gets (\<lambda>s. (reset_trap_val s));
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
et_val \<leftarrow> gets (\<lambda>s. (get_ET psr_val));
modify (\<lambda>s. (emp_trap_set s));
if rt_val = True then \<comment> \<open>ignore \<open>ET\<close>, and leave \<open>tt\<close> unchanged\<close>
return ()
else if et_val = 0 then \<comment> \<open>go to error mode, machine needs reset\<close>
do
set_err_mode True;
set_exe_mode False;
fail ()
od
\<comment> \<open>By the SPARCv8 manual, at most one of the following traps can be in traps.\<close>
else if data_store_error \<in> traps then
do
write_cpu_tt (0b00101011::word8);
return ()
od
else if instruction_access_error \<in> traps then
do
write_cpu_tt (0b00100001::word8);
return ()
od
else if r_register_access_error \<in> traps then
do
write_cpu_tt (0b00100000::word8);
return ()
od
else if instruction_access_exception \<in> traps then
do
write_cpu_tt (0b00000001::word8);
return ()
od
else if privileged_instruction \<in> traps then
do
write_cpu_tt (0b00000011::word8);
return ()
od
else if illegal_instruction \<in> traps then
do
write_cpu_tt (0b00000010::word8);
return ()
od
else if fp_disabled \<in> traps then
do
write_cpu_tt (0b00000100::word8);
return ()
od
else if cp_disabled \<in> traps then
do
write_cpu_tt (0b00100100::word8);
return ()
od
else if unimplemented_FLUSH \<in> traps then
do
write_cpu_tt (0b00100101::word8);
return ()
od
else if window_overflow \<in> traps then
do
write_cpu_tt (0b00000101::word8);
return ()
od
else if window_underflow \<in> traps then
do
write_cpu_tt (0b00000110::word8);
return ()
od
else if mem_address_not_aligned \<in> traps then
do
write_cpu_tt (0b00000111::word8);
return ()
od
else if fp_exception \<in> traps then
do
write_cpu_tt (0b00001000::word8);
return ()
od
else if cp_exception \<in> traps then
do
write_cpu_tt (0b00101000::word8);
return ()
od
else if data_access_error \<in> traps then
do
write_cpu_tt (0b00101001::word8);
return ()
od
else if data_access_exception \<in> traps then
do
write_cpu_tt (0b00001001::word8);
return ()
od
else if tag_overflow \<in> traps then
do
write_cpu_tt (0b00001010::word8);
return ()
od
else if division_by_zero \<in> traps then
do
write_cpu_tt (0b00101010::word8);
return ()
od
else if trap_instruction \<in> traps then
do
ticc_trap_type \<leftarrow> gets (\<lambda>s. (ticc_trap_type_val s));
write_cpu_tt (word_cat (1::word1) ticc_trap_type);
return ()
od
\<^cancel>\<open>else if interrupt_level > 0 then\<close>
\<comment> \<open>We don't consider \<open>interrupt_level\<close>\<close>
else return ()
od"
definition exe_trap_st_pc :: "unit \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "exe_trap_st_pc _ \<equiv>
do
annul \<leftarrow> gets (\<lambda>s. (annul_val s));
pc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PC s));
npc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val nPC s));
curr_win \<leftarrow> get_curr_win();
if annul = False then
do
write_reg pc_val curr_win (word_of_int 17);
write_reg npc_val curr_win (word_of_int 18);
return ()
od
else \<comment> \<open>\<open>annul = True\<close>\<close>
do
write_reg npc_val curr_win (word_of_int 17);
write_reg (npc_val + 4) curr_win (word_of_int 18);
set_annul False;
return ()
od
od"
definition exe_trap_wr_pc :: "unit \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "exe_trap_wr_pc _ \<equiv>
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
new_psr_val \<leftarrow> gets (\<lambda>s. (update_S (1::word1) psr_val));
write_cpu new_psr_val PSR;
reset_trap \<leftarrow> gets (\<lambda>s. (reset_trap_val s));
tbr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val TBR s));
if reset_trap = False then
do
write_cpu tbr_val PC;
write_cpu (tbr_val + 4) nPC;
return ()
od
else \<comment> \<open>\<open>reset_trap = True\<close>\<close>
do
write_cpu 0 PC;
write_cpu 4 nPC;
set_reset_trap False;
return ()
od
od"
definition execute_trap :: "unit \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "execute_trap _ \<equiv>
do
select_trap();
err_mode \<leftarrow> gets (\<lambda>s. (err_mode_val s));
if err_mode = True then
\<comment> \<open>The SparcV8 manual doesn't say what to do.\<close>
return ()
else
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
s_val \<leftarrow> gets (\<lambda>s. ((ucast (get_S psr_val))::word1));
curr_win \<leftarrow> get_curr_win();
new_cwp \<leftarrow> gets (\<lambda>s. ((word_of_int (((uint curr_win) - 1) mod NWINDOWS)))::word5);
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_exe_trap new_cwp (0::word1) s_val psr_val));
write_cpu new_psr_val PSR;
exe_trap_st_pc();
exe_trap_wr_pc();
return ()
od
od"
definition dispatch_instruction :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "dispatch_instruction instr \<equiv>
let instr_name = fst instr in
do
traps \<leftarrow> gets (\<lambda>s. (get_trap_set s));
if traps = {} then
if instr_name \<in> {load_store_type LDSB,load_store_type LDUB,
load_store_type LDUBA,load_store_type LDUH,load_store_type LD,
load_store_type LDA,load_store_type LDD} then
load_instr instr
else if instr_name \<in> {load_store_type STB,load_store_type STH,
load_store_type ST,load_store_type STA,load_store_type STD} then
store_instr instr
else if instr_name \<in> {sethi_type SETHI} then
sethi_instr instr
else if instr_name \<in> {nop_type NOP} then
nop_instr instr
else if instr_name \<in> {logic_type ANDs,logic_type ANDcc,logic_type ANDN,
logic_type ANDNcc,logic_type ORs,logic_type ORcc,logic_type ORN,
logic_type XORs,logic_type XNOR} then
logical_instr instr
else if instr_name \<in> {shift_type SLL,shift_type SRL,shift_type SRA} then
shift_instr instr
else if instr_name \<in> {arith_type ADD,arith_type ADDcc,arith_type ADDX} then
add_instr instr
else if instr_name \<in> {arith_type SUB,arith_type SUBcc,arith_type SUBX} then
sub_instr instr
else if instr_name \<in> {arith_type UMUL,arith_type SMUL,arith_type SMULcc} then
mul_instr instr
else if instr_name \<in> {arith_type UDIV,arith_type UDIVcc,arith_type SDIV} then
div_instr instr
else if instr_name \<in> {ctrl_type SAVE,ctrl_type RESTORE} then
save_restore_instr instr
else if instr_name \<in> {call_type CALL} then
call_instr instr
else if instr_name \<in> {ctrl_type JMPL} then
jmpl_instr instr
else if instr_name \<in> {ctrl_type RETT} then
rett_instr instr
else if instr_name \<in> {sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,
sreg_type RDTBR} then
read_state_reg_instr instr
else if instr_name \<in> {sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,
sreg_type WRTBR} then
write_state_reg_instr instr
else if instr_name \<in> {load_store_type FLUSH} then
flush_instr instr
else if instr_name \<in> {bicc_type BE,bicc_type BNE,bicc_type BGU,
bicc_type BLE,bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,bicc_type BN} then
branch_instr instr
else fail ()
else return ()
od"
definition supported_instruction :: "sparc_operation \<Rightarrow> bool"
where "supported_instruction instr \<equiv>
if instr \<in> {load_store_type LDSB,load_store_type LDUB,load_store_type LDUBA,
load_store_type LDUH,load_store_type LD,load_store_type LDA,
load_store_type LDD,
load_store_type STB,load_store_type STH,load_store_type ST,
load_store_type STA,load_store_type STD,
sethi_type SETHI,
nop_type NOP,
logic_type ANDs,logic_type ANDcc,logic_type ANDN,logic_type ANDNcc,
logic_type ORs,logic_type ORcc,logic_type ORN,logic_type XORs,
logic_type XNOR,
shift_type SLL,shift_type SRL,shift_type SRA,
arith_type ADD,arith_type ADDcc,arith_type ADDX,
arith_type SUB,arith_type SUBcc,arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
ctrl_type RETT,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}
then True
else False
"
definition execute_instr_sub1 :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "execute_instr_sub1 instr \<equiv>
do
instr_name \<leftarrow> gets (\<lambda>s. (fst instr));
traps2 \<leftarrow> gets (\<lambda>s. (get_trap_set s));
if traps2 = {} \<and> instr_name \<notin> {call_type CALL,ctrl_type RETT,ctrl_type JMPL,
bicc_type BE,bicc_type BNE,bicc_type BGU,
bicc_type BLE,bicc_type BL,bicc_type BGE,
bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,
bicc_type BA,bicc_type BN} then
do
npc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val nPC s));
write_cpu npc_val PC;
write_cpu (npc_val + 4) nPC;
return ()
od
else return ()
od"
definition execute_instruction :: "unit \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "execute_instruction _ \<equiv>
do
traps \<leftarrow> gets (\<lambda>s. (get_trap_set s));
if traps = {} then
do
exe_mode \<leftarrow> gets (\<lambda>s. (exe_mode_val s));
if exe_mode = True then
do
modify (\<lambda>s. (delayed_pool_write s));
fetch_result \<leftarrow> gets (\<lambda>s. (fetch_instruction s));
case fetch_result of
Inl e1 \<Rightarrow> (do \<comment> \<open>Memory address in PC is not aligned.\<close>
\<comment> \<open>Actually, the SPARCv8 manual doesn't check alignment here.\<close>
raise_trap instruction_access_exception;
return ()
od)
| Inr v1 \<Rightarrow> (do
dec \<leftarrow> gets (\<lambda>s. (decode_instruction v1));
case dec of
Inl e2 \<Rightarrow> (\<comment> \<open>Instruction is ill-formatted.\<close>
fail ()
)
| Inr v2 \<Rightarrow> (do
instr \<leftarrow> gets (\<lambda>s. (v2));
annul \<leftarrow> gets (\<lambda>s. (annul_val s));
if annul = False then
do
dispatch_instruction instr;
execute_instr_sub1 instr;
return ()
od
else \<comment> \<open>\<open>annul \<noteq> False\<close>\<close>
do
set_annul False;
npc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val nPC s));
write_cpu npc_val PC;
write_cpu (npc_val + 4) nPC;
return ()
od
od)
od)
od
else return () \<comment> \<open>Not in \<open>execute_mode\<close>.\<close>
od
else \<comment> \<open>traps is not empty, which means \<open>trap = 1\<close>.\<close>
do
execute_trap();
return ()
od
od"
definition NEXT :: "('a::len0)sparc_state \<Rightarrow> ('a)sparc_state option"
where "NEXT s \<equiv> case execute_instruction () s of (_,True) \<Rightarrow> None
| (s',False) \<Rightarrow> Some (snd s')"
definition good_context :: "('a::len0) sparc_state \<Rightarrow> bool"
where "good_context s \<equiv>
let traps = get_trap_set s;
psr_val = cpu_reg_val PSR s;
et_val = get_ET psr_val;
rt_val = reset_trap_val s
in
if traps \<noteq> {} \<and> rt_val = False \<and> et_val = 0 then False \<comment> \<open>enter \<open>error_mode\<close> in \<open>select_trap\<close>.\<close>
else
let s' = delayed_pool_write s in
case fetch_instruction s' of
\<comment> \<open>\<open>instruction_access_exception\<close> is handled in the next state.\<close>
Inl _ \<Rightarrow> True
|Inr v \<Rightarrow> (
case decode_instruction v of
Inl _ \<Rightarrow> False
|Inr instr \<Rightarrow> (
let annul = annul_val s' in
if annul = True then True
else \<comment> \<open>\<open>annul = False\<close>\<close>
if supported_instruction (fst instr) then
\<comment> \<open>The only instruction that could fail is \<open>RETT\<close>.\<close>
if (fst instr) = ctrl_type RETT then
let curr_win_r = (get_CWP (cpu_reg_val PSR s'));
new_cwp_int_r = (((uint curr_win_r) + 1) mod NWINDOWS);
wim_val_r = cpu_reg_val WIM s';
psr_val_r = cpu_reg_val PSR s';
et_val_r = get_ET psr_val_r;
s_val_r = (ucast (get_S psr_val_r))::word1;
op_list_r = snd instr;
addr_r = get_addr (snd instr) s'
in
if et_val_r = 1 then True
else if s_val_r = 0 then False
else if (get_WIM_bit (nat new_cwp_int_r) wim_val_r) \<noteq> 0 then False
- else if (bitAND addr_r (0b00000000000000000000000000000011::word32)) \<noteq> 0 then False
+ else if ((AND) addr_r (0b00000000000000000000000000000011::word32)) \<noteq> 0 then False
else True
else True
else False \<comment> \<open>Unsupported instruction.\<close>
)
)
"
function (sequential) seq_exec:: "nat \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "seq_exec 0 = return ()"
|
"seq_exec n = (do execute_instruction();
(seq_exec (n-1))
od)
"
by pat_completeness auto
termination by lexicographic_order
type_synonym leon3_state = "(word_length5) sparc_state"
type_synonym ('e) leon3_state_monad = "(leon3_state, 'e) det_monad"
definition execute_leon3_instruction:: "unit \<Rightarrow> (unit) leon3_state_monad"
where "execute_leon3_instruction \<equiv> execute_instruction"
definition seq_exec_leon3:: "nat \<Rightarrow> (unit) leon3_state_monad"
where "seq_exec_leon3 \<equiv> seq_exec"
end
diff --git a/thys/SPARCv8/SparcModel_MMU/Sparc_Instruction.thy b/thys/SPARCv8/SparcModel_MMU/Sparc_Instruction.thy
--- a/thys/SPARCv8/SparcModel_MMU/Sparc_Instruction.thy
+++ b/thys/SPARCv8/SparcModel_MMU/Sparc_Instruction.thy
@@ -1,2788 +1,2788 @@
(*
* Copyright 2016, NTU
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* Author: Zhe Hou, David Sanan.
*)
section \<open>SPARC instruction model\<close>
theory Sparc_Instruction
imports Main Sparc_Types Sparc_State "HOL-Eisbach.Eisbach_Tools"
begin
text\<open>
This theory provides a formal model of the assembly instructions to be executed in the model.
An instruction is defined as a tuple composed of a @{term sparc_operation} element,
defining the operation the instruction carries out, and a list of operands
@{term inst_operand}. @{term inst_operand} can be a user register @{term user_reg}
or a memory address @{term mem_add_type}.
\<close>
datatype inst_operand =
W5 word5
|W30 word30
|W22 word22
|Cond word4
|Flag word1
|Asi asi_type
|Simm13 word13
|Opf word9
|Imm7 word7
primrec get_operand_w5::"inst_operand \<Rightarrow> word5"
where "get_operand_w5 (W5 r) = r"
primrec get_operand_w30::"inst_operand \<Rightarrow> word30"
where "get_operand_w30 (W30 r) = r"
primrec get_operand_w22::"inst_operand \<Rightarrow> word22"
where "get_operand_w22 (W22 r) = r"
primrec get_operand_cond::"inst_operand \<Rightarrow> word4"
where "get_operand_cond (Cond r) = r"
primrec get_operand_flag::"inst_operand \<Rightarrow> word1"
where "get_operand_flag (Flag r) = r"
primrec get_operand_asi::"inst_operand \<Rightarrow> asi_type"
where "get_operand_asi (Asi r) = r"
primrec get_operand_simm13::"inst_operand \<Rightarrow> word13"
where "get_operand_simm13 (Simm13 r) = r"
primrec get_operand_opf::"inst_operand \<Rightarrow> word9"
where "get_operand_opf (Opf r) = r"
primrec get_operand_imm7:: "inst_operand \<Rightarrow> word7"
where "get_operand_imm7 (Imm7 r) = r"
type_synonym instruction = "(sparc_operation \<times> inst_operand list)"
definition get_op::"word32 \<Rightarrow> int"
where "get_op w \<equiv> uint (w >> 30)"
definition get_op2::"word32 \<Rightarrow> int"
where "get_op2 w \<equiv>
let mask_op2 = 0b00000001110000000000000000000000 in
- uint ((bitAND mask_op2 w) >> 22)"
+ uint (((AND) mask_op2 w) >> 22)"
definition get_op3::"word32 \<Rightarrow> int"
where "get_op3 w \<equiv>
let mask_op3 = 0b00000001111110000000000000000000 in
- uint ((bitAND mask_op3 w) >> 19)"
+ uint (((AND) mask_op3 w) >> 19)"
definition get_disp30::"word32 \<Rightarrow> int"
where "get_disp30 w \<equiv>
let mask_disp30 = 0b00111111111111111111111111111111 in
- uint (bitAND mask_disp30 w)"
+ uint ((AND) mask_disp30 w)"
definition get_a::"word32 \<Rightarrow> int"
where "get_a w \<equiv>
let mask_a = 0b00100000000000000000000000000000 in
- uint ((bitAND mask_a w) >> 29)"
+ uint (((AND) mask_a w) >> 29)"
definition get_cond::"word32 \<Rightarrow> int"
where "get_cond w \<equiv>
let mask_cond = 0b00011110000000000000000000000000 in
- uint ((bitAND mask_cond w) >> 25)"
+ uint (((AND) mask_cond w) >> 25)"
definition get_disp_imm22::"word32 \<Rightarrow> int"
where "get_disp_imm22 w \<equiv>
let mask_disp_imm22 = 0b00000000001111111111111111111111 in
- uint (bitAND mask_disp_imm22 w)"
+ uint ((AND) mask_disp_imm22 w)"
definition get_rd::"word32 \<Rightarrow> int"
where "get_rd w \<equiv>
let mask_rd = 0b00111110000000000000000000000000 in
- uint ((bitAND mask_rd w) >> 25)"
+ uint (((AND) mask_rd w) >> 25)"
definition get_rs1::"word32 \<Rightarrow> int"
where "get_rs1 w \<equiv>
let mask_rs1 = 0b00000000000001111100000000000000 in
- uint ((bitAND mask_rs1 w) >> 14)"
+ uint (((AND) mask_rs1 w) >> 14)"
definition get_i::"word32 \<Rightarrow> int"
where "get_i w \<equiv>
let mask_i = 0b00000000000000000010000000000000 in
- uint ((bitAND mask_i w) >> 13)"
+ uint (((AND) mask_i w) >> 13)"
definition get_opf::"word32 \<Rightarrow> int"
where "get_opf w \<equiv>
let mask_opf = 0b00000000000000000011111111100000 in
- uint ((bitAND mask_opf w) >> 5)"
+ uint (((AND) mask_opf w) >> 5)"
definition get_rs2::"word32 \<Rightarrow> int"
where "get_rs2 w \<equiv>
let mask_rs2 = 0b00000000000000000000000000011111 in
- uint (bitAND mask_rs2 w)"
+ uint ((AND) mask_rs2 w)"
definition get_simm13::"word32 \<Rightarrow> int"
where "get_simm13 w \<equiv>
let mask_simm13 = 0b00000000000000000001111111111111 in
- uint (bitAND mask_simm13 w)"
+ uint ((AND) mask_simm13 w)"
definition get_asi::"word32 \<Rightarrow> int"
where "get_asi w \<equiv>
let mask_asi = 0b00000000000000000001111111100000 in
- uint ((bitAND mask_asi w) >> 5)"
+ uint (((AND) mask_asi w) >> 5)"
definition get_trap_cond:: "word32 \<Rightarrow> int"
where "get_trap_cond w \<equiv>
let mask_cond = 0b00011110000000000000000000000000 in
- uint ((bitAND mask_cond w) >> 25)"
+ uint (((AND) mask_cond w) >> 25)"
definition get_trap_imm7:: "word32 \<Rightarrow> int"
where "get_trap_imm7 w \<equiv>
let mask_imm7 = 0b00000000000000000000000001111111 in
- uint (bitAND mask_imm7 w)"
+ uint ((AND) mask_imm7 w)"
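The field extractors above all follow the same mask-and-shift pattern; a compact Python sketch of the equivalent extraction from a 32-bit instruction word (illustrative only, not part of the theory; the function name is invented for this sketch):

    # Illustrative sketch only: SPARC instruction fields as plain shifts and
    # masks on a 32-bit word w, matching the masks in the definitions above.
    def sparc_fields(w):
        return {
            "op":     (w >> 30) & 0b11,        # cf. get_op
            "op2":    (w >> 22) & 0b111,       # cf. get_op2
            "op3":    (w >> 19) & 0b111111,    # cf. get_op3
            "a":      (w >> 29) & 0b1,         # cf. get_a
            "cond":   (w >> 25) & 0xF,         # cf. get_cond
            "rd":     (w >> 25) & 0b11111,     # cf. get_rd
            "rs1":    (w >> 14) & 0b11111,     # cf. get_rs1
            "rs2":    w & 0b11111,             # cf. get_rs2
            "i":      (w >> 13) & 0b1,         # cf. get_i
            "asi":    (w >> 5) & 0xFF,         # cf. get_asi
            "simm13": w & 0x1FFF,              # cf. get_simm13 (unsigned, like uint above)
            "disp22": w & 0x3FFFFF,            # cf. get_disp_imm22
            "disp30": w & 0x3FFFFFFF,          # cf. get_disp30
        }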
definition parse_instr_f1::"word32 \<Rightarrow>
(Exception list + instruction)"
where \<comment> \<open>\<open>CALL\<close>, with a single operand \<open>disp30+"00"\<close>\<close>
"parse_instr_f1 w \<equiv>
Inr (call_type CALL,[W30 (word_of_int (get_disp30 w))])"
definition parse_instr_f2::"word32 \<Rightarrow>
(Exception list + instruction)"
where "parse_instr_f2 w \<equiv>
let op2 = get_op2 w in
if op2 = uint(0b100::word3) then \<comment> \<open>\<open>SETHI\<close> or \<open>NOP\<close>\<close>
let rd = get_rd w in
let imm22 = get_disp_imm22 w in
if rd = 0 \<and> imm22 = 0 then \<comment> \<open>\<open>NOP\<close>\<close>
Inr (nop_type NOP,[])
else \<comment> \<open>\<open>SETHI\<close>, with operands \<open>[imm22,rd]\<close>\<close>
Inr (sethi_type SETHI,[(W22 (word_of_int imm22)),
(W5 (word_of_int rd))])
else if op2 = uint(0b010::word3) then \<comment> \<open>\<open>Bicc\<close>, with operands \<open>[a,disp22]\<close>\<close>
let cond = get_cond w in
let flaga = Flag (word_of_int (get_a w)) in
let disp22 = W22 (word_of_int (get_disp_imm22 w)) in
if cond = uint(0b0001::word4) then \<comment> \<open>\<open>BE\<close>\<close>
Inr (bicc_type BE,[flaga,disp22])
else if cond = uint(0b1001::word4) then \<comment> \<open>\<open>BNE\<close>\<close>
Inr (bicc_type BNE,[flaga,disp22])
else if cond = uint(0b1100::word4) then \<comment> \<open>\<open>BGU\<close>\<close>
Inr (bicc_type BGU,[flaga,disp22])
else if cond = uint(0b0010::word4) then \<comment> \<open>\<open>BLE\<close>\<close>
Inr (bicc_type BLE,[flaga,disp22])
else if cond = uint(0b0011::word4) then \<comment> \<open>\<open>BL\<close>\<close>
Inr (bicc_type BL,[flaga,disp22])
else if cond = uint(0b1011::word4) then \<comment> \<open>\<open>BGE\<close>\<close>
Inr (bicc_type BGE,[flaga,disp22])
else if cond = uint(0b0110::word4) then \<comment> \<open>\<open>BNEG\<close>\<close>
Inr (bicc_type BNEG,[flaga,disp22])
else if cond = uint(0b1010::word4) then \<comment> \<open>\<open>BG\<close>\<close>
Inr (bicc_type BG,[flaga,disp22])
else if cond = uint(0b0101::word4) then \<comment> \<open>\<open>BCS\<close>\<close>
Inr (bicc_type BCS,[flaga,disp22])
else if cond = uint(0b0100::word4) then \<comment> \<open>\<open>BLEU\<close>\<close>
Inr (bicc_type BLEU,[flaga,disp22])
else if cond = uint(0b1101::word4) then \<comment> \<open>\<open>BCC\<close>\<close>
Inr (bicc_type BCC,[flaga,disp22])
else if cond = uint(0b1000::word4) then \<comment> \<open>\<open>BA\<close>\<close>
Inr (bicc_type BA,[flaga,disp22])
else if cond = uint(0b0000::word4) then \<comment> \<open>\<open>BN\<close>\<close>
Inr (bicc_type BN,[flaga,disp22])
else if cond = uint(0b1110::word4) then \<comment> \<open>\<open>BPOS\<close>\<close>
Inr (bicc_type BPOS,[flaga,disp22])
else if cond = uint(0b1111::word4) then \<comment> \<open>\<open>BVC\<close>\<close>
Inr (bicc_type BVC,[flaga,disp22])
else if cond = uint(0b0111::word4) then \<comment> \<open>\<open>BVS\<close>\<close>
Inr (bicc_type BVS,[flaga,disp22])
else Inl [invalid_cond_f2]
else Inl [invalid_op2_f2]
"
text \<open>We don't consider floating-point operations,
so we don't consider the third type of format 3.\<close>
definition parse_instr_f3::"word32 \<Rightarrow> (Exception list + instruction)"
where "parse_instr_f3 w \<equiv>
let this_op = get_op w in
let rd = get_rd w in
let op3 = get_op3 w in
let rs1 = get_rs1 w in
let flagi = get_i w in
let asi = get_asi w in
let rs2 = get_rs2 w in
let simm13 = get_simm13 w in
if this_op = uint(0b11::word2) then \<comment> \<open>Load and Store\<close>
\<comment> \<open>If an instruction accesses an alternate space but \<open>flagi = 1\<close>,\<close>
\<comment> \<open>a trap may need to be raised.\<close>
if op3 = uint(0b001001::word6) then \<comment> \<open>\<open>LDSB\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type LDSB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type LDSB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011001::word6) then \<comment> \<open>\<open>LDSBA\<close>\<close>
Inr (load_store_type LDSBA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001010::word6) then \<comment> \<open>\<open>LDSH\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type LDSH,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type LDSH,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011010::word6) then \<comment> \<open>\<open>LDSHA\<close>\<close>
Inr (load_store_type LDSHA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000001::word6) then \<comment> \<open>\<open>LDUB\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type LDUB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type LDUB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010001::word6) then \<comment> \<open>\<open>LDUBA\<close>\<close>
Inr (load_store_type LDUBA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000010::word6) then \<comment> \<open>\<open>LDUH\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type LDUH,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type LDUH,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010010::word6) then \<comment> \<open>\<open>LDUHA\<close>\<close>
Inr (load_store_type LDUHA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000000::word6) then \<comment> \<open>\<open>LD\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type LD,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type LD,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010000::word6) then \<comment> \<open>\<open>LDA\<close>\<close>
Inr (load_store_type LDA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000011::word6) then \<comment> \<open>\<open>LDD\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type LDD,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type LDD,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010011::word6) then \<comment> \<open>\<open>LDDA\<close>\<close>
Inr (load_store_type LDDA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001101::word6) then \<comment> \<open>\<open>LDSTUB\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type LDSTUB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type LDSTUB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011101::word6) then \<comment> \<open>\<open>LDSTUBA\<close>\<close>
Inr (load_store_type LDSTUBA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000101::word6) then \<comment> \<open>\<open>STB\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type STB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type STB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010101::word6) then \<comment> \<open>\<open>STBA\<close>\<close>
Inr (load_store_type STBA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000110::word6) then \<comment> \<open>\<open>STH\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type STH,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type STH,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010110::word6) then \<comment> \<open>\<open>STHA\<close>\<close>
Inr (load_store_type STHA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000100::word6) then \<comment> \<open>\<open>ST\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type ST,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type ST,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010100::word6) then \<comment> \<open>\<open>STA\<close>\<close>
Inr (load_store_type STA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000111::word6) then \<comment> \<open>\<open>STD\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type STD,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type STD,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010111::word6) then \<comment> \<open>\<open>STDA\<close>\<close>
Inr (load_store_type STDA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001111::word6) then \<comment> \<open>\<open>SWAP\<close>\<close>
if flagi = 1 then \<comment> \<open>Operand list is \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (load_store_type SWAP,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else \<comment> \<open>Operand list is \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (load_store_type SWAP,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011111::word6) then \<comment> \<open>\<open>SWAPA\<close>\<close>
Inr (load_store_type SWAPA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(Asi (word_of_int asi)),
(W5 (word_of_int rd))])
else Inl [invalid_op3_f3_op11]
else if this_op = uint(0b10::word2) then \<comment> \<open>Others\<close>
if op3 = uint(0b111000::word6) then \<comment> \<open>\<open>JMPL\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (ctrl_type JMPL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (ctrl_type JMPL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b111001::word6) then \<comment> \<open>\<open>RETT\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ctrl_type RETT,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,simm13]\<close>\<close>
Inr (ctrl_type RETT,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13))])
\<comment> \<open>The following are read and write state register instructions.\<close>
\<comment> \<open>The read instructions only return \<open>[rs1,rd]\<close> as operands.\<close>
else if op3 = uint(0b101000::word6) \<and> rs1 \<noteq> 0 then \<comment> \<open>\<open>RDASR\<close>\<close>
if rs1 = uint(0b01111::word6) \<and> rd = 0 then \<comment> \<open>\<open>STBAR\<close> is a special case of \<open>RDASR\<close>\<close>
Inr (load_store_type STBAR,[])
else Inr (sreg_type RDASR,[(W5 (word_of_int rs1)),
(W5 (word_of_int rd))])
else if op3 = uint(0b101000::word6) \<and> rs1 = 0 then \<comment> \<open>\<open>RDY\<close>\<close>
Inr (sreg_type RDY,[(W5 (word_of_int rs1)),
(W5 (word_of_int rd))])
else if op3 = uint(0b101001::word6) then \<comment> \<open>\<open>RDPSR\<close>\<close>
Inr (sreg_type RDPSR,[(W5 (word_of_int rs1)),
(W5 (word_of_int rd))])
else if op3 = uint(0b101010::word6) then \<comment> \<open>\<open>RDWIM\<close>\<close>
Inr (sreg_type RDWIM,[(W5 (word_of_int rs1)),
(W5 (word_of_int rd))])
else if op3 = uint(0b101011::word6) then \<comment> \<open>\<open>RDTBR\<close>\<close>
Inr (sreg_type RDTBR,[(W5 (word_of_int rs1)),
(W5 (word_of_int rd))])
else if op3 = uint(0b110000::word6) \<and> rd \<noteq> 0 then \<comment> \<open>\<open>WRASR\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (sreg_type WRASR,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (sreg_type WRASR,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b110000::word6) \<and> rd = 0 then \<comment> \<open>\<open>WRY\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (sreg_type WRY,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (sreg_type WRY,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b110001::word6) then \<comment> \<open>\<open>WRPSR\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (sreg_type WRPSR,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (sreg_type WRPSR,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b110010::word6) then \<comment> \<open>\<open>WRWIM\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (sreg_type WRWIM,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (sreg_type WRWIM,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b110011::word6) then \<comment> \<open>\<open>WRTBR\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (sreg_type WRTBR,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (sreg_type WRTBR,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
\<comment> \<open>\<open>FLUSH\<close> instruction\<close>
else if op3 = uint(0b111011::word6) then \<comment> \<open>\<open>FLUSH\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (load_store_type FLUSH,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,simm13]\<close>\<close>
Inr (load_store_type FLUSH,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13))])
\<comment> \<open>The following are arithmetic instructions.\<close>
else if op3 = uint(0b000001::word6) then \<comment> \<open>\<open>AND\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type ANDs,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type ANDs,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010001::word6) then \<comment> \<open>\<open>ANDcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type ANDcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type ANDcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000101::word6) then \<comment> \<open>\<open>ANDN\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type ANDN,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type ANDN,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010101::word6) then \<comment> \<open>\<open>ANDNcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type ANDNcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type ANDNcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000010::word6) then \<comment> \<open>\<open>OR\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type ORs,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type ORs,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010010::word6) then \<comment> \<open>\<open>ORcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type ORcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type ORcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000110::word6) then \<comment> \<open>\<open>ORN\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type ORN,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type ORN,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010110::word6) then \<comment> \<open>\<open>ORNcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type ORNcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type ORNcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000011::word6) then \<comment> \<open>\<open>XORs\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type XORs,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type XORs,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010011::word6) then \<comment> \<open>\<open>XORcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type XORcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type XORcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000111::word6) then \<comment> \<open>\<open>XNOR\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type XNOR,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type XNOR,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010111::word6) then \<comment> \<open>\<open>XNORcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (logic_type XNORcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (logic_type XNORcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b100101::word6) then \<comment> \<open>\<open>SLL\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (shift_type SLL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,shcnt,rd]\<close>\<close>
let shcnt = rs2 in
Inr (shift_type SLL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int shcnt)),
(W5 (word_of_int rd))])
else if op3 = uint (0b100110::word6) then \<comment> \<open>\<open>SRL\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (shift_type SRL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,shcnt,rd]\<close>\<close>
let shcnt = rs2 in
Inr (shift_type SRL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int shcnt)),
(W5 (word_of_int rd))])
else if op3 = uint(0b100111::word6) then \<comment> \<open>\<open>SRA\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (shift_type SRA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,shcnt,rd]\<close>\<close>
let shcnt = rs2 in
Inr (shift_type SRA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int shcnt)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000000::word6) then \<comment> \<open>\<open>ADD\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type ADD,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type ADD,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010000::word6) then \<comment> \<open>\<open>ADDcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type ADDcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type ADDcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001000::word6) then \<comment> \<open>\<open>ADDX\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type ADDX,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type ADDX,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011000::word6) then \<comment> \<open>\<open>ADDXcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type ADDXcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type ADDXcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b100000::word6) then \<comment> \<open>\<open>TADDcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type TADDcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type TADDcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b100010::word6) then \<comment> \<open>\<open>TADDccTV\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type TADDccTV,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type TADDccTV,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b000100::word6) then \<comment> \<open>\<open>SUB\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type SUB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type SUB,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b010100::word6) then \<comment> \<open>\<open>SUBcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type SUBcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type SUBcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001100::word6) then \<comment> \<open>\<open>SUBX\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type SUBX,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type SUBX,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011100::word6) then \<comment> \<open>\<open>SUBXcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type SUBXcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type SUBXcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b100001::word6) then \<comment> \<open>\<open>TSUBcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type TSUBcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type TSUBcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b100011::word6) then \<comment> \<open>\<open>TSUBccTV\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type TSUBccTV,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type TSUBccTV,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b100100::word6) then \<comment> \<open>\<open>MULScc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type MULScc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type MULScc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001010::word6) then \<comment> \<open>\<open>UMUL\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type UMUL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type UMUL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011010::word6) then \<comment> \<open>\<open>UMULcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type UMULcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type UMULcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001011::word6) then \<comment> \<open>\<open>SMUL\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type SMUL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type SMUL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011011::word6) then \<comment> \<open>\<open>SMULcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type SMULcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type SMULcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001110::word6) then \<comment> \<open>\<open>UDIV\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type UDIV,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type UDIV,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011110::word6) then \<comment> \<open>\<open>UDIVcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type UDIVcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type UDIVcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b001111::word6) then \<comment> \<open>\<open>SDIV\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type SDIV,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type SDIV,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b011111::word6) then \<comment> \<open>\<open>SDIVcc\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (arith_type SDIVcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (arith_type SDIVcc,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b111100::word6) then \<comment> \<open>\<open>SAVE\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (ctrl_type SAVE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (ctrl_type SAVE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b111101::word6) then \<comment> \<open>\<open>RESTORE\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2,rd]\<close>\<close>
Inr (ctrl_type RESTORE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2)),
(W5 (word_of_int rd))])
else \<comment> \<open>return \<open>[i,rs1,simm13,rd]\<close>\<close>
Inr (ctrl_type RESTORE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Simm13 (word_of_int simm13)),
(W5 (word_of_int rd))])
else if op3 = uint(0b111010::word6) then \<comment> \<open>\<open>Ticc\<close>\<close>
let trap_cond = get_trap_cond w in
let trap_imm7 = get_trap_imm7 w in
if trap_cond = uint(0b1000::word4) then \<comment> \<open>\<open>TA\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TA,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b0000::word4) then \<comment> \<open>\<open>TN\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TN,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TN,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b1001::word4) then \<comment> \<open>\<open>TNE\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TNE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TNE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b0001::word4) then \<comment> \<open>\<open>TE\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b1010::word4) then \<comment> \<open>\<open>TG\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TG,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TG,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b0010::word4) then \<comment> \<open>\<open>TLE\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TLE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TLE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b1011::word4) then \<comment> \<open>\<open>TGE\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TGE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TGE,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b0011::word4) then \<comment> \<open>\<open>TL\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TL,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b1100::word4) then \<comment> \<open>\<open>TGU\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TGU,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TGU,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b0100::word4) then \<comment> \<open>\<open>TLEU\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TLEU,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TLEU,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b1101::word4) then \<comment> \<open>\<open>TCC\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TCC,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TCC,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b0101::word4) then \<comment> \<open>\<open>TCS\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TCS,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TCS,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b1110::word4) then \<comment> \<open>\<open>TPOS\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TPOS,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TPOS,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b0110::word4) then \<comment> \<open>\<open>TNEG\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TNEG,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TNEG,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b1111::word4) then \<comment> \<open>\<open>TVC\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TVC,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TVC,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else if trap_cond = uint(0b0111::word4) then \<comment> \<open>\<open>TVS\<close>\<close>
if flagi = 0 then \<comment> \<open>return \<open>[i,rs1,rs2]\<close>\<close>
Inr (ticc_type TVS,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(W5 (word_of_int rs2))])
else \<comment> \<open>return \<open>[i,rs1,trap_imm7]\<close>\<close>
Inr (ticc_type TVS,[(Flag (word_of_int flagi)),
(W5 (word_of_int rs1)),
(Imm7 (word_of_int trap_imm7))])
else Inl [invalid_trap_cond]
else Inl [invalid_op3_f3_op10]
else Inl [invalid_op_f3]
"
text \<open>Read the word32 value of the Program Counter in the current state,
look up the instruction at that memory address,
and return the word32 value of the instruction.\<close>
definition fetch_instruction::"('a) sparc_state \<Rightarrow>
(Exception list + word32)"
where "fetch_instruction s \<equiv>
\<comment> \<open>\<open>pc_val\<close> is the 32-bit memory address of the instruction.\<close>
let pc_val = cpu_reg_val PC s;
psr_val = cpu_reg_val PSR s;
s_val = get_S psr_val;
asi = if s_val = 0 then word_of_int 8 else word_of_int 9
in
\<comment> \<open>Check if \<open>pc_val\<close> is aligned to 4-byte (32-bit) boundary.\<close>
\<comment> \<open>That is, check if the least significant two bits of\<close>
\<comment> \<open>\<open>pc_val\<close> are 0s.\<close>
- if uint(bitAND (0b00000000000000000000000000000011) pc_val) = 0 then
+ if uint((AND) (0b00000000000000000000000000000011) pc_val) = 0 then
\<comment> \<open>Get the 32-bit value from the address of \<open>pc_val\<close>\<close>
\<comment> \<open>to the address of \<open>pc_val+3\<close>\<close>
let (mem_result,n_s) = memory_read asi pc_val s in
case mem_result of
None \<Rightarrow> Inl [fetch_instruction_error]
|Some v \<Rightarrow> Inr v
else Inl [fetch_instruction_error]
"
text \<open>Decode the word32 value of an instruction into
the name of the instruction and its operands.\<close>
definition decode_instruction::"word32 \<Rightarrow>
Exception list + instruction"
where "decode_instruction w \<equiv>
let this_op = get_op w in
if this_op = uint(0b01::word2) then \<comment> \<open>Instruction format 1\<close>
parse_instr_f1 w
else if this_op = uint(0b00::word2) then \<comment> \<open>Instruction format 2\<close>
parse_instr_f2 w
else \<comment> \<open>\<open>op = 11 or 10\<close>, instruction format 3\<close>
parse_instr_f3 w
"
text \<open>Get the current window from the PSR\<close>
definition get_curr_win::"unit \<Rightarrow> ('a,('a::len0 window_size)) sparc_state_monad"
where "get_curr_win _ \<equiv>
do
curr_win \<leftarrow> gets (\<lambda>s. (ucast (get_CWP (cpu_reg_val PSR s))));
return curr_win
od"
text \<open>Operational semantics for CALL\<close>
definition call_instr::"instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "call_instr instr \<equiv>
let op_list = snd instr;
mem_addr = ((ucast (get_operand_w30 (op_list!0)))::word32) << 2
in
do
curr_win \<leftarrow> get_curr_win();
pc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PC s));
npc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val nPC s));
write_reg pc_val curr_win (word_of_int 15);
write_cpu npc_val PC;
write_cpu (pc_val + mem_addr) nPC;
return ()
od"
text \<open>Evaluate icc based on the bits N, Z, V, C in PSR
and the type of branching instruction.
See the SPARCv8 manual, page 178.\<close>
definition eval_icc::"sparc_operation \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> int"
where
"eval_icc instr_name n_val z_val v_val c_val \<equiv>
if instr_name = bicc_type BNE then
if z_val = 0 then 1 else 0
else if instr_name = bicc_type BE then
if z_val = 1 then 1 else 0
else if instr_name = bicc_type BG then
- if (bitOR z_val (n_val XOR v_val)) = 0 then 1 else 0
+ if ((OR) z_val (n_val XOR v_val)) = 0 then 1 else 0
else if instr_name = bicc_type BLE then
- if (bitOR z_val (n_val XOR v_val)) = 1 then 1 else 0
+ if ((OR) z_val (n_val XOR v_val)) = 1 then 1 else 0
else if instr_name = bicc_type BGE then
if (n_val XOR v_val) = 0 then 1 else 0
else if instr_name = bicc_type BL then
if (n_val XOR v_val) = 1 then 1 else 0
else if instr_name = bicc_type BGU then
if (c_val = 0 \<and> z_val = 0) then 1 else 0
else if instr_name = bicc_type BLEU then
if (c_val = 1 \<or> z_val = 1) then 1 else 0
else if instr_name = bicc_type BCC then
if c_val = 0 then 1 else 0
else if instr_name = bicc_type BCS then
if c_val = 1 then 1 else 0
else if instr_name = bicc_type BNEG then
if n_val = 1 then 1 else 0
else if instr_name = bicc_type BA then 1
else if instr_name = bicc_type BN then 0
else if instr_name = bicc_type BPOS then
if n_val = 0 then 1 else 0
else if instr_name = bicc_type BVC then
if v_val = 0 then 1 else 0
else if instr_name = bicc_type BVS then
if v_val = 1 then 1 else 0
else -1
"
definition branch_instr_sub1:: "sparc_operation \<Rightarrow> ('a) sparc_state \<Rightarrow> int"
where "branch_instr_sub1 instr_name s \<equiv>
let n_val = get_icc_N ((cpu_reg s) PSR);
z_val = get_icc_Z ((cpu_reg s) PSR);
v_val = get_icc_V ((cpu_reg s) PSR);
c_val = get_icc_C ((cpu_reg s) PSR)
in
eval_icc instr_name n_val z_val v_val c_val"
text \<open>Operational semantics for branching instructions.
Return an exception or a bool value for annulment.
If the bool value is 1, the delay instruction
is not executed; otherwise the delay instruction
is executed.\<close>
definition branch_instr::"instruction \<Rightarrow> ('a,unit) sparc_state_monad"
where "branch_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
disp22 = get_operand_w22 (op_list!1);
flaga = get_operand_flag (op_list!0)
in
do
icc_val \<leftarrow> gets( \<lambda>s. (branch_instr_sub1 instr_name s));
npc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val nPC s));
pc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PC s));
write_cpu npc_val PC;
if icc_val = 1 then
do
write_cpu (pc_val + (sign_ext24 (((ucast(disp22))::word24) << 2))) nPC;
if (instr_name = bicc_type BA) \<and> (flaga = 1) then
do
set_annul True;
return ()
od
else
return ()
od
else \<comment> \<open>\<open>icc_val = 0\<close>\<close>
do
write_cpu (npc_val + 4) nPC;
if flaga = 1 then
do
set_annul True;
return ()
od
else return ()
od
od"
text \<open>Operational semantics for NOP\<close>
definition nop_instr::"instruction \<Rightarrow> ('a,unit) sparc_state_monad"
where "nop_instr instr \<equiv> return ()"
text \<open>Operational semantics for SETHI\<close>
definition sethi_instr::"instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "sethi_instr instr \<equiv>
let op_list = snd instr;
imm22 = get_operand_w22 (op_list!0);
rd = get_operand_w5 (op_list!1)
in
if rd \<noteq> 0 then
do
curr_win \<leftarrow> get_curr_win();
write_reg (((ucast(imm22))::word32) << 10) curr_win rd;
return ()
od
else return ()
"
text \<open>
Get \<open>operand2\<close> based on the flag \<open>i\<close>, \<open>rs1\<close>, \<open>rs2\<close>, and \<open>simm13\<close>.
If \<open>i = 0\<close> then \<open>operand2 = r[rs2]\<close>,
else \<open>operand2 = sign_ext13(simm13)\<close>.
\<open>op_list\<close> should be \<open>[i,rs1,rs2,\<dots>]\<close> or \<open>[i,rs1,simm13,\<dots>]\<close>.
\<close>
definition get_operand2::"inst_operand list \<Rightarrow> ('a::len0) sparc_state
\<Rightarrow> virtua_address"
where "get_operand2 op_list s \<equiv>
let flagi = get_operand_flag (op_list!0);
curr_win = ucast (get_CWP (cpu_reg_val PSR s))
in
if flagi = 0 then
let rs2 = get_operand_w5 (op_list!2);
rs2_val = user_reg_val curr_win rs2 s
in rs2_val
else
let ext_simm13 = sign_ext13 (get_operand_simm13 (op_list!2)) in
ext_simm13
"
text \<open>
Get \<open>operand2_val\<close> based on the flag \<open>i\<close>, \<open>rs1\<close>, \<open>rs2\<close>, and \<open>simm13\<close>.
If \<open>i = 0\<close> then \<open>operand2_val = uint r[rs2]\<close>,
else \<open>operand2_val = sint sign_ext13(simm13)\<close>.
\<open>op_list\<close> should be \<open>[i,rs1,rs2,\<dots>]\<close> or \<open>[i,rs1,simm13,\<dots>]\<close>.
\<close>
definition get_operand2_val::"inst_operand list \<Rightarrow> ('a::len0) sparc_state \<Rightarrow> int"
where "get_operand2_val op_list s \<equiv>
let flagi = get_operand_flag (op_list!0);
curr_win = ucast (get_CWP (cpu_reg_val PSR s))
in
if flagi = 0 then
let rs2 = get_operand_w5 (op_list!2);
rs2_val = user_reg_val curr_win rs2 s
in sint rs2_val
else
let ext_simm13 = sign_ext13 (get_operand_simm13 (op_list!2)) in
sint ext_simm13
"
text \<open>
Get the address based on the flag \<open>i\<close>, \<open>rs1\<close>, \<open>rs2\<close>, and \<open>simm13\<close>.
If \<open>i = 0\<close> then \<open>addr = r[rs1] + r[rs2]\<close>,
else \<open>addr = r[rs1] + sign_ext13(simm13)\<close>.
\<open>op_list\<close> should be \<open>[i,rs1,rs2,\<dots>]\<close> or \<open>[i,rs1,simm13,\<dots>]\<close>.
\<close>
definition get_addr::"inst_operand list \<Rightarrow> ('a::len0) sparc_state \<Rightarrow> virtua_address"
where "get_addr op_list s \<equiv>
let rs1 = get_operand_w5 (op_list!1);
curr_win = ucast (get_CWP (cpu_reg_val PSR s));
rs1_val = user_reg_val curr_win rs1 s;
op2 = get_operand2 op_list s
in
(rs1_val + op2)
"
text \<open>Operational semantics for JMPL\<close>
definition jmpl_instr::"instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "jmpl_instr instr \<equiv>
let op_list = snd instr;
rd = get_operand_w5 (op_list!3)
in
do
curr_win \<leftarrow> get_curr_win();
jmp_addr \<leftarrow> gets (\<lambda>s. (get_addr op_list s));
- if (bitAND jmp_addr 0b00000000000000000000000000000011) \<noteq> 0 then
+ if ((AND) jmp_addr 0b00000000000000000000000000000011) \<noteq> 0 then
do
raise_trap mem_address_not_aligned;
return ()
od
else
do
rd_next_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then
(cpu_reg_val PC s)
else
user_reg_val curr_win rd s));
write_reg rd_next_val curr_win rd;
npc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val nPC s));
write_cpu npc_val PC;
write_cpu jmp_addr nPC;
return ()
od
od"
text \<open>Operational semantics for RETT\<close>
definition rett_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "rett_instr instr \<equiv>
let op_list = snd instr in
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
curr_win \<leftarrow> gets (\<lambda>s. (get_CWP (cpu_reg_val PSR s)));
new_cwp \<leftarrow> gets (\<lambda>s. (word_of_int (((uint curr_win) + 1) mod NWINDOWS)));
new_cwp_int \<leftarrow> gets (\<lambda>s. ((uint curr_win) + 1) mod NWINDOWS);
addr \<leftarrow> gets (\<lambda>s. (get_addr op_list s));
et_val \<leftarrow> gets (\<lambda>s. ((ucast (get_ET psr_val))::word1));
s_val \<leftarrow> gets (\<lambda>s. ((ucast (get_S psr_val))::word1));
ps_val \<leftarrow> gets (\<lambda>s. ((ucast (get_PS psr_val))::word1));
wim_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val WIM s));
npc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val nPC s));
if et_val = 1 then
if s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else
do
raise_trap illegal_instruction;
return ()
od
else if s_val = 0 then
do
write_cpu_tt (0b00000011::word8);
set_exe_mode False;
set_err_mode True;
raise_trap privileged_instruction;
fail ()
od
else if (get_WIM_bit (nat new_cwp_int) wim_val) \<noteq> 0 then
do
write_cpu_tt (0b00000110::word8);
set_exe_mode False;
set_err_mode True;
raise_trap window_underflow;
fail ()
od
- else if (bitAND addr (0b00000000000000000000000000000011::word32)) \<noteq> 0 then
+ else if ((AND) addr (0b00000000000000000000000000000011::word32)) \<noteq> 0 then
do
write_cpu_tt (0b00000111::word8);
set_exe_mode False;
set_err_mode True;
raise_trap mem_address_not_aligned;
fail ()
od
else
do
write_cpu npc_val PC;
write_cpu addr nPC;
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_rett new_cwp 1 ps_val psr_val));
write_cpu new_psr_val PSR;
return ()
od
od"
definition save_retore_sub1 :: "word32 \<Rightarrow> word5 \<Rightarrow> word5 \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "save_retore_sub1 result new_cwp rd \<equiv>
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
new_psr_val \<leftarrow> gets (\<lambda>s. (update_CWP new_cwp psr_val));
write_cpu new_psr_val PSR; \<comment> \<open>Change \<open>CWP\<close> to the new window value.\<close>
write_reg result (ucast new_cwp) rd; \<comment> \<open>Write result in \<open>rd\<close> of the new window.\<close>
return ()
od"
text \<open>Operational semantics for SAVE and RESTORE.\<close>
definition save_restore_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "save_restore_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rd = get_operand_w5 (op_list!3)
in
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
curr_win \<leftarrow> get_curr_win();
wim_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val WIM s));
if instr_name = ctrl_type SAVE then
do
new_cwp \<leftarrow> gets (\<lambda>s. ((word_of_int (((uint curr_win) - 1) mod NWINDOWS)))::word5);
if (get_WIM_bit (unat new_cwp) wim_val) \<noteq> 0 then
do
raise_trap window_overflow;
return ()
od
else
do
result \<leftarrow> gets (\<lambda>s. (get_addr op_list s)); \<comment> \<open>operands are from the old window.\<close>
save_retore_sub1 result new_cwp rd
od
od
else \<comment> \<open>\<open>instr_name = RESTORE\<close>\<close>
do
new_cwp \<leftarrow> gets (\<lambda>s. ((word_of_int (((uint curr_win) + 1) mod NWINDOWS)))::word5);
if (get_WIM_bit (unat new_cwp) wim_val) \<noteq> 0 then
do
raise_trap window_underflow;
return ()
od
else
do
result \<leftarrow> gets (\<lambda>s. (get_addr op_list s)); \<comment> \<open>operands are from the old window.\<close>
save_retore_sub1 result new_cwp rd
od
od
od"
definition flush_cache_line :: "word32 \<Rightarrow> ('a,unit) sparc_state_monad"
where "flush_cache_line \<equiv> undefined"
definition flush_Ibuf_and_pipeline :: "word32 \<Rightarrow> ('a,unit) sparc_state_monad"
where "flush_Ibuf_and_pipeline \<equiv> undefined"
text \<open>Operational semantics for FLUSH.
Flush all the caches.\<close>
definition flush_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "flush_instr instr \<equiv>
let op_list = snd instr in
do
addr \<leftarrow> gets (\<lambda>s. (get_addr op_list s));
modify (\<lambda>s. (flush_cache_all s));
\<^cancel>\<open>flush_cache_line(addr);\<close>
\<^cancel>\<open>flush_Ibuf_and_pipeline(addr);\<close>
return ()
od"
text \<open>Operational semantics for read state register instructions.
We do not consider RDASR here.\<close>
definition read_state_reg_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "read_state_reg_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!0);
rd = get_operand_w5 (op_list!1)
in
do
curr_win \<leftarrow> get_curr_win();
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
s_val \<leftarrow> gets (\<lambda>s. (get_S psr_val));
if (instr_name \<in> {sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR} \<or>
(instr_name = sreg_type RDASR \<and> privileged_ASR rs1))
\<and> ((ucast s_val)::word1) = 0 then
do
raise_trap privileged_instruction;
return ()
od
else if illegal_instruction_ASR rs1 then
do
raise_trap illegal_instruction;
return ()
od
else if rd \<noteq> 0 then
if instr_name = sreg_type RDY then
do
y_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val Y s));
write_reg y_val curr_win rd;
return ()
od
else if instr_name = sreg_type RDASR then
do
asr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val (ASR rs1) s));
write_reg asr_val curr_win rd;
return ()
od
else if instr_name = sreg_type RDPSR then
do
write_reg psr_val curr_win rd;
return ()
od
else if instr_name = sreg_type RDWIM then
do
wim_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val WIM s));
write_reg wim_val curr_win rd;
return ()
od
else \<comment> \<open>Must be \<open>RDTBR\<close>.\<close>
do
tbr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val TBR s));
write_reg tbr_val curr_win rd;
return ()
od
else return ()
od"
text \<open>Operational semantics for write state register instructions.
We do not consider WRASR here.\<close>
definition write_state_reg_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "write_state_reg_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
curr_win \<leftarrow> get_curr_win();
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
s_val \<leftarrow> gets (\<lambda>s. (get_S psr_val));
op2 \<leftarrow> gets (\<lambda>s. (get_operand2 op_list s));
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
- result \<leftarrow> gets (\<lambda>s. (bitXOR rs1_val op2));
+ result \<leftarrow> gets (\<lambda>s. ((XOR) rs1_val op2));
if instr_name = sreg_type WRY then
do
modify (\<lambda>s. (delayed_pool_add (DELAYNUM, result, Y) s));
return ()
od
else if instr_name = sreg_type WRASR then
if privileged_ASR rd \<and> s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else if illegal_instruction_ASR rd then
do
raise_trap illegal_instruction;
return ()
od
else
do
modify (\<lambda>s. (delayed_pool_add (DELAYNUM, result, (ASR rd)) s));
return ()
od
else if instr_name = sreg_type WRPSR then
if s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else if (uint ((ucast result)::word5)) \<ge> NWINDOWS then
do
raise_trap illegal_instruction;
return ()
od
else
do \<comment> \<open>\<open>ET\<close> and \<open>PIL\<close> appear to be written IMMEDIATELY w.r.t. interrupts.\<close>
pil_val \<leftarrow> gets (\<lambda>s. (get_PIL result));
et_val \<leftarrow> gets (\<lambda>s. (get_ET result));
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_et_pil et_val pil_val psr_val));
write_cpu new_psr_val PSR;
modify (\<lambda>s. (delayed_pool_add (DELAYNUM, result, PSR) s));
return ()
od
else if instr_name = sreg_type WRWIM then
if s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else
do \<comment> \<open>Don't write bits corresponding to non-existent windows.\<close>
result_f \<leftarrow> gets (\<lambda>s. ((result << nat (32 - NWINDOWS)) >> nat (32 - NWINDOWS)));
modify (\<lambda>s. (delayed_pool_add (DELAYNUM, result_f, WIM) s));
return ()
od
else \<comment> \<open>Must be \<open>WRTBR\<close>\<close>
if s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else
do \<comment> \<open>Only write the bits \<open><31:12>\<close> of the result to \<open>TBR\<close>.\<close>
tbr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val TBR s));
- tbr_val_11_0 \<leftarrow> gets (\<lambda>s. (bitAND tbr_val 0b00000000000000000000111111111111));
- result_tmp \<leftarrow> gets (\<lambda>s. (bitAND result 0b11111111111111111111000000000000));
- result_f \<leftarrow> gets (\<lambda>s. (bitOR tbr_val_11_0 result_tmp));
+ tbr_val_11_0 \<leftarrow> gets (\<lambda>s. ((AND) tbr_val 0b00000000000000000000111111111111));
+ result_tmp \<leftarrow> gets (\<lambda>s. ((AND) result 0b11111111111111111111000000000000));
+ result_f \<leftarrow> gets (\<lambda>s. ((OR) tbr_val_11_0 result_tmp));
modify (\<lambda>s. (delayed_pool_add (DELAYNUM, result_f, TBR) s));
return ()
od
od"
definition logical_result :: "sparc_operation \<Rightarrow> word32 \<Rightarrow> word32 \<Rightarrow> word32"
where "logical_result instr_name rs1_val operand2 \<equiv>
if (instr_name = logic_type ANDs) \<or>
(instr_name = logic_type ANDcc) then
- bitAND rs1_val operand2
+ (AND) rs1_val operand2
else if (instr_name = logic_type ANDN) \<or>
(instr_name = logic_type ANDNcc) then
- bitAND rs1_val (bitNOT operand2)
+ (AND) rs1_val (NOT operand2)
else if (instr_name = logic_type ORs) \<or>
(instr_name = logic_type ORcc) then
- bitOR rs1_val operand2
+ (OR) rs1_val operand2
else if instr_name \<in> {logic_type ORN,logic_type ORNcc} then
- bitOR rs1_val (bitNOT operand2)
+ (OR) rs1_val (NOT operand2)
else if instr_name \<in> {logic_type XORs,logic_type XORcc} then
- bitXOR rs1_val operand2
+ (XOR) rs1_val operand2
else \<comment> \<open>Must be \<open>XNOR\<close> or \<open>XNORcc\<close>\<close>
- bitXOR rs1_val (bitNOT operand2)
+ (XOR) rs1_val (NOT operand2)
"
definition logical_new_psr_val :: "word32 \<Rightarrow> ('a) sparc_state \<Rightarrow> word32"
where "logical_new_psr_val result s \<equiv>
let psr_val = cpu_reg_val PSR s;
n_val = (ucast (result >> 31))::word1;
z_val = if (result = 0) then 1 else 0;
v_val = 0;
c_val = 0
in
update_PSR_icc n_val z_val v_val c_val psr_val
"
definition logical_instr_sub1 :: "sparc_operation \<Rightarrow> word32 \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where
"logical_instr_sub1 instr_name result \<equiv>
if instr_name \<in> {logic_type ANDcc,logic_type ANDNcc,logic_type ORcc,
logic_type ORNcc,logic_type XORcc,logic_type XNORcc} then
do
new_psr_val \<leftarrow> gets (\<lambda>s. (logical_new_psr_val result s));
write_cpu new_psr_val PSR;
return ()
od
else return ()
"
text \<open>Operational semantics for logical instructions.\<close>
definition logical_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "logical_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
operand2 \<leftarrow> gets (\<lambda>s. (get_operand2 op_list s));
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
result \<leftarrow> gets (\<lambda>s. (logical_result instr_name rs1_val operand2));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
logical_instr_sub1 instr_name result
od"
text \<open>Operational semantics for shift instructions.\<close>
definition shift_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "shift_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
flagi = get_operand_flag (op_list!0);
rs1 = get_operand_w5 (op_list!1);
rs2_shcnt = get_operand_w5 (op_list!2);
rd = get_operand_w5 (op_list!3)
in
do
curr_win \<leftarrow> get_curr_win();
shift_count \<leftarrow> gets (\<lambda>s. (if flagi = 0 then
ucast (user_reg_val curr_win rs2_shcnt s)
else rs2_shcnt));
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
if (instr_name = shift_type SLL) \<and> (rd \<noteq> 0) then
do
rd_val \<leftarrow> gets (\<lambda>s. (rs1_val << (unat shift_count)));
write_reg rd_val curr_win rd;
return ()
od
else if (instr_name = shift_type SRL) \<and> (rd \<noteq> 0) then
do
rd_val \<leftarrow> gets (\<lambda>s. (rs1_val >> (unat shift_count)));
write_reg rd_val curr_win rd;
return ()
od
else if (instr_name = shift_type SRA) \<and> (rd \<noteq> 0) then
do
rd_val \<leftarrow> gets (\<lambda>s. (rs1_val >>> (unat shift_count)));
write_reg rd_val curr_win rd;
return ()
od
else return ()
od"
definition add_instr_sub1 :: "sparc_operation \<Rightarrow> word32 \<Rightarrow> word32 \<Rightarrow> word32
\<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "add_instr_sub1 instr_name result rs1_val operand2 \<equiv>
if instr_name \<in> {arith_type ADDcc,arith_type ADDXcc} then
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
result_31 \<leftarrow> gets (\<lambda>s. ((ucast (result >> 31))::word1));
rs1_val_31 \<leftarrow> gets (\<lambda>s. ((ucast (rs1_val >> 31))::word1));
operand2_31 \<leftarrow> gets (\<lambda>s. ((ucast (operand2 >> 31))::word1));
new_n_val \<leftarrow> gets (\<lambda>s. (result_31));
new_z_val \<leftarrow> gets (\<lambda>s. (if result = 0 then 1::word1 else 0::word1));
- new_v_val \<leftarrow> gets (\<lambda>s. (bitOR (bitAND rs1_val_31
- (bitAND operand2_31
- (bitNOT result_31)))
- (bitAND (bitNOT rs1_val_31)
- (bitAND (bitNOT operand2_31)
+ new_v_val \<leftarrow> gets (\<lambda>s. ((OR) ((AND) rs1_val_31
+ ((AND) operand2_31
+ (NOT result_31)))
+ ((AND) (NOT rs1_val_31)
+ ((AND) (NOT operand2_31)
result_31))));
- new_c_val \<leftarrow> gets (\<lambda>s. (bitOR (bitAND rs1_val_31
+ new_c_val \<leftarrow> gets (\<lambda>s. ((OR) ((AND) rs1_val_31
operand2_31)
- (bitAND (bitNOT result_31)
- (bitOR rs1_val_31
+ ((AND) (NOT result_31)
+ ((OR) rs1_val_31
operand2_31))));
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_icc new_n_val
new_z_val
new_v_val
new_c_val psr_val));
write_cpu new_psr_val PSR;
return ()
od
else return ()
"
text \<open>Operational semantics for add instructions.
These include ADD, ADDcc, ADDX, and ADDXcc.\<close>
definition add_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "add_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
operand2 \<leftarrow> gets (\<lambda>s. (get_operand2 op_list s));
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
c_val \<leftarrow> gets (\<lambda>s. (get_icc_C psr_val));
result \<leftarrow> gets (\<lambda>s. (if (instr_name = arith_type ADD) \<or>
(instr_name = arith_type ADDcc) then
rs1_val + operand2
else \<comment> \<open>Must be \<open>ADDX\<close> or \<open>ADDXcc\<close>\<close>
rs1_val + operand2 + (ucast c_val)));
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
add_instr_sub1 instr_name result rs1_val operand2
od"
definition sub_instr_sub1 :: "sparc_operation \<Rightarrow> word32 \<Rightarrow> word32 \<Rightarrow> word32
\<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "sub_instr_sub1 instr_name result rs1_val operand2 \<equiv>
if instr_name \<in> {arith_type SUBcc,arith_type SUBXcc} then
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
result_31 \<leftarrow> gets (\<lambda>s. ((ucast (result >> 31))::word1));
rs1_val_31 \<leftarrow> gets (\<lambda>s. ((ucast (rs1_val >> 31))::word1));
operand2_31 \<leftarrow> gets (\<lambda>s. ((ucast (operand2 >> 31))::word1));
new_n_val \<leftarrow> gets (\<lambda>s. (result_31));
new_z_val \<leftarrow> gets (\<lambda>s. (if result = 0 then 1::word1 else 0::word1));
- new_v_val \<leftarrow> gets (\<lambda>s. (bitOR (bitAND rs1_val_31
- (bitAND (bitNOT operand2_31)
- (bitNOT result_31)))
- (bitAND (bitNOT rs1_val_31)
- (bitAND operand2_31
+ new_v_val \<leftarrow> gets (\<lambda>s. ((OR) ((AND) rs1_val_31
+ ((AND) (NOT operand2_31)
+ (NOT result_31)))
+ ((AND) (NOT rs1_val_31)
+ ((AND) operand2_31
result_31))));
- new_c_val \<leftarrow> gets (\<lambda>s. (bitOR (bitAND (bitNOT rs1_val_31)
+ new_c_val \<leftarrow> gets (\<lambda>s. ((OR) ((AND) (NOT rs1_val_31)
operand2_31)
- (bitAND result_31
- (bitOR (bitNOT rs1_val_31)
+ ((AND) result_31
+ ((OR) (NOT rs1_val_31)
operand2_31))));
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_icc new_n_val
new_z_val
new_v_val
new_c_val psr_val));
write_cpu new_psr_val PSR;
return ()
od
else return ()
"
text \<open>Operational semantics for subtract instructions.
These include SUB, SUBcc, SUBX.\<close>
definition sub_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "sub_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
operand2 \<leftarrow> gets (\<lambda>s. (get_operand2 op_list s));
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
c_val \<leftarrow> gets (\<lambda>s. (get_icc_C psr_val));
result \<leftarrow> gets (\<lambda>s. (if (instr_name = arith_type SUB) \<or>
(instr_name = arith_type SUBcc) then
rs1_val - operand2
else \<comment> \<open>Must be \<open>SUBX\<close> or \<open>SUBXcc\<close>\<close>
rs1_val - operand2 - (ucast c_val)));
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
sub_instr_sub1 instr_name result rs1_val operand2
od"
definition mul_instr_sub1 :: "sparc_operation \<Rightarrow> word32 \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "mul_instr_sub1 instr_name result \<equiv>
if instr_name \<in> {arith_type SMULcc,arith_type UMULcc} then
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
new_n_val \<leftarrow> gets (\<lambda>s. ((ucast (result >> 31))::word1));
new_z_val \<leftarrow> gets (\<lambda>s. (if result = 0 then 1 else 0));
new_v_val \<leftarrow> gets (\<lambda>s. 0);
new_c_val \<leftarrow> gets (\<lambda>s. 0);
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_icc new_n_val
new_z_val
new_v_val
new_c_val psr_val));
write_cpu new_psr_val PSR;
return ()
od
else return ()
"
text \<open>Operational semantics for multiply instructions.\<close>
definition mul_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "mul_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
operand2 \<leftarrow> gets (\<lambda>s. (get_operand2 op_list s));
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
result0 \<leftarrow> gets (\<lambda>s. (if instr_name \<in> {arith_type UMUL,arith_type UMULcc} then
(word_of_int ((uint rs1_val) *
(uint operand2)))::word64
else \<comment> \<open>Must be \<open>SMUL\<close> or \<open>SMULcc\<close>\<close>
(word_of_int ((sint rs1_val) *
(sint operand2)))::word64));
\<comment> \<open>whether to use \<open>ucast\<close> or \<open>scast\<close> does not matter below.\<close>
y_val \<leftarrow> gets (\<lambda>s. ((ucast (result0 >> 32))::word32));
write_cpu y_val Y;
result \<leftarrow> gets (\<lambda>s. ((ucast result0)::word32));
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
mul_instr_sub1 instr_name result
od"
definition div_comp_temp_64bit :: "instruction \<Rightarrow> word64 \<Rightarrow>
virtua_address \<Rightarrow> word64"
where "div_comp_temp_64bit i y_rs1 operand2 \<equiv>
if ((fst i) = arith_type UDIV) \<or> ((fst i) = arith_type UDIVcc) then
(word_of_int ((uint y_rs1) div (uint operand2)))::word64
else \<comment> \<open>Must be \<open>SDIV\<close> or \<open>SDIVcc\<close>.\<close>
\<comment> \<open>Because Isabelle's rounding method is not towards zero,\<close>
\<comment> \<open>we have to implement division in a different way.\<close>
let sop1 = sint y_rs1;
sop2 = sint operand2;
pop1 = abs sop1;
pop2 = abs sop2
in
if sop1 > 0 \<and> sop2 > 0 then
(word_of_int (sop1 div sop2))
else if sop1 > 0 \<and> sop2 < 0 then
(word_of_int (- (sop1 div pop2)))
else if sop1 < 0 \<and> sop2 > 0 then
(word_of_int (- (pop1 div sop2)))
else \<comment> \<open>\<open>sop1 < 0 \<and> sop2 < 0\<close>\<close>
(word_of_int (pop1 div pop2))"
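text \<open>An illustrative aside (added here, not part of the original theory):
Isabelle's integer \<open>div\<close> rounds towards negative infinity, e.g.
\<open>(-7::int) div 2 = -4\<close>, whereas SPARCv8 signed division truncates towards
zero and would yield \<open>-3\<close>. The case split on the signs of \<open>sop1\<close> and
\<open>sop2\<close> above divides the absolute values and reattaches the sign, which
recovers truncation towards zero.\<close>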
definition div_comp_temp_V :: "instruction \<Rightarrow> word32 \<Rightarrow> word33 \<Rightarrow> word1"
where "div_comp_temp_V i w32 w33 \<equiv>
if ((fst i) = arith_type UDIV) \<or> ((fst i) = arith_type UDIVcc) then
if w32 = 0 then 0 else 1
else \<comment> \<open>Must be \<open>SDIV\<close> or \<open>SDIVcc\<close>.\<close>
if (w33 = 0) \<or> (w33 = (0b111111111111111111111111111111111::word33))
then 0 else 1"
definition div_comp_result :: "instruction \<Rightarrow> word1 \<Rightarrow> word64 \<Rightarrow> word32"
where "div_comp_result i temp_V temp_64bit \<equiv>
if temp_V = 1 then
if ((fst i) = arith_type UDIV) \<or> ((fst i) = arith_type UDIVcc) then
(0b11111111111111111111111111111111::word32)
else if (fst i) \<in> {arith_type SDIV,arith_type SDIVcc} then
if temp_64bit > 0 then
(0b01111111111111111111111111111111::word32)
else ((word_of_int (0 - (uint (0b10000000000000000000000000000000::word32))))::word32)
else ((ucast temp_64bit)::word32)
else ((ucast temp_64bit)::word32)"
definition div_write_new_val :: "instruction \<Rightarrow> word32 \<Rightarrow> word1 \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "div_write_new_val i result temp_V \<equiv>
if (fst i) \<in> {arith_type UDIVcc,arith_type SDIVcc} then
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
new_n_val \<leftarrow> gets (\<lambda>s. ((ucast (result >> 31))::word1));
new_z_val \<leftarrow> gets (\<lambda>s. (if result = 0 then 1 else 0));
new_v_val \<leftarrow> gets (\<lambda>s. temp_V);
new_c_val \<leftarrow> gets (\<lambda>s. 0);
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_icc new_n_val
new_z_val
new_v_val
new_c_val psr_val));
write_cpu new_psr_val PSR;
return ()
od
else return ()"
definition div_comp :: "instruction \<Rightarrow> word5 \<Rightarrow> word5 \<Rightarrow> virtua_address \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "div_comp instr rs1 rd operand2 \<equiv>
do
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
y_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val Y s));
y_rs1 \<leftarrow> gets (\<lambda>s. ((word_cat y_val rs1_val)::word64));
temp_64bit \<leftarrow> gets (\<lambda>s. (div_comp_temp_64bit instr y_rs1 operand2));
\<^cancel>\<open>result \<leftarrow> gets (\<lambda>s. ((ucast temp_64bit)::word32));\<close>
temp_high32 \<leftarrow> gets (\<lambda>s. ((ucast (temp_64bit >> 32))::word32));
temp_high33 \<leftarrow> gets (\<lambda>s. ((ucast (temp_64bit >> 31))::word33));
temp_V \<leftarrow> gets (\<lambda>s. (div_comp_temp_V instr temp_high32 temp_high33));
result \<leftarrow> gets (\<lambda>s. (div_comp_result instr temp_V temp_64bit));
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
div_write_new_val instr result temp_V
od"
text \<open>Operational semantics for divide instructions.\<close>
definition div_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "div_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
operand2 \<leftarrow> gets (\<lambda>s. (get_operand2 op_list s));
if (uint operand2) = 0 then
do
raise_trap division_by_zero;
return ()
od
else
div_comp instr rs1 rd operand2
od"
definition ld_word0 :: "instruction \<Rightarrow> word32 \<Rightarrow> virtua_address \<Rightarrow> word32"
where "ld_word0 instr data_word address \<equiv>
if (fst instr) \<in> {load_store_type LDSB,load_store_type LDUB,
load_store_type LDUBA,load_store_type LDSBA} then
let byte = if (uint ((ucast address)::word2)) = 0 then
(ucast (data_word >> 24))::word8
else if (uint ((ucast address)::word2)) = 1 then
(ucast (data_word >> 16))::word8
else if (uint ((ucast address)::word2)) = 2 then
(ucast (data_word >> 8))::word8
else \<comment> \<open>Must be 3.\<close>
(ucast data_word)::word8
in
if (fst instr) = load_store_type LDSB \<or> (fst instr) = load_store_type LDSBA then
sign_ext8 byte
else
zero_ext8 byte
else if (fst instr) = load_store_type LDUH \<or> (fst instr) = load_store_type LDSH \<or>
(fst instr) = load_store_type LDSHA \<or> (fst instr) = load_store_type LDUHA
then
let halfword = if (uint ((ucast address)::word2)) = 0 then
(ucast (data_word >> 16))::word16
else \<comment> \<open>Must be 2.\<close>
(ucast data_word)::word16
in
if (fst instr) = load_store_type LDSH \<or> (fst instr) = load_store_type LDSHA then
sign_ext16 halfword
else
zero_ext16 halfword
else \<comment> \<open>Must be LDD\<close>
data_word
"
definition ld_asi :: "instruction \<Rightarrow> word1 \<Rightarrow> asi_type"
where "ld_asi instr s_val \<equiv>
if (fst instr) \<in> {load_store_type LDD,load_store_type LD,load_store_type LDUH,
load_store_type LDSB,load_store_type LDUB,load_store_type LDSH} then
if s_val = 0 then (word_of_int 10)::asi_type
else (word_of_int 11)::asi_type
else \<comment> \<open>Must be \<open>LDA\<close>, \<open>LDUBA\<close>, \<open>LDSBA\<close>, \<open>LDSHA\<close>, \<open>LDUHA\<close>, or \<open>LDDA\<close>.\<close>
get_operand_asi ((snd instr)!3)
"
definition load_sub2 :: "virtua_address \<Rightarrow> asi_type \<Rightarrow> word5 \<Rightarrow>
('a::len0) window_size \<Rightarrow> word32 \<Rightarrow> ('a,unit) sparc_state_monad"
where "load_sub2 address asi rd curr_win word0 \<equiv>
do
- write_reg word0 curr_win (bitAND rd 0b11110);
+ write_reg word0 curr_win ((AND) rd 0b11110);
(result1,new_state1) \<leftarrow> gets (\<lambda>s. (memory_read asi (address + 4) s));
if result1 = None then
do
raise_trap data_access_exception;
return ()
od
else
do
word1 \<leftarrow> gets (\<lambda>s. (case result1 of Some v \<Rightarrow> v));
modify (\<lambda>s. (new_state1));
- write_reg word1 curr_win (bitOR rd 1);
+ write_reg word1 curr_win ((OR) rd 1);
return ()
od
od"
definition load_sub3 :: "instruction \<Rightarrow> ('a::len0) window_size \<Rightarrow>
word5 \<Rightarrow> asi_type \<Rightarrow> virtua_address \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "load_sub3 instr curr_win rd asi address \<equiv>
do
(result,new_state) \<leftarrow> gets (\<lambda>s. (memory_read asi address s));
if result = None then
do
raise_trap data_access_exception;
return ()
od
else
do
data_word \<leftarrow> gets (\<lambda>s. (case result of Some v \<Rightarrow> v));
modify (\<lambda>s. (new_state));
word0 \<leftarrow> gets (\<lambda>s. (ld_word0 instr data_word address));
if rd \<noteq> 0 \<and> (fst instr) \<in> {load_store_type LD,load_store_type LDA,
load_store_type LDUH,load_store_type LDSB,load_store_type LDUB,
load_store_type LDUBA,load_store_type LDSH,load_store_type LDSHA,
load_store_type LDUHA,load_store_type LDSBA} then
do
write_reg word0 curr_win rd;
return ()
od
else \<comment> \<open>Must be \<open>LDD\<close> or \<open>LDDA\<close>\<close>
load_sub2 address asi rd curr_win word0
od
od"
definition load_sub1 :: "instruction \<Rightarrow> word5 \<Rightarrow> word1 \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "load_sub1 instr rd s_val \<equiv>
do
curr_win \<leftarrow> get_curr_win();
address \<leftarrow> gets (\<lambda>s. (get_addr (snd instr) s));
asi \<leftarrow> gets (\<lambda>s. (ld_asi instr s_val));
if (((fst instr) = load_store_type LDD \<or> (fst instr) = load_store_type LDDA)
\<and> ((ucast address)::word3) \<noteq> 0)
\<or> ((fst instr) \<in> {load_store_type LD,load_store_type LDA}
\<and> ((ucast address)::word2) \<noteq> 0)
\<or> (((fst instr) = load_store_type LDUH \<or> (fst instr) = load_store_type LDUHA
\<or> (fst instr) = load_store_type LDSH \<or> (fst instr) = load_store_type LDSHA)
\<and> ((ucast address)::word1) \<noteq> 0)
then
do
raise_trap mem_address_not_aligned;
return ()
od
else
load_sub3 instr curr_win rd asi address
od"
text \<open>Operational semantics for Load instructions.\<close>
definition load_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "load_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
flagi = get_operand_flag (op_list!0);
rd = if instr_name \<in> {load_store_type LDUBA,load_store_type LDA,
load_store_type LDSBA,load_store_type LDSHA,
load_store_type LDUHA,load_store_type LDDA} then \<comment> \<open>\<open>rd\<close> is member 4\<close>
get_operand_w5 (op_list!4)
else \<comment> \<open>\<open>rd\<close> is member 3\<close>
get_operand_w5 (op_list!3)
in
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
s_val \<leftarrow> gets (\<lambda>s. (get_S psr_val));
if instr_name \<in> {load_store_type LDA,load_store_type LDUBA,
load_store_type LDSBA,load_store_type LDSHA,
load_store_type LDUHA,load_store_type LDDA} \<and> s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else if instr_name \<in> {load_store_type LDA,load_store_type LDUBA,
load_store_type LDSBA,load_store_type LDSHA,load_store_type LDUHA,
load_store_type LDDA} \<and> flagi = 1 then
do
raise_trap illegal_instruction;
return ()
od
else
load_sub1 instr rd s_val
od"
definition st_asi :: "instruction \<Rightarrow> word1 \<Rightarrow> asi_type"
where "st_asi instr s_val \<equiv>
if (fst instr) \<in> {load_store_type STD,load_store_type ST,
load_store_type STH,load_store_type STB} then
if s_val = 0 then (word_of_int 10)::asi_type
else (word_of_int 11)::asi_type
else \<comment> \<open>Must be \<open>STA\<close>, \<open>STBA\<close>, \<open>STHA\<close>, \<open>STDA\<close>.\<close>
get_operand_asi ((snd instr)!3)
"
definition st_byte_mask :: "instruction \<Rightarrow> virtua_address \<Rightarrow> word4"
where "st_byte_mask instr address \<equiv>
if (fst instr) \<in> {load_store_type STD,load_store_type ST,
load_store_type STA,load_store_type STDA} then
(0b1111::word4)
else if (fst instr) \<in> {load_store_type STH,load_store_type STHA} then
if ((ucast address)::word2) = 0 then
(0b1100::word4)
else \<comment> \<open>Must be 2.\<close>
(0b0011::word4)
else \<comment> \<open>Must be \<open>STB\<close> or \<open>STBA\<close>.\<close>
if ((ucast address)::word2) = 0 then
(0b1000::word4)
else if ((ucast address)::word2) = 1 then
(0b0100::word4)
else if ((ucast address)::word2) = 2 then
(0b0010::word4)
else \<comment> \<open>Must be 3.\<close>
(0b0001::word4)
"
definition st_data0 :: "instruction \<Rightarrow> ('a::len0) window_size \<Rightarrow>
word5 \<Rightarrow> virtua_address \<Rightarrow> ('a) sparc_state \<Rightarrow> reg_type"
where "st_data0 instr curr_win rd address s \<equiv>
if (fst instr) \<in> {load_store_type STD,load_store_type STDA} then
- user_reg_val curr_win (bitAND rd 0b11110) s
+ user_reg_val curr_win ((AND) rd 0b11110) s
else if (fst instr) \<in> {load_store_type ST,load_store_type STA} then
user_reg_val curr_win rd s
else if (fst instr) \<in> {load_store_type STH,load_store_type STHA} then
if ((ucast address)::word2) = 0 then
(user_reg_val curr_win rd s) << 16
else \<comment> \<open>Must be 2.\<close>
user_reg_val curr_win rd s
else \<comment> \<open>Must be \<open>STB\<close> or \<open>STBA\<close>.\<close>
if ((ucast address)::word2) = 0 then
(user_reg_val curr_win rd s) << 24
else if ((ucast address)::word2) = 1 then
(user_reg_val curr_win rd s) << 16
else if ((ucast address)::word2) = 2 then
(user_reg_val curr_win rd s) << 8
else \<comment> \<open>Must be 3.\<close>
user_reg_val curr_win rd s
"
definition store_sub2 :: "instruction \<Rightarrow> ('a::len0) window_size \<Rightarrow>
word5 \<Rightarrow> asi_type \<Rightarrow> virtua_address \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "store_sub2 instr curr_win rd asi address \<equiv>
do
byte_mask \<leftarrow> gets (\<lambda>s. (st_byte_mask instr address));
data0 \<leftarrow> gets (\<lambda>s. (st_data0 instr curr_win rd address s));
result0 \<leftarrow> gets (\<lambda>s. (memory_write asi address byte_mask data0 s));
if result0 = None then
do
raise_trap data_access_exception;
return ()
od
else
do
new_state \<leftarrow> gets (\<lambda>s. (case result0 of Some v \<Rightarrow> v));
modify (\<lambda>s. (new_state));
if (fst instr) \<in> {load_store_type STD,load_store_type STDA} then
do
- data1 \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win (bitOR rd 0b00001) s));
+ data1 \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win ((OR) rd 0b00001) s));
result1 \<leftarrow> gets (\<lambda>s. (memory_write asi (address + 4) (0b1111::word4) data1 s));
if result1 = None then
do
raise_trap data_access_exception;
return ()
od
else
do
new_state1 \<leftarrow> gets (\<lambda>s. (case result1 of Some v \<Rightarrow> v));
modify (\<lambda>s. (new_state1));
return ()
od
od
else
return ()
od
od"
definition store_sub1 :: "instruction \<Rightarrow> word5 \<Rightarrow> word1 \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "store_sub1 instr rd s_val \<equiv>
do
curr_win \<leftarrow> get_curr_win();
address \<leftarrow> gets (\<lambda>s. (get_addr (snd instr) s));
asi \<leftarrow> gets (\<lambda>s. (st_asi instr s_val));
\<comment> \<open>The following code is intentionally long to match the definitions in SPARCv8.\<close>
if ((fst instr) = load_store_type STH \<or> (fst instr) = load_store_type STHA)
\<and> ((ucast address)::word1) \<noteq> 0 then
do
raise_trap mem_address_not_aligned;
return ()
od
else if (fst instr) \<in> {load_store_type ST,load_store_type STA}
\<and> ((ucast address)::word2) \<noteq> 0 then
do
raise_trap mem_address_not_aligned;
return ()
od
else if (fst instr) \<in> {load_store_type STD,load_store_type STDA}
\<and> ((ucast address)::word3) \<noteq> 0 then
do
raise_trap mem_address_not_aligned;
return ()
od
else
store_sub2 instr curr_win rd asi address
od"
text \<open>Operational semantics for Store instructions.\<close>
definition store_instr :: "instruction \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "store_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
flagi = get_operand_flag (op_list!0);
rd = if instr_name \<in> {load_store_type STA,load_store_type STBA,
load_store_type STHA,load_store_type STDA} then \<comment> \<open>\<open>rd\<close> is member 4\<close>
get_operand_w5 (op_list!4)
else \<comment> \<open>\<open>rd\<close> is member 3\<close>
get_operand_w5 (op_list!3)
in
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
s_val \<leftarrow> gets (\<lambda>s. (get_S psr_val));
if instr_name \<in> {load_store_type STA,load_store_type STDA,
load_store_type STHA,load_store_type STBA} \<and> s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else if instr_name \<in> {load_store_type STA,load_store_type STDA,
load_store_type STHA,load_store_type STBA} \<and> flagi = 1 then
do
raise_trap illegal_instruction;
return ()
od
else
store_sub1 instr rd s_val
od"
text \<open>The instructions below are not used by Xtratum and they are
not tested.\<close>
definition ldst_asi :: "instruction \<Rightarrow> word1 \<Rightarrow> asi_type"
where "ldst_asi instr s_val \<equiv>
if (fst instr) \<in> {load_store_type LDSTUB} then
if s_val = 0 then (word_of_int 10)::asi_type
else (word_of_int 11)::asi_type
else \<comment> \<open>Must be \<open>LDSTUBA\<close>.\<close>
get_operand_asi ((snd instr)!3)
"
definition ldst_word0 :: "instruction \<Rightarrow> word32 \<Rightarrow> virtua_address \<Rightarrow> word32"
where "ldst_word0 instr data_word address \<equiv>
let byte = if (uint ((ucast address)::word2)) = 0 then
(ucast (data_word >> 24))::word8
else if (uint ((ucast address)::word2)) = 1 then
(ucast (data_word >> 16))::word8
else if (uint ((ucast address)::word2)) = 2 then
(ucast (data_word >> 8))::word8
else \<comment> \<open>Must be 3.\<close>
(ucast data_word)::word8
in
zero_ext8 byte
"
definition ldst_byte_mask :: "instruction \<Rightarrow> virtua_address \<Rightarrow> word4"
where "ldst_byte_mask instr address \<equiv>
if ((ucast address)::word2) = 0 then
(0b1000::word4)
else if ((ucast address)::word2) = 1 then
(0b0100::word4)
else if ((ucast address)::word2) = 2 then
(0b0010::word4)
else \<comment> \<open>Must be 3.\<close>
(0b0001::word4)
"
definition load_store_sub1 :: "instruction \<Rightarrow> word5 \<Rightarrow> word1 \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "load_store_sub1 instr rd s_val \<equiv>
do
curr_win \<leftarrow> get_curr_win();
address \<leftarrow> gets (\<lambda>s. (get_addr (snd instr) s));
asi \<leftarrow> gets (\<lambda>s. (ldst_asi instr s_val));
\<comment> \<open>wait for locks to be lifted.\<close>
\<comment> \<open>an implementation actually need only block when another \<open>LDSTUB\<close> or \<open>SWAP\<close>\<close>
\<comment> \<open>is pending on the same byte in memory as the one addressed by this \<open>LDSTUB\<close>\<close>
\<comment> \<open>Should wait when \<open>block_type = 1 \<or> block_word = 1\<close>\<close>
\<comment> \<open>until another process writes both back to 0.\<close>
\<comment> \<open>We implement this by setting \<open>npc\<close> to \<open>pc\<close> when the instruction\<close>
\<comment> \<open>is blocked. This way, in the next iteration, we will still execute\<close>
\<comment> \<open>the current instruction.\<close>
block_byte \<leftarrow> gets (\<lambda>s. (pb_block_ldst_byte_val address s));
block_word \<leftarrow> gets (\<lambda>s. (pb_block_ldst_word_val address s));
if block_byte \<or> block_word then
do
pc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PC s));
write_cpu pc_val nPC;
return ()
od
else
do
modify (\<lambda>s. (pb_block_ldst_byte_mod address True s));
(result,new_state) \<leftarrow> gets (\<lambda>s. (memory_read asi address s));
if result = None then
do
raise_trap data_access_exception;
return ()
od
else
do
data_word \<leftarrow> gets (\<lambda>s. (case result of Some v \<Rightarrow> v));
modify (\<lambda>s. (new_state));
byte_mask \<leftarrow> gets (\<lambda>s. (ldst_byte_mask instr address));
data0 \<leftarrow> gets (\<lambda>s. (0b11111111111111111111111111111111::word32));
result0 \<leftarrow> gets (\<lambda>s. (memory_write asi address byte_mask data0 s));
modify (\<lambda>s. (pb_block_ldst_byte_mod address False s));
if result0 = None then
do
raise_trap data_access_exception;
return ()
od
else
do
new_state1 \<leftarrow> gets (\<lambda>s. (case result0 of Some v \<Rightarrow> v));
modify (\<lambda>s. (new_state1));
word0 \<leftarrow> gets (\<lambda>s. (ldst_word0 instr data_word address));
if rd \<noteq> 0 then
do
write_reg word0 curr_win rd;
return ()
od
else
return ()
od
od
od
od"
text \<open>Operational semantics for atomic load-store.\<close>
definition load_store_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "load_store_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
flagi = get_operand_flag (op_list!0);
rd = if instr_name \<in> {load_store_type LDSTUBA} then \<comment> \<open>\<open>rd\<close> is member 4\<close>
get_operand_w5 (op_list!4)
else \<comment> \<open>\<open>rd\<close> is member 3\<close>
get_operand_w5 (op_list!3)
in
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
s_val \<leftarrow> gets (\<lambda>s. (get_S psr_val));
if instr_name \<in> {load_store_type LDSTUBA} \<and> s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else if instr_name \<in> {load_store_type LDSTUBA} \<and> flagi = 1 then
do
raise_trap illegal_instruction;
return ()
od
else
load_store_sub1 instr rd s_val
od"
definition swap_sub1 :: "instruction \<Rightarrow> word5 \<Rightarrow> word1 \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "swap_sub1 instr rd s_val \<equiv>
do
curr_win \<leftarrow> get_curr_win();
address \<leftarrow> gets (\<lambda>s. (get_addr (snd instr) s));
asi \<leftarrow> gets (\<lambda>s. (ldst_asi instr s_val));
temp \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
\<comment> \<open>wait for locks to be lifted.\<close>
\<comment> \<open>an implementation actually need only block when another \<open>LDSTUB\<close> or \<open>SWAP\<close>\<close>
\<comment> \<open>is pending on the same byte in memory as the one addressed by this \<open>LDSTUB\<close>\<close>
\<comment> \<open>Should wait when \<open>block_type = 1 \<or> block_word = 1\<close>\<close>
\<comment> \<open>until another process writes both back to 0.\<close>
\<comment> \<open>We implement this by setting \<open>npc\<close> to \<open>pc\<close> when the instruction\<close>
\<comment> \<open>is blocked. This way, in the next iteration, we will still execute\<close>
\<comment> \<open>the current instruction.\<close>
block_byte \<leftarrow> gets (\<lambda>s. (pb_block_ldst_byte_val address s));
block_word \<leftarrow> gets (\<lambda>s. (pb_block_ldst_word_val address s));
if block_byte \<or> block_word then
do
pc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PC s));
write_cpu pc_val nPC;
return ()
od
else
do
modify (\<lambda>s. (pb_block_ldst_word_mod address True s));
(result,new_state) \<leftarrow> gets (\<lambda>s. (memory_read asi address s));
if result = None then
do
raise_trap data_access_exception;
return ()
od
else
do
word \<leftarrow> gets (\<lambda>s. (case result of Some v \<Rightarrow> v));
modify (\<lambda>s. (new_state));
byte_mask \<leftarrow> gets (\<lambda>s. (0b1111::word4));
result0 \<leftarrow> gets (\<lambda>s. (memory_write asi address byte_mask temp s));
modify (\<lambda>s. (pb_block_ldst_word_mod address False s));
if result0 = None then
do
raise_trap data_access_exception;
return ()
od
else
do
new_state1 \<leftarrow> gets (\<lambda>s. (case result0 of Some v \<Rightarrow> v));
modify (\<lambda>s. (new_state1));
if rd \<noteq> 0 then
do
write_reg word curr_win rd;
return ()
od
else
return ()
od
od
od
od"
text \<open>Operational semantics for swap.\<close>
definition swap_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "swap_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
flagi = get_operand_flag (op_list!0);
rd = if instr_name \<in> {load_store_type SWAPA} then \<comment> \<open>\<open>rd\<close> is member 4\<close>
get_operand_w5 (op_list!4)
else \<comment> \<open>\<open>rd\<close> is member 3\<close>
get_operand_w5 (op_list!3)
in
do
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
s_val \<leftarrow> gets (\<lambda>s. (get_S psr_val));
if instr_name \<in> {load_store_type SWAPA} \<and> s_val = 0 then
do
raise_trap privileged_instruction;
return ()
od
else if instr_name \<in> {load_store_type SWAPA} \<and> flagi = 1 then
do
raise_trap illegal_instruction;
return ()
od
else
swap_sub1 instr rd s_val
od"
definition bit2_zero :: "word2 \<Rightarrow> word1"
where "bit2_zero w2 \<equiv> if w2 \<noteq> 0 then 1 else 0"
text \<open>Operational semantics for tagged add instructions.\<close>
definition tadd_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "tadd_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
operand2 \<leftarrow> gets (\<lambda>s. (get_operand2 op_list s));
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
c_val \<leftarrow> gets (\<lambda>s. (get_icc_C psr_val));
result \<leftarrow> gets (\<lambda>s. (rs1_val + operand2));
result_31 \<leftarrow> gets (\<lambda>s. ((ucast (result >> 31))::word1));
rs1_val_31 \<leftarrow> gets (\<lambda>s. ((ucast (rs1_val >> 31))::word1));
operand2_31 \<leftarrow> gets (\<lambda>s. ((ucast (operand2 >> 31))::word1));
rs1_val_2 \<leftarrow> gets (\<lambda>s. (bit2_zero ((ucast rs1_val)::word2)));
operand2_2 \<leftarrow> gets (\<lambda>s. (bit2_zero ((ucast operand2)::word2)));
- temp_V \<leftarrow> gets (\<lambda>s. (bitOR (bitOR (bitAND rs1_val_31
- (bitAND operand2_31
- (bitNOT result_31)))
- (bitAND (bitNOT rs1_val_31)
- (bitAND (bitNOT operand2_31)
+ temp_V \<leftarrow> gets (\<lambda>s. ((OR) ((OR) ((AND) rs1_val_31
+ ((AND) operand2_31
+ (NOT result_31)))
+ ((AND) (NOT rs1_val_31)
+ ((AND) (NOT operand2_31)
result_31)))
- (bitOR rs1_val_2 operand2_2)));
+ ((OR) rs1_val_2 operand2_2)));
if instr_name = arith_type TADDccTV \<and> temp_V = 1 then
do
raise_trap tag_overflow;
return ()
od
else
do
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
new_n_val \<leftarrow> gets (\<lambda>s. (result_31));
new_z_val \<leftarrow> gets (\<lambda>s. (if result = 0 then 1::word1 else 0::word1));
new_v_val \<leftarrow> gets (\<lambda>s. temp_V);
- new_c_val \<leftarrow> gets (\<lambda>s. (bitOR (bitAND rs1_val_31
+ new_c_val \<leftarrow> gets (\<lambda>s. ((OR) ((AND) rs1_val_31
operand2_31)
- (bitAND (bitNOT result_31)
- (bitOR rs1_val_31
+ ((AND) (NOT result_31)
+ ((OR) rs1_val_31
operand2_31))));
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_icc new_n_val
new_z_val
new_v_val
new_c_val psr_val));
write_cpu new_psr_val PSR;
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
return ()
od
od"
text \<open>Operational semantics for tagged subtract instructions.\<close>
definition tsub_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "tsub_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
operand2 \<leftarrow> gets (\<lambda>s. (get_operand2 op_list s));
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
c_val \<leftarrow> gets (\<lambda>s. (get_icc_C psr_val));
result \<leftarrow> gets (\<lambda>s. (rs1_val - operand2));
result_31 \<leftarrow> gets (\<lambda>s. ((ucast (result >> 31))::word1));
rs1_val_31 \<leftarrow> gets (\<lambda>s. ((ucast (rs1_val >> 31))::word1));
operand2_31 \<leftarrow> gets (\<lambda>s. ((ucast (operand2 >> 31))::word1));
rs1_val_2 \<leftarrow> gets (\<lambda>s. (bit2_zero ((ucast rs1_val)::word2)));
operand2_2 \<leftarrow> gets (\<lambda>s. (bit2_zero ((ucast operand2)::word2)));
- temp_V \<leftarrow> gets (\<lambda>s. (bitOR (bitOR (bitAND rs1_val_31
- (bitAND operand2_31
- (bitNOT result_31)))
- (bitAND (bitNOT rs1_val_31)
- (bitAND (bitNOT operand2_31)
+ temp_V \<leftarrow> gets (\<lambda>s. ((OR) ((OR) ((AND) rs1_val_31
+ ((AND) operand2_31
+ (NOT result_31)))
+ ((AND) (NOT rs1_val_31)
+ ((AND) (NOT operand2_31)
result_31)))
- (bitOR rs1_val_2 operand2_2)));
+ ((OR) rs1_val_2 operand2_2)));
if instr_name = arith_type TSUBccTV \<and> temp_V = 1 then
do
raise_trap tag_overflow;
return ()
od
else
do
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
new_n_val \<leftarrow> gets (\<lambda>s. (result_31));
new_z_val \<leftarrow> gets (\<lambda>s. (if result = 0 then 1::word1 else 0::word1));
new_v_val \<leftarrow> gets (\<lambda>s. temp_V);
- new_c_val \<leftarrow> gets (\<lambda>s. (bitOR (bitAND rs1_val_31
+ new_c_val \<leftarrow> gets (\<lambda>s. ((OR) ((AND) rs1_val_31
operand2_31)
- (bitAND (bitNOT result_31)
- (bitOR rs1_val_31
+ ((AND) (NOT result_31)
+ ((OR) rs1_val_31
operand2_31))));
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_icc new_n_val
new_z_val
new_v_val
new_c_val psr_val));
write_cpu new_psr_val PSR;
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
return ()
od
od"
definition muls_op2 :: "inst_operand list \<Rightarrow> ('a::len0) sparc_state \<Rightarrow> word32"
where "muls_op2 op_list s \<equiv>
let y_val = cpu_reg_val Y s in
if ((ucast y_val)::word1) = 0 then 0
else get_operand2 op_list s
"
text \<open>Operational semantics for multiply step instruction.\<close>
definition muls_instr :: "instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "muls_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1);
rd = get_operand_w5 (op_list!3)
in
do
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
psr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PSR s));
n_val \<leftarrow> gets (\<lambda>s. (get_icc_N psr_val));
v_val \<leftarrow> gets (\<lambda>s. (get_icc_V psr_val));
c_val \<leftarrow> gets (\<lambda>s. (get_icc_C psr_val));
y_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val Y s));
- operand1 \<leftarrow> gets (\<lambda>s. (word_cat (bitXOR n_val v_val)
+ operand1 \<leftarrow> gets (\<lambda>s. (word_cat ((XOR) n_val v_val)
((ucast (rs1_val >> 1))::word31)));
operand2 \<leftarrow> gets (\<lambda>s. (muls_op2 op_list s));
result \<leftarrow> gets (\<lambda>s. (operand1 + operand2));
new_y_val \<leftarrow> gets (\<lambda>s. (word_cat ((ucast rs1_val)::word1) ((ucast (y_val >> 1))::word31)));
write_cpu new_y_val Y;
rd_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rd s));
new_rd_val \<leftarrow> gets (\<lambda>s. (if rd \<noteq> 0 then result else rd_val));
write_reg new_rd_val curr_win rd;
result_31 \<leftarrow> gets (\<lambda>s. ((ucast (result >> 31))::word1));
operand1_31 \<leftarrow> gets (\<lambda>s. ((ucast (operand1 >> 31))::word1));
operand2_31 \<leftarrow> gets (\<lambda>s. ((ucast (operand2 >> 31))::word1));
new_n_val \<leftarrow> gets (\<lambda>s. (result_31));
new_z_val \<leftarrow> gets (\<lambda>s. (if result = 0 then 1::word1 else 0::word1));
- new_v_val \<leftarrow> gets (\<lambda>s. (bitOR (bitAND operand1_31
- (bitAND operand2_31
- (bitNOT result_31)))
- (bitAND (bitNOT operand1_31)
- (bitAND (bitNOT operand2_31)
+ new_v_val \<leftarrow> gets (\<lambda>s. ((OR) ((AND) operand1_31
+ ((AND) operand2_31
+ (NOT result_31)))
+ ((AND) (NOT operand1_31)
+ ((AND) (NOT operand2_31)
result_31))));
- new_c_val \<leftarrow> gets (\<lambda>s. (bitOR (bitAND operand1_31
+ new_c_val \<leftarrow> gets (\<lambda>s. ((OR) ((AND) operand1_31
operand2_31)
- (bitAND (bitNOT result_31)
- (bitOR operand1_31
+ ((AND) (NOT result_31)
+ ((OR) operand1_31
operand2_31))));
new_psr_val \<leftarrow> gets (\<lambda>s. (update_PSR_icc new_n_val
new_z_val
new_v_val
new_c_val psr_val));
write_cpu new_psr_val PSR;
return ()
od"
text \<open>Evaluate icc based on the bits N, Z, V, C in PSR
and the type of ticc instruction.
See the SPARCv8 manual, page 182.\<close>
definition trap_eval_icc::"sparc_operation \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> int"
where "trap_eval_icc instr_name n_val z_val v_val c_val \<equiv>
if instr_name = ticc_type TNE then
if z_val = 0 then 1 else 0
else if instr_name = ticc_type TE then
if z_val = 1 then 1 else 0
else if instr_name = ticc_type TG then
- if (bitOR z_val (n_val XOR v_val)) = 0 then 1 else 0
+ if ((OR) z_val (n_val XOR v_val)) = 0 then 1 else 0
else if instr_name = ticc_type TLE then
- if (bitOR z_val (n_val XOR v_val)) = 1 then 1 else 0
+ if ((OR) z_val (n_val XOR v_val)) = 1 then 1 else 0
else if instr_name = ticc_type TGE then
if (n_val XOR v_val) = 0 then 1 else 0
else if instr_name = ticc_type TL then
if (n_val XOR v_val) = 1 then 1 else 0
else if instr_name = ticc_type TGU then
if (c_val = 0 \<and> z_val = 0) then 1 else 0
else if instr_name = ticc_type TLEU then
if (c_val = 1 \<or> z_val = 1) then 1 else 0
else if instr_name = ticc_type TCC then
if c_val = 0 then 1 else 0
else if instr_name = ticc_type TCS then
if c_val = 1 then 1 else 0
else if instr_name = ticc_type TPOS then
if n_val = 0 then 1 else 0
else if instr_name = ticc_type TNEG then
if n_val = 1 then 1 else 0
else if instr_name = ticc_type TVC then
if v_val = 0 then 1 else 0
else if instr_name = ticc_type TVS then
if v_val = 1 then 1 else 0
else if instr_name = ticc_type TA then 1
else if instr_name = ticc_type TN then 0
else -1
"
text \<open>
Get \<open>operand2\<close> for \<open>ticc\<close> based on the flag \<open>i\<close>, \<open>rs1\<close>, \<open>rs2\<close>, and \<open>trap_imm7\<close>.
If \<open>i = 0\<close> then \<open>operand2 = r[rs2]\<close>,
else \<open>operand2 = sign_ext7(trap_imm7)\<close>.
\<open>op_list\<close> should be \<open>[i,rs1,rs2]\<close> or \<open>[i,rs1,trap_imm7]\<close>.
\<close>
definition get_trap_op2::"inst_operand list \<Rightarrow> ('a::len0) sparc_state
\<Rightarrow> virtua_address"
where "get_trap_op2 op_list s \<equiv>
let flagi = get_operand_flag (op_list!0);
curr_win = ucast (get_CWP (cpu_reg_val PSR s))
in
if flagi = 0 then
let rs2 = get_operand_w5 (op_list!2);
rs2_val = user_reg_val curr_win rs2 s
in rs2_val
else
let ext_simm7 = sign_ext7 (get_operand_imm7 (op_list!2)) in
ext_simm7
"
text \<open>Operational semantics for Ticc instructions.\<close>
definition ticc_instr::"instruction \<Rightarrow>
('a::len0,unit) sparc_state_monad"
where "ticc_instr instr \<equiv>
let instr_name = fst instr;
op_list = snd instr;
rs1 = get_operand_w5 (op_list!1)
in
do
n_val \<leftarrow> gets (\<lambda>s. get_icc_N ((cpu_reg s) PSR));
z_val \<leftarrow> gets (\<lambda>s. get_icc_Z ((cpu_reg s) PSR));
v_val \<leftarrow> gets (\<lambda>s. get_icc_V ((cpu_reg s) PSR));
c_val \<leftarrow> gets (\<lambda>s. get_icc_C ((cpu_reg s) PSR));
icc_val \<leftarrow> gets(\<lambda>s. (trap_eval_icc instr_name n_val z_val v_val c_val));
curr_win \<leftarrow> get_curr_win();
rs1_val \<leftarrow> gets (\<lambda>s. (user_reg_val curr_win rs1 s));
trap_number \<leftarrow> gets (\<lambda>s. (rs1_val + (get_trap_op2 op_list s)));
npc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val nPC s));
pc_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val PC s));
if icc_val = 1 then
do
raise_trap trap_instruction;
trap_number7 \<leftarrow> gets (\<lambda>s. ((ucast trap_number)::word7));
modify (\<lambda>s. (ticc_trap_type_mod trap_number7 s));
return ()
od
else \<comment> \<open>\<open>icc_val = 0\<close>\<close>
do
write_cpu npc_val PC;
write_cpu (npc_val + 4) nPC;
return ()
od
od"
text \<open>Operational semantics for store barrier.\<close>
definition store_barrier_instr::"instruction \<Rightarrow> ('a::len0,unit) sparc_state_monad"
where "store_barrier_instr instr \<equiv>
do
modify (\<lambda>s. (store_barrier_pending_mod True s));
return ()
od"
end
diff --git a/thys/SPARCv8/SparcModel_MMU/Sparc_Properties.thy b/thys/SPARCv8/SparcModel_MMU/Sparc_Properties.thy
--- a/thys/SPARCv8/SparcModel_MMU/Sparc_Properties.thy
+++ b/thys/SPARCv8/SparcModel_MMU/Sparc_Properties.thy
@@ -1,8565 +1,8565 @@
(*
* Copyright 2016, NTU
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* Author: Zhe Hou.
*)
theory Sparc_Properties
imports Main Sparc_Execution
begin
(*********************************************************************)
section\<open>Single step theorem\<close>
(*********************************************************************)
text \<open>The following shows that, if the pre-state satisfies certain
conditions called \<open>good_context\<close>, there must be a defined post-state
after a single step execution.\<close>
method save_restore_proof =
((simp add: save_restore_instr_def),
(simp add: Let_def simpler_gets_def bind_def h1_def h2_def),
(simp add: case_prod_unfold),
(simp add: raise_trap_def simpler_modify_def),
(simp add: simpler_gets_def bind_def h1_def h2_def),
(simp add: save_retore_sub1_def),
(simp add: write_cpu_def simpler_modify_def),
(simp add: write_reg_def simpler_modify_def),
(simp add: get_curr_win_def),
(simp add: simpler_gets_def bind_def h1_def h2_def))
method select_trap_proof0 =
((simp add: select_trap_def exec_gets return_def),
(simp add: DetMonad.bind_def h1_def h2_def simpler_modify_def),
(simp add: write_cpu_tt_def write_cpu_def),
(simp add: DetMonad.bind_def h1_def h2_def simpler_modify_def),
(simp add: return_def simpler_gets_def))
method select_trap_proof1 =
((simp add: select_trap_def exec_gets return_def),
(simp add: DetMonad.bind_def h1_def h2_def simpler_modify_def),
(simp add: write_cpu_tt_def write_cpu_def),
(simp add: DetMonad.bind_def h1_def h2_def simpler_modify_def),
(simp add: return_def simpler_gets_def),
(simp add: emp_trap_set_def err_mode_val_def cpu_reg_mod_def))
method dispatch_instr_proof1 =
((simp add: dispatch_instruction_def),
(simp add: simpler_gets_def bind_def h1_def h2_def),
(simp add: Let_def))
method exe_proof_to_decode =
((simp add: execute_instruction_def),
(simp add: exec_gets bind_def h1_def h2_def Let_def return_def),
clarsimp,
(simp add: simpler_gets_def bind_def h1_def h2_def Let_def simpler_modify_def),
(simp add: return_def))
method exe_proof_dispatch_rett =
((simp add: dispatch_instruction_def),
(simp add: simpler_gets_def bind_def h1_def h2_def Let_def),
(simp add: rett_instr_def),
(simp add: simpler_gets_def bind_def h1_def h2_def Let_def))
lemma write_cpu_result: "snd (write_cpu w r s) = False"
by (simp add: write_cpu_def simpler_modify_def)
lemma set_annul_result: "snd (set_annul b s) = False"
by (simp add: set_annul_def simpler_modify_def)
lemma raise_trap_result : "snd (raise_trap t s) = False"
by (simp add: raise_trap_def simpler_modify_def)
lemma rett_instr_result: "(fst i) = ctrl_type RETT \<and>
(get_ET (cpu_reg_val PSR s) \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s)) = 0 \<and>
- (bitAND (get_addr (snd i) s) (0b00000000000000000000000000000011::word32)) = 0) \<Longrightarrow>
+ ((AND) (get_addr (snd i) s) (0b00000000000000000000000000000011::word32)) = 0) \<Longrightarrow>
snd (rett_instr i s) = False"
apply (simp add: rett_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: raise_trap_def simpler_modify_def)
by (simp add: return_def)
lemma call_instr_result: "(fst i) = call_type CALL \<Longrightarrow>
snd (call_instr i s) = False"
apply (simp add: call_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def case_prod_unfold)
apply (simp add: write_cpu_def write_reg_def)
apply (simp add: get_curr_win_def get_CWP_def)
by (simp add: simpler_modify_def simpler_gets_def)
lemma branch_instr_result: "(fst i) \<in> {bicc_type BE,bicc_type BNE,bicc_type BGU,
bicc_type BLE,bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,bicc_type BN} \<Longrightarrow>
snd (branch_instr i s) = False"
proof (cases "eval_icc (fst i) (get_icc_N ((cpu_reg s) PSR)) (get_icc_Z ((cpu_reg s) PSR))
(get_icc_V ((cpu_reg s) PSR)) (get_icc_C ((cpu_reg s) PSR)) = 1")
case True
then have f1: "eval_icc (fst i) (get_icc_N ((cpu_reg s) PSR)) (get_icc_Z ((cpu_reg s) PSR))
(get_icc_V ((cpu_reg s) PSR)) (get_icc_C ((cpu_reg s) PSR)) = 1"
by auto
then show ?thesis
proof (cases "(fst i) = bicc_type BA \<and> get_operand_flag ((snd i)!0) = 1")
case True
then show ?thesis using f1
apply (simp add: branch_instr_def)
apply (simp add: Let_def simpler_gets_def bind_def h1_def h2_def)
apply (simp add: set_annul_def case_prod_unfold)
apply (simp add: write_cpu_def simpler_modify_def)
by (simp add: return_def)
next
case False
then have f2: "\<not> (fst i = bicc_type BA \<and> get_operand_flag (snd i ! 0) = 1)" by auto
then show ?thesis using f1
apply (simp add: branch_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: branch_instr_sub1_def)
apply (simp add: Let_def)
apply auto
apply (simp add: write_cpu_def simpler_modify_def)
by (simp add: write_cpu_def simpler_modify_def)
qed
next
case False
then show ?thesis
apply (simp add: branch_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: branch_instr_sub1_def)
apply (simp add: Let_def)
apply auto
apply (simp add: Let_def bind_def h1_def h2_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: cpu_reg_mod_def set_annul_def simpler_modify_def)
by (simp add: write_cpu_def simpler_modify_def)
qed
lemma nop_instr_result: "(fst i) = nop_type NOP \<Longrightarrow>
snd (nop_instr i s) = False"
apply (simp add: nop_instr_def)
by (simp add: returnOk_def return_def)
lemma sethi_instr_result: "(fst i) = sethi_type SETHI \<Longrightarrow>
snd (sethi_instr i s) = False"
apply (simp add: sethi_instr_def)
apply (simp add: Let_def)
apply (simp add: get_curr_win_def get_CWP_def cpu_reg_val_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: write_reg_def simpler_modify_def)
by (simp add: return_def)
lemma jmpl_instr_result: "(fst i) = ctrl_type JMPL \<Longrightarrow>
snd (jmpl_instr i s) = False"
apply (simp add: jmpl_instr_def)
apply (simp add: get_curr_win_def get_CWP_def cpu_reg_val_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: write_cpu_def simpler_modify_def)
by (simp add: raise_trap_def simpler_modify_def)
lemma save_restore_instr_result: "(fst i) \<in> {ctrl_type SAVE,ctrl_type RESTORE} \<Longrightarrow>
snd (save_restore_instr i s) = False"
proof (cases "(fst i) = ctrl_type SAVE")
case True
then show ?thesis
by save_restore_proof
next
case False
then show ?thesis
by save_restore_proof
qed
lemma flush_instr_result: "(fst i) = load_store_type FLUSH \<Longrightarrow>
snd (flush_instr i s) = False"
apply (simp add: flush_instr_def)
by (simp add: simpler_gets_def bind_def h1_def h2_def simpler_modify_def)
lemma read_state_reg_instr_result: "(fst i) \<in> {sreg_type RDY,sreg_type RDPSR,
sreg_type RDWIM,sreg_type RDTBR} \<Longrightarrow>
snd (read_state_reg_instr i s) = False"
apply (simp add: read_state_reg_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: simpler_gets_def bind_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: raise_trap_def simpler_modify_def return_def)
apply (simp add: bind_def h1_def h2_def)
by (simp add: get_curr_win_def simpler_gets_def)
lemma write_state_reg_instr_result: "(fst i) \<in> {sreg_type WRY,sreg_type WRPSR,
sreg_type WRWIM,sreg_type WRTBR} \<Longrightarrow>
snd (write_state_reg_instr i s) = False"
apply (simp add: write_state_reg_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: simpler_modify_def)
apply (simp add: raise_trap_def simpler_modify_def return_def)
apply (simp add: bind_def h1_def h2_def)
apply (simp add: simpler_gets_def)
apply (simp add: write_cpu_def simpler_modify_def)
by (simp add: get_curr_win_def simpler_gets_def)
lemma logical_instr_result: "(fst i) \<in> {logic_type ANDs,logic_type ANDcc,
logic_type ANDN,logic_type ANDNcc,logic_type ORs,logic_type ORcc,
logic_type ORN,logic_type XORs,logic_type XNOR} \<Longrightarrow>
snd (logical_instr i s) = False"
apply (simp add: logical_instr_def)
apply (simp add: Let_def simpler_gets_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: logical_instr_sub1_def)
apply (simp add: return_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply (simp add: simpler_gets_def)
by (simp add: get_curr_win_def simpler_gets_def)
lemma shift_instr_result: "(fst i) \<in> {shift_type SLL,shift_type
SRL,shift_type SRA} \<Longrightarrow>
snd (shift_instr i s) = False"
apply (simp add: shift_instr_def)
apply (simp add: Let_def)
apply (simp add: get_curr_win_def simpler_gets_def bind_def h1_def h2_def)
apply (simp add: return_def)
apply (simp add: bind_def h1_def h2_def)
by (simp add: write_reg_def simpler_modify_def)
method add_sub_instr_proof =
((simp add: Let_def),
auto,
(simp add: write_reg_def simpler_modify_def),
(simp add: simpler_gets_def bind_def),
(simp add: get_curr_win_def simpler_gets_def),
(simp add: write_reg_def write_cpu_def simpler_modify_def),
(simp add: bind_def),
(simp add: case_prod_unfold),
(simp add: simpler_gets_def),
(simp add: get_curr_win_def simpler_gets_def),
(simp add: write_reg_def simpler_modify_def),
(simp add: simpler_gets_def bind_def),
(simp add: get_curr_win_def simpler_gets_def))
lemma add_instr_result: "(fst i) \<in> {arith_type ADD,arith_type
ADDcc,arith_type ADDX} \<Longrightarrow>
snd (add_instr i s) = False"
apply (simp add: add_instr_def)
apply (simp add: Let_def)
apply auto
apply (simp add: add_instr_sub1_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: simpler_gets_def)
apply (simp add: get_curr_win_def simpler_gets_def)
apply (simp add: add_instr_sub1_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: get_curr_win_def simpler_gets_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: add_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: get_curr_win_def simpler_gets_def)
by (simp add: write_reg_def simpler_modify_def)
lemma sub_instr_result: "(fst i) \<in> {arith_type SUB,arith_type SUBcc,
arith_type SUBX} \<Longrightarrow>
snd (sub_instr i s) = False"
apply (simp add: sub_instr_def)
apply (simp add: Let_def)
apply auto
apply (simp add: sub_instr_sub1_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: simpler_gets_def)
apply (simp add: get_curr_win_def simpler_gets_def)
apply (simp add: sub_instr_sub1_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: get_curr_win_def simpler_gets_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: sub_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: get_curr_win_def simpler_gets_def)
by (simp add: write_reg_def simpler_modify_def)
lemma mul_instr_result: "(fst i) \<in> {arith_type UMUL,arith_type SMUL,
arith_type SMULcc} \<Longrightarrow>
snd (mul_instr i s) = False"
apply (simp add: mul_instr_def)
apply (simp add: Let_def)
apply auto
apply (simp add: mul_instr_sub1_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: get_curr_win_def simpler_gets_def)
apply (simp add: write_reg_def write_cpu_def simpler_modify_def)
apply (simp add: mul_instr_sub1_def)
apply (simp add: simpler_gets_def)
apply (simp add: write_cpu_def write_reg_def simpler_modify_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: get_curr_win_def simpler_gets_def)
apply (simp add: mul_instr_sub1_def)
apply (simp add: simpler_gets_def)
apply (simp add: write_cpu_def write_reg_def simpler_modify_def)
apply (simp add: bind_def h1_def h2_def)
by (simp add: get_curr_win_def simpler_gets_def)
lemma div_write_new_val_result: "snd (div_write_new_val i result temp_V s) = False"
apply (simp add: div_write_new_val_def)
apply (simp add: return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
by (simp add: write_cpu_def simpler_modify_def)
lemma div_result: "snd (div_comp instr rs1 rd operand2 s) = False"
apply (simp add: div_comp_def)
apply (simp add: simpler_gets_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: get_curr_win_def simpler_gets_def)
by (simp add: div_write_new_val_result)
lemma div_instr_result: "(fst i) \<in> {arith_type UDIV,arith_type UDIVcc,
arith_type SDIV} \<Longrightarrow>
snd (div_instr i s) = False"
apply (simp add: div_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: raise_trap_def simpler_modify_def)
apply (simp add: return_def bind_def)
by (simp add: div_result)
lemma load_sub2_result: "snd (load_sub2 address asi rd curr_win word0 s) = False"
apply (simp add: load_sub2_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def simpler_modify_def)
apply (simp add: bind_def h1_def h2_def)
apply (simp add: write_reg_def simpler_modify_def)
by (simp add: simpler_gets_def)
lemma load_sub3_result: "snd (load_sub3 instr curr_win rd asi address s) = False"
apply (simp add: load_sub3_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
apply (simp add: write_reg_def simpler_modify_def)
apply (simp add: load_sub2_result)
by (simp add: raise_trap_def simpler_modify_def)
lemma load_sub1_result: "snd (load_sub1 i rd s_val s) = False"
apply (simp add: load_sub1_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def simpler_modify_def)
apply (simp add: get_curr_win_def simpler_gets_def)
by (simp add: load_sub3_result)
lemma load_instr_result: "(fst i) \<in> {load_store_type LDSB,load_store_type LDUB,
load_store_type LDUBA,load_store_type LDUH,load_store_type LD,
load_store_type LDA,load_store_type LDD} \<Longrightarrow>
snd (load_instr i s) = False"
apply (simp add: load_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: raise_trap_def simpler_modify_def)
apply (simp add: return_def)
by (simp add: load_sub1_result)
lemma store_sub2_result: "snd (store_sub2 instr curr_win rd asi address s) = False"
apply (simp add: store_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: raise_trap_def simpler_modify_def)
apply (simp add: return_def)
apply (simp add: raise_trap_def simpler_modify_def)
by (simp add: bind_def h1_def h2_def)
lemma store_sub1_result: "snd (store_sub1 instr rd s_val s) = False"
apply (simp add: store_sub1_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def simpler_modify_def)
apply (simp add: get_curr_win_def)
apply (simp add: simpler_gets_def)
by (simp add: store_sub2_result)
lemma store_instr_result: "(fst i) \<in> {load_store_type STB,load_store_type STH,
load_store_type ST,load_store_type STA,load_store_type STD} \<Longrightarrow>
snd (store_instr i s) = False"
apply (simp add: store_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: raise_trap_def simpler_modify_def)
apply (simp add: return_def)
by (simp add: store_sub1_result)
lemma supported_instr_set: "supported_instruction i = True \<Longrightarrow>
i \<in> {load_store_type LDSB,load_store_type LDUB,load_store_type LDUBA,
load_store_type LDUH,load_store_type LD,load_store_type LDA,
load_store_type LDD,
load_store_type STB,load_store_type STH,load_store_type ST,
load_store_type STA,load_store_type STD,
sethi_type SETHI,
nop_type NOP,
logic_type ANDs,logic_type ANDcc,logic_type ANDN,logic_type ANDNcc,
logic_type ORs,logic_type ORcc,logic_type ORN,logic_type XORs,
logic_type XNOR,
shift_type SLL,shift_type SRL,shift_type SRA,
arith_type ADD,arith_type ADDcc,arith_type ADDX,
arith_type SUB,arith_type SUBcc,arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
ctrl_type RETT,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
apply (simp add: supported_instruction_def)
by presburger
lemma dispatch_instr_result:
assumes a1: "supported_instruction (fst i) = True \<and> (fst i) \<noteq> ctrl_type RETT"
shows "snd (dispatch_instruction i s) = False"
proof (cases "get_trap_set s = {}")
case True
then have f1: "get_trap_set s = {}" by auto
then show ?thesis
proof (cases "(fst i) \<in> {load_store_type LDSB,load_store_type LDUB,
load_store_type LDUBA,load_store_type LDUH,load_store_type LD,
load_store_type LDA,load_store_type LDD}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (simp add: load_instr_result)
next
case False
then have f2: "(fst i) \<in> {load_store_type STB,load_store_type STH,load_store_type ST,
load_store_type STA,load_store_type STD,
sethi_type SETHI,
nop_type NOP,
logic_type ANDs,logic_type ANDcc,logic_type ANDN,logic_type ANDNcc,
logic_type ORs,logic_type ORcc,logic_type ORN,logic_type XORs,
logic_type XNOR,
shift_type SLL,shift_type SRL,shift_type SRA,
arith_type ADD,arith_type ADDcc,arith_type ADDX,
arith_type SUB,arith_type SUBcc,arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using a1
apply (simp add: supported_instruction_def)
by presburger
then show ?thesis
proof (cases "(fst i) \<in> {load_store_type STB,load_store_type STH,
load_store_type ST,
load_store_type STA,load_store_type STD}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: store_instr_result)
next
case False
then have f3: "(fst i) \<in> {sethi_type SETHI,
nop_type NOP,
logic_type ANDs,logic_type ANDcc,logic_type ANDN,logic_type ANDNcc,
logic_type ORs,logic_type ORcc,logic_type ORN,logic_type XORs,
logic_type XNOR,
shift_type SLL,shift_type SRL,shift_type SRA,
arith_type ADD,arith_type ADDcc,arith_type ADDX,
arith_type SUB,arith_type SUBcc,arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f2 by auto
then show ?thesis
proof (cases "(fst i) = sethi_type SETHI")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (simp add: sethi_instr_result)
next
case False
then have f4: "(fst i) \<in> {nop_type NOP,
logic_type ANDs,logic_type ANDcc,logic_type ANDN,logic_type ANDNcc,
logic_type ORs,logic_type ORcc,logic_type ORN,logic_type XORs,
logic_type XNOR,
shift_type SLL,shift_type SRL,shift_type SRA,
arith_type ADD,arith_type ADDcc,arith_type ADDX,
arith_type SUB,arith_type SUBcc,arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f3 by auto
then show ?thesis
proof (cases "fst i = nop_type NOP")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (simp add: nop_instr_result)
next
case False
then have f5: "(fst i) \<in> {logic_type ANDs,logic_type ANDcc,
logic_type ANDN,logic_type ANDNcc,
logic_type ORs,logic_type ORcc,logic_type ORN,logic_type XORs,
logic_type XNOR,
shift_type SLL,shift_type SRL,shift_type SRA,
arith_type ADD,arith_type ADDcc,arith_type ADDX,
arith_type SUB,arith_type SUBcc,arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f4 by auto
then show ?thesis
proof (cases "(fst i) \<in> {logic_type ANDs,logic_type ANDcc,
logic_type ANDN,logic_type ANDNcc,
logic_type ORs,logic_type ORcc,logic_type ORN,logic_type XORs,
logic_type XNOR}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: logical_instr_result)
next
case False
then have f6: "(fst i) \<in> {shift_type SLL,shift_type SRL,
shift_type SRA,
arith_type ADD,arith_type ADDcc,arith_type ADDX,
arith_type SUB,arith_type SUBcc,arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f5 by auto
then show ?thesis
proof (cases "(fst i) \<in> {shift_type SLL,shift_type SRL,
shift_type SRA}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: shift_instr_result)
next
case False
then have f7: "(fst i) \<in> {arith_type ADD,arith_type ADDcc,
arith_type ADDX,
arith_type SUB,arith_type SUBcc,arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f6 by auto
then show ?thesis
proof (cases "(fst i) \<in> {arith_type ADD,arith_type ADDcc,
arith_type ADDX}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: add_instr_result)
next
case False
then have f8: "(fst i) \<in> {arith_type SUB,arith_type SUBcc,
arith_type SUBX,
arith_type UMUL,arith_type SMUL,arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f7 by auto
then show ?thesis
proof (cases "(fst i) \<in> {arith_type SUB,arith_type SUBcc,
arith_type SUBX}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: sub_instr_result)
next
case False
then have f9: "(fst i) \<in> {arith_type UMUL,arith_type SMUL,
arith_type SMULcc,
arith_type UDIV,arith_type UDIVcc,arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f8 by auto
then show ?thesis
proof (cases "(fst i) \<in> {arith_type UMUL,arith_type SMUL,
arith_type SMULcc}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: mul_instr_result)
next
case False
then have f10: "(fst i) \<in> {arith_type UDIV,arith_type UDIVcc,
arith_type SDIV,
ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f9 by auto
then show ?thesis
proof (cases "(fst i) \<in> {arith_type UDIV,arith_type UDIVcc,
arith_type SDIV}")
case True
then show ?thesis
apply dispatch_instr_proof1 using f1
by (auto simp add: div_instr_result)
next
case False
then have f11: "(fst i) \<in> {ctrl_type SAVE,ctrl_type RESTORE,
call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f10 by auto
then show ?thesis
proof (cases "(fst i) \<in> {ctrl_type SAVE,ctrl_type RESTORE}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: save_restore_instr_result)
next
case False
then have f12: "(fst i) \<in> {call_type CALL,
ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f11 by auto
then show ?thesis
proof (cases "(fst i) = call_type CALL")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: call_instr_result)
next
case False
then have f13: "(fst i) \<in> {ctrl_type JMPL,
sreg_type RDY,sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f12 by auto
then show ?thesis
proof (cases "(fst i) = ctrl_type JMPL")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: jmpl_instr_result)
next
case False
then have f14: "(fst i) \<in> {
sreg_type RDY,
sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR,
sreg_type WRY,sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f13 by auto
then show ?thesis
proof (cases "(fst i) \<in> {sreg_type RDY,
sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: read_state_reg_instr_result)
next
case False
then have f15: "(fst i) \<in> {
sreg_type WRY,
sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR,
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f14 by auto
then show ?thesis
proof (cases "(fst i) \<in> {sreg_type WRY,
sreg_type WRPSR,sreg_type WRWIM,sreg_type WRTBR}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: write_state_reg_instr_result)
next
case False
then have f16: "(fst i) \<in> {
load_store_type FLUSH,
bicc_type BE,bicc_type BNE,bicc_type BGU,bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f15 by auto
then show ?thesis
proof (cases "(fst i) = load_store_type FLUSH")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
by (auto simp add: flush_instr_result)
next
case False
then have f17: "(fst i) \<in>
{
bicc_type BE,bicc_type BNE,bicc_type BGU,
bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA,
bicc_type BN}"
using f16 by auto
then show ?thesis using f1
proof (cases "(fst i) \<in> {bicc_type BE,
bicc_type BNE,bicc_type BGU,
bicc_type BLE,
bicc_type BL,bicc_type BGE,bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,bicc_type BA
}")
case True
then show ?thesis using f1
apply dispatch_instr_proof1
apply auto
by (auto simp add: branch_instr_result)
next
case False
then have f18: "(fst i) \<in> {bicc_type BN}"
using f17 by auto
then show ?thesis using f1
apply dispatch_instr_proof1
apply auto
by (auto simp add: branch_instr_result)
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
next
case False
then show ?thesis
apply (simp add: dispatch_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: Let_def)
by (simp add: returnOk_def return_def)
qed
lemma dispatch_instr_result_rett:
assumes a1: "(fst i) = ctrl_type RETT \<and> (get_ET (cpu_reg_val PSR s) \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s)) = 0 \<and>
- (bitAND (get_addr (snd i) s) (0b00000000000000000000000000000011::word32)) = 0)"
+ ((AND) (get_addr (snd i) s) (0b00000000000000000000000000000011::word32)) = 0)"
shows "snd (dispatch_instruction i s) = False"
proof (cases "get_trap_set s = {}")
case True
then show ?thesis using a1
apply (simp add: dispatch_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (simp add: rett_instr_result)
next
case False
then show ?thesis using a1
apply (simp add: dispatch_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (simp add: return_def)
qed
lemma execute_instr_sub1_result: "snd (execute_instr_sub1 i s) = False"
proof (cases "get_trap_set s = {} \<and> (fst i) \<in> {call_type CALL,ctrl_type RETT,
ctrl_type JMPL}")
case True
then show ?thesis
apply (simp add: execute_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply auto
by (auto simp add: return_def)
next
case False
then show ?thesis
apply (simp add: execute_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: write_cpu_def simpler_modify_def)
by (auto simp add: return_def)
qed
lemma next_match : "snd (execute_instruction () s) = False \<Longrightarrow>
NEXT s = Some (snd (fst (execute_instruction () s)))"
apply (simp add: NEXT_def)
by (simp add: case_prod_unfold)
lemma exec_ss1 : "\<exists>s'. (execute_instruction () s = (s', False)) \<Longrightarrow>
\<exists>s''. (execute_instruction() s = (s'', False))"
proof -
assume "\<exists>s'. (execute_instruction () s = (s', False))"
hence "(snd (execute_instruction() s)) = False"
by (auto simp add: execute_instruction_def case_prod_unfold)
hence "(execute_instruction() s) =
((fst (execute_instruction() s)),False)"
by (metis (full_types) prod.collapse)
hence "\<exists>s''. (execute_instruction() s = (s'', False))"
by blast
thus ?thesis by assumption
qed
lemma exec_ss2 : "snd (execute_instruction() s) = False \<Longrightarrow>
snd (execute_instruction () s) = False"
proof -
assume "snd (execute_instruction() s) = False"
hence "snd (execute_instruction () s) = False"
by (auto simp add:execute_instruction_def)
thus ?thesis by assumption
qed
lemma good_context_1 : "good_context s \<and> s' = s \<and>
(get_trap_set s') \<noteq> {} \<and> (reset_trap_val s') = False \<and> get_ET (cpu_reg_val PSR s') = 0
\<Longrightarrow> False"
proof -
assume asm: "good_context s \<and> s' = s \<and>
(get_trap_set s') \<noteq> {} \<and> (reset_trap_val s') = False \<and> get_ET (cpu_reg_val PSR s') = 0"
then have "(get_trap_set s') \<noteq> {} \<and> (reset_trap_val s') = False \<and>
get_ET (cpu_reg_val PSR s') = 0 \<Longrightarrow> False"
by (simp add: good_context_def get_ET_def cpu_reg_val_def)
then show ?thesis using asm by auto
qed
lemma fetch_instr_result_1 : "\<not> (\<exists>e. fetch_instruction s' = Inl e) \<Longrightarrow>
(\<exists>v. fetch_instruction s' = Inr v)"
by (meson sumE)
lemma fetch_instr_result_2 : "(\<exists>v. fetch_instruction s' = Inr v) \<Longrightarrow>
\<not> (\<exists>e. fetch_instruction s' = Inl e)"
by force
lemma fetch_instr_result_3 : "(\<exists>e. fetch_instruction s' = Inl e) \<Longrightarrow>
\<not> (\<exists>v. fetch_instruction s' = Inr v)"
by auto
lemma decode_instr_result_1 :
"\<not>(\<exists>v2. ((decode_instruction v1)::(Exception list + instruction)) = Inr v2) \<Longrightarrow>
(\<exists>e. ((decode_instruction v1)::(Exception list + instruction)) = Inl e)"
by (meson sumE)
lemma decode_instr_result_2 :
"(\<exists>e. ((decode_instruction v1)::(Exception list + instruction)) = Inl e) \<Longrightarrow>
\<not>(\<exists>v2. ((decode_instruction v1)::(Exception list + instruction)) = Inr v2)"
by force
lemma decode_instr_result_3 : "x = decode_instruction v1 \<and> y = decode_instruction v2
\<and> v1 = v2 \<Longrightarrow> x = y"
by auto
lemma decode_instr_result_4 :
"\<not> (\<exists>e. ((decode_instruction v1)::(Exception list + instruction)) = Inl e) \<Longrightarrow>
(\<exists>v2. ((decode_instruction v1)::(Exception list + instruction)) = Inr v2)"
by (meson sumE)
lemma good_context_2 :
"good_context (s::(('a::len0) sparc_state)) \<and>
fetch_instruction (delayed_pool_write s) = Inr v1 \<and>
\<not>(\<exists>v2. (decode_instruction v1::(Exception list + instruction)) = Inr v2)
\<Longrightarrow> False"
proof -
assume "good_context s \<and>
fetch_instruction (delayed_pool_write s) = Inr v1 \<and>
\<not>(\<exists>v2. ((decode_instruction v1)::(Exception list + instruction)) = Inr v2)"
hence fact1: "good_context s \<and>
fetch_instruction (delayed_pool_write s) = Inr v1 \<and>
(\<exists>e. ((decode_instruction v1)::(Exception list + instruction)) = Inl e)"
using decode_instr_result_1 by auto
hence fact2: "\<not>(\<exists>e. fetch_instruction (delayed_pool_write s) = Inl e)"
using fetch_instr_result_2 by auto
then have "fetch_instruction (delayed_pool_write s) = Inr v1 \<and>
(\<exists>e. ((decode_instruction v1)::(Exception list + instruction)) = Inl e)
\<Longrightarrow> False"
proof (cases "(get_trap_set s) \<noteq> {} \<and> (reset_trap_val s) = False \<and>
get_ET (cpu_reg_val PSR s) = 0")
case True
from this fact1 show ?thesis using good_context_1 by blast
next
case False
then have fact3: "(get_trap_set s) = {} \<or> (reset_trap_val s) \<noteq> False
\<or> get_ET (cpu_reg_val PSR s) \<noteq> 0"
by auto
then show ?thesis
using fact1 decode_instr_result_3
by (metis (no_types, lifting) good_context_def sum.case(1) sum.case(2))
qed
thus ?thesis using fact1 by auto
qed
lemma good_context_3 :
"good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<and>
fetch_instruction s'' = Inr v1 \<and>
(decode_instruction v1::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and> supported_instruction (fst v2) = False
\<Longrightarrow> False"
proof -
assume asm: "good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<and>
fetch_instruction s'' = Inr v1 \<and>
(decode_instruction v1::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and> supported_instruction (fst v2) = False"
then have "annul_val s'' = False \<and> supported_instruction (fst v2) = False
\<Longrightarrow> False"
proof (cases "(get_trap_set s) \<noteq> {} \<and> (reset_trap_val s) = False \<and>
get_ET (cpu_reg_val PSR s) = 0")
case True
from this asm show ?thesis using good_context_1 by blast
next
case False
then have fact3: "(get_trap_set s) = {} \<or> (reset_trap_val s) \<noteq> False \<or>
get_ET (cpu_reg_val PSR s) \<noteq> 0"
by auto
thus ?thesis using asm by (auto simp add: good_context_def)
qed
thus ?thesis using asm by auto
qed
lemma good_context_4 :
"good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<and>
fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and>
supported_instruction (fst v2) = True \<and> \<comment> \<open>This line is redundant\<close>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) = 0
\<Longrightarrow> False"
proof -
assume asm: "good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<and>
fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and>
supported_instruction (fst v2) = True \<and> \<comment> \<open>This line is redundant\<close>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) = 0"
then have "(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) = 0 \<Longrightarrow> False"
proof (cases "(get_trap_set s) \<noteq> {} \<and> (reset_trap_val s) = False \<and>
get_ET (cpu_reg_val PSR s) = 0")
case True
from this asm show ?thesis using good_context_1 by blast
next
case False
then have fact3: "(get_trap_set s) = {} \<or> (reset_trap_val s) \<noteq> False \<or>
get_ET (cpu_reg_val PSR s) \<noteq> 0"
by auto
thus ?thesis using asm by (auto simp add: good_context_def)
qed
thus ?thesis using asm by auto
qed
lemma good_context_5 :
"good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<and>
fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and>
supported_instruction (fst v2) = True \<and> \<comment> \<open>This line is redundant\<close>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) \<noteq> 0
\<Longrightarrow> False"
proof -
assume asm: "good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<and>
fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and>
supported_instruction (fst v2) = True \<and> \<comment> \<open>This line is redundant\<close>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) \<noteq> 0"
then have "(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) \<noteq> 0
\<Longrightarrow> False"
proof (cases "(get_trap_set s) \<noteq> {} \<and> (reset_trap_val s) = False \<and>
get_ET (cpu_reg_val PSR s) = 0")
case True
from this asm show ?thesis using good_context_1 by blast
next
case False
then have fact3: "(get_trap_set s) = {} \<or> (reset_trap_val s) \<noteq> False \<or>
get_ET (cpu_reg_val PSR s) \<noteq> 0"
by auto
thus ?thesis using asm by (auto simp add: good_context_def)
qed
thus ?thesis using asm by auto
qed
lemma good_context_6 :
"good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<and>
fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and>
supported_instruction (fst v2) = True \<and> \<comment> \<open>This line is redundant\<close>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) = 0 \<and>
- (bitAND (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) \<noteq> 0
+ ((AND) (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) \<noteq> 0
\<Longrightarrow> False"
proof -
assume asm: "good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<and>
fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and>
supported_instruction (fst v2) = True \<and> \<comment> \<open>This line is redundant\<close>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) = 0 \<and>
- (bitAND (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) \<noteq> 0"
+ ((AND) (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) \<noteq> 0"
then have "(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) = 0 \<and>
- (bitAND (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) \<noteq> 0
+ ((AND) (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) \<noteq> 0
\<Longrightarrow> False"
proof (cases "(get_trap_set s) \<noteq> {} \<and> (reset_trap_val s) = False \<and>
get_ET (cpu_reg_val PSR s) = 0")
case True
from this asm show ?thesis using good_context_1 by blast
next
case False
then have fact3: "(get_trap_set s) = {} \<or> (reset_trap_val s) \<noteq> False \<or>
get_ET (cpu_reg_val PSR s) \<noteq> 0"
by auto
thus ?thesis using asm by (auto simp add: good_context_def)
qed
thus ?thesis using asm by auto
qed
lemma good_context_all :
"good_context (s::(('a::len0) sparc_state)) \<and>
s'' = delayed_pool_write s \<Longrightarrow>
(get_trap_set s = {} \<or> (reset_trap_val s) \<noteq> False \<or> get_ET (cpu_reg_val PSR s) \<noteq> 0) \<and>
((\<exists>e. fetch_instruction s'' = Inl e) \<or>
(\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(annul_val s'' = True \<or>
(annul_val s'' = False \<and>
(\<forall>v1' v2'. fetch_instruction s'' = Inr v1' \<and>
((decode_instruction v1')::(Exception list + instruction)) = Inr v2' \<longrightarrow>
supported_instruction (fst v2') = True) \<and>
((fst v2) \<noteq> ctrl_type RETT \<or>
((fst v2) = ctrl_type RETT \<and>
(get_ET (cpu_reg_val PSR s'') = 1 \<or>
(get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) = 0 \<and>
- (bitAND (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) = 0))))))))"
+ ((AND) (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) = 0))))))))"
proof -
assume asm: "good_context s \<and> s'' = delayed_pool_write s"
from asm have "(get_trap_set s) \<noteq> {} \<and> (reset_trap_val s) = False \<and>
get_ET (cpu_reg_val PSR s) = 0 \<Longrightarrow> False"
using good_context_1 by blast
hence fact1: "(get_trap_set s = {} \<or> (reset_trap_val s) \<noteq> False \<or>
get_ET (cpu_reg_val PSR s) \<noteq> 0)" by auto
have fact2: "\<not>(\<exists>e. fetch_instruction s'' = Inl e) \<and> \<not> (\<exists>v1. fetch_instruction s'' = Inr v1)
\<Longrightarrow> False" using fetch_instr_result_1 by blast
from asm have fact3: "\<exists>v1. fetch_instruction s'' = Inr v1 \<and>
\<not>(\<exists>v2.((decode_instruction v1)::(Exception list + instruction)) = Inr v2)
\<Longrightarrow> False"
using good_context_2 by blast
from asm have fact4: "\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and> supported_instruction (fst v2) = False
\<Longrightarrow> False"
using good_context_3 by blast
from asm have fact5: "\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and> supported_instruction (fst v2) = True \<and>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) = 0
\<Longrightarrow> False"
using good_context_4 by blast
from asm have fact6: "\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and> supported_instruction (fst v2) = True \<and>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) \<noteq> 0
\<Longrightarrow> False"
using good_context_5 by blast
from asm have fact7: "\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and> supported_instruction (fst v2) = True \<and>
(fst v2) = ctrl_type RETT \<and> get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1) mod NWINDOWS))
(cpu_reg_val WIM s'')) = 0 \<and>
- (bitAND (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) \<noteq> 0
+ ((AND) (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) \<noteq> 0
\<Longrightarrow> False"
using good_context_6 by blast
from asm show ?thesis
proof (cases "(\<exists>e. fetch_instruction s'' = Inl e)")
case True
then show ?thesis using fact1 by auto
next
case False
then have fact8: "\<exists>v1. fetch_instruction s'' = Inr v1 \<and>
(\<exists>v2.((decode_instruction v1)::(Exception list + instruction)) = Inr v2)"
using fact2 fact3 by auto
then show ?thesis
proof (cases "annul_val s'' = True")
case True
then show ?thesis using fact1 fact8 by auto
next
case False
then have fact9: "\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and> supported_instruction (fst v2) = True"
using fact4 fact8 by blast
then show ?thesis
proof (cases "\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(fst v2) \<noteq> ctrl_type RETT")
case True
then show ?thesis using fact1 fact9 by auto
next
case False
then have fact10: "\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val s'' = False \<and> supported_instruction (fst v2) = True \<and>
(fst v2) = ctrl_type RETT"
using fact9 by auto
then show ?thesis
proof (cases "get_ET (cpu_reg_val PSR s'') = 1")
case True
then show ?thesis using fact1 fact9 by auto
next
case False
then have fact11: "get_ET (cpu_reg_val PSR s'') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR s'')))::word1) \<noteq> 0"
using fact10 fact5 by auto
then have fact12: "(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR s''))) + 1)
mod NWINDOWS)) (cpu_reg_val WIM s'')) = 0"
using fact10 fact6 by auto
then have fact13: "\<exists>v1 v2. fetch_instruction s'' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
- (bitAND (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) = 0"
+ ((AND) (get_addr (snd v2) s'') (0b00000000000000000000000000000011::word32)) = 0"
using fact10 fact11 fact7 by blast
thus ?thesis using fact1 fact10 fact11 fact12 by auto
qed
qed
qed
qed
qed
lemma select_trap_result1 : "(reset_trap_val s) = True \<Longrightarrow>
snd (select_trap() s) = False"
apply (simp add: select_trap_def exec_gets return_def)
by (simp add: bind_def h1_def h2_def simpler_modify_def)
lemma select_trap_result2 :
assumes a1: "\<not>(reset_trap_val s = False \<and> get_ET (cpu_reg_val PSR s) = 0)"
shows "snd (select_trap() s) = False"
proof (cases "reset_trap_val s = True")
case True
then show ?thesis using select_trap_result1
by blast
next
case False
then have f1: "reset_trap_val s = False \<and> get_ET (cpu_reg_val PSR s) \<noteq> 0"
using a1 by auto
then show ?thesis
proof (cases "data_store_error \<in> get_trap_set s")
case True
then show ?thesis using f1
by select_trap_proof0
next
case False
then have f2: "data_store_error \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "instruction_access_error \<in> get_trap_set s")
case True
then show ?thesis using f1 f2
by select_trap_proof0
next
case False
then have f3: "instruction_access_error \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "r_register_access_error \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3
by select_trap_proof0
next
case False
then have f4: "r_register_access_error \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "instruction_access_exception \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4
by select_trap_proof0
next
case False
then have f5: "instruction_access_exception \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "privileged_instruction \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5
by select_trap_proof0
next
case False
then have f6: "privileged_instruction \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "illegal_instruction \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6
by select_trap_proof0
next
case False
then have f7: "illegal_instruction \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "fp_disabled \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7
by select_trap_proof0
next
case False
then have f8: "fp_disabled \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "cp_disabled \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8
by select_trap_proof0
next
case False
then have f9: "cp_disabled \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "unimplemented_FLUSH \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9
by select_trap_proof0
next
case False
then have f10: "unimplemented_FLUSH \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "window_overflow \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
by select_trap_proof0
next
case False
then have f11: "window_overflow \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "window_underflow \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
by select_trap_proof0
next
case False
then have f12: "window_underflow \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "mem_address_not_aligned \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
by select_trap_proof0
next
case False
then have f13: "mem_address_not_aligned \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "fp_exception \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
by select_trap_proof0
next
case False
then have f14: "fp_exception \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "cp_exception \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
f14
by select_trap_proof0
next
case False
then have f15: "cp_exception \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "data_access_error \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
f14 f15
by select_trap_proof0
next
case False
then have f16: "data_access_error \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "data_access_exception \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
f13 f14 f15 f16
by select_trap_proof0
next
case False
then have f17: "data_access_exception \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "tag_overflow \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
f13 f14 f15 f16 f17
by select_trap_proof0
next
case False
then have f18: "tag_overflow \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "division_by_zero \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
f12 f13 f14 f15 f16 f17 f18
by select_trap_proof0
next
case False
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
f12 f13 f14 f15 f16 f17 f18
apply (simp add: select_trap_def exec_gets return_def)
apply (simp add: DetMonad.bind_def h1_def h2_def simpler_modify_def)
apply (simp add: return_def simpler_gets_def)
apply (simp add: case_prod_unfold)
apply (simp add: return_def)
apply (simp add: write_cpu_tt_def write_cpu_def)
by (simp add: simpler_gets_def bind_def h1_def h2_def simpler_modify_def)
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
lemma emp_trap_set_err_mode : "err_mode_val s = err_mode_val (emp_trap_set s)"
by (auto simp add: emp_trap_set_def err_mode_val_def)
lemma write_cpu_tt_err_mode : "err_mode_val s = err_mode_val (snd (fst (write_cpu_tt w s)))"
apply (simp add: write_cpu_tt_def err_mode_val_def write_cpu_def)
apply (simp add: exec_gets return_def)
apply (simp add: bind_def simpler_modify_def)
by (simp add: cpu_reg_mod_def)
lemma select_trap_monad : "snd (select_trap() s) = False \<Longrightarrow>
err_mode_val s = err_mode_val (snd (fst (select_trap () s)))"
proof -
assume a1: "snd (select_trap() s) = False"
then have f0: "reset_trap_val s = False \<and> get_ET (cpu_reg_val PSR s) = 0 \<Longrightarrow> False"
apply (simp add: select_trap_def exec_gets return_def)
apply (simp add: bind_def h1_def h2_def simpler_modify_def)
by (simp add: fail_def split_def)
then show ?thesis
proof (cases "reset_trap_val s = True")
case True
from a1 f0 this show ?thesis
apply (simp add: select_trap_def exec_gets return_def)
apply (simp add: bind_def h1_def h2_def simpler_modify_def)
by (simp add: emp_trap_set_def err_mode_val_def)
next
case False
then have f1: "reset_trap_val s = False \<and> get_ET (cpu_reg_val PSR s) \<noteq> 0" using f0 by auto
then show ?thesis using f1 a1
proof (cases "data_store_error \<in> get_trap_set s")
case True
then show ?thesis using f1 a1
by select_trap_proof1
next
case False
then have f2: "data_store_error \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "instruction_access_error \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 a1
by select_trap_proof1
next
case False
then have f3: "instruction_access_error \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "r_register_access_error \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 a1
by select_trap_proof1
next
case False
then have f4: "r_register_access_error \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "instruction_access_exception \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 a1
by select_trap_proof1
next
case False
then have f5: "instruction_access_exception \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "privileged_instruction \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 a1
by select_trap_proof1
next
case False
then have f6: "privileged_instruction \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "illegal_instruction \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 a1
by select_trap_proof1
next
case False
then have f7: "illegal_instruction \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "fp_disabled \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 a1
by select_trap_proof1
next
case False
then have f8: "fp_disabled \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "cp_disabled \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 a1
by select_trap_proof1
next
case False
then have f9: "cp_disabled \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "unimplemented_FLUSH \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 a1
by select_trap_proof1
next
case False
then have f10: "unimplemented_FLUSH \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "window_overflow \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 a1
by select_trap_proof1
next
case False
then have f11: "window_overflow \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "window_underflow \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 a1
by select_trap_proof1
next
case False
then have f12: "window_underflow \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "mem_address_not_aligned \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 a1
by select_trap_proof1
next
case False
then have f13: "mem_address_not_aligned \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "fp_exception \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
a1
by select_trap_proof1
next
case False
then have f14: "fp_exception \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "cp_exception \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
f14 a1
by select_trap_proof1
next
case False
then have f15: "cp_exception \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "data_access_error \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
f14 f15 a1
by select_trap_proof1
next
case False
then have f16: "data_access_error \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "data_access_exception \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
f13 f14 f15 f16 a1
by select_trap_proof1
next
case False
then have f17: "data_access_exception \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "tag_overflow \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
f13 f14 f15 f16 f17 a1
by select_trap_proof1
next
case False
then have f18: "tag_overflow \<notin> get_trap_set s" by auto
then show ?thesis
proof (cases "division_by_zero \<in> get_trap_set s")
case True
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
f12 f13 f14 f15 f16 f17 f18 a1
by select_trap_proof1
next
case False
then show ?thesis using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
f12 f13 f14 f15 f16 f17 f18 a1
apply (simp add: select_trap_def exec_gets return_def)
apply (simp add: bind_def h1_def h2_def simpler_modify_def)
apply (simp add: return_def simpler_gets_def)
apply (simp add: emp_trap_set_def err_mode_val_def
cpu_reg_mod_def)
apply (simp add: case_prod_unfold)
apply (simp add: return_def)
apply clarsimp
apply (simp add: write_cpu_tt_def write_cpu_def write_tt_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: simpler_modify_def)
by (simp add: cpu_reg_val_def cpu_reg_mod_def)
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
lemma exe_trap_st_pc_result : "snd (exe_trap_st_pc() s) = False"
proof (cases "annul_val s = True")
case True
then show ?thesis
apply (simp add: exe_trap_st_pc_def get_curr_win_def)
apply (simp add: exec_gets return_def)
apply (simp add: DetMonad.bind_def h1_def h2_def)
by (simp add: set_annul_def write_reg_def simpler_modify_def)
next
case False
then show ?thesis
apply (simp add: exe_trap_st_pc_def get_curr_win_def)
apply (simp add: exec_gets return_def)
apply (simp add: DetMonad.bind_def h1_def h2_def)
by (simp add: write_reg_def simpler_modify_def)
qed
lemma exe_trap_wr_pc_result : "snd (exe_trap_wr_pc() s) = False"
proof (cases "reset_trap_val s = True")
case True
then show ?thesis
apply (simp add: exe_trap_wr_pc_def get_curr_win_def)
apply (simp add: exec_gets return_def)
apply (simp add: DetMonad.bind_def h1_def h2_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: simpler_gets_def)
apply (simp add: cpu_reg_val_def update_S_def cpu_reg_mod_def reset_trap_val_def)
apply (simp add: write_cpu_def simpler_modify_def DetMonad.bind_def h1_def h2_def)
apply (simp add: return_def)
by (simp add: set_reset_trap_def simpler_modify_def DetMonad.bind_def h1_def h2_def return_def)
next
case False
then show ?thesis
apply (simp add: exe_trap_wr_pc_def get_curr_win_def)
apply (simp add: exec_gets return_def)
apply (simp add: DetMonad.bind_def h1_def h2_def)
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: simpler_gets_def)
apply (simp add: cpu_reg_val_def update_S_def cpu_reg_mod_def reset_trap_val_def)
apply (simp add: write_cpu_def simpler_modify_def DetMonad.bind_def h1_def h2_def)
by (simp add: return_def)
qed
lemma execute_trap_result : "\<not>(reset_trap_val s = False \<and> get_ET (cpu_reg_val PSR s) = 0) \<Longrightarrow>
snd (execute_trap() s) = False"
proof -
assume "\<not>(reset_trap_val s = False \<and> get_ET (cpu_reg_val PSR s) = 0)"
then have fact1: "snd (select_trap() s) = False" using select_trap_result2 by blast
then show ?thesis
proof (cases "err_mode_val s = True")
case True
then show ?thesis using fact1
apply (simp add: execute_trap_def exec_gets return_def)
apply (simp add: DetMonad.bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (simp add: in_gets return_def select_trap_monad simpler_gets_def)
next
case False
then show ?thesis using fact1 select_trap_monad
apply (simp add: execute_trap_def exec_gets return_def)
apply (simp add: DetMonad.bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply (simp add: simpler_gets_def)
apply (auto simp add: select_trap_monad)
apply (simp add: DetMonad.bind_def h1_def h2_def get_curr_win_def)
apply (simp add: get_CWP_def cpu_reg_val_def)
apply (simp add: simpler_gets_def return_def write_cpu_def)
apply (simp add: simpler_modify_def DetMonad.bind_def h1_def h2_def)
apply (simp add: exe_trap_st_pc_result)
by (simp add: case_prod_unfold exe_trap_wr_pc_result)
qed
qed
lemma execute_trap_result2 : "\<not>(reset_trap_val s = False \<and> get_ET (cpu_reg_val PSR s) = 0) \<Longrightarrow>
snd (execute_trap() s) = False"
using execute_trap_result
by blast
lemma exe_instr_all :
"good_context (s::(('a::len0) sparc_state)) \<Longrightarrow>
snd (execute_instruction() s) = False"
proof -
assume asm1: "good_context s"
let ?s' = "delayed_pool_write s"
from asm1 have f1 : "(get_trap_set s = {} \<or> (reset_trap_val s) \<noteq> False \<or>
get_ET (cpu_reg_val PSR s) \<noteq> 0) \<and>
((\<exists>e. fetch_instruction ?s' = Inl e) \<or>
(\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(annul_val ?s' = True \<or>
(annul_val ?s' = False \<and>
(\<forall>v1' v2'. fetch_instruction ?s' = Inr v1' \<and>
((decode_instruction v1')::(Exception list + instruction)) = Inr v2' \<longrightarrow>
supported_instruction (fst v2') = True) \<and>
((fst v2) \<noteq> ctrl_type RETT \<or>
((fst v2) = ctrl_type RETT \<and>
(get_ET (cpu_reg_val PSR ?s') = 1 \<or>
(get_ET (cpu_reg_val PSR ?s') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR ?s')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR ?s'))) + 1) mod NWINDOWS))
(cpu_reg_val WIM ?s')) = 0 \<and>
- (bitAND (get_addr (snd v2) ?s') (0b00000000000000000000000000000011::word32)) = 0))))))))"
+ ((AND) (get_addr (snd v2) ?s') (0b00000000000000000000000000000011::word32)) = 0))))))))"
using good_context_all by blast
from f1 have f2: "get_trap_set s \<noteq> {} \<Longrightarrow>
(reset_trap_val s) \<noteq> False \<or> get_ET (cpu_reg_val PSR s) \<noteq> 0"
by auto
show ?thesis
proof (cases "get_trap_set s = {}")
case True
then have f3: "get_trap_set s = {}" by auto
then show ?thesis
proof (cases "exe_mode_val s = True")
case True
then have f4: "exe_mode_val s = True" by auto
then show ?thesis
proof (cases "\<exists>e1. fetch_instruction ?s' = Inl e1")
case True
then show ?thesis using f3
apply exe_proof_to_decode
apply (simp add: raise_trap_def simpler_modify_def)
by (simp add: bind_def h1_def h2_def return_def)
next
case False
then have f5: "\<exists> v1. fetch_instruction ?s' = Inr v1" using fetch_instr_result_1 by blast
then have f6: "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2"
using f1 fetch_instr_result_2 by blast
then show ?thesis
proof (cases "annul_val ?s' = True")
case True
then show ?thesis using f3 f4 f6
apply exe_proof_to_decode
apply (simp add: set_annul_def annul_mod_def simpler_modify_def bind_def h1_def h2_def)
apply (simp add: return_def simpler_gets_def)
by (simp add: write_cpu_def simpler_modify_def)
next
case False
then have f7: "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(\<forall>v1' v2'. fetch_instruction ?s' = Inr v1' \<and>
((decode_instruction v1')::(Exception list + instruction)) = Inr v2' \<longrightarrow>
supported_instruction (fst v2') = True) \<and> annul_val ?s' = False"
using f1 f6 fetch_instr_result_2 by auto
then have f7': "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
supported_instruction (fst v2) = True \<and> annul_val ?s' = False"
by auto
then show ?thesis
proof (cases "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(fst v2) = ctrl_type RETT")
case True
then have f8: "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(fst v2) = ctrl_type RETT" by auto
then show ?thesis
proof (cases "get_trap_set ?s' = {}")
case True
then have f9: "get_trap_set ?s' = {}" by auto
then show ?thesis
proof (cases "get_ET (cpu_reg_val PSR ?s') = 1")
case True
then have f10: "get_ET (cpu_reg_val PSR ?s') = 1" by auto
then show ?thesis
proof (cases "((ucast (get_S (cpu_reg_val PSR ?s')))::word1) = 0")
case True
then show ?thesis using f3 f4 f7 f8 f9 f10
apply exe_proof_to_decode
apply exe_proof_dispatch_rett
apply (simp add: raise_trap_def simpler_modify_def)
apply (auto simp add: execute_instr_sub1_result return_def)
by (simp add: case_prod_unfold)
next
case False
then show ?thesis using f3 f4 f7 f8 f9 f10
apply exe_proof_to_decode
apply exe_proof_dispatch_rett
apply (simp add: raise_trap_def simpler_modify_def)
apply (auto simp add: execute_instr_sub1_result return_def)
by (simp add: case_prod_unfold)
qed
next
case False
then have f11: "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
annul_val ?s' = False \<and>
(fst v2) = ctrl_type RETT \<and>
(get_ET (cpu_reg_val PSR ?s') \<noteq> 1 \<and>
((ucast (get_S (cpu_reg_val PSR ?s')))::word1) \<noteq> 0 \<and>
(get_WIM_bit (nat (((uint (get_CWP (cpu_reg_val PSR ?s'))) + 1) mod NWINDOWS))
(cpu_reg_val WIM ?s')) = 0 \<and>
- (bitAND (get_addr (snd v2) ?s') (0b00000000000000000000000000000011::word32)) = 0)"
+ ((AND) (get_addr (snd v2) ?s') (0b00000000000000000000000000000011::word32)) = 0)"
using f1 fetch_instr_result_2 f7' f8 by auto
then show ?thesis using f3 f4
proof (cases "get_trap_set ?s' = {}")
case True
then show ?thesis using f3 f4 f11
apply (simp add: execute_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def simpler_modify_def)
apply clarsimp
apply (simp add: return_def)
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply auto
apply (simp add: execute_instr_sub1_result)
apply (simp add: dispatch_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (simp add: rett_instr_result)
next
case False
then show ?thesis using f3 f4 f11
apply (simp add: execute_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def simpler_modify_def)
apply clarsimp
apply (simp add: return_def)
apply (simp add: bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply (simp add: execute_instr_sub1_result)
apply (simp add: dispatch_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (simp add: return_def)
qed
qed
next
case False
then show ?thesis using f3 f4 f7 f8
apply exe_proof_to_decode
apply (simp add: dispatch_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
by (auto simp add: execute_instr_sub1_result return_def Let_def)
qed
next
case False \<comment> \<open>Instruction is not \<open>RETT\<close>.\<close>
then have "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(fst v2) \<noteq> ctrl_type RETT" using f7 by auto
then have "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(fst v2) \<noteq> ctrl_type RETT \<and>
supported_instruction (fst v2) = True \<and> annul_val ?s' = False"
using f7 by auto
then have "\<exists>v1 v2. fetch_instruction ?s' = Inr v1 \<and>
((decode_instruction v1)::(Exception list + instruction)) = Inr v2 \<and>
(fst v2) \<noteq> ctrl_type RETT \<and>
supported_instruction (fst v2) = True \<and> annul_val ?s' = False \<and>
snd (dispatch_instruction v2 ?s') = False"
by (auto simp add: dispatch_instr_result)
then show ?thesis using f3 f4
apply exe_proof_to_decode
apply (simp add: bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
by (simp add: execute_instr_sub1_result)
qed
qed
qed
next
case False
then show ?thesis using f3
apply (simp add: execute_instruction_def)
by (simp add: exec_gets return_def)
qed
next
case False
then have "get_trap_set s \<noteq> {} \<and>
((reset_trap_val s) \<noteq> False \<or> get_ET (cpu_reg_val PSR s) \<noteq> 0)"
using f2 by auto
then show ?thesis
apply (simp add: execute_instruction_def exec_gets)
by (simp add: execute_trap_result2)
qed
qed
lemma dispatch_fail:
"snd (execute_instruction() (s::(('a::len0) sparc_state))) = False \<and>
get_trap_set s = {} \<and>
exe_mode_val s \<and>
fetch_instruction (delayed_pool_write s) = Inr v \<and>
((decode_instruction v)::(Exception list + instruction)) = Inl e
\<Longrightarrow> False"
using decode_instr_result_2
apply (simp add: execute_instruction_def)
apply (simp add: exec_gets bind_def)
apply clarsimp
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: simpler_modify_def return_def)
by (simp add: fail_def)
lemma no_error : "good_context s \<Longrightarrow> snd (execute_instruction () s) = False"
proof -
assume "good_context s"
hence "snd (execute_instruction() s) = False"
using exe_instr_all by auto
hence "snd (execute_instruction () s) = False" by (simp add: exec_ss2)
thus ?thesis by assumption
qed
theorem single_step : "good_context s \<Longrightarrow> NEXT s = Some (snd (fst (execute_instruction () s)))"
by (simp add: no_error next_match)
(*********************************************************************)
section \<open>Privilege safety\<close>
(*********************************************************************)
text \<open>The following shows that, if the pre-state is under user mode,
then after a single step of execution, the post-state is also under user mode.\<close>
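text \<open>A minimal illustrative sketch, not part of the original development:
the definition below merely gives a name to the "user mode" condition that every
lemma in this section preserves, namely that the supervisor field \<open>S\<close> of the
\<open>PSR\<close> ucasts to the zero \<open>word1\<close>. The constant \<open>user_mode_sketch\<close> is
hypothetical and is not used elsewhere in the proofs.\<close>
(* hypothetical helper, for illustration only; the lemmas below state the same
   condition inline on the pre- and post-states *)
definition user_mode_sketch :: "('a::len0) sparc_state \<Rightarrow> bool" where
  "user_mode_sketch s \<equiv> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"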
lemma write_cpu_pc_privilege: "s' = snd (fst (write_cpu w PC s)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: cpu_reg_mod_def)
by (simp add: cpu_reg_val_def)
lemma write_cpu_npc_privilege: "s' = snd (fst (write_cpu w nPC s)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: cpu_reg_mod_def)
by (simp add: cpu_reg_val_def)
lemma write_cpu_y_privilege: "s' = snd (fst (write_cpu w Y s)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: cpu_reg_mod_def)
by (simp add: cpu_reg_val_def)
lemma cpu_reg_mod_y_privilege: "s' = cpu_reg_mod w Y s \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
by (simp add: cpu_reg_mod_def cpu_reg_val_def)
lemma cpu_reg_mod_asr_privilege: "s' = cpu_reg_mod w (ASR r) s \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
by (simp add: cpu_reg_mod_def cpu_reg_val_def)
lemma global_reg_mod_privilege: "s' = global_reg_mod w1 n w2 s \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (induction n arbitrary:s)
apply (clarsimp)
apply (auto)
apply (simp add: Let_def)
by (simp add: cpu_reg_val_def)
lemma out_reg_mod_privilege: "s' = out_reg_mod a w r s \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: out_reg_mod_def Let_def)
by (simp add: cpu_reg_val_def)
lemma in_reg_mod_privilege: "s' = in_reg_mod a w r s \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: in_reg_mod_def Let_def)
by (simp add: cpu_reg_val_def)
lemma user_reg_mod_privilege:
assumes a1: " s' = user_reg_mod d (w::(('a::len0) window_size)) r
(s::(('a::len0) sparc_state)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "r = 0")
case True
then show ?thesis using a1
by (simp add: user_reg_mod_def)
next
case False
then have f1: "r \<noteq> 0" by auto
then show ?thesis
proof (cases "0 < r \<and> r < 8")
case True
then show ?thesis using a1 f1
apply (simp add: user_reg_mod_def)
by (auto intro: global_reg_mod_privilege)
next
case False
then have f2: "\<not>(0 < r \<and> r < 8)" by auto
then show ?thesis
proof (cases "7 < r \<and> r < 16")
case True
then show ?thesis using a1 f1 f2
apply (simp add: user_reg_mod_def)
by (auto intro: out_reg_mod_privilege)
next
case False
then have f3: "\<not> (7 < r \<and> r < 16)" by auto
then show ?thesis
proof (cases "15 < r \<and> r < 24")
case True
then show ?thesis using a1 f1 f2 f3
apply (simp add: user_reg_mod_def)
by (simp add: cpu_reg_val_def)
next
case False
then show ?thesis using a1 f1 f2 f3
apply (simp add: user_reg_mod_def)
by (auto intro: in_reg_mod_privilege)
qed
qed
qed
qed
lemma write_reg_privilege: "s' = snd (fst (write_reg w1 w2 w3
(s::(('a::len0) sparc_state)))) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: write_reg_def simpler_modify_def)
by (auto intro: user_reg_mod_privilege)
lemma set_annul_privilege: "s' = snd (fst (set_annul b s)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: set_annul_def simpler_modify_def)
apply (simp add: annul_mod_def write_annul_def)
by (simp add: cpu_reg_val_def)
lemma set_reset_trap_privilege: "s' = snd (fst (set_reset_trap b s)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: set_reset_trap_def simpler_modify_def)
apply (simp add: reset_trap_mod_def write_annul_def)
by (simp add: cpu_reg_val_def)
lemma empty_delayed_pool_write_privilege: "get_delayed_pool s = [] \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<and>
s' = delayed_pool_write s \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: delayed_pool_write_def)
by (simp add: get_delayed_write_def delayed_write_all_def delayed_pool_rm_list_def)
lemma raise_trap_privilege:
"((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<and>
s' = snd (fst (raise_trap t s)) \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: raise_trap_def)
apply (simp add: simpler_modify_def add_trap_set_def)
by (simp add: cpu_reg_val_def)
lemma write_cpu_tt_privilege: "s' = snd (fst (write_cpu_tt w s)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: write_cpu_tt_def)
apply (simp add: exec_gets)
apply (simp add: write_cpu_def cpu_reg_mod_def write_tt_def)
apply (simp add: simpler_modify_def)
by (simp add: cpu_reg_val_def)
lemma emp_trap_set_privilege: "s' = emp_trap_set s \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: emp_trap_set_def)
by (simp add: cpu_reg_val_def)
lemma sys_reg_mod_privilege: "s' = sys_reg_mod w r s
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: sys_reg_mod_def)
by (simp add: cpu_reg_val_def)
lemma mem_mod_privilege:
assumes a1: "s' = mem_mod a1 a2 v s \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "(uint a1) = 8 \<or> (uint a1) = 10")
case True
then show ?thesis using a1
apply (simp add: mem_mod_def)
apply (simp add: Let_def)
by (simp add: cpu_reg_val_def)
next
case False
then have f1: "\<not>((uint a1) = 8 \<or> (uint a1) = 10)" by auto
then show ?thesis
proof (cases "(uint a1) = 9 \<or> (uint a1) = 11")
case True
then show ?thesis using a1 f1
apply (simp add: mem_mod_def)
apply (simp add: Let_def)
by (simp add: cpu_reg_val_def)
next
case False
then show ?thesis using a1 f1
apply (simp add: mem_mod_def)
by (simp add: cpu_reg_val_def)
qed
qed
lemma mem_mod_w32_privilege: "s' = mem_mod_w32 a1 a2 b d s \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: mem_mod_w32_def)
apply (simp add: Let_def)
by (auto intro: mem_mod_privilege)
lemma add_instr_cache_privilege: "s' = add_instr_cache s addr y m \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: add_instr_cache_def)
apply (simp add: Let_def)
by (simp add: icache_mod_def cpu_reg_val_def)
lemma add_data_cache_privilege: "s' = add_data_cache s addr y m \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: add_data_cache_def)
apply (simp add: Let_def)
by (simp add: dcache_mod_def cpu_reg_val_def)
lemma memory_read_privilege:
assumes a1: "s' = snd (memory_read asi addr s) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "uint asi = 1")
case True
then show ?thesis using a1
apply (simp add: memory_read_def)
by (simp add: Let_def)
next
case False
then have f1: "uint asi \<noteq> 1" by auto
then show ?thesis
proof (cases "uint asi = 2")
case True
then show ?thesis using a1 f1
by (simp add: memory_read_def)
next
case False
then have f2: "uint asi \<noteq> 2" by auto
then show ?thesis
proof (cases "uint asi \<in> {8,9}")
case True
then have f3: "uint asi \<in> {8,9}" by auto
then show ?thesis
proof (cases "load_word_mem s addr asi = None")
case True
then have f4: "load_word_mem s addr asi = None" by auto
then show ?thesis
using a1 f1 f2 f3 f4
by (simp add: memory_read_def)
next
case False
then show ?thesis using a1 f1 f2 f3
apply (simp add: memory_read_def)
apply auto
apply (simp add: add_instr_cache_privilege)
by (simp add: add_instr_cache_privilege)
qed
next
case False
then have f5: "uint asi \<notin> {8, 9}" by auto
then show ?thesis
proof (cases "uint asi \<in> {10,11}")
case True
then have f6: "uint asi \<in> {10,11}" by auto
then show ?thesis
proof (cases "load_word_mem s addr asi = None")
case True
then have f7: "load_word_mem s addr asi = None" by auto
then show ?thesis
using a1 f1 f2 f5 f6 f7
by (simp add: memory_read_def)
next
case False
then show ?thesis using a1 f1 f2 f5 f6
apply (simp add: memory_read_def)
apply auto
apply (simp add: add_data_cache_privilege)
by (simp add: add_data_cache_privilege)
qed
next
case False
then have f8: "uint asi \<notin> {10,11}" by auto
then show ?thesis
proof (cases "uint asi = 13")
case True
then have f9: "uint asi = 13" by auto
then show ?thesis
proof (cases "read_instr_cache s addr = None")
case True
then show ?thesis using a1 f1 f2 f5 f8 f9
by (simp add: memory_read_def)
next
case False
then show ?thesis using a1 f1 f2 f5 f8 f9
apply (simp add: memory_read_def)
by auto
qed
next
case False
then have f10: "uint asi \<noteq> 13" by auto
then show ?thesis
proof (cases "uint asi = 15")
case True
then show ?thesis using a1 f1 f2 f5 f8 f10
apply (simp add: memory_read_def)
apply (cases "read_data_cache s addr = None")
by auto
next
case False
then show ?thesis using a1 f1 f2 f5 f8 f10
apply (simp add: memory_read_def) \<comment> \<open>The remaining cases are easy.\<close>
by (simp add: Let_def)
qed
qed
qed
qed
qed
qed
lemma get_curr_win_privilege: "s' = snd (fst (get_curr_win() s)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: get_curr_win_def)
by (simp add: simpler_gets_def)
lemma load_sub2_privilege:
assumes a1: "s' = snd (fst (load_sub2 addr asi r win w s))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "fst (memory_read asi (addr + 4)
(snd (fst (write_reg w win (r AND 30) s)))) =
None")
case True
then show ?thesis using a1
apply (simp add: load_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (auto intro: raise_trap_privilege write_reg_privilege)
next
case False
then show ?thesis using a1
apply (simp add: load_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply clarsimp
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
by (auto intro: write_reg_privilege memory_read_privilege)
qed
lemma load_sub3_privilege:
assumes a1: "s' = snd (fst (load_sub3 instr curr_win rd asi address s))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "fst (memory_read asi address s) = None")
case True
then show ?thesis using a1
apply (simp add: load_sub3_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
by (auto intro: raise_trap_privilege)
next
case False
then have f1: "fst (memory_read asi address s) \<noteq> None " by auto
then show ?thesis
proof (cases "rd \<noteq> 0 \<and>
(fst instr = load_store_type LD \<or>
fst instr = load_store_type LDA \<or>
fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LDSB \<or>
fst instr = load_store_type LDUB \<or>
fst instr = load_store_type LDUBA \<or>
fst instr = load_store_type LDSH \<or>
fst instr = load_store_type LDSHA \<or>
fst instr = load_store_type LDUHA \<or>
fst instr = load_store_type LDSBA)")
case True
then show ?thesis using a1 f1
apply (simp add: load_sub3_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply clarsimp
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
by (auto intro: write_reg_privilege memory_read_privilege)
next
case False
then show ?thesis using a1 f1
apply (simp add: load_sub3_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply auto
apply (simp add: simpler_modify_def bind_def h1_def h2_def)
apply (auto intro: load_sub2_privilege memory_read_privilege)
apply (simp add: simpler_modify_def bind_def h1_def h2_def)
by (auto intro: load_sub2_privilege memory_read_privilege)
qed
qed
lemma load_sub1_privilege:
assumes a1: "s' = snd (fst (load_sub1 instr rd s_val s))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: load_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply auto
by (auto intro: get_curr_win_privilege raise_trap_privilege load_sub3_privilege)
lemma load_instr_privilege: "s' = snd (fst (load_instr i s))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: load_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: Let_def)
apply clarsimp
by (auto intro: get_curr_win_privilege raise_trap_privilege load_sub1_privilege)
lemma store_barrier_pending_mod_privilege: "s' = store_barrier_pending_mod b s
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0
\<Longrightarrow> ((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: store_barrier_pending_mod_def)
apply (simp add: write_store_barrier_pending_def)
by (simp add: cpu_reg_val_def)
lemma store_word_mem_privilege:
assumes a1: "store_word_mem s addr data byte_mask asi = Some s' \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1 apply (simp add: store_word_mem_def)
apply (case_tac "virt_to_phys addr (mmu s) (mem s) = None")
apply auto
apply (case_tac "mmu_writable (get_acc_flag b) asi")
apply auto
by (simp add: mem_mod_w32_privilege)
lemma flush_instr_cache_privilege: "((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
s' = flush_instr_cache s \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: flush_instr_cache_def)
by (simp add: cpu_reg_val_def)
lemma flush_data_cache_privilege: "((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
s' = flush_data_cache s \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: flush_data_cache_def)
by (simp add: cpu_reg_val_def)
lemma flush_cache_all_privilege: "((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0 \<Longrightarrow>
s' = flush_cache_all s \<Longrightarrow>
((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
apply (simp add: flush_cache_all_def)
by (simp add: cpu_reg_val_def)
lemma memory_write_asi_privilege:
assumes a1: "r = memory_write_asi asi addr byte_mask data s \<and>
r = Some s' \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "uint asi = 1")
case True
then show ?thesis using a1
apply (simp add: memory_write_asi_def)
by (auto intro: store_word_mem_privilege)
next
case False
then have f1: "uint asi \<noteq> 1" by auto
then show ?thesis
proof (cases "uint asi = 2")
case True
then have f01: "uint asi = 2" by auto
then show ?thesis
proof (cases "uint addr = 0")
case True
then show ?thesis using a1 f1 f01
apply (simp add: memory_write_asi_def)
apply (simp add: ccr_flush_def)
apply (simp add: Let_def)
apply auto
apply (metis flush_data_cache_privilege flush_instr_cache_privilege sys_reg_mod_privilege)
apply (metis flush_instr_cache_privilege sys_reg_mod_privilege)
apply (metis flush_data_cache_privilege sys_reg_mod_privilege)
by (simp add: sys_reg_mod_privilege)
next
case False
then show ?thesis using a1 f1 f01
apply (simp add: memory_write_asi_def)
apply clarsimp
by (metis option.distinct(1) option.sel sys_reg_mod_privilege)
qed
next
case False
then have f2: "uint asi \<noteq> 2" by auto
then show ?thesis
proof (cases "uint asi \<in> {8,9}")
case True
then show ?thesis using a1 f1 f2
apply (simp add: memory_write_asi_def)
using store_word_mem_privilege add_instr_cache_privilege
by blast
next
case False
then have f3: "uint asi \<notin> {8,9}" by auto
then show ?thesis
proof (cases "uint asi \<in> {10,11}")
case True
then show ?thesis using a1 f1 f2 f3
apply (simp add: memory_write_asi_def)
using store_word_mem_privilege add_data_cache_privilege
by blast
next
case False
then have f4: "uint asi \<notin> {10,11}" by auto
then show ?thesis
proof (cases "uint asi = 13")
case True
then show ?thesis using a1 f1 f2 f3 f4
apply (simp add: memory_write_asi_def)
by (auto simp add: add_instr_cache_privilege)
next
case False
then have f5: "uint asi \<noteq> 13" by auto
then show ?thesis
proof (cases "uint asi = 15")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5
apply (simp add: memory_write_asi_def)
by (auto simp add: add_data_cache_privilege)
next
case False
then have f6: "uint asi \<noteq> 15" by auto
then show ?thesis
proof (cases "uint asi = 16")
case True
then show ?thesis using a1
apply (simp add: memory_write_asi_def)
by (auto simp add: flush_instr_cache_privilege)
next
case False
then have f7: "uint asi \<noteq> 16" by auto
then show ?thesis
proof (cases "uint asi = 17")
case True
then show ?thesis using a1
apply (simp add: memory_write_asi_def)
by (auto simp add: flush_data_cache_privilege)
next
case False
then have f8: "uint asi \<noteq> 17" by auto
then show ?thesis
proof (cases "uint asi = 24")
case True
then show ?thesis using a1
apply (simp add: memory_write_asi_def)
by (auto simp add: flush_cache_all_privilege)
next
case False
then have f9: "uint asi \<noteq> 24" by auto
then show ?thesis
proof (cases "uint asi = 25")
case True
then show ?thesis using a1
apply (simp add: memory_write_asi_def)
apply (case_tac "mmu_reg_mod (mmu s) addr data = None")
apply auto
by (simp add: cpu_reg_val_def)
next
case False
then have f10: "uint asi \<noteq> 25" by auto
then show ?thesis
proof (cases "uint asi = 28")
case True
then show ?thesis using a1
apply (simp add: memory_write_asi_def)
by (auto simp add: mem_mod_w32_privilege)
next
case False \<comment> \<open>The remaining cases are easy.\<close>
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
apply (simp add: memory_write_asi_def)
apply (auto simp add: Let_def)
apply (case_tac "uint asi = 20 \<or> uint asi = 21")
by auto
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
lemma memory_write_privilege:
assumes a1: "r = memory_write asi addr byte_mask data
(s::(('a::len0) sparc_state)) \<and>
r = Some s' \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR
(s'::(('a::len0) sparc_state)))))::word1) = 0"
proof -
have "\<forall>x. Some x \<noteq> None" by auto
then have "r \<noteq> None" using a1
by (simp add: \<open>r = memory_write asi addr byte_mask data s \<and>
r = Some s' \<and> ucast (get_S (cpu_reg_val PSR s)) = 0\<close>)
then have "\<exists>s''. r = Some (store_barrier_pending_mod False s'')" using a1
by (metis (no_types, lifting) memory_write_def option.case_eq_if)
then have "\<exists>s''. s' = store_barrier_pending_mod False s''" using a1
by blast
then have "\<exists>s''. memory_write_asi asi addr byte_mask data s = Some s'' \<and>
s' = store_barrier_pending_mod False s''"
by (metis (no_types, lifting) assms memory_write_def not_None_eq option.case_eq_if option.sel)
then show ?thesis using a1
using memory_write_asi_privilege store_barrier_pending_mod_privilege by blast
qed
lemma store_sub2_privilege:
assumes a1: "s' = snd (fst (store_sub2 instr curr_win rd asi address s))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "memory_write asi address (st_byte_mask instr address)
(st_data0 instr curr_win rd address s) s =
None")
case True
then show ?thesis using a1
apply (simp add: store_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (metis fst_conv raise_trap_privilege return_def snd_conv)
next
case False
then have f1: "\<not>(memory_write asi address (st_byte_mask instr address)
(st_data0 instr curr_win rd address s) s =
None)"
by auto
then show ?thesis
proof (cases "(fst instr) \<in> {load_store_type STD,load_store_type STDA}")
case True
then have f2: "(fst instr) \<in> {load_store_type STD,load_store_type STDA}" by auto
then show ?thesis using a1 f1
apply (simp add: store_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
apply (simp add: return_def)
apply (simp add: bind_def case_prod_unfold)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: case_prod_unfold bind_def h1_def h2_def Let_def simpler_modify_def)
apply (simp add: simpler_gets_def)
apply auto
using memory_write_privilege raise_trap_privilege apply blast
apply (simp add: simpler_modify_def simpler_gets_def bind_def)
apply (meson memory_write_privilege)
using memory_write_privilege raise_trap_privilege apply blast
by (meson memory_write_privilege)
next
case False
then show ?thesis using a1 f1
apply (simp add: store_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply clarsimp
apply (simp add: simpler_modify_def return_def)
by (auto intro: memory_write_privilege)
qed
qed
lemma store_sub1_privilege:
assumes a1: "s' = snd (fst (store_sub1 instr rd s_val
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR
(s'::(('a::len0) sparc_state)))))::word1) = 0"
proof (cases "(fst instr = load_store_type STH \<or> fst instr = load_store_type STHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s)))))::word1) \<noteq> 0")
case True
then show ?thesis using a1
apply (simp add: store_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
using get_curr_win_privilege raise_trap_privilege by blast
next
case False
then have f1: "\<not>((fst instr = load_store_type STH \<or> fst instr = load_store_type STHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s)))))::word1) \<noteq> 0)"
by auto
then show ?thesis
proof (cases "(fst instr \<in> {load_store_type ST,load_store_type STA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s)))))::word2) \<noteq> 0")
case True
then show ?thesis using a1 f1
apply (simp add: store_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
using get_curr_win_privilege raise_trap_privilege by blast
next
case False
then have f2: "\<not>((fst instr \<in> {load_store_type ST,load_store_type STA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s)))))::word2) \<noteq> 0)"
by auto
then show ?thesis
proof (cases "(fst instr \<in> {load_store_type STD,load_store_type STDA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s)))))::word3) \<noteq> 0")
case True
then show ?thesis using a1 f1 f2
apply (simp add: store_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
using get_curr_win_privilege raise_trap_privilege by blast
next
case False
then show ?thesis using a1 f1 f2
apply (simp add: store_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (meson get_curr_win_privilege store_sub2_privilege)
qed
qed
qed
lemma store_instr_privilege:
assumes a1: "s' = snd (fst (store_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR
(s'::(('a::len0) sparc_state)))))::word1) = 0"
using a1
apply (simp add: store_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: Let_def)
using raise_trap_privilege store_sub1_privilege by blast
lemma sethi_instr_privilege:
assumes a1: "s' = snd (fst (sethi_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: sethi_instr_def)
apply (simp add: Let_def)
apply auto
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
using get_curr_win_privilege write_reg_privilege apply blast
by (simp add: return_def)
lemma nop_instr_privilege:
assumes a1: "s' = snd (fst (nop_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: nop_instr_def)
by (simp add: return_def)
lemma ucast_0: "((ucast (get_S w))::word1) = 0 \<Longrightarrow> get_S w = 0"
by (simp add: ucast_id)
lemma ucast_02: "get_S w = 0 \<Longrightarrow> ((ucast (get_S w))::word1) = 0"
by simp
lemma ucast_s: "((ucast (get_S w))::word1) = 0 \<Longrightarrow>
- bitAND w (0b00000000000000000000000010000000::word32) = 0"
+ (AND) w (0b00000000000000000000000010000000::word32) = 0"
apply (simp add: get_S_def)
by (metis (mono_tags) ucast_id zero_neq_one)
-lemma ucast_s2: "bitAND w 0b00000000000000000000000010000000 = 0
+lemma ucast_s2: "(AND) w 0b00000000000000000000000010000000 = 0
\<Longrightarrow> ((ucast (get_S w))::word1) = 0"
by (simp add: get_S_def)
-lemma update_PSR_icc_1: "w' = bitAND w (0b11111111000011111111111111111111::word32)
+lemma update_PSR_icc_1: "w' = (AND) w (0b11111111000011111111111111111111::word32)
\<and> ((ucast (get_S w))::word1) = 0
\<Longrightarrow> ((ucast (get_S w'))::word1) = 0"
by (simp add: get_S_def word_bw_assocs(1))
-lemma and_num_1048576_128: "bitAND (0b00000000000100000000000000000000::word32)
- (0b00000000000000000000000010000000::word32) = 0"
-by simp
-
-lemma and_num_2097152_128: "bitAND (0b00000000001000000000000000000000::word32)
+lemma and_num_1048576_128: "(AND) (0b00000000000100000000000000000000::word32)
(0b00000000000000000000000010000000::word32) = 0"
by simp
-lemma and_num_4194304_128: "bitAND (0b00000000010000000000000000000000::word32)
+lemma and_num_2097152_128: "(AND) (0b00000000001000000000000000000000::word32)
(0b00000000000000000000000010000000::word32) = 0"
by simp
-lemma and_num_8388608_128: "bitAND (0b00000000100000000000000000000000::word32)
+lemma and_num_4194304_128: "(AND) (0b00000000010000000000000000000000::word32)
(0b00000000000000000000000010000000::word32) = 0"
by simp
-lemma or_and_s: "bitAND w1 (0b00000000000000000000000010000000::word32) = 0
- \<and> bitAND w2 (0b00000000000000000000000010000000::word32) = 0
- \<Longrightarrow> bitAND (bitOR w1 w2) (0b00000000000000000000000010000000::word32) = 0"
+lemma and_num_8388608_128: "(AND) (0b00000000100000000000000000000000::word32)
+ (0b00000000000000000000000010000000::word32) = 0"
+by simp
+
+lemma or_and_s: "(AND) w1 (0b00000000000000000000000010000000::word32) = 0
+ \<and> (AND) w2 (0b00000000000000000000000010000000::word32) = 0
+ \<Longrightarrow> (AND) ((OR) w1 w2) (0b00000000000000000000000010000000::word32) = 0"
by (simp add: word_ao_dist)
lemma and_or_s:
assumes a1: "((ucast (get_S w1))::word1) = 0 \<and>
- bitAND w2 (0b00000000000000000000000010000000::word32) = 0"
-shows "((ucast (get_S (bitOR (bitAND w1
+ (AND) w2 (0b00000000000000000000000010000000::word32) = 0"
+shows "((ucast (get_S ((OR) ((AND) w1
(0b11111111000011111111111111111111::word32)) w2)))::word1) = 0"
by (metis (full_types) assms ucast_s ucast_s2 word_ao_absorbs(8) word_bool_alg.conj_disj_distrib2)
lemma and_or_or_s:
assumes a1: "((ucast (get_S w1))::word1) = 0 \<and>
- bitAND w2 (0b00000000000000000000000010000000::word32) = 0 \<and>
- bitAND w3 (0b00000000000000000000000010000000::word32) = 0"
-shows "((ucast (get_S (bitOR (bitOR (bitAND w1
+ (AND) w2 (0b00000000000000000000000010000000::word32) = 0 \<and>
+ (AND) w3 (0b00000000000000000000000010000000::word32) = 0"
+shows "((ucast (get_S ((OR) ((OR) ((AND) w1
(0b11111111000011111111111111111111::word32)) w2) w3)))::word1) = 0"
using and_or_s assms or_and_s ucast_s ucast_s2 by blast
lemma and_or_or_or_s:
assumes a1: "((ucast (get_S w1))::word1) = 0 \<and>
- bitAND w2 (0b00000000000000000000000010000000::word32) = 0 \<and>
- bitAND w3 (0b00000000000000000000000010000000::word32) = 0 \<and>
- bitAND w4 (0b00000000000000000000000010000000::word32) = 0"
-shows "((ucast (get_S (bitOR (bitOR (bitOR (bitAND w1
+ (AND) w2 (0b00000000000000000000000010000000::word32) = 0 \<and>
+ (AND) w3 (0b00000000000000000000000010000000::word32) = 0 \<and>
+ (AND) w4 (0b00000000000000000000000010000000::word32) = 0"
+shows "((ucast (get_S ((OR) ((OR) ((OR) ((AND) w1
(0b11111111000011111111111111111111::word32)) w2) w3) w4)))::word1) = 0"
using and_or_or_s assms or_and_s ucast_s ucast_s2
by (meson word_bool_alg.conj.commute word_bool_alg.conj_zero_left word_bw_assocs(1))
lemma and_or_or_or_or_s:
assumes a1: "((ucast (get_S w1))::word1) = 0 \<and>
- bitAND w2 (0b00000000000000000000000010000000::word32) = 0 \<and>
- bitAND w3 (0b00000000000000000000000010000000::word32) = 0 \<and>
- bitAND w4 (0b00000000000000000000000010000000::word32) = 0 \<and>
- bitAND w5 (0b00000000000000000000000010000000::word32) = 0"
-shows "((ucast (get_S (bitOR (bitOR (bitOR (bitOR (bitAND w1
+ (AND) w2 (0b00000000000000000000000010000000::word32) = 0 \<and>
+ (AND) w3 (0b00000000000000000000000010000000::word32) = 0 \<and>
+ (AND) w4 (0b00000000000000000000000010000000::word32) = 0 \<and>
+ (AND) w5 (0b00000000000000000000000010000000::word32) = 0"
+shows "((ucast (get_S ((OR) ((OR) ((OR) ((OR) ((AND) w1
(0b11111111000011111111111111111111::word32)) w2) w3) w4) w5)))::word1) = 0"
using and_or_or_or_s assms or_and_s ucast_s ucast_s2
by (meson word_ao_absorbs(8) word_bool_alg.conj_disj_distrib2)
lemma write_cpu_PSR_icc_privilege:
assumes a1: "s' = snd (fst (write_cpu (update_PSR_icc n_val z_val v_val c_val
(cpu_reg_val PSR s))
PSR
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: write_cpu_def)
apply (simp add: simpler_modify_def)
apply (simp add: cpu_reg_mod_def update_PSR_icc_def)
apply (simp add: cpu_reg_val_def)
apply auto
using update_PSR_icc_1 apply blast
using update_PSR_icc_1 and_num_1048576_128 and_or_s apply blast
using update_PSR_icc_1 and_num_2097152_128 and_or_s apply blast
using update_PSR_icc_1 and_num_1048576_128 and_num_2097152_128
and_or_or_s apply blast
using update_PSR_icc_1 and_num_4194304_128 and_or_s apply blast
using update_PSR_icc_1 and_num_1048576_128 and_num_4194304_128
and_or_or_s apply blast
using update_PSR_icc_1 and_num_2097152_128 and_num_4194304_128
and_or_or_s apply blast
using update_PSR_icc_1 and_num_1048576_128 and_num_2097152_128 and_num_4194304_128
and_or_or_or_s apply blast
using update_PSR_icc_1 and_num_8388608_128 and_or_s apply blast
using update_PSR_icc_1 and_num_1048576_128 and_num_8388608_128
and_or_or_s apply blast
using update_PSR_icc_1 and_num_2097152_128 and_num_8388608_128
and_or_or_s apply blast
using update_PSR_icc_1 and_num_1048576_128 and_num_2097152_128 and_num_8388608_128
and_or_or_or_s apply blast
using update_PSR_icc_1 and_num_4194304_128 and_num_8388608_128
and_or_or_s apply blast
using update_PSR_icc_1 and_num_1048576_128 and_num_4194304_128 and_num_8388608_128
and_or_or_or_s apply blast
using update_PSR_icc_1 and_num_2097152_128 and_num_4194304_128 and_num_8388608_128
and_or_or_or_s apply blast
using update_PSR_icc_1 and_num_1048576_128 and_num_2097152_128 and_num_4194304_128
and_num_8388608_128 and_or_or_or_or_s by blast
-lemma and_num_4294967167_128: "bitAND (0b11111111111111111111111101111111::word32)
+lemma and_num_4294967167_128: "(AND) (0b11111111111111111111111101111111::word32)
(0b00000000000000000000000010000000::word32) = 0"
by simp
-lemma s_0_word: "((ucast (get_S (bitAND w
+lemma s_0_word: "((ucast (get_S ((AND) w
(0b11111111111111111111111101111111::word32))))::word1) = 0"
apply (simp add: get_S_def)
using and_num_4294967167_128
by (simp add: word_bool_alg.conj.commute word_bw_lcs(1))
-lemma update_PSR_CWP_1: "w' = bitAND w (0b11111111111111111111111111100000::word32)
+lemma update_PSR_CWP_1: "w' = (AND) w (0b11111111111111111111111111100000::word32)
\<and> ((ucast (get_S w))::word1) = 0
\<Longrightarrow> ((ucast (get_S w'))::word1) = 0"
by (simp add: get_S_def word_bw_assocs(1))
lemma write_cpu_PSR_CWP_privilege:
assumes a1: "s' = snd (fst (write_cpu (update_CWP cwp_val
(cpu_reg_val PSR s))
PSR
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: write_cpu_def)
apply (simp add: simpler_modify_def)
apply (simp add: cpu_reg_mod_def)
apply (simp add: update_CWP_def)
apply (simp add: Let_def)
apply auto
apply (simp add: cpu_reg_val_def)
using s_0_word by blast
lemma logical_instr_sub1_privilege:
assumes a1: "s' = snd (fst (logical_instr_sub1 instr_name result
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "instr_name = logic_type ANDcc \<or>
instr_name = logic_type ANDNcc \<or>
instr_name = logic_type ORcc \<or>
instr_name = logic_type ORNcc \<or>
instr_name = logic_type XORcc \<or> instr_name = logic_type XNORcc")
case True
then show ?thesis using a1
apply (simp add: logical_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: logical_new_psr_val_def)
using write_cpu_PSR_icc_privilege by blast
next
case False
then show ?thesis using a1
apply (simp add: logical_instr_sub1_def)
by (simp add: return_def)
qed
lemma logical_instr_privilege:
assumes a1: "s' = snd (fst (logical_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: logical_instr_def)
apply (simp add: Let_def simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply auto
apply (meson get_curr_win_privilege logical_instr_sub1_privilege write_reg_privilege)
by (meson get_curr_win_privilege logical_instr_sub1_privilege write_reg_privilege)
method shift_instr_privilege_proof = (
(simp add: shift_instr_def),
(simp add: Let_def),
(simp add: simpler_gets_def),
(simp add: bind_def h1_def h2_def Let_def case_prod_unfold),
auto,
(blast intro: get_curr_win_privilege write_reg_privilege),
(blast intro: get_curr_win_privilege write_reg_privilege)
)
lemma shift_instr_privilege:
assumes a1: "s' = snd (fst (shift_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "(fst instr = shift_type SLL) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0)")
case True
then show ?thesis using a1
by shift_instr_privilege_proof
next
case False
then have f1: "\<not>((fst instr = shift_type SLL) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0))"
by auto
then show ?thesis
proof (cases "(fst instr = shift_type SRL) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0)")
case True
then show ?thesis using a1 f1
by shift_instr_privilege_proof
next
case False
then have f2: "\<not>((fst instr = shift_type SRL) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0))"
by auto
then show ?thesis
proof (cases "(fst instr = shift_type SRA) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0)")
case True
then show ?thesis using a1 f1 f2
by shift_instr_privilege_proof
next
case False
then show ?thesis using a1 f1 f2
apply (simp add: shift_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def)
apply (simp add: bind_def h1_def h2_def Let_def case_prod_unfold)
apply (simp add: return_def)
using get_curr_win_privilege by blast
qed
qed
qed
lemma add_instr_sub1_privilege:
assumes a1: "s' = snd (fst (add_instr_sub1 instr_name result rs1_val operand2
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "instr_name = arith_type ADDcc \<or> instr_name = arith_type ADDXcc")
case True
then show ?thesis using a1
apply (simp add: add_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (blast intro: write_cpu_PSR_icc_privilege)
next
case False
then show ?thesis using a1
apply (simp add: add_instr_sub1_def)
by (simp add: return_def)
qed
lemma add_instr_privilege:
assumes a1: "s' = snd (fst (add_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: add_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (meson add_instr_sub1_privilege get_curr_win_privilege write_reg_privilege)
lemma sub_instr_sub1_privilege:
assumes a1: "s' = snd (fst (sub_instr_sub1 instr_name result rs1_val operand2
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "instr_name = arith_type SUBcc \<or> instr_name = arith_type SUBXcc")
case True
then show ?thesis using a1
apply (simp add: sub_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (blast intro: write_cpu_PSR_icc_privilege)
next
case False
then show ?thesis using a1
apply (simp add: sub_instr_sub1_def)
by (simp add: return_def)
qed
lemma sub_instr_privilege:
assumes a1: "s' = snd (fst (sub_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: sub_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (meson sub_instr_sub1_privilege get_curr_win_privilege write_reg_privilege)
lemma mul_instr_sub1_privilege:
assumes a1: "s' = snd (fst (mul_instr_sub1 instr_name result
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "instr_name \<in> {arith_type SMULcc,arith_type UMULcc}")
case True
then show ?thesis using a1
apply (simp add: mul_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (blast intro: write_cpu_PSR_icc_privilege)
next
case False
then show ?thesis using a1
apply (simp add: mul_instr_sub1_def)
by (simp add: return_def)
qed
lemma mul_instr_privilege:
assumes a1: "s' = snd (fst (mul_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: mul_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (meson get_curr_win_privilege mul_instr_sub1_privilege write_cpu_y_privilege write_reg_privilege)
lemma div_write_new_val_privilege:
assumes a1: "s' = snd (fst (div_write_new_val i result temp_V
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "(fst i) \<in> {arith_type UDIVcc,arith_type SDIVcc}")
case True
then show ?thesis using a1
apply (simp add: div_write_new_val_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (blast intro: write_cpu_PSR_icc_privilege)
next
case False
then show ?thesis using a1
apply (simp add: div_write_new_val_def)
by (simp add: return_def)
qed
lemma div_comp_privilege:
assumes a1: "s' = snd (fst (div_comp instr rs1 rd operand2
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: div_comp_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (meson get_curr_win_privilege div_write_new_val_privilege write_reg_privilege)
lemma div_instr_privilege:
assumes a1: "s' = snd (fst (div_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: div_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: return_def)
apply auto
using raise_trap_privilege apply blast
using div_comp_privilege by blast
lemma save_retore_sub1_privilege:
assumes a1: "s' = snd (fst (save_retore_sub1 result new_cwp rd
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: save_retore_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
using write_cpu_PSR_CWP_privilege write_reg_privilege by blast
method save_restore_instr_privilege_proof = (
(simp add: save_restore_instr_def),
(simp add: Let_def),
(simp add: simpler_gets_def bind_def h1_def h2_def Let_def),
(simp add: case_prod_unfold),
auto,
(blast intro: get_curr_win_privilege raise_trap_privilege),
(simp add: simpler_gets_def bind_def h1_def h2_def Let_def case_prod_unfold),
(blast intro: get_curr_win_privilege save_retore_sub1_privilege)
)
lemma save_restore_instr_privilege:
assumes a1: "s' = snd (fst (save_restore_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "fst instr = ctrl_type SAVE")
case True
then have f1: "fst instr = ctrl_type SAVE" by auto
then show ?thesis using a1
by save_restore_instr_privilege_proof
next
case False
then show ?thesis using a1
by save_restore_instr_privilege_proof
qed
lemma call_instr_privilege:
assumes a1: "s' = snd (fst (call_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: call_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (meson get_curr_win_privilege write_cpu_npc_privilege write_cpu_pc_privilege write_reg_privilege)
lemma jmpl_instr_privilege:
assumes a1: "s' = snd (fst (jmpl_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: jmpl_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply auto
using get_curr_win_privilege raise_trap_privilege apply blast
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (meson get_curr_win_privilege write_cpu_npc_privilege write_cpu_pc_privilege write_reg_privilege)
lemma rett_instr_privilege:
assumes a1: "snd (rett_instr i s) = False \<and>
s' = snd (fst (rett_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: rett_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply auto
apply (simp add: case_prod_unfold)
apply (simp add: return_def)
apply (blast intro: raise_trap_privilege)
apply (simp add: bind_def h1_def h2_def Let_def)
by (simp add: case_prod_unfold fail_def)
method read_state_reg_instr_privilege_proof = (
(simp add: read_state_reg_instr_def),
(simp add: Let_def),
(simp add: simpler_gets_def bind_def h1_def h2_def Let_def),
(simp add: case_prod_unfold)
)
lemma read_state_reg_instr_privilege:
assumes a1: "s' = snd (fst (read_state_reg_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "(fst instr \<in> {sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR} \<or>
(fst instr = sreg_type RDASR \<and> privileged_ASR (get_operand_w5 ((snd instr)!0))))")
case True
then have "(fst instr \<in> {sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR} \<or>
(fst instr = sreg_type RDASR \<and> privileged_ASR (get_operand_w5 ((snd instr)!0))))
\<and> ((ucast (get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s))))))::word1) = 0"
by (metis assms get_curr_win_privilege)
then show ?thesis using a1
apply read_state_reg_instr_privilege_proof
by (blast intro: raise_trap_privilege get_curr_win_privilege)
next
case False
then have f1: "\<not>((fst instr = sreg_type RDPSR \<or>
fst instr = sreg_type RDWIM \<or>
fst instr = sreg_type RDTBR \<or>
fst instr = sreg_type RDASR \<and> privileged_ASR (get_operand_w5 (snd instr ! 0))) \<and>
ucast (get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s))))) = 0)"
by blast
then show ?thesis
proof (cases "illegal_instruction_ASR (get_operand_w5 ((snd instr)!0))")
case True
then show ?thesis using a1 f1
apply read_state_reg_instr_privilege_proof
by (simp add: illegal_instruction_ASR_def)
next
case False
then have f2: "\<not>(illegal_instruction_ASR (get_operand_w5 ((snd instr)!0)))"
by auto
then show ?thesis
proof (cases "(get_operand_w5 ((snd instr)!1)) \<noteq> 0")
case True
then have f3: "(get_operand_w5 ((snd instr)!1)) \<noteq> 0"
by auto
then show ?thesis
proof (cases "fst instr = sreg_type RDY")
case True
then show ?thesis using a1 f1 f2 f3
apply (simp add: read_state_reg_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (blast intro: get_curr_win_privilege write_reg_privilege)
next
case False
then have f4: "\<not>(fst instr = sreg_type RDY)" by auto
then show ?thesis
proof (cases "fst instr = sreg_type RDASR")
case True
then show ?thesis using a1 f1 f2 f3 f4
apply read_state_reg_instr_privilege_proof
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (blast intro: get_curr_win_privilege write_reg_privilege)
next
case False
then have f5: "\<not>(fst instr = sreg_type RDASR)" by auto
then show ?thesis
proof (cases "fst instr = sreg_type RDPSR")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5
apply read_state_reg_instr_privilege_proof
by (blast intro: get_curr_win_privilege write_reg_privilege)
next
case False
then show ?thesis using a1 f1 f2 f3 f4 f5
apply read_state_reg_instr_privilege_proof
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (blast intro: get_curr_win_privilege write_reg_privilege)
qed
qed
qed
next
case False
then show ?thesis using a1
apply read_state_reg_instr_privilege_proof
apply (simp add: return_def)
using f1 f2 get_curr_win_privilege by blast
qed
qed
qed
method write_state_reg_instr_privilege_proof = (
(simp add: write_state_reg_instr_def),
(simp add: Let_def),
(simp add: simpler_gets_def bind_def h1_def h2_def Let_def),
(simp add: case_prod_unfold)
)
lemma write_state_reg_instr_privilege:
assumes a1: "s' = snd (fst (write_state_reg_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "fst instr = sreg_type WRY")
case True
then show ?thesis using a1
apply write_state_reg_instr_privilege_proof
apply (simp add: simpler_modify_def)
apply (simp add: delayed_pool_add_def DELAYNUM_def)
by (blast intro: cpu_reg_mod_y_privilege get_curr_win_privilege)
next
case False
then have f1: "\<not>(fst instr = sreg_type WRY)" by auto
then show ?thesis
proof (cases "fst instr = sreg_type WRASR")
case True
then show ?thesis
using a1 f1
apply write_state_reg_instr_privilege_proof
apply (simp add: simpler_modify_def)
apply auto
using illegal_instruction_ASR_def apply blast
using illegal_instruction_ASR_def apply blast
using illegal_instruction_ASR_def apply blast
using raise_trap_privilege get_curr_win_privilege apply blast
apply (simp add: simpler_modify_def delayed_pool_add_def DELAYNUM_def)
using cpu_reg_mod_asr_privilege get_curr_win_privilege apply blast
apply (simp add: simpler_modify_def delayed_pool_add_def DELAYNUM_def)
using cpu_reg_mod_asr_privilege get_curr_win_privilege by blast
next
case False
then have f2: "\<not>(fst instr = sreg_type WRASR)" by auto
have f3: "get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s)))) = 0"
using get_curr_win_privilege a1 by (metis ucast_id)
then show ?thesis
proof (cases "fst instr = sreg_type WRPSR")
case True
then show ?thesis using a1 f1 f2 f3
apply write_state_reg_instr_privilege_proof
by (metis raise_trap_privilege ucast_0)
next
case False
then have f4: "\<not>(fst instr = sreg_type WRPSR)" by auto
then show ?thesis
proof (cases "fst instr = sreg_type WRWIM")
case True
then show ?thesis using a1 f1 f2 f3 f4
apply write_state_reg_instr_privilege_proof
by (metis raise_trap_privilege ucast_0)
next
case False
then have f5: "\<not>(fst instr = sreg_type WRWIM)" by auto
then show ?thesis using a1 f1 f2 f3 f4 f5
apply write_state_reg_instr_privilege_proof
by (metis raise_trap_privilege ucast_0)
qed
qed
qed
qed
lemma flush_instr_privilege:
assumes a1: "s' = snd (fst (flush_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: flush_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def simpler_modify_def)
by (auto simp add: flush_cache_all_privilege)
lemma branch_instr_privilege:
assumes a1: "s' = snd (fst (branch_instr instr
(s::(('a::len0) sparc_state))))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
using a1
apply (simp add: branch_instr_def)
apply (simp add: Let_def simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold return_def)
by (meson set_annul_privilege write_cpu_npc_privilege write_cpu_pc_privilege)
method dispath_instr_privilege_proof = (
(simp add: dispatch_instruction_def),
(simp add: simpler_gets_def bind_def h1_def h2_def Let_def),
(simp add: Let_def)
)
lemma dispath_instr_privilege:
assumes a1: "snd (dispatch_instruction instr s) = False \<and>
s' = snd (fst (dispatch_instruction instr s))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "get_trap_set s = {}")
case True
then have f1: "get_trap_set s = {}" by auto
show ?thesis
proof (cases "fst instr \<in> {load_store_type LDSB,load_store_type LDUB,
load_store_type LDUBA,load_store_type LDUH,load_store_type LD,
load_store_type LDA,load_store_type LDD}")
case True
then show ?thesis using a1 f1
apply dispath_instr_privilege_proof
by (blast intro: load_instr_privilege)
next
case False
then have f2: "\<not>(fst instr \<in> {load_store_type LDSB,load_store_type LDUB,
load_store_type LDUBA,load_store_type LDUH,load_store_type LD,
load_store_type LDA,load_store_type LDD})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {load_store_type STB,load_store_type STH,
load_store_type ST,load_store_type STA,load_store_type STD}")
case True
then show ?thesis using a1 f1 f2
apply dispath_instr_privilege_proof
by (blast intro: store_instr_privilege)
next
case False
then have f3: "\<not>(fst instr \<in> {load_store_type STB,load_store_type STH,
load_store_type ST,load_store_type STA,load_store_type STD})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {sethi_type SETHI}")
case True
then show ?thesis using a1 f1 f2 f3
apply dispath_instr_privilege_proof
by (blast intro: sethi_instr_privilege)
next
case False
then have f4: "\<not>(fst instr \<in> {sethi_type SETHI})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {nop_type NOP}")
case True
then show ?thesis using a1 f1 f2 f3 f4
apply dispath_instr_privilege_proof
by (blast intro: nop_instr_privilege)
next
case False
then have f5: "\<not>(fst instr \<in> {nop_type NOP})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {logic_type ANDs,logic_type ANDcc,logic_type ANDN,
logic_type ANDNcc,logic_type ORs,logic_type ORcc,logic_type ORN,
logic_type XORs,logic_type XNOR}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5
apply dispath_instr_privilege_proof
by (blast intro: logical_instr_privilege)
next
case False
then have f6: "\<not>(fst instr \<in> {logic_type ANDs,logic_type ANDcc,logic_type ANDN,
logic_type ANDNcc,logic_type ORs,logic_type ORcc,logic_type ORN,
logic_type XORs,logic_type XNOR})"
by auto
show ?thesis
proof (cases "fst instr \<in> {shift_type SLL,shift_type SRL,shift_type SRA}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6
apply dispath_instr_privilege_proof
by (blast intro: shift_instr_privilege)
next
case False
then have f7: "\<not>(fst instr \<in> {shift_type SLL,shift_type SRL,shift_type SRA})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {arith_type ADD,arith_type ADDcc,arith_type ADDX}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7
apply dispath_instr_privilege_proof
by (blast intro: add_instr_privilege)
next
case False
then have f8: "\<not>(fst instr \<in> {arith_type ADD,arith_type ADDcc,arith_type ADDX})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {arith_type SUB,arith_type SUBcc,arith_type SUBX}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8
apply dispath_instr_privilege_proof
by (blast intro: sub_instr_privilege)
next
case False
then have f9: "\<not>(fst instr \<in> {arith_type SUB,arith_type SUBcc,arith_type SUBX})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {arith_type UMUL,arith_type SMUL,arith_type SMULcc}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9
apply dispath_instr_privilege_proof
by (blast intro: mul_instr_privilege)
next
case False
then have f10: "\<not>(fst instr \<in> {arith_type UMUL,arith_type SMUL,
arith_type SMULcc})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {arith_type UDIV,arith_type UDIVcc,arith_type SDIV}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
apply dispath_instr_privilege_proof
by (blast intro: div_instr_privilege)
next
case False
then have f11: "\<not>(fst instr \<in> {arith_type UDIV,
arith_type UDIVcc,arith_type SDIV})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {ctrl_type SAVE,ctrl_type RESTORE}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
apply dispath_instr_privilege_proof
by (blast intro: save_restore_instr_privilege)
next
case False
then have f12: "\<not>(fst instr \<in> {ctrl_type SAVE,ctrl_type RESTORE})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {call_type CALL}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
apply dispath_instr_privilege_proof
by (blast intro: call_instr_privilege)
next
case False
then have f13: "\<not>(fst instr \<in> {call_type CALL})" by auto
then show ?thesis
proof (cases "fst instr \<in> {ctrl_type JMPL}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
apply dispath_instr_privilege_proof
by (blast intro: jmpl_instr_privilege)
next
case False
then have f14: "\<not>(fst instr \<in> {ctrl_type JMPL})" by auto
then show ?thesis
proof (cases "fst instr \<in> {ctrl_type RETT}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
f14
apply dispath_instr_privilege_proof
by (blast intro: rett_instr_privilege)
next
case False
then have f15: "\<not>(fst instr \<in> {ctrl_type RETT})" by auto
then show ?thesis
proof (cases "fst instr \<in> {sreg_type RDY,sreg_type RDPSR,
sreg_type RDWIM, sreg_type RDTBR}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
f13 f14 f15
apply dispath_instr_privilege_proof
by (blast intro: read_state_reg_instr_privilege)
next
case False
then have f16: "\<not>(fst instr \<in> {sreg_type RDY,sreg_type RDPSR,
sreg_type RDWIM, sreg_type RDTBR})" by auto
then show ?thesis
proof (cases "fst instr \<in> {sreg_type WRY,sreg_type WRPSR,
sreg_type WRWIM, sreg_type WRTBR}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
f13 f14 f15 f16
apply dispath_instr_privilege_proof
by (blast intro: write_state_reg_instr_privilege)
next
case False
then have f17: "\<not>(fst instr \<in> {sreg_type WRY,sreg_type WRPSR,
sreg_type WRWIM, sreg_type WRTBR})" by auto
then show ?thesis
proof (cases "fst instr \<in> {load_store_type FLUSH}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
f12 f13 f14 f15 f16 f17
apply dispath_instr_privilege_proof
by (blast intro: flush_instr_privilege)
next
case False
then have f18: "\<not>(fst instr \<in> {load_store_type FLUSH})" by auto
then show ?thesis
proof (cases "fst instr \<in> {bicc_type BE,bicc_type BNE,
bicc_type BGU,bicc_type BLE,bicc_type BL,bicc_type BGE,
bicc_type BNEG,bicc_type BG,bicc_type BCS,bicc_type BLEU,
bicc_type BCC,bicc_type BA,bicc_type BN}")
case True
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
f12 f13 f14 f15 f16 f17 f18
apply dispath_instr_privilege_proof
by (blast intro: branch_instr_privilege)
next
case False
then show ?thesis using a1 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
f12 f13 f14 f15 f16 f17 f18
apply dispath_instr_privilege_proof
by (simp add: fail_def)
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
next
case False
then show ?thesis using a1
apply (simp add: dispatch_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: Let_def)
by (simp add: return_def)
qed
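text \<open>Note on the proof above: the nested case analysis enumerates the instruction
classes handled by \<open>dispatch_instruction\<close> (load/store, \<open>SETHI\<close>, \<open>NOP\<close>, logical, shift,
arithmetic, control transfer, state-register access, \<open>FLUSH\<close>, and branches), and each
class is discharged with its corresponding privilege lemma. In the final branch, where
the opcode matches none of these classes, the dispatcher's catch-all case applies and
the goal is closed by unfolding \<open>fail_def\<close>.\<close>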
lemma execute_instr_sub1_privilege:
assumes a1: "snd (execute_instr_sub1 i s) = False \<and>
s' = snd (fst (execute_instr_sub1 i s))
\<and> ((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "get_trap_set s = {} \<and> fst i \<notin> {call_type CALL,ctrl_type RETT,ctrl_type JMPL,
bicc_type BE,bicc_type BNE,bicc_type BGU,
bicc_type BLE,bicc_type BL,bicc_type BGE,
bicc_type BNEG,bicc_type BG,
bicc_type BCS,bicc_type BLEU,bicc_type BCC,
bicc_type BA,bicc_type BN}")
case True
then show ?thesis using a1
apply (simp add: execute_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold return_def)
by (auto intro: write_cpu_pc_privilege write_cpu_npc_privilege)
next
case False
then show ?thesis using a1
apply (simp add: execute_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold return_def)
by auto
qed
text \<open>
Assume that there is no \<open>delayed_write\<close> and
that there are no traps to be executed.
If an instruction is executed in user mode,
the privilege is not raised to supervisor mode by
the execution.\<close>
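text \<open>Here \<open>get_S (cpu_reg_val PSR s)\<close> reads the supervisor (S) bit of the PSR;
casting it to \<open>word1\<close> and comparing with 0 expresses that the machine is in user
mode, both in the initial state \<open>s\<close> and in the resulting state \<open>s'\<close>.\<close>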
theorem safe_privilege :
assumes a1: "get_delayed_pool s = [] \<and> get_trap_set s = {} \<and>
snd (execute_instruction() s) = False \<and>
s' = snd (fst (execute_instruction() s)) \<and>
((ucast (get_S (cpu_reg_val PSR s)))::word1) = 0"
shows "((ucast (get_S (cpu_reg_val PSR s')))::word1) = 0"
proof (cases "exe_mode_val s")
case True
then have f2: "exe_mode_val s = True" by auto
then show ?thesis
proof (cases "\<exists>e. fetch_instruction (delayed_pool_write s) = Inl e")
case True
then have f3: "\<exists>e. fetch_instruction (delayed_pool_write s) = Inl e"
by auto
then have f4: "\<not> (\<exists>v. fetch_instruction (delayed_pool_write s) = Inr v)"
using fetch_instr_result_3 by auto
then show ?thesis using a1 f2 f3 raise_trap_result empty_delayed_pool_write_privilege raise_trap_privilege
apply (simp add: execute_instruction_def)
apply (simp add: exec_gets return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: case_prod_unfold)
by (blast intro: empty_delayed_pool_write_privilege raise_trap_privilege)
next
case False
then have f5: "\<exists>v. fetch_instruction (delayed_pool_write s) = Inr v"
using fetch_instr_result_1 by blast
then have f6: "\<exists>v. fetch_instruction (delayed_pool_write s) = Inr v \<and>
\<not> (\<exists>e. ((decode_instruction v)::(Exception list + instruction)) = Inl e)"
using a1 f2 dispatch_fail by blast
then have f7: "\<exists>v. fetch_instruction (delayed_pool_write s) = Inr v \<and>
(\<exists>v1. ((decode_instruction v)::(Exception list + instruction)) = Inr v1)"
using decode_instr_result_4 by auto
then show ?thesis
proof (cases "annul_val (delayed_pool_write s)")
case True
then show ?thesis using a1 f2 f7
apply (simp add: execute_instruction_def)
apply (simp add: exec_gets return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (auto intro: empty_delayed_pool_write_privilege
set_annul_privilege write_cpu_npc_privilege write_cpu_pc_privilege)
next
case False
then show ?thesis using a1 f2 f7
apply (simp add: execute_instruction_def)
apply (simp add: exec_gets return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: simpler_modify_def return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
by (auto intro: empty_delayed_pool_write_privilege dispath_instr_privilege
execute_instr_sub1_privilege)
qed
qed
next
case False
then show ?thesis using a1
apply (simp add: execute_instruction_def)
by (simp add: simpler_gets_def bind_def h1_def h2_def Let_def return_def)
qed
(*********************************************************************)
section \<open>Single-step non-interference property.\<close>
(*********************************************************************)
definition user_accessible:: "('a::len0) sparc_state \<Rightarrow> phys_address \<Rightarrow> bool" where
"user_accessible s pa \<equiv> \<exists>va p. (virt_to_phys va (mmu s) (mem s)) = Some p \<and>
mmu_readable (get_acc_flag (snd p)) 10 \<and>
(fst p) = pa" \<comment> \<open>Passing \<open>asi = 8\<close> is the same.\<close>
lemma user_accessible_8:
assumes a1: "mmu_readable (get_acc_flag (snd p)) 8"
shows "mmu_readable (get_acc_flag (snd p)) 10"
using a1 by (simp add: mmu_readable_def)
definition mem_equal:: "('a) sparc_state \<Rightarrow> ('a) sparc_state \<Rightarrow>
phys_address \<Rightarrow> bool" where
"mem_equal s1 s2 pa \<equiv>
(mem s1) 8 (pa AND 68719476732) = (mem s2) 8 (pa AND 68719476732) \<and>
(mem s1) 8 ((pa AND 68719476732) + 1) = (mem s2) 8 ((pa AND 68719476732) + 1) \<and>
(mem s1) 8 ((pa AND 68719476732) + 2) = (mem s2) 8 ((pa AND 68719476732) + 2) \<and>
(mem s1) 8 ((pa AND 68719476732) + 3) = (mem s2) 8 ((pa AND 68719476732) + 3) \<and>
(mem s1) 9 (pa AND 68719476732) = (mem s2) 9 (pa AND 68719476732) \<and>
(mem s1) 9 ((pa AND 68719476732) + 1) = (mem s2) 9 ((pa AND 68719476732) + 1) \<and>
(mem s1) 9 ((pa AND 68719476732) + 2) = (mem s2) 9 ((pa AND 68719476732) + 2) \<and>
(mem s1) 9 ((pa AND 68719476732) + 3) = (mem s2) 9 ((pa AND 68719476732) + 3) \<and>
(mem s1) 10 (pa AND 68719476732) = (mem s2) 10 (pa AND 68719476732) \<and>
(mem s1) 10 ((pa AND 68719476732) + 1) = (mem s2) 10 ((pa AND 68719476732) + 1) \<and>
(mem s1) 10 ((pa AND 68719476732) + 2) = (mem s2) 10 ((pa AND 68719476732) + 2) \<and>
(mem s1) 10 ((pa AND 68719476732) + 3) = (mem s2) 10 ((pa AND 68719476732) + 3) \<and>
(mem s1) 11 (pa AND 68719476732) = (mem s2) 11 (pa AND 68719476732) \<and>
(mem s1) 11 ((pa AND 68719476732) + 1) = (mem s2) 11 ((pa AND 68719476732) + 1) \<and>
(mem s1) 11 ((pa AND 68719476732) + 2) = (mem s2) 11 ((pa AND 68719476732) + 2) \<and>
(mem s1) 11 ((pa AND 68719476732) + 3) = (mem s2) 11 ((pa AND 68719476732) + 3)"
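text \<open>An arithmetic aside on the constant above: the mask 68719476732 equals
0xFFFFFFFFC, i.e. \<open>2 ^ 36 - 4\<close>, so \<open>pa AND 68719476732\<close> clears the two
least-significant bits of the 36-bit physical address \<open>pa\<close>. \<open>mem_equal\<close> therefore
states that the two states hold identical values in all four bytes of the aligned
word containing \<open>pa\<close>, under each of the ASIs 8, 9, 10 and 11.\<close>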
text \<open>\<open>low_equal\<close> defines the equivalence relation over two SPARC
states that is analogous to the \<open>=\<^sub>L\<close> relation over memory contexts
in the traditional non-interference theorem.\<close>
definition low_equal:: "('a::len0) sparc_state \<Rightarrow> ('a) sparc_state \<Rightarrow> bool" where
"low_equal s1 s2 \<equiv>
(cpu_reg s1) = (cpu_reg s2) \<and>
(user_reg s1) = (user_reg s2) \<and>
(sys_reg s1) = (sys_reg s2) \<and>
(\<forall>va. (virt_to_phys va (mmu s1) (mem s1)) = (virt_to_phys va (mmu s2) (mem s2))) \<and>
(\<forall>pa. (user_accessible s1 pa) \<longrightarrow> mem_equal s1 s2 pa) \<and>
(mmu s1) = (mmu s2) \<and>
(state_var s1) = (state_var s2) \<and>
(traps s1) = (traps s2) \<and>
(undef s1) = (undef s2)
"
lemma low_equal_com: "low_equal s1 s2 \<Longrightarrow> low_equal s2 s1"
apply (simp add: low_equal_def)
apply (simp add: mem_equal_def user_accessible_def)
by metis
lemma non_exe_mode_equal: "exe_mode_val s = False \<and>
get_trap_set s = {} \<and>
Some t = NEXT s \<Longrightarrow>
t = s"
apply (simp add: NEXT_def execute_instruction_def)
apply auto
by (simp add: simpler_gets_def bind_def h1_def h2_def Let_def return_def)
lemma exe_mode_low_equal:
assumes a1: "low_equal s1 s2"
shows " exe_mode_val s1 = exe_mode_val s2"
using a1 apply (simp add: low_equal_def)
by (simp add: exe_mode_val_def)
lemma mem_val_mod_state: "mem_val_alt asi a s = mem_val_alt asi a
(s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>)"
apply (simp add: mem_val_alt_def)
by (simp add: Let_def)
lemma mem_val_w32_mod_state: "mem_val_w32 asi a s = mem_val_w32 asi a
(s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>)"
apply (simp add: mem_val_w32_def)
apply (simp add: Let_def)
by (metis mem_val_mod_state)
lemma load_word_mem_mod_state: "load_word_mem s addr asi = load_word_mem
(s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr asi"
apply (simp add: load_word_mem_def)
apply (case_tac "virt_to_phys addr (mmu s) (mem s) = None")
apply auto
by (auto simp add: mem_val_w32_mod_state)
lemma load_word_mem2_mod_state:
"fst (case load_word_mem s addr asi of None \<Rightarrow> (None, s)
| Some w \<Rightarrow> (Some w, add_data_cache s addr w 15)) =
fst (case load_word_mem (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr asi of
None \<Rightarrow> (None, (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>))
| Some w \<Rightarrow> (Some w, add_data_cache (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr w 15))"
proof (cases "load_word_mem s addr asi = None")
case True
then have "load_word_mem s addr asi = None \<and>
load_word_mem (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr asi = None"
using load_word_mem_mod_state by metis
then show ?thesis by auto
next
case False
then have "\<exists>w. load_word_mem s addr asi = Some w" by auto
then have "\<exists>w. load_word_mem s addr asi = Some w \<and>
load_word_mem (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr asi = Some w"
using load_word_mem_mod_state by metis
then show ?thesis by auto
qed
lemma load_word_mem3_mod_state:
"fst (case load_word_mem s addr asi of None \<Rightarrow> (None, s)
| Some w \<Rightarrow> (Some w, add_instr_cache s addr w 15)) =
fst (case load_word_mem (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr asi of
None \<Rightarrow> (None, (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>))
| Some w \<Rightarrow> (Some w, add_instr_cache (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr w 15))"
proof (cases "load_word_mem s addr asi = None")
case True
then have "load_word_mem s addr asi = None \<and>
load_word_mem (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr asi = None"
using load_word_mem_mod_state by metis
then show ?thesis by auto
next
case False
then have "\<exists>w. load_word_mem s addr asi = Some w" by auto
then have "\<exists>w. load_word_mem s addr asi = Some w \<and>
load_word_mem (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr asi = Some w"
using load_word_mem_mod_state by metis
then show ?thesis by auto
qed
lemma read_dcache_mod_state: "read_data_cache s addr = read_data_cache
(s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr"
apply (simp add: read_data_cache_def)
by (simp add: dcache_val_def)
lemma read_dcache2_mod_state:
"fst (case read_data_cache s addr of None \<Rightarrow> (None, s)
| Some w \<Rightarrow> (Some w, s)) =
fst (case read_data_cache (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr of
None \<Rightarrow> (None, (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>))
| Some w \<Rightarrow> (Some w, (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>)))"
proof (cases "read_data_cache s addr = None")
case True
then have "read_data_cache s addr = None \<and>
read_data_cache (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr = None"
using read_dcache_mod_state by metis
then show ?thesis by auto
next
case False
then have "\<exists>w. read_data_cache s addr = Some w" by auto
then have "\<exists>w. read_data_cache s addr = Some w \<and>
read_data_cache (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr = Some w"
using read_dcache_mod_state by metis
then show ?thesis by auto
qed
lemma read_icache_mod_state: "read_instr_cache s addr = read_instr_cache
(s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr"
apply (simp add: read_instr_cache_def)
by (simp add: icache_val_def)
lemma read_icache2_mod_state:
"fst (case read_instr_cache s addr of None \<Rightarrow> (None, s)
| Some w \<Rightarrow> (Some w, s)) =
fst (case read_instr_cache (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr of
None \<Rightarrow> (None, (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>))
| Some w \<Rightarrow> (Some w, (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>)))"
proof (cases "read_instr_cache s addr = None")
case True
then have "read_instr_cache s addr = None \<and>
read_instr_cache (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr = None"
using read_icache_mod_state by metis
then show ?thesis by auto
next
case False
then have "\<exists>w. read_instr_cache s addr = Some w" by auto
then have "\<exists>w. read_instr_cache s addr = Some w \<and>
read_instr_cache (s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) addr = Some w"
using read_icache_mod_state by metis
then show ?thesis by auto
qed
lemma mem_read_mod_state: "fst (memory_read asi addr s) =
fst (memory_read asi addr
(s\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>))"
apply (simp add: memory_read_def)
apply (case_tac "uint asi = 1")
apply (simp add: Let_def)
apply (metis load_word_mem_mod_state option.distinct(1))
apply (case_tac "uint asi = 2")
apply (simp add: Let_def)
apply (simp add: sys_reg_val_def)
apply (case_tac "uint asi \<in> {8,9}")
apply (simp add: Let_def)
apply (simp add: load_word_mem3_mod_state)
apply (simp add: load_word_mem_mod_state)
apply (case_tac "uint asi \<in> {10,11}")
apply (simp add: Let_def)
apply (simp add: load_word_mem2_mod_state)
apply (simp add: load_word_mem_mod_state)
apply (case_tac "uint asi = 13")
apply (simp add: Let_def)
apply (simp add: read_icache2_mod_state)
apply (case_tac "uint asi = 15")
apply (simp add: Let_def)
apply (simp add: read_dcache2_mod_state)
apply (case_tac "uint asi = 25")
apply (simp add: Let_def)
apply (case_tac "uint asi = 28")
apply (simp add: Let_def)
apply (simp add: mem_val_w32_mod_state)
by (simp add: Let_def)
lemma insert_trap_mem: "fst (memory_read asi addr s) =
fst (memory_read asi addr (s\<lparr>traps := new_traps\<rparr>))"
proof -
have "fst (memory_read asi addr s) =
fst (memory_read asi addr
(s\<lparr>cpu_reg := (cpu_reg s),
user_reg := (user_reg s),
dwrite := (dwrite s),
state_var := (state_var s),
traps := new_traps,
undef := (undef s)\<rparr>))"
using mem_read_mod_state by blast
then show ?thesis by auto
qed
lemma cpu_reg_mod_mem: "fst (memory_read asi addr s) =
fst (memory_read asi addr (s\<lparr>cpu_reg := new_cpu_reg\<rparr>))"
proof -
have "fst (memory_read asi addr s) =
fst (memory_read asi addr
(s\<lparr>cpu_reg := new_cpu_reg,
user_reg := (user_reg s),
dwrite := (dwrite s),
state_var := (state_var s),
traps := (traps s),
undef := (undef s)\<rparr>))"
using mem_read_mod_state by blast
then show ?thesis by auto
qed
lemma user_reg_mod_mem: "fst (memory_read asi addr s) =
fst (memory_read asi addr (s\<lparr>user_reg := new_user_reg\<rparr>))"
proof -
have "fst (memory_read asi addr s) =
fst (memory_read asi addr
(s\<lparr>cpu_reg := (cpu_reg s),
user_reg := new_user_reg,
dwrite := (dwrite s),
state_var := (state_var s),
traps := (traps s),
undef := (undef s)\<rparr>))"
using mem_read_mod_state by blast
then show ?thesis by auto
qed
lemma annul_mem: "fst (memory_read asi addr s) =
fst (memory_read asi addr
(s\<lparr>state_var := new_state_var,
cpu_reg := new_cpu_reg\<rparr>))"
proof -
have "fst (memory_read asi addr s) =
fst (memory_read asi addr
(s\<lparr>cpu_reg := new_cpu_reg,
user_reg := (user_reg s),
dwrite := (dwrite s),
state_var := new_state_var,
traps := (traps s),
undef := (undef s)\<rparr>))"
using mem_read_mod_state by blast
then have "fst (memory_read asi addr s) =
fst (memory_read asi addr
(s\<lparr>cpu_reg := new_cpu_reg,
state_var := new_state_var\<rparr>))"
by auto
then show ?thesis
by (metis Sparc_State.sparc_state.surjective Sparc_State.sparc_state.update_convs(1) Sparc_State.sparc_state.update_convs(8))
qed
lemma state_var_mod_mem: "fst (memory_read asi addr s) =
fst (memory_read asi addr (s\<lparr>state_var := new_state_var\<rparr>))"
proof -
have "fst (memory_read asi addr s) =
fst (memory_read asi addr
(s\<lparr>cpu_reg := (cpu_reg s),
user_reg := (user_reg s),
dwrite := (dwrite s),
state_var := new_state_var,
traps := (traps s),
undef := (undef s)\<rparr>))"
using mem_read_mod_state by blast
then show ?thesis by auto
qed
lemma mod_state_low_equal: "low_equal s1 s2 \<and>
t1 = (s1\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) \<and>
t2 = (s2\<lparr>cpu_reg := new_cpu_reg,
user_reg := new_user_reg,
dwrite := new_dwrite,
state_var := new_state_var,
traps := new_traps,
undef := new_undef\<rparr>) \<Longrightarrow>
low_equal t1 t2"
apply (simp add: low_equal_def)
apply clarsimp
apply (simp add: mem_equal_def)
by (simp add: user_accessible_def)
lemma user_reg_state_mod_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = (s1\<lparr>user_reg := new_user_reg\<rparr>) \<and>
t2 = (s2\<lparr>user_reg := new_user_reg\<rparr>)"
shows "low_equal t1 t2"
proof -
have "low_equal s1 s2 \<and>
t1 = (s1\<lparr>cpu_reg := (cpu_reg s1),
user_reg := new_user_reg,
dwrite := (dwrite s1),
state_var := (state_var s1),
traps := (traps s1),
undef := (undef s1)\<rparr>) \<and>
t2 = (s2\<lparr>cpu_reg := (cpu_reg s2),
user_reg := new_user_reg,
dwrite := (dwrite s2),
state_var := (state_var s2),
traps := (traps s2),
undef := (undef s2)\<rparr>) \<Longrightarrow>
low_equal t1 t2"
using mod_state_low_equal apply (simp add: low_equal_def)
apply (simp add: user_accessible_def mem_equal_def)
by clarsimp
then show ?thesis using a1
by clarsimp
qed
lemma mod_trap_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = (s1\<lparr>traps := new_traps\<rparr>) \<and>
t2 = (s2\<lparr>traps := new_traps\<rparr>)"
shows "low_equal t1 t2"
proof -
have "low_equal s1 s2 \<and>
t1 = (s1\<lparr>cpu_reg := (cpu_reg s1),
user_reg := (user_reg s1),
dwrite := (dwrite s1),
state_var := (state_var s1),
traps := new_traps,
undef := (undef s1)\<rparr>) \<and>
t2 = (s2\<lparr>cpu_reg := (cpu_reg s2),
user_reg := (user_reg s2),
dwrite := (dwrite s2),
state_var := (state_var s2),
traps := new_traps,
undef := (undef s2)\<rparr>) \<Longrightarrow>
low_equal t1 t2"
using mod_state_low_equal apply (simp add: low_equal_def)
apply (simp add: user_accessible_def mem_equal_def)
by clarsimp
then show ?thesis using a1
by clarsimp
qed
lemma state_var_low_equal: "low_equal s1 s2 \<Longrightarrow>
state_var s1 = state_var s2"
by (simp add: low_equal_def)
lemma state_var2_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = (s1\<lparr>state_var := new_state_var\<rparr>) \<and>
t2 = (s2\<lparr>state_var := new_state_var\<rparr>)"
shows "low_equal t1 t2"
proof -
have "low_equal s1 s2 \<and>
t1 = (s1\<lparr>cpu_reg := (cpu_reg s1),
user_reg := (user_reg s1),
dwrite := (dwrite s1),
state_var := new_state_var,
traps := (traps s1),
undef := (undef s1)\<rparr>) \<and>
t2 = (s2\<lparr>cpu_reg := (cpu_reg s2),
user_reg := (user_reg s2),
dwrite := (dwrite s2),
state_var := new_state_var,
traps := (traps s2),
undef := (undef s2)\<rparr>) \<Longrightarrow>
low_equal t1 t2"
using mod_state_low_equal apply (simp add: low_equal_def)
apply (simp add: user_accessible_def mem_equal_def)
by clarsimp
then show ?thesis using a1
by clarsimp
qed
lemma traps_low_equal: "low_equal s1 s2 \<Longrightarrow> traps s1 = traps s2"
by (simp add: low_equal_def)
lemma s_low_equal: "low_equal s1 s2 \<Longrightarrow>
(get_S (cpu_reg_val PSR s1)) = (get_S (cpu_reg_val PSR s2))"
by (simp add: low_equal_def cpu_reg_val_def)
lemma cpu_reg_val_low_equal: "low_equal s1 s2 \<Longrightarrow>
(cpu_reg_val cr s1) = (cpu_reg_val cr s2)"
by (simp add: cpu_reg_val_def low_equal_def)
lemma get_curr_win_low_equal: "low_equal s1 s2 \<Longrightarrow>
(fst (fst (get_curr_win () s1))) = (fst (fst (get_curr_win () s2)))"
apply (simp add: low_equal_def)
apply (simp add: get_curr_win_def cpu_reg_val_def get_CWP_def)
by (simp add: simpler_gets_def)
lemma get_curr_win2_low_equal: "low_equal s1 s2 \<Longrightarrow>
t1 = (snd (fst (get_curr_win () s1))) \<Longrightarrow>
t2 = (snd (fst (get_curr_win () s2))) \<Longrightarrow>
low_equal t1 t2"
apply (simp add: low_equal_def)
apply (simp add: get_curr_win_def cpu_reg_val_def get_CWP_def)
by (auto simp add: simpler_gets_def)
lemma get_curr_win3_low_equal: "low_equal s1 s2 \<Longrightarrow>
(traps (snd (fst (get_curr_win () s1)))) =
(traps (snd (fst (get_curr_win () s2))))"
using low_equal_def get_curr_win2_low_equal by blast
lemma get_addr_low_equal: "low_equal s1 s2 \<Longrightarrow>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word3) =
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word3) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word2) =
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word2) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word1) =
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word1)"
apply (simp add: low_equal_def)
apply (simp add: get_curr_win_def cpu_reg_val_def get_CWP_def)
apply (simp add: simpler_gets_def get_addr_def user_reg_val_def)
apply (simp add: Let_def )
apply (simp add: get_CWP_def cpu_reg_val_def get_operand2_def)
by (simp add: user_reg_val_def)
lemma get_addr2_low_equal: "low_equal s1 s2 \<Longrightarrow>
get_addr (snd instr) (snd (fst (get_curr_win () s1))) =
get_addr (snd instr) (snd (fst (get_curr_win () s2)))"
apply (simp add: low_equal_def)
apply (simp add: get_curr_win_def cpu_reg_val_def get_CWP_def)
apply (simp add: simpler_gets_def get_addr_def user_reg_val_def)
apply (simp add: Let_def )
apply (simp add: get_CWP_def cpu_reg_val_def get_operand2_def)
by (simp add: user_reg_val_def)
lemma sys_reg_low_equal: "low_equal s1 s2 \<Longrightarrow>
sys_reg s1 = sys_reg s2"
by (simp add: low_equal_def)
lemma user_reg_low_equal: "low_equal s1 s2 \<Longrightarrow>
user_reg s1 = user_reg s2"
by (simp add: low_equal_def)
lemma user_reg_val_low_equal: "low_equal s1 s2 \<Longrightarrow>
user_reg_val win ur s1 = user_reg_val win ur s2"
apply (simp add: user_reg_val_def)
by (simp add: user_reg_low_equal)
lemma get_operand2_low_equal: "low_equal s1 s2 \<Longrightarrow>
get_operand2 op_list s1 = get_operand2 op_list s2"
apply (simp add: get_operand2_def)
apply (simp add: cpu_reg_val_low_equal)
apply auto
apply (simp add: user_reg_val_def)
using user_reg_low_equal by fastforce
lemma mem_val_mod_cache: "mem_val_alt asi a s =
mem_val_alt asi a (s\<lparr>cache := new_cache\<rparr>)"
apply (simp add: mem_val_alt_def)
by (simp add: Let_def)
lemma mem_val_w32_mod_cache: "mem_val_w32 asi a s =
mem_val_w32 asi a (s\<lparr>cache := new_cache\<rparr>)"
apply (simp add: mem_val_w32_def)
apply (simp add: Let_def)
by (metis mem_val_mod_cache)
lemma load_word_mem_mod_cache:
"load_word_mem s addr asi =
load_word_mem (s\<lparr>cache := new_cache\<rparr>) addr asi"
apply (simp add: load_word_mem_def)
apply (case_tac "virt_to_phys addr (mmu s) (mem s) = None")
apply auto
by (simp add: mem_val_w32_mod_cache)
lemma memory_read_8_mod_cache:
"fst (memory_read 8 addr s) = fst (memory_read 8 addr (s\<lparr>cache := new_cache\<rparr>))"
apply (simp add: memory_read_def)
apply (case_tac "sys_reg s CCR AND 1 \<noteq> 0")
apply auto
apply (simp add: option.case_eq_if load_word_mem_mod_cache)
apply (auto intro: load_word_mem_mod_cache)
apply (metis load_word_mem_mod_cache option.distinct(1))
by (metis load_word_mem_mod_cache option.distinct(1))
lemma memory_read_10_mod_cache:
"fst (memory_read 10 addr s) = fst (memory_read 10 addr (s\<lparr>cache := new_cache\<rparr>))"
apply (simp add: memory_read_def)
apply (case_tac "sys_reg s CCR AND 1 \<noteq> 0")
apply auto
apply (simp add: option.case_eq_if load_word_mem_mod_cache)
apply (auto intro: load_word_mem_mod_cache)
apply (metis load_word_mem_mod_cache option.distinct(1))
by (metis load_word_mem_mod_cache option.distinct(1))
lemma mem_equal_mod_cache: "mem_equal s1 s2 pa \<Longrightarrow>
mem_equal (s1\<lparr>cache := new_cache1\<rparr>) (s2\<lparr>cache := new_cache2\<rparr>) pa"
by (simp add: mem_equal_def)
lemma user_accessible_mod_cache: "user_accessible (s\<lparr>cache := new_cache\<rparr>) pa =
user_accessible s pa"
by (simp add: user_accessible_def)
lemma mem_equal_mod_user_reg: "mem_equal s1 s2 pa \<Longrightarrow>
mem_equal (s1\<lparr>user_reg := new_user_reg1\<rparr>) (s2\<lparr>user_reg := user_reg2\<rparr>) pa"
by (simp add: mem_equal_def)
lemma user_accessible_mod_user_reg: "user_accessible (s\<lparr>user_reg := new_user_reg\<rparr>) pa =
user_accessible s pa"
by (simp add: user_accessible_def)
lemma mem_equal_mod_cpu_reg: "mem_equal s1 s2 pa \<Longrightarrow>
mem_equal (s1\<lparr>cpu_reg := new_cpu1\<rparr>) (s2\<lparr>cpu_reg := cpu_reg2\<rparr>) pa"
by (simp add: mem_equal_def)
lemma user_accessible_mod_cpu_reg: "user_accessible (s\<lparr>cpu_reg := new_cpu_reg\<rparr>) pa =
user_accessible s pa"
by (simp add: user_accessible_def)
lemma mem_equal_mod_trap: "mem_equal s1 s2 pa \<Longrightarrow>
mem_equal (s1\<lparr>traps := new_traps1\<rparr>) (s2\<lparr>traps := traps2\<rparr>) pa"
by (simp add: mem_equal_def)
lemma user_accessible_mod_trap: "user_accessible (s\<lparr>traps := new_traps\<rparr>) pa =
user_accessible s pa"
by (simp add: user_accessible_def)
lemma mem_equal_annul: "mem_equal s1 s2 pa \<Longrightarrow>
mem_equal (s1\<lparr>state_var := new_state_var,
cpu_reg := new_cpu_reg\<rparr>) (s2\<lparr>state_var := new_state_var2,
cpu_reg := new_cpu_reg2\<rparr>) pa"
by (simp add: mem_equal_def)
lemma user_accessible_annul: "user_accessible (s\<lparr>state_var := new_state_var,
cpu_reg := new_cpu_reg\<rparr>) pa =
user_accessible s pa"
by (simp add: user_accessible_def)
lemma mem_val_alt_10_mem_equal_0: "mem_equal s1 s2 pa \<Longrightarrow>
mem_val_alt 10 (pa AND 68719476732) s1 = mem_val_alt 10 (pa AND 68719476732) s2"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_10_mem_equal_1: "mem_equal s1 s2 pa \<Longrightarrow>
mem_val_alt 10 ((pa AND 68719476732) + 1) s1 = mem_val_alt 10 ((pa AND 68719476732) + 1) s2"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_10_mem_equal_2: "mem_equal s1 s2 pa \<Longrightarrow>
mem_val_alt 10 ((pa AND 68719476732) + 2) s1 = mem_val_alt 10 ((pa AND 68719476732) + 2) s2"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_10_mem_equal_3: "mem_equal s1 s2 pa \<Longrightarrow>
mem_val_alt 10 ((pa AND 68719476732) + 3) s1 = mem_val_alt 10 ((pa AND 68719476732) + 3) s2"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_10_mem_equal:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 10 (pa AND 68719476732) s1 = mem_val_alt 10 (pa AND 68719476732) s2 \<and>
mem_val_alt 10 ((pa AND 68719476732) + 1) s1 = mem_val_alt 10 ((pa AND 68719476732) + 1) s2 \<and>
mem_val_alt 10 ((pa AND 68719476732) + 2) s1 = mem_val_alt 10 ((pa AND 68719476732) + 2) s2 \<and>
mem_val_alt 10 ((pa AND 68719476732) + 3) s1 = mem_val_alt 10 ((pa AND 68719476732) + 3) s2"
using mem_val_alt_10_mem_equal_0 mem_val_alt_10_mem_equal_1
mem_val_alt_10_mem_equal_2 mem_val_alt_10_mem_equal_3 a1
by blast
lemma mem_val_w32_10_mem_equal:
assumes a1: "mem_equal s1 s2 a"
shows "mem_val_w32 10 a s1 = mem_val_w32 10 a s2"
apply (simp add: mem_val_w32_def)
apply (simp add: Let_def)
using mem_val_alt_10_mem_equal a1 apply auto
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
by fastforce
lemma mem_val_alt_8_mem_equal_0: "mem_equal s1 s2 pa \<Longrightarrow>
mem_val_alt 8 (pa AND 68719476732) s1 = mem_val_alt 8 (pa AND 68719476732) s2"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_8_mem_equal_1: "mem_equal s1 s2 pa \<Longrightarrow>
mem_val_alt 8 ((pa AND 68719476732) + 1) s1 = mem_val_alt 8 ((pa AND 68719476732) + 1) s2"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_8_mem_equal_2: "mem_equal s1 s2 pa \<Longrightarrow>
mem_val_alt 8 ((pa AND 68719476732) + 2) s1 = mem_val_alt 8 ((pa AND 68719476732) + 2) s2"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_8_mem_equal_3: "mem_equal s1 s2 pa \<Longrightarrow>
mem_val_alt 8 ((pa AND 68719476732) + 3) s1 = mem_val_alt 8 ((pa AND 68719476732) + 3) s2"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_8_mem_equal:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 8 (pa AND 68719476732) s1 = mem_val_alt 8 (pa AND 68719476732) s2 \<and>
mem_val_alt 8 ((pa AND 68719476732) + 1) s1 = mem_val_alt 8 ((pa AND 68719476732) + 1) s2 \<and>
mem_val_alt 8 ((pa AND 68719476732) + 2) s1 = mem_val_alt 8 ((pa AND 68719476732) + 2) s2 \<and>
mem_val_alt 8 ((pa AND 68719476732) + 3) s1 = mem_val_alt 8 ((pa AND 68719476732) + 3) s2"
using mem_val_alt_8_mem_equal_0 mem_val_alt_8_mem_equal_1
mem_val_alt_8_mem_equal_2 mem_val_alt_8_mem_equal_3 a1
by blast
lemma mem_val_w32_8_mem_equal:
assumes a1: "mem_equal s1 s2 a"
shows "mem_val_w32 8 a s1 = mem_val_w32 8 a s2"
apply (simp add: mem_val_w32_def)
apply (simp add: Let_def)
using mem_val_alt_8_mem_equal a1 apply auto
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
by fastforce
lemma load_word_mem_10_low_equal:
assumes a1: "low_equal s1 s2"
shows "load_word_mem s1 address 10 = load_word_mem s2 address 10"
using a1 apply (simp add: low_equal_def load_word_mem_def)
apply clarsimp
apply (case_tac "virt_to_phys address (mmu s2) (mem s2) = None")
apply auto
apply (simp add: user_accessible_def)
using mem_val_w32_10_mem_equal apply blast
apply (simp add: user_accessible_def)
using mem_val_w32_10_mem_equal by blast
lemma load_word_mem_8_low_equal:
assumes a1: "low_equal s1 s2"
shows "load_word_mem s1 address 8 = load_word_mem s2 address 8"
using a1 apply (simp add: low_equal_def load_word_mem_def)
apply clarsimp
apply (case_tac "virt_to_phys address (mmu s2) (mem s2) = None")
apply auto
apply (simp add: user_accessible_def)
using mem_val_w32_8_mem_equal user_accessible_8 apply fastforce
apply (simp add: user_accessible_def)
using mem_val_w32_8_mem_equal user_accessible_8 by fastforce
lemma mem_read_low_equal:
assumes a1: "low_equal s1 s2 \<and> asi \<in> {8,10}"
shows "fst (memory_read asi address s1) = fst (memory_read asi address s2)"
proof (cases "asi = 8")
case True
then show ?thesis using a1
apply (simp add: low_equal_def)
apply (simp add: memory_read_def)
using a1 load_word_mem_8_low_equal apply auto
apply (simp add: option.case_eq_if)
by (simp add: option.case_eq_if)
next
case False
then have "asi = 10" using a1 by auto
then show ?thesis using a1
apply (simp add: low_equal_def)
apply (simp add: memory_read_def)
using a1 load_word_mem_10_low_equal apply auto
apply (simp add: option.case_eq_if)
by (simp add: option.case_eq_if)
qed
lemma read_mem_pc_low_equal:
assumes a1: "low_equal s1 s2"
shows "fst (memory_read 8 (cpu_reg_val PC s1) s1) =
fst (memory_read 8 (cpu_reg_val PC s2) s2)"
proof -
have f2: "cpu_reg_val PC s1 = cpu_reg_val PC s2" using a1
by (simp add: low_equal_def cpu_reg_val_def)
then show ?thesis using a1 f2 mem_read_low_equal
by auto
qed
lemma dcache_mod_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = dcache_mod c v s1 \<and>
t2 = dcache_mod c v s2"
shows "low_equal t1 t2"
using a1 apply (simp add: low_equal_def)
apply (simp add: dcache_mod_def)
apply auto
apply (simp add: user_accessible_mod_cache mem_equal_mod_cache)
by (simp add: user_accessible_mod_cache mem_equal_mod_cache)
lemma add_data_cache_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = add_data_cache s1 address w bm \<and>
t2 = add_data_cache s2 address w bm"
shows "low_equal t1 t2"
using a1 apply (simp add: add_data_cache_def)
apply (case_tac "bm AND 8 >> 3 = 1")
apply auto
apply (case_tac "bm AND 4 >> 2 = 1")
apply auto
apply (case_tac "bm AND 2 >> Suc 0 = 1")
apply auto
apply (case_tac "bm AND 1 = 1")
apply auto
apply (meson dcache_mod_low_equal)
apply (meson dcache_mod_low_equal)
apply (case_tac "bm AND 1 = 1")
apply auto
apply (meson dcache_mod_low_equal)
apply (meson dcache_mod_low_equal)
apply (case_tac "bm AND 2 >> Suc 0 = 1")
apply auto
apply (case_tac "bm AND 1 = 1")
apply auto
apply (meson dcache_mod_low_equal)
apply (meson dcache_mod_low_equal)
apply (case_tac "bm AND 1 = 1")
apply auto
apply (meson dcache_mod_low_equal)
apply (meson dcache_mod_low_equal)
apply (case_tac "bm AND 4 >> 2 = 1")
apply auto
apply (case_tac "bm AND 2 >> Suc 0 = 1")
apply auto
apply (case_tac "bm AND 1 = 1")
apply auto
apply (meson dcache_mod_low_equal)
apply (meson dcache_mod_low_equal)
apply (case_tac "bm AND 1 = 1")
apply auto
apply (meson dcache_mod_low_equal)
apply (meson dcache_mod_low_equal)
apply (case_tac "bm AND 2 >> Suc 0 = 1")
apply auto
apply (case_tac "bm AND 1 = 1")
apply auto
apply (meson dcache_mod_low_equal)
apply (meson dcache_mod_low_equal)
by (meson dcache_mod_low_equal)
lemma mem_read2_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (memory_read (10::word8) address s1) \<and>
t2 = snd (memory_read (10::word8) address s2)"
shows "low_equal t1 t2"
using a1 apply (simp add: memory_read_def)
using a1 apply (auto simp add: sys_reg_low_equal)
using a1 apply (simp add: load_word_mem_10_low_equal)
by (metis (no_types, lifting) add_data_cache_low_equal option.case_eq_if snd_conv)
lemma mem_read_delayed_write_low_equal:
assumes a1: "low_equal s1 s2 \<and> get_delayed_pool s1 = [] \<and> get_delayed_pool s2 = []"
shows "fst (memory_read 8 (cpu_reg_val PC (delayed_pool_write s1)) (delayed_pool_write s1)) =
fst (memory_read 8 (cpu_reg_val PC (delayed_pool_write s2)) (delayed_pool_write s2))"
using a1 apply (simp add: delayed_pool_write_def)
apply (simp add: Let_def)
apply (simp add: get_delayed_write_def)
by (simp add: read_mem_pc_low_equal)
lemma global_reg_mod_low_equal:
assumes a1: "low_equal s1 s2\<and>
t1 = (global_reg_mod w n rd s1) \<and>
t2 = (global_reg_mod w n rd s2)"
shows "low_equal t1 t2"
using a1 apply (induction n arbitrary: s1 s2)
apply clarsimp
apply auto
apply (simp add: Let_def)
apply (simp add: user_reg_low_equal)
using user_reg_state_mod_low_equal by blast
lemma out_reg_mod_low_equal:
assumes a1: "low_equal s1 s2\<and>
t1 = (out_reg_mod w curr_win rd s1) \<and>
t2 = (out_reg_mod w curr_win rd s2)"
shows "low_equal t1 t2"
using a1 apply (simp add: out_reg_mod_def Let_def)
apply auto
apply (simp add: user_reg_low_equal)
using user_reg_state_mod_low_equal apply fastforce
apply (simp add: user_reg_low_equal)
using user_reg_state_mod_low_equal by blast
lemma in_reg_mod_low_equal:
assumes a1: "low_equal s1 s2\<and>
t1 = (in_reg_mod w curr_win rd s1) \<and>
t2 = (in_reg_mod w curr_win rd s2)"
shows "low_equal t1 t2"
using a1 apply (simp add: in_reg_mod_def Let_def)
apply auto
apply (simp add: user_reg_low_equal)
using user_reg_state_mod_low_equal apply fastforce
apply (simp add: user_reg_low_equal)
using user_reg_state_mod_low_equal by blast
lemma user_reg_mod_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = user_reg_mod w curr_win rd s1 \<and> t2 = user_reg_mod w curr_win rd s2"
shows "low_equal t1 t2"
proof (cases "rd = 0")
case True
then show ?thesis using a1
by (simp add: user_reg_mod_def)
next
case False
then have f1: "rd \<noteq> 0" by auto
then show ?thesis
proof (cases "0 < rd \<and> rd < 8")
case True
then show ?thesis using a1 f1
apply (simp add: user_reg_mod_def)
using global_reg_mod_low_equal by blast
next
case False
then have f2: "\<not> (0 < rd \<and> rd < 8)" by auto
then show ?thesis
proof (cases "7 < rd \<and> rd < 16")
case True
then show ?thesis using a1 f1 f2
apply (simp add: user_reg_mod_def)
by (auto intro: out_reg_mod_low_equal)
next
case False
then have f3: "\<not> (7 < rd \<and> rd < 16)" by auto
then show ?thesis
proof (cases "15 < rd \<and> rd < 24")
case True
then show ?thesis using a1 f1 f2 f3
apply (simp add: user_reg_mod_def)
apply (simp add: low_equal_def)
apply clarsimp
by (simp add: user_accessible_mod_user_reg mem_equal_mod_user_reg)
next
case False
then show ?thesis using a1 f1 f2 f3
apply (simp add: user_reg_mod_def)
by (auto intro: in_reg_mod_low_equal)
qed
qed
qed
qed
lemma virt_to_phys_low_equal: "low_equal s1 s2 \<Longrightarrow>
virt_to_phys addr (mmu s1) (mem s1) = virt_to_phys addr (mmu s2) (mem s2)"
by (auto simp add: low_equal_def)
lemma write_reg_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = (snd (fst (write_reg w curr_win rd s1))) \<and>
t2 = (snd (fst (write_reg w curr_win rd s2)))"
shows "low_equal t1 t2"
using a1 apply (simp add: write_reg_def)
apply (simp add: simpler_modify_def)
by (auto intro: user_reg_mod_low_equal)
lemma write_cpu_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (write_cpu w cr s1)) \<and>
t2 = (snd (fst (write_cpu w cr s2)))"
shows "low_equal t1 t2"
using a1
apply (simp add: write_cpu_def simpler_modify_def)
apply (simp add: cpu_reg_mod_def)
apply (simp add: low_equal_def)
using user_accessible_mod_cpu_reg mem_equal_mod_cpu_reg
by metis
lemma cpu_reg_mod_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = cpu_reg_mod w cr s1 \<and>
t2 = cpu_reg_mod w cr s2"
shows "low_equal t1 t2"
using a1
apply (simp add: cpu_reg_mod_def)
apply (simp add: low_equal_def)
using user_accessible_mod_cpu_reg mem_equal_mod_cpu_reg
by metis
lemma load_sub2_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = (snd (fst (load_sub2 address 10 rd curr_win w s1))) \<and>
t2 = (snd (fst (load_sub2 address 10 rd curr_win w s2)))"
shows "low_equal t1 t2"
proof (cases "fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s1)))) = None")
case True
then have f0: "fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s1)))) = None" by auto
have f1: "low_equal (snd (fst (write_reg w curr_win (rd AND 30) s1)))
(snd (fst (write_reg w curr_win (rd AND 30) s2)))"
using a1 by (auto intro: write_reg_low_equal)
then have "fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s1)))) = None \<and>
fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s1)))) =
fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s2))))"
using f0 by (blast intro: mem_read_low_equal)
then have "fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s1)))) = None \<and>
fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s2)))) = None"
by auto
then show ?thesis using a1
apply (simp add: load_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
using f1 apply (simp add: traps_low_equal)
using f1 by (auto intro: mod_trap_low_equal)
next
case False
then have f2: "fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s1)))) \<noteq> None"
by auto
have f3: "low_equal (snd (fst (write_reg w curr_win (rd AND 30) s1)))
(snd (fst (write_reg w curr_win (rd AND 30) s2)))"
using a1 by (auto intro: write_reg_low_equal)
then have f4: "fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s1)))) =
fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s2))))"
using f2 by (blast intro: mem_read_low_equal)
then have "fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s1)))) \<noteq> None \<and>
fst (memory_read 10 (address + 4)
(snd (fst (write_reg w curr_win (rd AND 30) s2)))) \<noteq> None"
using f2 by auto
then show ?thesis using a1
apply (simp add: load_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply clarsimp
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
using f4 apply clarsimp
using f3 by (auto intro: mem_read2_low_equal write_reg_low_equal)
qed
lemma load_sub3_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (load_sub3 instr curr_win rd (10::word8) address s1)) \<and>
t2 = snd (fst (load_sub3 instr curr_win rd (10::word8) address s2))"
shows "low_equal t1 t2"
proof (cases "fst (memory_read 10 address s1) = None")
case True
then have "fst (memory_read 10 address s1) = None \<and>
fst (memory_read 10 address s2) = None"
using a1 by (auto simp add: mem_read_low_equal)
then show ?thesis using a1
apply (simp add: load_sub3_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
apply (auto simp add: traps_low_equal)
by (auto intro: mod_trap_low_equal)
next
case False
then have f1: "fst (memory_read 10 address s1) \<noteq> None \<and>
fst (memory_read 10 address s2) \<noteq> None"
using a1 by (auto simp add: mem_read_low_equal)
then show ?thesis
proof (cases "rd \<noteq> 0 \<and>
(fst instr = load_store_type LD \<or>
fst instr = load_store_type LDA \<or>
fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LDSB \<or>
fst instr = load_store_type LDUB \<or>
fst instr = load_store_type LDUBA \<or>
fst instr = load_store_type LDSH \<or>
fst instr = load_store_type LDSHA \<or>
fst instr = load_store_type LDUHA \<or>
fst instr = load_store_type LDSBA)")
case True
then show ?thesis using a1 f1
apply (simp add: load_sub3_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply clarsimp
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
apply (simp add: mem_read_low_equal)
by (meson mem_read2_low_equal write_reg_low_equal)
next
case False
then show ?thesis using a1 f1
apply (simp add: load_sub3_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply clarsimp
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
apply (simp add: mem_read_low_equal)
by (meson load_sub2_low_equal mem_read2_low_equal)
qed
qed
lemma ld_asi_user:
"(fst instr = load_store_type LDSB \<or>
fst instr = load_store_type LDUB \<or>
fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LD \<or>
fst instr = load_store_type LDD) \<Longrightarrow>
ld_asi instr 0 = 10"
apply (simp add: ld_asi_def)
by auto
lemma load_sub1_low_equal:
assumes a1: "low_equal s1 s2 \<and>
(fst instr = load_store_type LDSB \<or>
fst instr = load_store_type LDUB \<or>
fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LD \<or>
fst instr = load_store_type LDD) \<and>
t1 = snd (fst (load_sub1 instr rd 0 s1)) \<and>
t2 = snd (fst (load_sub1 instr rd 0 s2))"
shows "low_equal t1 t2"
proof (cases "(fst instr = load_store_type LDD \<or> fst instr = load_store_type LDDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word3) \<noteq> 0 \<or>
(fst instr = load_store_type LD \<or> fst instr = load_store_type LDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word2) \<noteq> 0 \<or>
(fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LDUHA \<or>
fst instr = load_store_type LDSH \<or> fst instr = load_store_type LDSHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word1) \<noteq> 0")
case True
then have "((fst instr = load_store_type LDD \<or> fst instr = load_store_type LDDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word3) \<noteq> 0 \<or>
(fst instr = load_store_type LD \<or> fst instr = load_store_type LDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word2) \<noteq> 0 \<or>
(fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LDUHA \<or>
fst instr = load_store_type LDSH \<or> fst instr = load_store_type LDSHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word1) \<noteq> 0) \<and>
((fst instr = load_store_type LDD \<or> fst instr = load_store_type LDDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word3) \<noteq> 0 \<or>
(fst instr = load_store_type LD \<or> fst instr = load_store_type LDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word2) \<noteq> 0 \<or>
(fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LDUHA \<or>
fst instr = load_store_type LDSH \<or> fst instr = load_store_type LDSHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word1) \<noteq> 0)"
by (metis (mono_tags, lifting) assms get_addr_low_equal)
then show ?thesis using a1
apply (simp add: load_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: get_curr_win3_low_equal)
by (auto intro: get_curr_win2_low_equal mod_trap_low_equal)
next
case False
then have f1: "\<not> ((fst instr = load_store_type LDD \<or> fst instr = load_store_type LDDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word3) \<noteq> 0 \<or>
(fst instr = load_store_type LD \<or> fst instr = load_store_type LDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word2) \<noteq> 0 \<or>
(fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LDUHA \<or>
fst instr = load_store_type LDSH \<or> fst instr = load_store_type LDSHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word1) \<noteq> 0) \<and>
\<not> ((fst instr = load_store_type LDD \<or> fst instr = load_store_type LDDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word3) \<noteq> 0 \<or>
(fst instr = load_store_type LD \<or> fst instr = load_store_type LDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word2) \<noteq> 0 \<or>
(fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LDUHA \<or>
fst instr = load_store_type LDSH \<or> fst instr = load_store_type LDSHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word1) \<noteq> 0)"
by (metis assms get_addr_low_equal)
show ?thesis
proof -
have "low_equal s1 s2 \<Longrightarrow>
low_equal (snd (fst (get_curr_win () s1)))
(snd (fst (get_curr_win () s2)))"
using get_curr_win2_low_equal by auto
then have f2: "low_equal s1 s2 \<Longrightarrow>
low_equal (snd (fst (load_sub3 instr (fst (fst (get_curr_win () s2))) rd 10
(get_addr (snd instr) (snd (fst (get_curr_win () s2))))
(snd (fst (get_curr_win () s1))))))
(snd (fst (load_sub3 instr (fst (fst (get_curr_win () s2))) rd 10
(get_addr (snd instr) (snd (fst (get_curr_win () s2))))
(snd (fst (get_curr_win () s2))))))"
using load_sub3_low_equal by blast
show ?thesis using a1
unfolding load_sub1_def simpler_gets_def bind_def h1_def h2_def Let_def case_prod_unfold
using f1 f2 apply clarsimp
by (simp add: get_addr2_low_equal get_curr_win_low_equal ld_asi_user)
qed
qed
lemma load_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
(fst instr = load_store_type LDSB \<or>
fst instr = load_store_type LDUB \<or>
fst instr = load_store_type LDUBA \<or>
fst instr = load_store_type LDUH \<or>
fst instr = load_store_type LD \<or>
fst instr = load_store_type LDA \<or>
fst instr = load_store_type LDD) \<and>
((ucast (get_S (cpu_reg_val PSR s1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR s2)))::word1) = 0 \<and>
t1 = snd (fst (load_instr instr s1)) \<and> t2 = snd (fst (load_instr instr s2))"
shows "low_equal t1 t2"
proof -
have "get_S (cpu_reg_val PSR s1) = 0 \<and> get_S (cpu_reg_val PSR s2) = 0"
using a1 by (simp add: ucast_id)
then show ?thesis using a1
apply (simp add: load_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: Let_def)
apply clarsimp
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
apply (simp add: traps_low_equal)
by (auto intro: mod_trap_low_equal load_sub1_low_equal)
qed
lemma st_data0_low_equal: "low_equal s1 s2 \<Longrightarrow>
st_data0 instr curr_win rd addr s1 = st_data0 instr curr_win rd addr s2"
apply (simp add: st_data0_def)
by (simp add: user_reg_val_def low_equal_def)
lemma store_word_mem_low_equal_none: "low_equal s1 s2 \<Longrightarrow>
store_word_mem (add_data_cache s1 addr data bm) addr data bm 10 = None \<Longrightarrow>
store_word_mem (add_data_cache s2 addr data bm) addr data bm 10 = None"
apply (simp add: store_word_mem_def)
proof -
assume a1: "low_equal s1 s2"
assume a2: "(case virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) of None \<Rightarrow> None | Some pair \<Rightarrow> if mmu_writable (get_acc_flag (snd pair)) 10 then Some (mem_mod_w32 10 (fst pair) bm data (add_data_cache s1 addr data bm)) else None) = None"
have f3: "(if mmu_writable (get_acc_flag (snd (v1_2 (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (v1_2 (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s2 addr data bm)) else None) = (case Some (v1_2 (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) of None \<Rightarrow> if mmu_writable (get_acc_flag (snd (v1_2 (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (v1_2 (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s1 addr data bm)) else None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None)"
by auto
obtain pp :: "(word36 \<times> word8) option \<Rightarrow> word36 \<times> word8" where
f4: "virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) = None \<or> virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) = Some (pp (virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm))))"
by (metis (no_types) option.exhaust)
have f5: "virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) = virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))"
using a1 by (meson add_data_cache_low_equal virt_to_phys_low_equal)
{ assume "Some (mem_mod_w32 10 (fst (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s1 addr data bm)) \<noteq> (case Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) of None \<Rightarrow> None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s1 addr data bm)) else None)"
then have "None = (if mmu_writable (get_acc_flag (snd (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s2 addr data bm)) else None)"
by fastforce
moreover
{ assume "(if mmu_writable (get_acc_flag (snd (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s2 addr data bm)) else None) \<noteq> (case virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) of None \<Rightarrow> None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None)"
then have "(case Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) of None \<Rightarrow> if mmu_writable (get_acc_flag (snd (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s1 addr data bm)) else None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None) \<noteq> (case virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) of None \<Rightarrow> None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None)"
using f3 by simp
then have "Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) \<noteq> virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) \<or> (if mmu_writable (get_acc_flag (snd (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s1 addr data bm)) else None) \<noteq> None"
proof -
have "(case virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) of None \<Rightarrow> if mmu_writable (get_acc_flag (snd (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s1 addr data bm)) else None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None) = (case virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) of None \<Rightarrow> None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None) \<or> Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) \<noteq> virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) \<or> (if mmu_writable (get_acc_flag (snd (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s1 addr data bm)) else None) \<noteq> None"
by simp
then show ?thesis
using \<open>(case Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) of None \<Rightarrow> if mmu_writable (get_acc_flag (snd (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))))) 10 then Some (mem_mod_w32 10 (fst (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))))) bm data (add_data_cache s1 addr data bm)) else None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None) \<noteq> (case virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) of None \<Rightarrow> None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None)\<close> by force
qed
moreover
{ assume "Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) \<noteq> virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm))"
then have "virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) \<noteq> Some (pp (virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm))))"
using f5 by simp }
ultimately have "virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) \<noteq> Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) \<or> virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) \<noteq> Some (pp (virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm))))"
using a2 by force }
ultimately have "virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) = Some (pp (virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)))) \<and> virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) = Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) \<longrightarrow> (case virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) of None \<Rightarrow> None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None) = None"
by fastforce }
then have "virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) = Some (pp (virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)))) \<and> virt_to_phys addr (mmu (add_data_cache s1 addr data bm)) (mem (add_data_cache s1 addr data bm)) = Some (pp (virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)))) \<longrightarrow> (case virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) of None \<Rightarrow> None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None) = None"
using a2 by force
then show "(case virt_to_phys addr (mmu (add_data_cache s2 addr data bm)) (mem (add_data_cache s2 addr data bm)) of None \<Rightarrow> None | Some p \<Rightarrow> if mmu_writable (get_acc_flag (snd p)) 10 then Some (mem_mod_w32 10 (fst p) bm data (add_data_cache s2 addr data bm)) else None) = None"
using f5 f4 by force
qed
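
(* Failure of user-data writes is preserved between low_equal states, lifted from
   store_word_mem to memory_write_asi and memory_write, in both directions. *)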
lemma memory_write_asi_low_equal_none: "low_equal s1 s2 \<Longrightarrow>
memory_write_asi 10 addr bm data s1 = None \<Longrightarrow>
memory_write_asi 10 addr bm data s2 = None"
apply (simp add: memory_write_asi_def)
by (simp add: store_word_mem_low_equal_none)
lemma memory_write_low_equal_none: "low_equal s1 s2 \<Longrightarrow>
memory_write 10 addr bm data s1 = None \<Longrightarrow>
memory_write 10 addr bm data s2 = None"
apply (simp add: memory_write_def)
by (metis map_option_case memory_write_asi_low_equal_none option.map_disc_iff)
lemma memory_write_low_equal_none2: "low_equal s1 s2 \<Longrightarrow>
memory_write 10 addr bm data s2 = None \<Longrightarrow>
memory_write 10 addr bm data s1 = None"
apply (simp add: memory_write_def)
by (metis low_equal_com memory_write_def memory_write_low_equal_none)
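
(* Writing a word at the ASI 10 entry and clearing the corresponding ASI 11 entry of
   the memory does not change what mem_context_val reads under ASI 9. *)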
lemma mem_context_val_9_unchanged:
"mem_context_val 9 addr1 (mem s1) =
mem_context_val 9 addr1
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None)))"
apply (simp add: mem_context_val_def)
by (simp add: Let_def)
lemma mem_context_val_w32_9_unchanged:
"mem_context_val_w32 9 addr1 (mem s1) =
mem_context_val_w32 9 addr1
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None)))"
apply (simp add: mem_context_val_w32_def)
apply (simp add: Let_def)
by (metis mem_context_val_9_unchanged)
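
(* The same ASI 10/11 memory update leaves page-table walks (ptd_lookup) unchanged at
   levels 4 down to 1; the non-trivial levels read memory only via mem_context_val_w32
   at ASI 9, which the previous lemma shows is unaffected. *)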
lemma ptd_lookup_unchanged_4:
"ptd_lookup va ptp (mem s1) 4 =
ptd_lookup va ptp ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) 4"
by auto
lemma ptd_lookup_unchanged_3:
"ptd_lookup va ptp (mem s1) 3 =
ptd_lookup va ptp ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) 3"
proof (cases "mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36) (mem s1) = None")
case True
then have "mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36) (mem s1) = None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) = None"
using mem_context_val_w32_9_unchanged by metis
then show ?thesis
by auto
next
case False
then have "mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36) (mem s1) \<noteq> None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> None"
using mem_context_val_w32_9_unchanged by metis
then have "mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36) (mem s1) \<noteq> None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> None \<and>
(\<forall>y. (mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36) (mem s1) = Some y) \<longrightarrow>
(mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 12))::word6))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None)))= Some y))"
using mem_context_val_w32_9_unchanged by metis
then show ?thesis
apply auto
by (simp add: Let_def)
qed
lemma ptd_lookup_unchanged_2:
"ptd_lookup va ptp (mem s1) 2 =
ptd_lookup va ptp ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) 2"
proof (cases "mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36) (mem s1) = None")
case True
then have "mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36) (mem s1) = None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) = None"
using mem_context_val_w32_9_unchanged by metis
then show ?thesis
by auto
next
case False
then have "mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36) (mem s1) \<noteq> None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> None"
using mem_context_val_w32_9_unchanged by metis
then have "mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36) (mem s1) \<noteq> None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> None \<and>
(\<forall>y. (mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36) (mem s1) = Some y) \<longrightarrow>
(mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 18))::word6))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None)))= Some y))"
using mem_context_val_w32_9_unchanged by metis
then show ?thesis
apply auto
using ptd_lookup_unchanged_3
unfolding Let_def
by auto
qed
lemma ptd_lookup_unchanged_1:
"ptd_lookup va ptp (mem s1) 1 =
ptd_lookup va ptp ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) 1"
proof (cases "mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36) (mem s1) = None")
case True
then have "mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36) (mem s1) = None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) = None"
using mem_context_val_w32_9_unchanged by metis
then show ?thesis
by auto
next
case False
then have "mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36) (mem s1) \<noteq> None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> None"
using mem_context_val_w32_9_unchanged by metis
then have "mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36) (mem s1) \<noteq> None \<and>
mem_context_val_w32 9 ((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> None \<and>
(\<forall>y. (mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36) (mem s1) = Some y) \<longrightarrow>
(mem_context_val_w32 9
((ucast (ptp OR ((ucast ((ucast (va >> 24))::word8))::word32)))::word36)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None)))= Some y))"
using mem_context_val_w32_9_unchanged by metis
then show ?thesis
apply auto
using ptd_lookup_unchanged_2
unfolding Let_def
proof -
fix y :: word32
have "(y AND 3 \<noteq> 0 \<or> y AND 3 = 0 \<or> (y AND 3 \<noteq> 1 \<or> ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) = None) \<and> (y AND 3 = 1 \<or> y AND 3 \<noteq> 2 \<or> None = Some ((ucast (ucast (y >> 8)::word24) << 12) OR (ucast (ucast va::word12)::word36), ucast y::word8))) \<and> (y AND 3 = 0 \<or> (y AND 3 \<noteq> 1 \<or> (y AND 3 \<noteq> 0 \<or> ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1) = None) \<and> (y AND 3 = 0 \<or> (y AND 3 \<noteq> 1 \<or> ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) = ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1)) \<and> (y AND 3 = 1 \<or> (y AND 3 \<noteq> 2 \<or> ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1) = Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y)) \<and> (y AND 3 = 2 \<or> ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1) = None)))) \<and> (y AND 3 = 1 \<or> (y AND 3 \<noteq> 2 \<or> (y AND 3 \<noteq> 0 \<or> None = Some ((ucast (ucast (y >> 8)::word24) << 12) OR (ucast (ucast va::word12)::word36), ucast y::word8)) \<and> (y AND 3 = 0 \<or> (y AND 3 \<noteq> 1 \<or> ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) = Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y)) \<and> (y AND 3 = 1 \<or> y AND 3 = 2 \<or> None = Some ((ucast (ucast (y >> 8)::word24) << 12) OR (ucast (ucast va::word12)::word36), ucast y::word8)))) \<and> (y AND 3 = 2 \<or> y AND 3 = 0 \<or> (y AND 3 \<noteq> 1 \<or> ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) = None) \<and> (y AND 3 = 1 \<or> y AND 3 \<noteq> 2 \<or> None = Some ((ucast (ucast (y >> 8)::word24) << 12) OR (ucast (ucast va::word12)::word36), ucast y::word8))))) \<or> (\<forall>w. mem s1 w = ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) w)"
by (metis (no_types) One_nat_def Suc_1 Suc_eq_plus1 ptd_lookup_unchanged_2)
then show "(if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None) = (if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None)"
proof -
have f1: "2 = Suc 0 + 1"
by (metis One_nat_def Suc_1 Suc_eq_plus1)
{ assume "y AND 3 = 1"
moreover
{ assume "y AND 3 = 1 \<and> (if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None) \<noteq> (if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None)"
have "y AND 3 = 1 \<and> (if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None) \<noteq> ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1) \<or> (if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None) = (if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None)"
by presburger
moreover
{ assume "y AND 3 = 1 \<and> (if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None) \<noteq> ptd_lookup va (y AND 4294967292) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) (Suc 0 + 1)"
then have "y AND 3 = 1 \<and> (if y AND 3 = 0 then None else if y AND 3 = 1 then ptd_lookup va (y AND 4294967292) (mem s1) (Suc 0 + 1) else if y AND 3 = 2 then Some ((ucast (ucast (y >> 8)::word24) << 12) OR ucast (ucast va::word12), ucast y) else None) \<noteq> ptd_lookup va (y AND 4294967292) (mem s1) 2"
by (metis One_nat_def Suc_1 Suc_eq_plus1 ptd_lookup_unchanged_2)
then have ?thesis
using f1 by auto }
ultimately have ?thesis
by blast }
ultimately have ?thesis
by blast }
then show ?thesis
by presburger
qed
qed
qed
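
(* Helper for virt_to_phys_unchanged: if the level-1 walks starting from the context
   table entry agree on mem s1 and mem s2, they also agree on the updated memories. *)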
lemma virt_to_phys_unchanged_sub1:
assumes a1: "(let context_table_entry = (v1 >> 11 << 11) OR (v2 AND 511 << 2)
in Let (mem_context_val_w32 (word_of_int 9) (ucast context_table_entry) (mem s1))
(case_option None (\<lambda>lvl1_page_table. ptd_lookup va lvl1_page_table (mem s1) 1))) =
(let context_table_entry = (v1 >> 11 << 11) OR (v2 AND 511 << 2)
in Let (mem_context_val_w32 (word_of_int 9) (ucast context_table_entry) (mem s2))
(case_option None (\<lambda>lvl1_page_table. ptd_lookup va lvl1_page_table (mem s2) 1)))"
shows "(let context_table_entry = (v1 >> 11 << 11) OR (v2 AND 511 << 2)
in Let (mem_context_val_w32 (word_of_int 9) (ucast context_table_entry)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))))
(case_option None (\<lambda>lvl1_page_table. ptd_lookup va lvl1_page_table
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1))) =
(let context_table_entry = (v1 >> 11 << 11) OR (v2 AND 511 << 2)
in Let (mem_context_val_w32 (word_of_int 9) (ucast context_table_entry)
((mem s2)(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))))
(case_option None (\<lambda>lvl1_page_table. ptd_lookup va lvl1_page_table
((mem s2)(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) 1)))"
proof -
from a1 have
"(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) (mem s1) of
None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s1) 1) =
(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) (mem s2) of
None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s2) 1)"
unfolding Let_def by auto
then have "(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2)))
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of
None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s1) 1) =
(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2)))
((mem s2)(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of
None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s2) 1)"
using mem_context_val_w32_9_unchanged
by (metis word_numeral_alt)
then have "(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2)))
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of
None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1) =
(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2)))
((mem s2)(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of
None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table
((mem s2)(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) 1)"
using ptd_lookup_unchanged_1
proof -
obtain ww :: "word32 option \<Rightarrow> word32" where
f1: "\<forall>z. (z = None \<or> z = Some (ww z)) \<and> (z \<noteq> None \<or> (\<forall>w. z \<noteq> Some w))"
by moura
then have f2: "(mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) = None \<or> mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) = Some (ww (mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None)))))) \<and> (mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> None \<or> (\<forall>w. mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> Some w))"
by blast
then have f3: "(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1) \<noteq> None \<longrightarrow> (case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1) = (case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w (mem s1) 1)"
by (metis (no_types) \<open>\<And>val va s1 ptp addr. ptd_lookup va ptp (mem s1) 1 = ptd_lookup va ptp ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1\<close> option.case(2) option.simps(4))
have f4: "mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) = Some (ww (mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))))) \<and> mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) = Some (ww (mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))))) \<longrightarrow> (case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1) = (case Some (ww (mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) 1)"
by (metis (no_types) \<open>(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s1) 1) = (case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s2) 1)\<close> \<open>\<And>val va s1 ptp addr. ptd_lookup va ptp (mem s1) 1 = ptd_lookup va ptp ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1\<close> option.case(2))
have f5: "(mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) = None \<or> mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) = Some (ww (mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None)))))) \<and> (mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) \<noteq> None \<or> (\<forall>w. mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) \<noteq> Some w))"
using f1 by blast
{ assume "(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1) \<noteq> (case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) 1)"
{ assume "(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w (mem s2) 1) \<noteq> None \<and> (case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w (mem s2) 1) \<noteq> None"
then have "(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of None \<Rightarrow> None | Some w \<Rightarrow> ptd_lookup va w (mem s2) 1) \<noteq> None \<and> mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) \<noteq> None"
by (metis (no_types) \<open>(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s1) 1) = (case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s2) 1)\<close> option.simps(4))
then have ?thesis
using f5 f4 f2 by force }
then have ?thesis
using f5 f3 by (metis (no_types) \<open>(case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) of None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s1) 1) = (case mem_context_val_w32 (word_of_int 9) (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2) (10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s2) 1)\<close> \<open>\<And>val va s1 ptp addr. ptd_lookup va ptp (mem s1) 1 = ptd_lookup va ptp ((mem s1) (10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) 1\<close> option.case(2) option.simps(4)) }
then show ?thesis
by blast
qed
then show ?thesis
unfolding Let_def by auto
qed
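
(* Translation via mmu s2 is preserved by the ASI 10/11 update whenever the two
   original memories already translate identically; the case analysis follows the low
   bit of the MMU control register (CR AND 1) and MMU registers 256 and 512. *)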
lemma virt_to_phys_unchanged:
assumes a1: "(\<forall>va. virt_to_phys va (mmu s2) (mem s1) = virt_to_phys va (mmu s2) (mem s2))"
shows "(\<forall>va. virt_to_phys va (mmu s2) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) =
virt_to_phys va (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))))"
proof (cases "registers (mmu s2) CR AND 1 \<noteq> 0")
case True
then have f1: "registers (mmu s2) CR AND 1 \<noteq> 0" by auto
then show ?thesis
proof (cases "mmu_reg_val (mmu s2) 256 = None")
case True
then show ?thesis
by (simp add: virt_to_phys_def)
next
case False
then have f2: "mmu_reg_val (mmu s2) 256 \<noteq> None" by auto
then show ?thesis
proof (cases "mmu_reg_val (mmu s2) 512 = None")
case True
then show ?thesis using f1 f2
apply (simp add: virt_to_phys_def)
by auto
next
case False
then show ?thesis using f1 f2 a1
apply (simp add: virt_to_phys_def)
apply clarify
using virt_to_phys_unchanged_sub1 by fastforce
qed
qed
next
case False
then show ?thesis
by (simp add: virt_to_phys_def)
qed
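
(* Within a single state, the ASI 10/11 memory update does not change virt_to_phys. *)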
lemma virt_to_phys_unchanged2_sub1:
"(case mem_context_val_w32 (word_of_int 9)
(ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) (mem s2) of
None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table (mem s2) 1) =
(case mem_context_val_w32 (word_of_int 9)
(ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2)
(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) of
None \<Rightarrow> None | Some lvl1_page_table \<Rightarrow> ptd_lookup va lvl1_page_table ((mem s2)
(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) 1)"
proof (cases "mem_context_val_w32 9 (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) (mem s2) = None")
case True
then have "mem_context_val_w32 9 (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) (mem s2) = None \<and>
mem_context_val_w32 9 (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2)
(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) = None"
using mem_context_val_w32_9_unchanged by metis
then show ?thesis
by auto
next
case False
then have "mem_context_val_w32 9 (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) (mem s2) \<noteq> None \<and>
(\<forall>y. mem_context_val_w32 9 (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) (mem s2) = Some y \<longrightarrow>
mem_context_val_w32 9 (ucast ((v1 >> 11 << 11) OR (v2 AND 511 << 2))) ((mem s2)
(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))) = Some y)"
using mem_context_val_w32_9_unchanged by metis
then show ?thesis
using ptd_lookup_unchanged_1 by fastforce
qed
lemma virt_to_phys_unchanged2:
"virt_to_phys va (mmu s2) (mem s2) =
virt_to_phys va (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None)))"
proof (cases "registers (mmu s2) CR AND 1 \<noteq> 0")
case True
then have f1: "registers (mmu s2) CR AND 1 \<noteq> 0" by auto
then show ?thesis
proof (cases "mmu_reg_val (mmu s2) 256 = None")
case True
then show ?thesis
by (simp add: virt_to_phys_def)
next
case False
then have f2: "mmu_reg_val (mmu s2) 256 \<noteq> None" by auto
then show ?thesis
proof (cases "mmu_reg_val (mmu s2) 512 = None")
case True
then show ?thesis using f1 f2
apply (simp add: virt_to_phys_def)
by auto
next
case False
then show ?thesis
using f1 f2
apply (simp add: virt_to_phys_def)
apply clarify
unfolding Let_def
using virt_to_phys_unchanged2_sub1
by auto
qed
qed
next
case False
then show ?thesis
by (simp add: virt_to_phys_def)
qed
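
(* Under low_equal the two MMUs coincide, so the updated memories of s1 and s2
   translate every virtual address identically. *)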
lemma virt_to_phys_unchanged_low_equal:
assumes a1: "low_equal s1 s2"
shows "(\<forall>va. virt_to_phys va (mmu s2) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) =
virt_to_phys va (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))))"
using a1 apply (simp add: low_equal_def)
using virt_to_phys_unchanged
by metis
lemma mmu_low_equal: "low_equal s1 s2 \<Longrightarrow> mmu s1 = mmu s2"
by (simp add: low_equal_def)
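
(* ASI 8 byte reads at the four offsets of a word-aligned physical address are
   unchanged by the ASI 10/11 update whenever the two states satisfy mem_equal on that
   address; the per-byte facts are then combined and lifted to mem_val_w32. *)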
lemma mem_val_alt_8_unchanged0:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 8 (pa AND 68719476732) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 8 (pa AND 68719476732) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
using a1 apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_8_unchanged1:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 8 ((pa AND 68719476732) + 1) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 8 ((pa AND 68719476732) + 1) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
using a1 apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_8_unchanged2:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 8 ((pa AND 68719476732) + 2) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 8 ((pa AND 68719476732) + 2) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
using a1 apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_8_unchanged3:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 8 ((pa AND 68719476732) + 3) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 8 ((pa AND 68719476732) + 3) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
using a1 apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_8_unchanged:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 8 (pa AND 68719476732) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 8 (pa AND 68719476732) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) \<and>
mem_val_alt 8 ((pa AND 68719476732) + 1) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 8 ((pa AND 68719476732) + 1) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) \<and>
mem_val_alt 8 ((pa AND 68719476732) + 2) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 8 ((pa AND 68719476732) + 2) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) \<and>
mem_val_alt 8 ((pa AND 68719476732) + 3) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 8 ((pa AND 68719476732) + 3) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
using a1 mem_val_alt_8_unchanged0 mem_val_alt_8_unchanged1
mem_val_alt_8_unchanged2 mem_val_alt_8_unchanged3
by blast
lemma mem_val_w32_8_unchanged:
assumes a1: "mem_equal s1 s2 a"
shows "mem_val_w32 8 a (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_w32 8 a (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_w32_def)
apply (simp add: Let_def)
using mem_val_alt_8_unchanged a1 apply auto
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
by fastforce
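
(* Lift to loads at ASI 8 (which feed the instruction cache): load_word_mem and
   memory_read 8 return the same result in the two updated states. *)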
lemma load_word_mem_8_unchanged:
assumes a1: "low_equal s1 s2 \<and>
load_word_mem s1 addra 8 = load_word_mem s2 addra 8"
shows "load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 8 =
load_word_mem (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) addra 8"
proof (cases "virt_to_phys addra (mmu s1) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) = None")
case True
then have "virt_to_phys addra (mmu s1) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) = None \<and>
virt_to_phys addra (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))) = None"
using a1 apply (auto simp add: mmu_low_equal)
using a1 virt_to_phys_unchanged_low_equal by metis
then show ?thesis
by (simp add: load_word_mem_def)
next
case False
then have "\<exists>p. virt_to_phys addra (mmu s1) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) = Some p \<and>
virt_to_phys addra (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))) = Some p"
using a1 apply (auto simp add: mmu_low_equal)
using a1 virt_to_phys_unchanged_low_equal by metis
then have "\<exists>p. virt_to_phys addra (mmu s1) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) = Some p \<and>
virt_to_phys addra (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))) = Some p \<and>
virt_to_phys addra (mmu s1) (mem s1) = Some p \<and>
virt_to_phys addra (mmu s2) (mem s2) = Some p"
using virt_to_phys_unchanged2 by metis
then show ?thesis using a1
apply (simp add: load_word_mem_def)
apply auto
apply (simp add: low_equal_def)
apply (simp add: user_accessible_def)
using mem_val_w32_8_unchanged a1 user_accessible_8
by (metis snd_conv)
qed
lemma load_word_mem_select_8:
assumes a1: "fst (case load_word_mem s1 addra 8 of None \<Rightarrow> (None, s1)
| Some w \<Rightarrow> (Some w, add_instr_cache s1 addra w 15)) =
fst (case load_word_mem s2 addra 8 of None \<Rightarrow> (None, s2)
| Some w \<Rightarrow> (Some w, add_instr_cache s2 addra w 15))"
shows "load_word_mem s1 addra 8 = load_word_mem s2 addra 8"
using a1
by (metis (mono_tags, lifting) fst_conv not_None_eq option.simps(4) option.simps(5))
lemma memory_read_8_unchanged:
assumes a1: "low_equal s1 s2 \<and>
fst (memory_read 8 addra s1) = fst (memory_read 8 addra s2)"
shows "fst (memory_read 8 addra
(s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>)) =
fst (memory_read 8 addra
(s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>))"
proof (cases "sys_reg s1 CCR AND 1 = 0")
case True
then have "sys_reg s1 CCR AND 1 = 0 \<and> sys_reg s2 CCR AND 1 = 0"
using a1 sys_reg_low_equal by fastforce
then show ?thesis using a1
apply (simp add: memory_read_def)
using load_word_mem_8_unchanged by blast
next
case False
then have f1: "sys_reg s1 CCR AND 1 \<noteq> 0 \<and> sys_reg s2 CCR AND 1 \<noteq> 0"
using a1 sys_reg_low_equal by fastforce
then show ?thesis using a1
proof (cases "load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 8 = None")
case True
then have "load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 8 = None \<and>
load_word_mem (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) addra 8 = None"
using a1 f1
apply (simp add: memory_read_def)
apply clarsimp
using load_word_mem_select_8 load_word_mem_8_unchanged
by fastforce
then show ?thesis
by (simp add: memory_read_def)
next
case False
then have "\<exists>y. load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 8 = Some y" by auto
then have "\<exists>y. load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 8 = Some y \<and>
load_word_mem (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) addra 8 = Some y"
using a1 f1
apply (simp add: memory_read_def)
apply clarsimp
using load_word_mem_select_8 load_word_mem_8_unchanged by fastforce
then show ?thesis using a1 f1
apply (simp add: memory_read_def)
by auto
qed
qed
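
(* Analogous facts for ASI 10: a write to a different address leaves mem_val_alt
   untouched, the written address reads back the new value, and byte/word reads are
   preserved by the update whenever mem_equal holds on the physical address. *)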
lemma mem_val_alt_mod:
assumes a1: "addr1 \<noteq> addr2"
shows "mem_val_alt 10 addr1 s =
mem_val_alt 10 addr1 (s\<lparr>mem := (mem s)(10 := mem s 10(addr2 \<mapsto> val),
11 := (mem s 11)(addr2 := None))\<rparr>)"
using a1 apply (simp add: mem_val_alt_def)
by (simp add: Let_def)
lemma mem_val_alt_mod2:
"mem_val_alt 10 addr (s\<lparr>mem := (mem s)(10 := mem s 10(addr \<mapsto> val),
11 := (mem s 11)(addr := None))\<rparr>) = Some val"
by (simp add: mem_val_alt_def)
lemma mem_val_alt_10_unchanged0:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 10 (pa AND 68719476732) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 10 (pa AND 68719476732) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
using a1 apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_10_unchanged1:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 10 ((pa AND 68719476732) + 1) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 10 ((pa AND 68719476732) + 1) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
using a1 apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_10_unchanged2:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 10 ((pa AND 68719476732) + 2) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 10 ((pa AND 68719476732) + 2) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
using a1 apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_10_unchanged3:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 10 ((pa AND 68719476732) + 3) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 10 ((pa AND 68719476732) + 3) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_alt_def)
apply (simp add: Let_def)
using a1 apply (simp add: mem_equal_def)
by (metis option.distinct(1))
lemma mem_val_alt_10_unchanged:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_val_alt 10 (pa AND 68719476732) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 10 (pa AND 68719476732) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) \<and>
mem_val_alt 10 ((pa AND 68719476732) + 1) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 10 ((pa AND 68719476732) + 1) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) \<and>
mem_val_alt 10 ((pa AND 68719476732) + 2) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 10 ((pa AND 68719476732) + 2) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) \<and>
mem_val_alt 10 ((pa AND 68719476732) + 3) (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_alt 10 ((pa AND 68719476732) + 3) (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
using a1 mem_val_alt_10_unchanged0 mem_val_alt_10_unchanged1
mem_val_alt_10_unchanged2 mem_val_alt_10_unchanged3
by blast
lemma mem_val_w32_10_unchanged:
assumes a1: "mem_equal s1 s2 a"
shows "mem_val_w32 10 a (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) =
mem_val_w32 10 a (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)"
apply (simp add: mem_val_w32_def)
apply (simp add: Let_def)
using mem_val_alt_10_unchanged a1 apply auto
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
apply fastforce
by fastforce
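
(* Under low_equal, a physical address reached by a translation that is mmu_readable
   at ASI 10 in both states satisfies mem_equal; the ASI 10 load and memory_read
   lemmas below then follow the same pattern as the ASI 8 case above. *)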
lemma is_accessible: "low_equal s1 s2 \<Longrightarrow>
virt_to_phys addra (mmu s1) (mem s1) = Some (a, b) \<Longrightarrow>
virt_to_phys addra (mmu s2) (mem s2) = Some (a, b) \<Longrightarrow>
mmu_readable (get_acc_flag b) 10 \<Longrightarrow>
mem_equal s1 s2 a"
apply (simp add: low_equal_def)
apply (simp add: user_accessible_def)
by fastforce
lemma load_word_mem_10_unchanged:
assumes a1: "low_equal s1 s2 \<and>
load_word_mem s1 addra 10 = load_word_mem s2 addra 10"
shows "load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 10 =
load_word_mem (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) addra 10"
proof (cases "virt_to_phys addra (mmu s1) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) = None")
case True
then have "virt_to_phys addra (mmu s1) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) = None \<and>
virt_to_phys addra (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))) = None"
using a1 apply (auto simp add: mmu_low_equal)
using a1 virt_to_phys_unchanged_low_equal by metis
then show ?thesis
by (simp add: load_word_mem_def)
next
case False
then have "\<exists>p. virt_to_phys addra (mmu s1) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) = Some p \<and>
virt_to_phys addra (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))) = Some p"
using a1 apply (auto simp add: mmu_low_equal)
using a1 virt_to_phys_unchanged_low_equal by metis
then have "\<exists>p. virt_to_phys addra (mmu s1) ((mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))) = Some p \<and>
virt_to_phys addra (mmu s2) ((mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))) = Some p \<and>
virt_to_phys addra (mmu s1) (mem s1) = Some p \<and>
virt_to_phys addra (mmu s2) (mem s2) = Some p"
using virt_to_phys_unchanged2 by metis
then show ?thesis using a1
apply (simp add: load_word_mem_def)
apply auto
apply (simp add: low_equal_def)
apply (simp add: user_accessible_def)
using mem_val_w32_10_unchanged a1 by metis
qed
lemma load_word_mem_select_10:
assumes a1: "fst (case load_word_mem s1 addra 10 of None \<Rightarrow> (None, s1)
| Some w \<Rightarrow> (Some w, add_data_cache s1 addra w 15)) =
fst (case load_word_mem s2 addra 10 of None \<Rightarrow> (None, s2)
| Some w \<Rightarrow> (Some w, add_data_cache s2 addra w 15))"
shows "load_word_mem s1 addra 10 = load_word_mem s2 addra 10"
using a1
by (metis (mono_tags, lifting) fst_conv not_None_eq option.simps(4) option.simps(5))
lemma memory_read_10_unchanged:
assumes a1: "low_equal s1 s2 \<and>
fst (memory_read 10 addra s1) = fst (memory_read 10 addra s2)"
shows "fst (memory_read 10 addra
(s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>)) =
fst (memory_read 10 addra
(s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>))"
proof (cases "sys_reg s1 CCR AND 1 = 0")
case True
then have "sys_reg s1 CCR AND 1 = 0 \<and> sys_reg s2 CCR AND 1 = 0"
using a1 sys_reg_low_equal by fastforce
then show ?thesis using a1
apply (simp add: memory_read_def)
using load_word_mem_10_unchanged by blast
next
case False
then have f1: "sys_reg s1 CCR AND 1 \<noteq> 0 \<and> sys_reg s2 CCR AND 1 \<noteq> 0"
using a1 sys_reg_low_equal by fastforce
then show ?thesis using a1
proof (cases "load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 10 = None")
case True
then have "load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 10 = None \<and>
load_word_mem (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) addra 10 = None"
using a1 f1
apply (simp add: memory_read_def)
apply clarsimp
using load_word_mem_select_10 load_word_mem_10_unchanged by fastforce
then show ?thesis
by (simp add: memory_read_def)
next
case False
then have "\<exists>y. load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 10 = Some y" by auto
then have "\<exists>y. load_word_mem (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>) addra 10 = Some y \<and>
load_word_mem (s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>) addra 10 = Some y"
using a1 f1
apply (simp add: memory_read_def)
apply clarsimp
using load_word_mem_select_10 load_word_mem_10_unchanged by fastforce
then show ?thesis using a1 f1
apply (simp add: memory_read_def)
by auto
qed
qed
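
(* A simultaneous ASI 10/11 update of both memories preserves low_equal: translations,
   user-accessible addresses and mem_equal are all maintained. *)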
lemma state_mem_mod_1011_low_equal_sub1:
assumes a1: "(\<forall>va. virt_to_phys va (mmu s2) (mem s1) =
virt_to_phys va (mmu s2) (mem s2)) \<and>
(\<forall>pa. (\<exists>va b. virt_to_phys va (mmu s2) (mem s2) = Some (pa, b) \<and>
mmu_readable (get_acc_flag b) 10) \<longrightarrow>
mem_equal s1 s2 pa) \<and>
mmu s1 = mmu s2 \<and>
virt_to_phys va (mmu s2)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) =
Some (pa, b) \<and>
mmu_readable (get_acc_flag b) 10"
shows "mem_equal s1 s2 pa"
proof -
have "virt_to_phys va (mmu s1)
((mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))) =
Some (pa, b)"
using a1 by auto
then have "virt_to_phys va (mmu s1) (mem s1) = Some (pa, b)"
using virt_to_phys_unchanged2 by metis
then have "virt_to_phys va (mmu s2) (mem s2) = Some (pa, b)"
using a1 by auto
then show ?thesis using a1 by auto
qed
lemma mem_equal_unchanged:
assumes a1: "mem_equal s1 s2 pa"
shows "mem_equal (s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val),
11 := (mem s1 11)(addr := None))\<rparr>)
(s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val),
11 := (mem s2 11)(addr := None))\<rparr>)
pa"
using a1 apply (simp add: mem_equal_def)
by auto
lemma state_mem_mod_1011_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = s1\<lparr>mem := (mem s1)(10 := mem s1 10(addr \<mapsto> val), 11 := (mem s1 11)(addr := None))\<rparr> \<and>
t2 = s2\<lparr>mem := (mem s2)(10 := mem s2 10(addr \<mapsto> val), 11 := (mem s2 11)(addr := None))\<rparr>"
shows "low_equal t1 t2"
using a1
apply (simp add: low_equal_def)
apply (simp add: user_accessible_def)
apply auto
apply (simp add: assms virt_to_phys_unchanged_low_equal)
using state_mem_mod_1011_low_equal_sub1 mem_equal_unchanged
apply metis
apply (metis virt_to_phys_unchanged2)
using state_mem_mod_1011_low_equal_sub1 mem_equal_unchanged
by metis
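
(* Consequently mem_mod, mem_mod_w32 and successful store_word_mem /
   memory_write_asi operations preserve low_equal. *)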
lemma mem_mod_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = (mem_mod 10 addr val s1) \<and>
t2 = (mem_mod 10 addr val s2)"
shows "low_equal t1 t2"
using a1
apply (simp add: mem_mod_def)
by (auto intro: state_mem_mod_1011_low_equal)
lemma mem_mod_w32_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = mem_mod_w32 10 a bm data s1 \<and>
t2 = mem_mod_w32 10 a bm data s2"
shows "low_equal t1 t2"
using a1
apply (simp add: mem_mod_w32_def)
apply (simp add: Let_def)
by (meson mem_mod_low_equal)
lemma store_word_mem_low_equal:
assumes a1: "low_equal s1 s2 \<and>
Some t1 = store_word_mem s1 addr data bm 10 \<and>
Some t2 = store_word_mem s2 addr data bm 10"
shows "low_equal t1 t2" using a1
apply (simp add: store_word_mem_def)
apply (auto simp add: virt_to_phys_low_equal)
apply (case_tac "virt_to_phys addr (mmu s2) (mem s2) = None")
apply auto
apply (case_tac "mmu_writable (get_acc_flag b) 10")
apply auto
using mem_mod_w32_low_equal by blast
lemma memory_write_asi_low_equal:
assumes a1: "low_equal s1 s2 \<and>
Some t1 = memory_write_asi 10 addr bm data s1 \<and>
Some t2 = memory_write_asi 10 addr bm data s2"
shows "low_equal t1 t2"
using a1 apply (simp add: memory_write_asi_def)
by (meson add_data_cache_low_equal store_word_mem_low_equal)
lemma store_barrier_pending_mod_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = store_barrier_pending_mod False s1 \<and>
t2 = store_barrier_pending_mod False s2"
shows "low_equal t1 t2"
using a1 apply (simp add: store_barrier_pending_mod_def)
apply clarsimp
using a1 apply (auto simp add: state_var_low_equal)
by (auto intro: state_var2_low_equal)
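(* memory_write at ASI 10 preserves low_equal, and a write that succeeds
   in s1 also succeeds in s2 (memory_write_low_equal2). *)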
lemma memory_write_low_equal:
assumes a1: "low_equal s1 s2 \<and>
Some t1 = memory_write 10 addr bm data s1 \<and>
Some t2 = memory_write 10 addr bm data s2"
shows "low_equal t1 t2"
apply (case_tac "memory_write_asi 10 addr bm data s1 = None")
using a1 apply (simp add: memory_write_def)
apply (case_tac "memory_write_asi 10 addr bm data s2 = None")
apply (meson assms low_equal_com memory_write_asi_low_equal_none)
using a1 apply (simp add: memory_write_def)
apply auto
by (metis memory_write_asi_low_equal store_barrier_pending_mod_low_equal)
lemma memory_write_low_equal2:
assumes a1: "low_equal s1 s2 \<and>
Some t1 = memory_write 10 addr bm data s1"
shows "\<exists>t2. Some t2 = memory_write 10 addr bm data s2"
using a1
apply (simp add: memory_write_def)
apply auto
by (metis (full_types) memory_write_def memory_write_low_equal_none2 not_None_eq)
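(* Case helpers for store_sub2_low_equal: matching write outcomes remain
   low_equal (sub1, sub4), while a second write that succeeds in one state
   but fails in the other is impossible (sub2, sub3). *)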
lemma store_sub2_low_equal_sub1:
assumes a1: "low_equal s1 s2 \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s1 = Some y \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s2 = Some ya"
shows "low_equal (y\<lparr>traps := insert data_access_exception (traps y)\<rparr>)
(ya\<lparr>traps := insert data_access_exception (traps ya)\<rparr>)"
proof -
from a1 have f1: "low_equal y ya" using memory_write_low_equal by metis
then have "traps y = traps ya" by (simp add: low_equal_def)
then show ?thesis using f1 mod_trap_low_equal by fastforce
qed
lemma store_sub2_low_equal_sub2:
assumes a1: "low_equal s1 s2 \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s1 = Some y \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s2 = Some ya \<and>
memory_write 10 (addr + 4) 15 (user_reg_val curr_win (rd OR 1) y) y = None \<and>
memory_write 10 (addr + 4) 15 (user_reg_val curr_win (rd OR 1) ya) ya = Some yb"
shows "False"
proof -
from a1 have f1: "low_equal y ya" using memory_write_low_equal by metis
then have "(user_reg_val curr_win (rd OR 1) y) =
(user_reg_val curr_win (rd OR 1) ya)"
by (simp add: low_equal_def user_reg_val_def)
then show ?thesis using a1
using f1 memory_write_low_equal_none by fastforce
qed
lemma store_sub2_low_equal_sub3:
assumes a1: "low_equal s1 s2 \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s1 = Some y \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s2 = Some ya \<and>
memory_write 10 (addr + 4) 15 (user_reg_val curr_win (rd OR 1) y) y = Some yb \<and>
memory_write 10 (addr + 4) 15 (user_reg_val curr_win (rd OR 1) ya) ya = None"
shows "False"
proof -
from a1 have f1: "low_equal y ya" using memory_write_low_equal by metis
then have "(user_reg_val curr_win (rd OR 1) y) =
(user_reg_val curr_win (rd OR 1) ya)"
by (simp add: low_equal_def user_reg_val_def)
then show ?thesis using a1
using f1 memory_write_low_equal_none2 by fastforce
qed
lemma store_sub2_low_equal_sub4:
assumes a1: "low_equal s1 s2 \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s1 = Some y \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s2 = Some ya \<and>
memory_write 10 (addr + 4) 15 (user_reg_val curr_win (rd OR 1) y) y = Some yb \<and>
memory_write 10 (addr + 4) 15 (user_reg_val curr_win (rd OR 1) ya) ya = Some yc"
shows "low_equal yb yc"
proof -
from a1 have f1: "low_equal y ya" using memory_write_low_equal by metis
then have "(user_reg_val curr_win (rd OR 1) y) =
(user_reg_val curr_win (rd OR 1) ya)"
by (simp add: low_equal_def user_reg_val_def)
then show ?thesis using a1 f1
by (metis memory_write_low_equal)
qed
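(* store_sub2 at ASI 10 preserves low_equal, by case analysis on whether the
   first memory_write succeeds and whether the instruction is STD/STDA. *)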
lemma store_sub2_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (store_sub2 instr curr_win rd 10 addr s1)) \<and>
t2 = snd (fst (store_sub2 instr curr_win rd 10 addr s2))"
shows "low_equal t1 t2"
proof (cases "memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s1) s1 = None")
case True
then have "memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s1) s1 = None \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s2 = None"
using a1 by (metis memory_write_low_equal_none st_data0_low_equal)
then show ?thesis using a1
apply (simp add: store_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold return_def)
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
using mod_trap_low_equal traps_low_equal by fastforce
next
case False
then have f1: "memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s1) s1 \<noteq> None \<and>
memory_write 10 addr (st_byte_mask instr addr)
(st_data0 instr curr_win rd addr s2) s2 \<noteq> None"
using a1 by (metis memory_write_low_equal_none2 st_data0_low_equal)
then show ?thesis
proof (cases "(fst instr) \<in> {load_store_type STD,load_store_type STDA}")
case True
then show ?thesis using a1 f1
apply (simp add: store_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
apply (simp add: return_def)
apply (simp add: bind_def case_prod_unfold)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: case_prod_unfold bind_def h1_def h2_def Let_def simpler_modify_def)
apply (simp add: simpler_gets_def)
apply auto
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
apply (simp add: st_data0_low_equal)
apply (simp add: store_sub2_low_equal_sub1)
apply (simp add: st_data0_low_equal)
using store_sub2_low_equal_sub2 apply blast
apply (simp add: st_data0_low_equal)
using store_sub2_low_equal_sub3 apply blast
apply (simp add: st_data0_low_equal)
using store_sub2_low_equal_sub4 apply blast
apply (simp add: st_data0_low_equal)
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
using store_sub2_low_equal_sub1 apply blast
apply (simp add: st_data0_low_equal)
using store_sub2_low_equal_sub2 apply blast
apply (simp add: st_data0_low_equal)
using store_sub2_low_equal_sub3 apply blast
apply (simp add: st_data0_low_equal)
using store_sub2_low_equal_sub4 by blast
next
case False
then show ?thesis using a1 f1
apply (simp add: store_sub2_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: simpler_modify_def bind_def h1_def h2_def Let_def)
apply (simp add: return_def)
apply (simp add: bind_def case_prod_unfold)
apply clarsimp
apply (simp add: simpler_modify_def)
apply (simp add: st_data0_low_equal)
using memory_write_low_equal by metis
qed
qed
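(* store_sub1 preserves low_equal for STB/STH/ST/STD: the address-alignment
   checks raise the same trap in both states, and the remaining case reduces
   to store_sub2_low_equal. *)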
lemma store_sub1_low_equal:
assumes a1: "low_equal s1 s2 \<and>
(fst instr = load_store_type STB \<or>
fst instr = load_store_type STH \<or>
fst instr = load_store_type ST \<or>
fst instr = load_store_type STD) \<and>
t1 = snd (fst (store_sub1 instr rd 0 s1)) \<and>
t2 = snd (fst (store_sub1 instr rd 0 s2))"
shows "low_equal t1 t2"
proof (cases "(fst instr = load_store_type STH \<or> fst instr = load_store_type STHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word1) \<noteq> 0")
case True
then have "((fst instr = load_store_type STH \<or> fst instr = load_store_type STHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word1) \<noteq> 0) \<and>
((fst instr = load_store_type STH \<or> fst instr = load_store_type STHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word1) \<noteq> 0)"
by (metis (mono_tags, lifting) assms get_addr_low_equal)
then show ?thesis using a1
apply (simp add: store_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: get_curr_win3_low_equal)
by (auto intro: get_curr_win2_low_equal mod_trap_low_equal)
next
case False
then have f1: "\<not> ((fst instr = load_store_type STH \<or> fst instr = load_store_type STHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word1) \<noteq> 0) \<and>
\<not> ((fst instr = load_store_type STH \<or> fst instr = load_store_type STHA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word1) \<noteq> 0)"
by (metis (mono_tags, lifting) assms get_addr_low_equal)
then show ?thesis
proof (cases "(fst instr \<in> {load_store_type ST,load_store_type STA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word2) \<noteq> 0")
case True
then have "(fst instr \<in> {load_store_type ST,load_store_type STA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word2) \<noteq> 0 \<and>
(fst instr \<in> {load_store_type ST,load_store_type STA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word2) \<noteq> 0"
by (metis (mono_tags, lifting) assms get_addr_low_equal)
then show ?thesis using a1 f1
apply (simp add: store_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: get_curr_win3_low_equal)
by (auto intro: get_curr_win2_low_equal mod_trap_low_equal)
next
case False
then have "\<not>((fst instr \<in> {load_store_type ST,load_store_type STA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word2) \<noteq> 0) \<and>
\<not>((fst instr \<in> {load_store_type ST,load_store_type STA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word2) \<noteq> 0)"
by (metis (mono_tags, lifting) assms get_addr_low_equal)
then have f2: "\<not>((fst instr = load_store_type ST \<or> fst instr = load_store_type STA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word2) \<noteq> 0) \<and>
\<not>((fst instr = load_store_type ST \<or> fst instr = load_store_type STA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word2) \<noteq> 0)"
by auto
then show ?thesis
proof (cases "(fst instr \<in> {load_store_type STD,load_store_type STDA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word3) \<noteq> 0")
case True
then have "(fst instr \<in> {load_store_type STD,load_store_type STDA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word3) \<noteq> 0 \<and>
(fst instr \<in> {load_store_type STD,load_store_type STDA}) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word3) \<noteq> 0"
by (metis (mono_tags, lifting) assms get_addr_low_equal)
then show ?thesis using a1
apply (simp add: store_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply auto
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
apply (simp add: get_curr_win3_low_equal)
by (auto intro: get_curr_win2_low_equal mod_trap_low_equal)
next
case False
then have "\<not> (fst instr \<in> {load_store_type STD, load_store_type STDA} \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word3) \<noteq> 0) \<and>
\<not> (fst instr \<in> {load_store_type STD, load_store_type STDA} \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word3) \<noteq> 0)"
by (metis (mono_tags, lifting) assms get_addr_low_equal)
then have f3: "\<not> ((fst instr = load_store_type STD \<or> fst instr = load_store_type STDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s1)))))::word3) \<noteq> 0) \<and>
\<not> ((fst instr = load_store_type STD \<or> fst instr = load_store_type STDA) \<and>
((ucast (get_addr (snd instr) (snd (fst (get_curr_win () s2)))))::word3) \<noteq> 0)"
by auto
show ?thesis using a1
apply (simp add: store_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (unfold case_prod_beta)
apply (simp add: f1 f2 f3)
apply (simp_all add: st_asi_def)
using a1 apply clarsimp
apply (simp add: get_curr_win_low_equal get_addr2_low_equal)
by (metis store_sub2_low_equal get_curr_win2_low_equal)
qed
qed
qed
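(* store_instr preserves low_equal when both states have the supervisor bit
   cleared (get_S (cpu_reg_val PSR _) = 0), i.e. run in user mode. *)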
lemma store_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
(fst instr = load_store_type STB \<or>
fst instr = load_store_type STH \<or>
fst instr = load_store_type ST \<or>
fst instr = load_store_type STA \<or>
fst instr = load_store_type STD) \<and>
((ucast (get_S (cpu_reg_val PSR s1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR s2)))::word1) = 0 \<and>
t1 = snd (fst (store_instr instr s1)) \<and> t2 = snd (fst (store_instr instr s2))"
shows "low_equal t1 t2"
proof -
have "get_S (cpu_reg_val PSR s1) = 0 \<and> get_S (cpu_reg_val PSR s2) = 0"
using a1 by (simp add: ucast_id)
then show ?thesis using a1
apply (simp add: store_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: Let_def)
apply clarsimp
apply (simp add: raise_trap_def add_trap_set_def)
apply (simp add: simpler_modify_def)
apply (simp add: traps_low_equal)
by (auto intro: mod_trap_low_equal store_sub1_low_equal)
qed
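(* SETHI and NOP preserve low_equal. *)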
lemma sethi_low_equal: "low_equal s1 s2 \<and>
t1 = snd (fst (sethi_instr instr s1)) \<and> t2 = snd (fst (sethi_instr instr s2)) \<Longrightarrow>
low_equal t1 t2"
apply (simp add: sethi_instr_def)
apply (simp add: Let_def)
apply (case_tac "get_operand_w5 (snd instr ! Suc 0) \<noteq> 0")
apply auto
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: get_curr_win_low_equal)
using get_curr_win2_low_equal write_reg_low_equal
apply metis
by (simp add: return_def)
lemma nop_low_equal: "low_equal s1 s2 \<and>
t1 = snd (fst (nop_instr instr s1)) \<and> t2 = snd (fst (nop_instr instr s2)) \<Longrightarrow>
low_equal t1 t2"
apply (simp add: nop_instr_def)
by (simp add: return_def)
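(* Logical instructions preserve low_equal; logical_instr_sub1 covers the PSR
   update performed by the cc-setting variants (ANDcc, ANDNcc, ORcc, ORNcc,
   XORcc, XNORcc). *)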
lemma logical_instr_sub1_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (logical_instr_sub1 instr_name result s1)) \<and>
t2 = snd (fst (logical_instr_sub1 instr_name result s2))"
shows "low_equal t1 t2"
proof (cases "instr_name = logic_type ANDcc \<or>
instr_name = logic_type ANDNcc \<or>
instr_name = logic_type ORcc \<or>
instr_name = logic_type ORNcc \<or>
instr_name = logic_type XORcc \<or> instr_name = logic_type XNORcc")
case True
then show ?thesis using a1
apply (simp add: logical_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: logical_new_psr_val_def)
using write_cpu_low_equal cpu_reg_val_low_equal
by fastforce
next
case False
then show ?thesis using a1
apply (simp add: logical_instr_sub1_def)
by (simp add: return_def)
qed
lemma logical_instr_low_equal: "low_equal s1 s2 \<and>
t1 = snd (fst (logical_instr instr s1)) \<and> t2 = snd (fst (logical_instr instr s2)) \<Longrightarrow>
low_equal t1 t2"
apply (simp add: logical_instr_def)
apply (simp add: Let_def simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold)
apply auto
apply (simp_all add: get_curr_win_low_equal)
apply (simp_all add: get_operand2_low_equal)
using logical_instr_sub1_low_equal get_operand2_low_equal
get_curr_win2_low_equal write_reg_low_equal user_reg_val_low_equal
proof -
assume a1: "low_equal s1 s2"
assume "t2 = snd (fst (logical_instr_sub1 (fst instr) (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2)) (snd (fst (write_reg (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))))"
assume "t1 = snd (fst (logical_instr_sub1 (fst instr) (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s2)) (snd (fst (write_reg (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s2)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))))"
have "\<And>w wa. user_reg_val w wa (snd (fst (get_curr_win () s2))) = user_reg_val w wa (snd (fst (get_curr_win () s1)))"
using a1 by (metis (no_types) get_curr_win2_low_equal user_reg_val_low_equal)
then show "low_equal (snd (fst (logical_instr_sub1 (fst instr) (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s2)) (snd (fst (write_reg (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s2)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))))) (snd (fst (logical_instr_sub1 (fst instr) (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2)) (snd (fst (write_reg (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))))))"
using a1 by (metis (no_types) get_curr_win2_low_equal logical_instr_sub1_low_equal write_reg_low_equal)
next
assume a2: "low_equal s1 s2"
assume "t1 = snd (fst (logical_instr_sub1 (fst instr)
(logical_result (fst instr)
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))))
(get_operand2 (snd instr) s2))
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1))))))))"
assume "t2 = snd (fst (logical_instr_sub1 (fst instr)
(logical_result (fst instr)
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))))
(get_operand2 (snd instr) s2))
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2))))))))"
have "\<And>w wa. user_reg_val w wa (snd (fst (get_curr_win () s2))) = user_reg_val w wa (snd (fst (get_curr_win () s1)))"
using a2 by (metis (no_types) get_curr_win2_low_equal user_reg_val_low_equal)
then show "low_equal
(snd (fst (logical_instr_sub1 (fst instr)
(logical_result (fst instr)
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))))
(get_operand2 (snd instr) s2))
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1)))))))))
(snd (fst (logical_instr_sub1 (fst instr)
(logical_result (fst instr)
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))))
(get_operand2 (snd instr) s2))
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))))))))"
proof -
have "low_equal (snd (fst (logical_instr_sub1 (fst instr) (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s2)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1))))))))) (snd (fst (logical_instr_sub1 (fst instr) (logical_result (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s2)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))))))))"
by (meson a2 get_curr_win2_low_equal logical_instr_sub1_low_equal write_reg_low_equal)
then show ?thesis
using \<open>\<And>wa w. user_reg_val w wa (snd (fst (get_curr_win () s2))) = user_reg_val w wa (snd (fst (get_curr_win () s1)))\<close> by presburger
qed
qed
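(* Shift instructions (SLL, SRL, SRA) preserve low_equal; the case split
   follows the shift type and whether the destination operand is non-zero. *)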
lemma shift_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (shift_instr instr s1)) \<and> t2 = snd (fst (shift_instr instr s2))"
shows "low_equal t1 t2"
proof (cases "(fst instr = shift_type SLL) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0)")
case True
then show ?thesis using a1
apply (simp add: shift_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def)
apply (simp add: bind_def h1_def h2_def Let_def case_prod_unfold)
apply auto
apply (simp_all add: get_curr_win_low_equal)
proof -
assume a1: "low_equal s1 s2"
assume "t2 = snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) << unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s2))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))"
assume "t1 = snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) << unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s1))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))"
have "\<And>w wa wb. low_equal (snd (fst (write_reg w wa wb s1))) (snd (fst (write_reg w wa wb s2)))"
using a1 by (metis write_reg_low_equal)
then show "low_equal (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) << unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s1))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) << unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s2))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))"
using a1 by (simp add: get_curr_win_def simpler_gets_def user_reg_val_low_equal)
next
assume a2: "low_equal s1 s2"
assume "t1 = snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) <<
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1)))))"
assume "t2 = snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) <<
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2)))))"
have "\<And>w wa wb. low_equal (snd (fst (write_reg w wa wb s1))) (snd (fst (write_reg w wa wb s2)))"
using a2 by (metis write_reg_low_equal)
then show "low_equal
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) <<
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1))))))
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) <<
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2))))))"
proof -
assume a1: "\<And>w wa wb. low_equal (snd (fst (write_reg w wa wb s1))) (snd (fst (write_reg w wa wb s2)))"
have "\<And>u s. fst (get_curr_win u s) = (ucast (get_CWP (cpu_reg_val PSR s))::'a word, s)"
by (simp add: get_curr_win_def simpler_gets_def)
then show ?thesis
using a1 assms user_reg_val_low_equal by fastforce
qed
qed
next
case False
then have f1: "\<not>((fst instr = shift_type SLL) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0))"
by auto
then show ?thesis
proof (cases "(fst instr = shift_type SRL) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0)")
case True
then show ?thesis using a1 f1
apply (simp add: shift_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def)
apply (simp add: bind_def h1_def h2_def Let_def case_prod_unfold)
apply auto
apply (simp_all add: get_curr_win_low_equal)
proof -
assume a1: "low_equal s1 s2"
assume "t2 = snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) >> unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s2))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))"
assume "t1 = snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) >> unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s1))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))"
have "\<And>u s. fst (get_curr_win u s) = (ucast (get_CWP (cpu_reg_val PSR s))::'a word, s)"
by (simp add: get_curr_win_def simpler_gets_def)
then show "low_equal (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) >> unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s1))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) >> unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s2))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))"
using a1 user_reg_val_low_equal write_reg_low_equal by fastforce
next
assume a2: "low_equal s1 s2"
assume "t1 = snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) >>
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1)))))"
assume "t2 = snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) >>
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2)))))"
have "\<And>u s. fst (get_curr_win u s) = (ucast (get_CWP (cpu_reg_val PSR s))::'a word, s)"
by (simp add: get_curr_win_def simpler_gets_def)
then show "low_equal
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) >>
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1))))))
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) >>
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2))))))"
using a2 user_reg_val_low_equal write_reg_low_equal by fastforce
qed
next
case False
then have f2: "\<not>((fst instr = shift_type SRL) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0))"
by auto
then show ?thesis
proof (cases "(fst instr = shift_type SRA) \<and> (get_operand_w5 ((snd instr)!3) \<noteq> 0)")
case True
then show ?thesis using a1 f1 f2
apply (simp add: shift_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def)
apply (simp add: bind_def h1_def h2_def Let_def case_prod_unfold)
apply auto
apply (simp_all add: get_curr_win_low_equal)
proof -
assume a1: "low_equal s1 s2"
assume "t1 = snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) >>> unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s1))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))"
assume "t2 = snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) >>> unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s2))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))"
have "\<forall>w wa. user_reg_val wa w (snd (fst (get_curr_win () s1))) = user_reg_val wa w (snd (fst (get_curr_win () s2)))"
using a1 by (meson get_curr_win2_low_equal user_reg_val_low_equal)
then show "low_equal (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) >>> unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s1))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) >>> unat (ucast (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 2)) (snd (fst (get_curr_win () s2))))::word5)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))"
using a1 by (metis (no_types) get_curr_win2_low_equal write_reg_low_equal)
next
assume a2: "low_equal s1 s2"
assume "t1 = snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) >>>
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1)))))"
assume "t2 = snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) >>>
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2)))))"
have "\<forall>w wa. user_reg_val wa w (snd (fst (get_curr_win () s1))) = user_reg_val wa w (snd (fst (get_curr_win () s2)))"
using a2 by (meson get_curr_win2_low_equal user_reg_val_low_equal)
then show "low_equal
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) >>>
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1))))))
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) >>>
unat (get_operand_w5 (snd instr ! 2)))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2))))))"
using a2 get_curr_win2_low_equal write_reg_low_equal by fastforce
qed
next
case False
then show ?thesis using a1 f1 f2
apply (simp add: shift_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def)
apply (simp add: bind_def h1_def h2_def Let_def case_prod_unfold)
apply (simp add: return_def)
using get_curr_win2_low_equal by blast
qed
qed
qed
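(* Addition instructions preserve low_equal: add_instr_sub1 handles the PSR
   update for ADDcc/ADDXcc, add_instr the full instruction. *)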
lemma add_instr_sub1_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (add_instr_sub1 instr_name result rs1_val operand2 s1)) \<and>
t2 = snd (fst (add_instr_sub1 instr_name result rs1_val operand2 s2))"
shows "low_equal t1 t2"
proof (cases "instr_name = arith_type ADDcc \<or> instr_name = arith_type ADDXcc")
case True
then show ?thesis using a1
apply (simp add: add_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (clarsimp simp add: cpu_reg_val_low_equal)
using write_cpu_low_equal by blast
next
case False
then show ?thesis using a1
apply (simp add: add_instr_sub1_def)
by (simp add: return_def)
qed
lemma add_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (add_instr instr s1)) \<and> t2 = snd (fst (add_instr instr s2))"
shows "low_equal t1 t2"
proof -
have f1: "low_equal s1 s2 \<and>
t1 = snd (fst (add_instr_sub1 (fst instr)
(if fst instr = arith_type ADD \<or> fst instr = arith_type ADDcc
then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) +
get_operand2 (snd instr) s1
else user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) +
get_operand2 (snd instr) s1 +
ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1))))))
(user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))))
(get_operand2 (snd instr) s1)
(snd (fst (write_reg
(if get_operand_w5 (snd instr ! 3) \<noteq> 0
then if fst instr = arith_type ADD \<or> fst instr = arith_type ADDcc
then user_reg_val (fst (fst (get_curr_win () s1)))
(get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) +
get_operand2 (snd instr) s1
else user_reg_val (fst (fst (get_curr_win () s1)))
(get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) +
get_operand2 (snd instr) s1 +
ucast (get_icc_C
(cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))
else user_reg_val (fst (fst (get_curr_win () s1)))
(get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1)))))))) \<and>
t2 = snd (fst (add_instr_sub1 (fst instr)
(if fst instr = arith_type ADD \<or> fst instr = arith_type ADDcc
then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) +
get_operand2 (snd instr) s2
else user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) +
get_operand2 (snd instr) s2 +
ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2))))))
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))))
(get_operand2 (snd instr) s2)
(snd (fst (write_reg
(if get_operand_w5 (snd instr ! 3) \<noteq> 0
then if fst instr = arith_type ADD \<or> fst instr = arith_type ADDcc
then user_reg_val (fst (fst (get_curr_win () s2)))
(get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) +
get_operand2 (snd instr) s2
else user_reg_val (fst (fst (get_curr_win () s2)))
(get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) +
get_operand2 (snd instr) s2 +
ucast (get_icc_C
(cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))
else user_reg_val (fst (fst (get_curr_win () s2)))
(get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2))))))))"
using a1 apply (simp add: add_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (simp add: case_prod_unfold)
then show ?thesis
proof (cases "get_operand_w5 (snd instr ! 3) \<noteq> 0")
case True
then have f2: "get_operand_w5 (snd instr ! 3) \<noteq> 0" by auto
then show ?thesis
proof (cases "fst instr = arith_type ADD \<or> fst instr = arith_type ADDcc")
case True
then show ?thesis
using f1 f2 apply clarsimp
proof -
assume a1: "low_equal s1 s2"
assume "t1 = snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))))"
assume a2: "t2 = snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))))"
have f3: "\<forall>is. get_operand2 is s1 = get_operand2 is s2"
using a1 by (metis get_operand2_low_equal)
have f4: "fst (fst (get_curr_win () s1)) = fst (fst (get_curr_win () s2))"
using a1 by (meson get_curr_win_low_equal)
have "\<forall>s. snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) s + get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) s) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) s + get_operand2 (snd instr) s2) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))))) = t2 \<or> \<not> low_equal s (snd (fst (get_curr_win () s2)))"
using a2 user_reg_val_low_equal by fastforce
then show "low_equal (snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))))) (snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))))))"
using f4 f3 a2 a1 by (metis (no_types) add_instr_sub1_low_equal get_curr_win2_low_equal write_reg_low_equal)
qed
next
case False
then show ?thesis
using f1 f2 apply clarsimp
proof -
assume a1: "low_equal s1 s2"
have f2: "\<forall>s sa sb w wa wb sc. (\<not> low_equal s sa \<or> sb \<noteq> snd (fst (write_reg w (wa::'a word) wb s)) \<or> sc \<noteq> snd (fst (write_reg w wa wb sa))) \<or> low_equal sb sc"
by (meson write_reg_low_equal)
have f3: "gets (\<lambda>s. ucast (get_CWP (cpu_reg_val PSR s))::'a word) = get_curr_win ()"
by (simp add: get_curr_win_def)
then have "((ucast (get_CWP (cpu_reg_val PSR s1)), s1), False) = (fst (get_curr_win () s1), snd (get_curr_win () s1))"
by (metis (no_types) prod.collapse simpler_gets_def)
then have "(ucast (get_CWP (cpu_reg_val PSR s1)), s1) = fst (get_curr_win () s1) \<and> \<not> snd (get_curr_win () s1)"
by blast
then have f4: "ucast (get_CWP (cpu_reg_val PSR s1)) = fst (fst (get_curr_win () s1)) \<and> s1 = snd (fst (get_curr_win () s1))"
by (metis (no_types) prod.collapse prod.simps(1))
have "((ucast (get_CWP (cpu_reg_val PSR s2)), s2), False) = (fst (get_curr_win () s2), snd (get_curr_win () s2))"
using f3 by (metis (no_types) prod.collapse simpler_gets_def)
then have "(ucast (get_CWP (cpu_reg_val PSR s2)), s2) = fst (get_curr_win () s2) \<and> \<not> snd (get_curr_win () s2)"
by blast
then have f5: "ucast (get_CWP (cpu_reg_val PSR s2)) = fst (fst (get_curr_win () s2)) \<and> s2 = snd (fst (get_curr_win () s2))"
by (metis prod.collapse prod.simps(1))
then have f6: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2))) = low_equal s1 s2"
using f4 by presburger
have f7: "fst (fst (get_curr_win () s1)) = ucast (get_CWP (cpu_reg_val PSR s1))"
using f4 by presburger
have f8: "cpu_reg_val PSR s1 = cpu_reg_val PSR s2"
using a1 by (meson cpu_reg_val_low_equal)
have f9: "user_reg_val (ucast (get_CWP (cpu_reg_val PSR s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) = user_reg_val (ucast (get_CWP (cpu_reg_val PSR s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))"
using f6 a1 by (meson user_reg_val_low_equal)
have f10: "ucast (get_CWP (cpu_reg_val PSR s2)) = fst (fst (get_curr_win () s2))"
using f5 by meson
have f11: "\<forall>s sa is. \<not> low_equal (s::'a sparc_state) sa \<or> get_operand2 is s = get_operand2 is sa"
using get_operand2_low_equal by blast
then have f12: "user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1))))) = user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))"
using f9 f8 f5 f4 a1 by auto
then have "low_equal (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))"
using f10 f8 f6 f4 f2 a1 by simp
then show "low_equal (snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))))) (snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))))))"
using f12 f11 f10 f9 f8 f7 a1 add_instr_sub1_low_equal by fastforce
qed
qed
next
case False
then have f3: "\<not> get_operand_w5 (snd instr ! 3) \<noteq> 0" by auto
then show ?thesis
proof (cases "fst instr = arith_type ADD \<or> fst instr = arith_type ADDcc")
case True
then show ?thesis
using f1 f3 apply clarsimp
proof -
assume a1: "low_equal s1 s2"
assume "t1 = snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1))))))))"
assume "t2 = snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))))))))"
have f2: "\<forall>is. get_operand2 is s1 = get_operand2 is s2"
using a1 by (meson get_operand2_low_equal)
have f3: "fst (fst (get_curr_win () s1)) = fst (fst (get_curr_win () s2))"
using a1 by (meson get_curr_win_low_equal)
have "\<forall>w wa. user_reg_val wa w (snd (fst (get_curr_win () s1))) = user_reg_val wa w (snd (fst (get_curr_win () s2)))"
using a1 by (meson get_curr_win2_low_equal user_reg_val_low_equal)
then show "low_equal (snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1))))))))) (snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))))))))"
using f3 f2 a1 by (metis (no_types) add_instr_sub1_low_equal get_curr_win2_low_equal write_reg_low_equal)
qed
next
case False
then show ?thesis
using f1 f3 apply clarsimp
proof -
assume a1: "low_equal s1 s2"
have f2: "gets (\<lambda>s. ucast (get_CWP (cpu_reg_val PSR s))::'a word) = get_curr_win ()"
by (simp add: get_curr_win_def)
then have "((ucast (get_CWP (cpu_reg_val PSR s1)), s1), False) = (fst (get_curr_win () s1), snd (get_curr_win () s1))"
by (metis (no_types) prod.collapse simpler_gets_def)
then have "(ucast (get_CWP (cpu_reg_val PSR s1)), s1) = fst (get_curr_win () s1) \<and> \<not> snd (get_curr_win () s1)"
by fastforce
then have f3: "ucast (get_CWP (cpu_reg_val PSR s1)) = fst (fst (get_curr_win () s1)) \<and> s1 = snd (fst (get_curr_win () s1))"
by (metis prod.collapse prod.simps(1))
have "((ucast (get_CWP (cpu_reg_val PSR s2)), s2), False) = (fst (get_curr_win () s2), snd (get_curr_win () s2))"
using f2 by (metis (no_types) prod.collapse simpler_gets_def)
then have "(ucast (get_CWP (cpu_reg_val PSR s2)), s2) = fst (get_curr_win () s2) \<and> \<not> snd (get_curr_win () s2)"
by fastforce
then have f4: "ucast (get_CWP (cpu_reg_val PSR s2)) = fst (fst (get_curr_win () s2)) \<and> s2 = snd (fst (get_curr_win () s2))"
by (metis (no_types) prod.collapse prod.simps(1))
then have f5: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2))) = low_equal s1 s2"
using f3 by presburger
have f6: "fst (fst (get_curr_win () s1)) = ucast (get_CWP (cpu_reg_val PSR s1))"
using f3 by auto
have f7: "cpu_reg_val PSR s1 = cpu_reg_val PSR s2"
using a1 by (meson cpu_reg_val_low_equal)
have f8: "\<forall>s sa w wa. \<not> low_equal s sa \<or> user_reg_val (w::'a word) wa s = user_reg_val w wa sa"
by (meson user_reg_val_low_equal)
have f9: "ucast (get_CWP (cpu_reg_val PSR s2)) = fst (fst (get_curr_win () s2))"
using f4 by meson
have "\<forall>s sa is. \<not> low_equal (s::'a sparc_state) sa \<or> get_operand2 is s = get_operand2 is sa"
using get_operand2_low_equal by blast
then have f10: "get_operand2 (snd instr) s1 = get_operand2 (snd instr) s2"
using a1 by meson
have f11: "cpu_reg_val PSR (snd (fst (get_curr_win () s2))) = cpu_reg_val PSR s1"
using f4 a1 by (simp add: cpu_reg_val_low_equal)
have f12: "user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1))) = 0"
by (meson user_reg_val_def)
have "user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))) = 0"
by (meson user_reg_val_def)
then have "low_equal (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))))))"
using f12 f9 f7 f5 f3 a1 write_reg_low_equal by fastforce
then have "low_equal (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))))) \<and> snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))))))) = snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (if get_operand_w5 (snd instr ! Suc 0) = 0 then 0 else user_reg (snd (fst (get_curr_win () s2))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))))))) \<and> snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))))))) = snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (if get_operand_w5 (snd instr ! Suc 0) = 0 then 0 else user_reg (snd (fst (get_curr_win () s2))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))))))))"
using f11 f10 f9 f8 f7 f6 f5 f3 a1 by (simp add: user_reg_val_def)
then show "low_equal (snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) + get_operand2 (snd instr) s1 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1))))))))) (snd (fst (add_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) + get_operand2 (snd instr) s2 + ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))))))))"
using add_instr_sub1_low_equal by blast
qed
qed
qed
qed
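(* Subtraction instructions, analogously: sub_instr_sub1 handles the PSR
   update for SUBcc/SUBXcc, sub_instr the full instruction. *)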
lemma sub_instr_sub1_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (sub_instr_sub1 instr_name result rs1_val operand2 s1)) \<and>
t2 = snd (fst (sub_instr_sub1 instr_name result rs1_val operand2 s2))"
shows "low_equal t1 t2"
proof (cases "instr_name = arith_type SUBcc \<or> instr_name = arith_type SUBXcc")
case True
then show ?thesis using a1
apply (simp add: sub_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (clarsimp simp add: cpu_reg_val_low_equal)
using write_cpu_low_equal by blast
next
case False
then show ?thesis using a1
apply (simp add: sub_instr_sub1_def)
by (simp add: return_def)
qed
lemma sub_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (sub_instr instr s1)) \<and> t2 = snd (fst (sub_instr instr s2))"
shows "low_equal t1 t2"
proof -
have f1: "low_equal s1 s2 \<and>
t1 = snd (fst (sub_instr_sub1 (fst instr)
(if fst instr = arith_type SUB \<or> fst instr = arith_type SUBcc
then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) -
get_operand2 (snd instr) s1
else user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) -
get_operand2 (snd instr) s1 -
ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1))))))
(user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))))
(get_operand2 (snd instr) s1)
(snd (fst (write_reg
(if get_operand_w5 (snd instr ! 3) \<noteq> 0
then if fst instr = arith_type SUB \<or> fst instr = arith_type SUBcc
then user_reg_val (fst (fst (get_curr_win () s1)))
(get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) -
get_operand2 (snd instr) s1
else user_reg_val (fst (fst (get_curr_win () s1)))
(get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) -
get_operand2 (snd instr) s1 -
ucast (get_icc_C
(cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))
else user_reg_val (fst (fst (get_curr_win () s1)))
(get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1)))))))) \<and>
t2 = snd (fst (sub_instr_sub1 (fst instr)
(if fst instr = arith_type SUB \<or> fst instr = arith_type SUBcc
then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) -
get_operand2 (snd instr) s2
else user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) -
get_operand2 (snd instr) s2 -
ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2))))))
(user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))))
(get_operand2 (snd instr) s2)
(snd (fst (write_reg
(if get_operand_w5 (snd instr ! 3) \<noteq> 0
then if fst instr = arith_type SUB \<or> fst instr = arith_type SUBcc
then user_reg_val (fst (fst (get_curr_win () s2)))
(get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) -
get_operand2 (snd instr) s2
else user_reg_val (fst (fst (get_curr_win () s2)))
(get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) -
get_operand2 (snd instr) s2 -
ucast (get_icc_C
(cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))
else user_reg_val (fst (fst (get_curr_win () s2)))
(get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2))))))))"
using a1 apply (simp add: sub_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (simp add: case_prod_unfold)
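(* The remaining proof case-splits on whether the destination register
   field (snd instr ! 3) is zero, and on whether the opcode is SUB/SUBcc
   (no carry term) or SUBX/SUBXcc (the carry bit get_icc_C is subtracted);
   each case is discharged with the low_equal lemmas for get_curr_win,
   user_reg_val, get_operand2, write_reg and sub_instr_sub1. *)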
then show ?thesis
proof (cases "get_operand_w5 (snd instr ! 3) \<noteq> 0")
case True
then have f2: "get_operand_w5 (snd instr ! 3) \<noteq> 0" by auto
then show ?thesis
proof (cases "fst instr = arith_type SUB \<or> fst instr = arith_type SUBcc")
case True
then show ?thesis
using f1 f2 apply clarsimp
proof -
assume a1: "low_equal s1 s2"
assume a2: "t1 = snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))))"
assume a3: "t2 = snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))))"
then have f4: "snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s1) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))))) = t2"
using a1 by (simp add: get_operand2_low_equal)
have "\<forall>s. \<not> low_equal (snd (fst (get_curr_win () s1))) s \<or> snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) s - get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) s) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) s - get_operand2 (snd instr) s1) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))))) = t1"
using a2 a1 by (simp add: get_curr_win_low_equal user_reg_val_low_equal)
then show "low_equal (snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))))) (snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))))))"
using f4 a3 a2 a1 by (metis (no_types) get_curr_win2_low_equal sub_instr_sub1_low_equal write_reg_low_equal)
qed
next
case False
then show ?thesis
using f1 f2 apply clarsimp
proof -
assume a1: "low_equal s1 s2"
have f2: "fst (get_curr_win () s1) = (ucast (get_CWP (cpu_reg_val PSR s1)), s1)"
by (simp add: get_curr_win_def simpler_gets_def)
have f3: "cpu_reg_val PSR s1 = cpu_reg_val PSR s2"
using a1 by (meson cpu_reg_val_low_equal)
then have f4: "user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) = user_reg_val (ucast (get_CWP (cpu_reg_val PSR s2))) (get_operand_w5 (snd instr ! Suc 0)) s1"
using f2 by simp
have f5: "\<forall>s sa is. \<not> low_equal (s::'a sparc_state) sa \<or> get_operand2 is s = get_operand2 is sa"
using get_operand2_low_equal by blast
then have f6: "sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (user_reg_val (ucast (get_CWP (cpu_reg_val PSR s2))) (get_operand_w5 (snd instr ! Suc 0)) s2) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))) = sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))"
using f4 a1 by (simp add: user_reg_val_low_equal)
have f7: "fst (get_curr_win () s2) = (ucast (get_CWP (cpu_reg_val PSR s2)), s2)"
by (simp add: get_curr_win_def simpler_gets_def)
then have f8: "user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2))))) = user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))"
using f5 f2 a1 by (simp add: cpu_reg_val_low_equal user_reg_val_low_equal)
then have f9: "sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (user_reg_val (ucast (get_CWP (cpu_reg_val PSR s2))) (get_operand_w5 (snd instr ! Suc 0)) s2) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))) = sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))"
using f7 by fastforce
have "write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (ucast (get_CWP (cpu_reg_val PSR s2))) (get_operand_w5 (snd instr ! 3)) s2 = write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))"
using f8 f7 by simp
then have "low_equal (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2))))))"
using f3 f2 a1 by (metis (no_types) prod.sel(1) prod.sel(2) write_reg_low_equal)
then show "low_equal (snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s1))))))))) (snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (get_curr_win () s2)))))))))"
using f9 f6 by (metis (no_types) sub_instr_sub1_low_equal)
qed
qed
next
case False
then have f3: "\<not> get_operand_w5 (snd instr ! 3) \<noteq> 0" by auto
then show ?thesis
proof (cases "fst instr = arith_type SUB \<or> fst instr = arith_type SUBcc")
case True
then show ?thesis
using f1 f3 apply clarsimp
proof -
assume a1: "low_equal s1 s2"
assume "t1 = snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1))))))))"
assume "t2 = snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))))))))"
have f2: "\<forall>is. get_operand2 is s1 = get_operand2 is s2"
using a1 get_operand2_low_equal by blast
have f3: "fst (fst (get_curr_win () s1)) = fst (fst (get_curr_win () s2))"
using a1 by (meson get_curr_win_low_equal)
have "\<forall>w wa. user_reg_val wa w (snd (fst (get_curr_win () s1))) = user_reg_val wa w (snd (fst (get_curr_win () s2)))"
using a1 by (metis (no_types) get_curr_win2_low_equal user_reg_val_low_equal)
then show "low_equal (snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1))))))))) (snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))))))))"
using f3 f2 a1 by (metis (no_types) get_curr_win2_low_equal sub_instr_sub1_low_equal write_reg_low_equal)
qed
next
case False
then show ?thesis
using f1 f3 apply clarsimp
proof -
assume a1: "low_equal s1 s2"
have f2: "\<forall>s sa sb w wa wb sc. (\<not> low_equal s sa \<or> sb \<noteq> snd (fst (write_reg w (wa::'a word) wb s)) \<or> sc \<noteq> snd (fst (write_reg w wa wb sa))) \<or> low_equal sb sc"
by (meson write_reg_low_equal)
have "((ucast (get_CWP (cpu_reg_val PSR s1)), s1), False) = get_curr_win () s1"
by (simp add: get_curr_win_def simpler_gets_def)
then have f3: "ucast (get_CWP (cpu_reg_val PSR s1)) = fst (fst (get_curr_win () s1)) \<and> s1 = snd (fst (get_curr_win () s1))"
by (metis (no_types) prod.collapse prod.simps(1))
have "((ucast (get_CWP (cpu_reg_val PSR s2)), s2), False) = get_curr_win () s2"
by (simp add: get_curr_win_def simpler_gets_def)
then have f4: "ucast (get_CWP (cpu_reg_val PSR s2)) = fst (fst (get_curr_win () s2)) \<and> s2 = snd (fst (get_curr_win () s2))"
by (metis (no_types) prod.collapse prod.simps(1))
have f5: "\<forall>s sa sb sc w wa wb sd. (\<not> low_equal (s::'a sparc_state) sa \<or> sb \<noteq> snd (fst (sub_instr_sub1 sc w wa wb s)) \<or> sd \<noteq> snd (fst (sub_instr_sub1 sc w wa wb sa))) \<or> low_equal sb sd"
by (meson sub_instr_sub1_low_equal)
have "low_equal (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))))))"
using f4 f3 f2 a1 by (simp add: cpu_reg_val_low_equal user_reg_val_low_equal)
then show "low_equal (snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) - get_operand2 (snd instr) s1 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))))) (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) (get_operand2 (snd instr) s1) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (get_curr_win () s1))))))))) (snd (fst (sub_instr_sub1 (fst instr) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) - get_operand2 (snd instr) s2 - ucast (get_icc_C (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))))) (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) (get_operand2 (snd instr) s2) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))))))))"
using f5 f4 f3 a1 by (simp add: cpu_reg_val_low_equal get_operand2_low_equal user_reg_val_low_equal)
qed
qed
qed
qed
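(* Preservation of low_equal by mul_instr_sub1: only arith_type SMULcc and
   UMULcc modify the state (a control-register write, handled by
   write_cpu_low_equal); the remaining opcodes reduce to return. *)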
lemma mul_instr_sub1_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (mul_instr_sub1 instr_name result s1)) \<and>
t2 = snd (fst (mul_instr_sub1 instr_name result s2))"
shows "low_equal t1 t2"
proof (cases "instr_name \<in> {arith_type SMULcc,arith_type UMULcc}")
case True
then show ?thesis using a1
apply (simp add: mul_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (clarsimp simp add: cpu_reg_val_low_equal)
using write_cpu_low_equal by blast
next
case False
then show ?thesis using a1
apply (simp add: mul_instr_sub1_def)
by (simp add: return_def)
qed
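(* Non-interference for the multiply instructions.  The proof follows the
   unfolded definition: the 64-bit product is formed from uint values for
   UMUL/UMULcc and from sint values otherwise, its upper half (>> 32) is
   written to the Y register via write_cpu, and the lower word is written
   to the destination register unless the destination field (snd instr ! 3)
   is zero; mul_instr_sub1 then completes the update. *)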
lemma mul_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (mul_instr instr s1)) \<and> t2 = snd (fst (mul_instr instr s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: mul_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
proof -
assume a1: "low_equal s1 s2 \<and> t1 = snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) \<noteq> 0 then ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) else user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))))))) \<and> t2 = snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) \<noteq> 0 then ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) else user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))"
have f2: "\<forall>s sa sb sc w sd. \<not> low_equal (s::'a sparc_state) sa \<or> sb \<noteq> snd (fst (mul_instr_sub1 sc w s)) \<or> sd \<noteq> snd (fst (mul_instr_sub1 sc w sa)) \<or> low_equal sb sd"
using mul_instr_sub1_low_equal by blast
have f3: "\<forall>s sa sb w wa wb sc. \<not> low_equal s sa \<or> sb \<noteq> snd (fst (write_reg w (wa::'a word) wb s)) \<or> sc \<noteq> snd (fst (write_reg w wa wb sa)) \<or> low_equal sb sc"
by (meson write_reg_low_equal)
have f4: "\<forall>s sa sb w c sc. \<not> low_equal (s::'a sparc_state) sa \<or> sb \<noteq> snd (fst (write_cpu w c s)) \<or> sc \<noteq> snd (fst (write_cpu w c sa)) \<or> low_equal sb sc"
by (meson write_cpu_low_equal)
have f5: "low_equal s1 s2 \<and> t1 = snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))))))) \<and> t2 = snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))"
using a1 by presburger
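(* The auxiliary facts below record that, under low_equal, both runs agree
   on the current window and the underlying states (f6-f9), on user_reg_val
   reads (f10, f16) and on get_operand2 (f11), so the Y-register write and
   the destination-register write again produce low-equal states. *)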
have f6: "((ucast (get_CWP (cpu_reg_val PSR s1)), s1), False) = (fst (get_curr_win () s1), snd (get_curr_win () s1))"
by (simp add: get_curr_win_def simpler_gets_def)
have f7: "fst (fst (get_curr_win () s1)) = fst (fst (get_curr_win () s2))"
using f5 by (meson get_curr_win_low_equal)
have "((ucast (get_CWP (cpu_reg_val PSR s2)), s2), False) = (fst (get_curr_win () s2), snd (get_curr_win () s2))"
by (simp add: get_curr_win_def simpler_gets_def)
then have f8: "ucast (get_CWP (cpu_reg_val PSR s2)) = fst (fst (get_curr_win () s2)) \<and> s2 = snd (fst (get_curr_win () s2))"
by (metis prod.collapse prod.simps(1))
then have f9: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using f6 f5 by (metis (no_types) prod.collapse prod.simps(1))
have f10: "\<forall>s sa w wa. \<not> low_equal s sa \<or> user_reg_val (w::'a word) wa s = user_reg_val w wa sa"
using user_reg_val_low_equal by blast
have f11: "get_operand2 (snd instr) s1 = get_operand2 (snd instr) (snd (fst (get_curr_win () s2)))"
using f9 f6 by (metis (no_types) get_operand2_low_equal prod.collapse prod.simps(1))
then have f12: "uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2) = uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1)"
using f10 f9 f8 f7 by presburger
then have f13: "(word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) \<noteq> (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) \<or> (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) \<noteq> (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) \<or> low_equal (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))"
using f9 f4 by presburger
have "get_operand_w5 (snd instr ! 3) = 0 \<and> low_equal (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<longrightarrow> write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) = write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))"
using f10 f7 by force
then have f14: "get_operand_w5 (snd instr ! 3) \<noteq> 0 \<or> low_equal (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))))) \<or> \<not> low_equal (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))"
using f3 by metis
then have f15: "low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))) \<or> fst instr \<noteq> arith_type UMULcc \<or> get_operand_w5 (snd instr ! 3) \<noteq> 0 \<or> (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) \<noteq> (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) \<or> (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) \<noteq> (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))"
using f13 f12 f2 by fastforce
have f16: "user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) = user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))"
using f10 f9 f7 by presburger
{ assume "fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))"
moreover
{ assume "\<not> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))"
moreover
{ assume "low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))) \<noteq> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))"
moreover
{ assume "mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))))) \<noteq> mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))"
then have "write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))) \<noteq> write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) \<or> fst instr \<noteq> arith_type UMULcc"
by fastforce }
moreover
{ assume "mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))) \<noteq> mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))"
then have "write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<noteq> write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<or> fst instr \<noteq> arith_type UMULcc"
by fastforce }
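(* The two raw proof blocks above appear to treat the states s1 and s2 symmetrically:
   in each, if the UMULcc-specialised result of mul_instr_sub1 differs from the generic
   one, then either the corresponding write_reg terms differ or fst instr is not
   arith_type UMULcc. *)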
ultimately have "write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<noteq> write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<or> write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))) \<noteq> write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) \<or> fst instr \<noteq> arith_type UMULcc"
by force }
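(* The 'ultimately' step below extends this with the possibility that the generic
   mul_instr_sub1 results of s1 and s2 are not low_equal, giving a four-way disjunction:
   a specialised write_reg term differs from its generic counterpart (for s1 or for s2),
   or the generic results are not low_equal, or fst instr is not arith_type UMULcc. *)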
ultimately have "write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<noteq> write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<or> write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))) \<noteq> write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) \<or> \<not> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))) \<or> fst instr \<noteq> arith_type UMULcc"
by fastforce }
ultimately have "fst instr = arith_type UMULcc \<and> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))) \<and> write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))) = write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) \<and> write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))) = write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))"
by blast }
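(* End of the (apparently sledgehammer-reconstructed) case analysis for arith_type UMULcc:
   under the stated equalities the goal reduces to the destination operand
   (get_operand_w5 (snd instr ! 3)) being register 0 (%g0), or the required low_equal
   facts for the UMUL, UMULcc and signed-multiply branches.  The blocks that follow
   handle the complementary case, in which the generic results are assumed not to be
   low_equal. *)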
moreover
{ assume "\<not> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))"
moreover
{ assume "\<not> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))) \<and> snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))))))) = snd (fst (mul_instr_sub1 (arith_type UMULcc) (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))))))) \<and> snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))) = snd (fst (mul_instr_sub1 (arith_type UMULcc) (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))"
then have "\<not> low_equal (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))"
using f2 by blast
moreover
{ assume "(if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) \<noteq> (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))))"
moreover
{ assume "(if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) \<noteq> ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))"
then have "(if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) = (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0"
by (metis f11 f16 f8) }
ultimately have "(if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) = (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0"
by fastforce }
ultimately have "fst instr = arith_type UMULcc \<and> (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) = (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0"
using f13 f7 f3 by fastforce }
moreover
{ assume "mul_instr_sub1 (arith_type UMULcc) (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))))) \<noteq> mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))"
moreover
{ assume "(if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) \<noteq> ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))"
then have "(if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) = (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0"
by (metis f11 f16 f8) }
ultimately have "fst instr = arith_type UMULcc \<and> (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) = (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1)) else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0"
by fastforce }
ultimately have "fst instr = arith_type UMULcc \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0"
using f12 by fastforce }
moreover
{ assume "write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))) \<noteq> write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))"
then have "fst instr = arith_type UMULcc \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0"
by presburger }
ultimately have "fst instr = arith_type UMULcc \<longrightarrow> get_operand_w5 (snd instr ! 3) = 0 \<or> get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))"
by force
moreover
{ assume "fst instr \<noteq> arith_type UMULcc"
{ assume "fst instr \<noteq> arith_type UMULcc \<and> low_equal (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))"
moreover
{ assume "fst instr \<noteq> arith_type UMULcc \<and> low_equal (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1))))))))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))))) \<and> snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))) = snd (fst (mul_instr_sub1 (arith_type UMUL) (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 
3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))"
then have "(fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> fst instr \<noteq> arith_type UMUL \<and> fst instr \<noteq> arith_type UMULcc \<or> (get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))) \<or> get_operand_w5 (snd instr ! 3) = 0"
using f2 by presburger }
ultimately have "fst instr \<noteq> arith_type UMULcc \<and> (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) \<noteq> ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) \<or> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> fst instr \<noteq> arith_type UMUL \<and> fst instr \<noteq> arith_type UMULcc \<or> (get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))) \<or> get_operand_w5 (snd instr ! 3) = 0"
by fastforce }
then have "(get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))) \<or> get_operand_w5 (snd instr ! 3) = 0"
using f16 f11 f9 f8 f7 f4 f3 f2 by force }
moreover
{ assume "get_operand_w5 (snd instr ! 3) = 0"
moreover
{ assume "(fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0"
moreover
{ assume "((fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0) \<and> \<not> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))"
moreover
{ assume "((fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0) \<and> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))) \<noteq> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))"
then have "((fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0) \<and> arith_type UMUL \<noteq> arith_type UMULcc \<or> fst instr \<noteq> arith_type UMULcc \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0"
by fastforce
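(* From the assumed mismatch between the UMUL and the UMULcc low_equal statements, fastforce keeps the UMUL/UMULcc and zero-operand hypotheses while splitting on whether the two arith_types differ or the instruction is not UMULcc. *)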
then have "((fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0) \<and> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<or> fst instr \<noteq> arith_type UMULcc \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0"
by force }
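(* End of the nested case: force weakens the disjunction into the form needed by the enclosing ultimately step, i.e. either fst instr is not UMUL or the UMUL states are low_equal, or fst instr is not UMULcc. *)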
ultimately have "((fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0) \<and> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<or> ((fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0) \<and> \<not> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))) \<or> fst instr \<noteq> arith_type UMULcc \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0"
by simp }
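(* First moreover branch closed: simp merges both alternatives into a single disjunction that still carries the negated low_equal claim, now phrased over the if-expression form of mul_instr_sub1. *)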
ultimately have "((fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0) \<and> \<not> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1)))) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (snd (fst (write_reg (if get_operand_w5 (snd instr ! 3) = 0 then user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))) else ucast (if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((if fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc then word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64 else word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))) \<or> fst instr \<noteq> arith_type UMULcc \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0 \<or> (get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))) \<and> (get_operand_w5 (snd instr ! 3) \<noteq> 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))))"
by auto
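(* auto reshapes the accumulated facts into one disjunction whose last alternative is the full case analysis: for w5 operand 0 and non-0, low_equal holds for UMUL, for UMULcc, and for the remaining (sint-based) multiplication. *)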
then have "fst instr \<noteq> arith_type UMULcc \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc) \<and> get_operand_w5 (snd instr ! 3) = 0 \<or> (get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))) \<and> (get_operand_w5 (snd instr ! 3) \<noteq> 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))))"
using f15 by presburger
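(* Fact f15 together with presburger discharges the negated low_equal alternative, leaving either "fst instr is not UMULcc" or the complete case analysis on the w5 operand. *)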
then have "(get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))) \<and> (get_operand_w5 (snd instr ! 3) \<noteq> 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))))"
using f14 f13 f12 f2 by force }
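(* This moreover branch closes by removing the "fst instr is not UMULcc" alternative with f14, f13, f12 and f2, so only the case analysis for both values of the w5 operand remains. *)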
ultimately have "(get_operand_w5 (snd instr ! 3) = 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))) \<and> (get_operand_w5 (snd instr ! 3) \<noteq> 0 \<or> (fst instr \<noteq> arith_type UMUL \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMUL \<or> fst instr = arith_type UMULcc \<or> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))))"
using f16 f14 f11 f9 f8 f4 f2 by fastforce }
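(* Final moreover branch: f16, f14, f11, f9, f8, f4 and f2 give the same case analysis by fastforce; the ultimately show below restates it with explicit implications for each instruction shape and operand value. *)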
ultimately show "(get_operand_w5 (snd instr ! 3) \<noteq> 0 \<longrightarrow> (fst instr = arith_type UMUL \<longrightarrow> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMULcc \<longrightarrow> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMUL \<and> fst instr \<noteq> arith_type UMULcc \<longrightarrow> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3)) (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2)))))))))))))) \<and> (get_operand_w5 (snd instr ! 3) = 0 \<longrightarrow> (fst instr = arith_type UMUL \<longrightarrow> low_equal (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMUL) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr = arith_type UMULcc \<longrightarrow> low_equal (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * uint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (arith_type UMULcc) (ucast (word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (uint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * uint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))) \<and> (fst instr \<noteq> arith_type UMUL \<and> fst instr \<noteq> arith_type UMULcc \<longrightarrow> low_equal (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1))))))) (fst (fst (get_curr_win () s1))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s1))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))) * sint (get_operand2 (snd instr) s1))::word64) >> 32)) Y (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (mul_instr_sub1 (fst instr) (ucast (word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64)) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 
Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (write_cpu (ucast ((word_of_int (sint (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))) * sint (get_operand2 (snd instr) s2))::word64) >> 32)) Y (snd (fst (get_curr_win () s2))))))))))))))"
by blast
qed
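(* Low-equivalence preservation for the division instructions.  The case split
   in the next lemma shows that div_write_new_val changes the state only for the
   condition-code variants (arith_type UDIVcc / SDIVcc), where it performs a
   write_cpu update, and is a plain return otherwise; low_equal is preserved in
   both branches. *)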
lemma div_write_new_val_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (div_write_new_val i result temp_V s1)) \<and>
t2 = snd (fst (div_write_new_val i result temp_V s2))"
shows "low_equal t1 t2"
proof (cases "(fst i) \<in> {arith_type UDIVcc,arith_type SDIVcc}")
case True
then show ?thesis using a1
apply (simp add: div_write_new_val_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (clarsimp simp add: cpu_reg_val_low_equal)
using write_cpu_low_equal by blast
next
case False
then show ?thesis using a1
apply (simp add: div_write_new_val_def)
by (simp add: return_def)
qed
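(* div_comp_low_equal lifts the previous lemma to the whole div_comp
   computation: starting from low_equal states, the current-window lookup, the
   64-bit dividend (word_cat of the Y register and the rs1 value), the
   write_reg of the result and the final div_write_new_val step again yield
   low_equal states. *)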
lemma div_comp_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (div_comp instr rs1 rd operand2 s1)) \<and>
t2 = snd (fst (div_comp instr rs1 rd operand2 s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: div_comp_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (clarsimp simp add: get_curr_win_low_equal)
proof -
assume a1: "low_equal s1 s2"
have f2: "\<forall>s sa sb w wa wb sc. \<not> low_equal s sa \<or> sb \<noteq> snd (fst (write_reg w (wa::'a word) wb s)) \<or> sc \<noteq> snd (fst (write_reg w wa wb sa)) \<or> low_equal sb sc"
by (meson write_reg_low_equal)
have f3: "gets (\<lambda>s. ucast (get_CWP (cpu_reg_val PSR s))::'a word) = get_curr_win ()"
by (simp add: get_curr_win_def)
then have "((ucast (get_CWP (cpu_reg_val PSR s1)), s1), False) = (fst (get_curr_win () s1), snd (get_curr_win () s1))"
by (metis (no_types) prod.collapse simpler_gets_def)
then have f4: "ucast (get_CWP (cpu_reg_val PSR s1)) = fst (fst (get_curr_win () s1)) \<and> s1 = snd (fst (get_curr_win () s1))"
by (metis prod.collapse prod.simps(1))
have "((ucast (get_CWP (cpu_reg_val PSR s2)), s2), False) = (fst (get_curr_win () s2), snd (get_curr_win () s2))"
using f3 by (metis (no_types) prod.collapse simpler_gets_def)
then have f5: "ucast (get_CWP (cpu_reg_val PSR s2)) = fst (fst (get_curr_win () s2)) \<and> s2 = snd (fst (get_curr_win () s2))"
by (metis (no_types) prod.collapse prod.simps(1))
then have f6: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using f4 a1 by presburger
have f7: "\<forall>s sa sb p w wa sc. \<not> low_equal (s::'a sparc_state) sa \<or> sb \<noteq> snd (fst (div_write_new_val p w wa s)) \<or> sc \<noteq> snd (fst (div_write_new_val p w wa sa)) \<or> low_equal sb sc"
by (meson div_write_new_val_low_equal)
have f8: "cpu_reg_val PSR s2 = cpu_reg_val PSR s1"
using a1 by (simp add: cpu_reg_val_def low_equal_def)
then have "fst (fst (get_curr_win () s2)) = ucast (get_CWP (cpu_reg_val PSR s1))"
using f5 by presburger
then have f9: "fst (fst (get_curr_win () s2)) = fst (fst (get_curr_win () s1))"
using f4 by presburger
have f10: "fst (fst (get_curr_win () s1)) = fst (fst (get_curr_win () s2))"
using f8 f5 f4 by presburger
have f11: "(word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))::word64) = word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))"
using f5 f4 a1 by (metis (no_types) cpu_reg_val_def low_equal_def user_reg_val_low_equal)
have f12: "ucast (get_CWP (cpu_reg_val PSR s1)) = fst (fst (get_curr_win () s2))"
using f8 f5 by presburger
then have "rd = 0 \<longrightarrow> (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) = user_reg_val (ucast (get_CWP (cpu_reg_val PSR s1))) 0 (snd (fst (get_curr_win () s1)))"
using f6 user_reg_val_low_equal by fastforce
then have f13: "rd = 0 \<longrightarrow> write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (ucast (get_CWP (cpu_reg_val PSR s1))) 0 (snd (fst (get_curr_win () s1))) = write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1)))"
using f12 f10 by presburger
have f14: "write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (ucast (get_CWP (cpu_reg_val PSR s1))) rd (snd (fst (get_curr_win () s2))) = write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2)))"
using f12 f11 by auto
have "write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (ucast (get_CWP (cpu_reg_val PSR s1))) rd (snd (fst (get_curr_win () s1))) = write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1))) \<and> write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (ucast (get_CWP (cpu_reg_val PSR s1))) rd (snd (fst (get_curr_win () s2))) = write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) \<longrightarrow> low_equal (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst 
(get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))))))"
using f6 f2 by metis
moreover
{ assume "low_equal (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))))))"
then have "low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) 
(div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2)))))))))"
using f11 f9 f7 by metis
moreover
{ assume "low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) 
(div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))))))))) \<noteq> low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd 
(fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2)))))))))"
then have "div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2) = (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) \<longrightarrow> rd = 0"
by fastforce }
ultimately have "div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2) = (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) \<longrightarrow> rd = 0 \<or> (rd \<noteq> 0 \<longrightarrow> low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat 
(cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2)))))))))) \<and> (rd = 0 \<longrightarrow> low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val 
(fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))))))))))"
by fastforce }
moreover
{ assume "write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (ucast (get_CWP (cpu_reg_val PSR s1))) rd (snd (fst (get_curr_win () s1))) \<noteq> write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1)))"
then have "div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2) \<noteq> (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2))"
using f12 f9 by fastforce }
moreover
{ assume "write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (ucast (get_CWP (cpu_reg_val PSR s1))) rd (snd (fst (get_curr_win () s2))) \<noteq> write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2)))"
then have "rd = 0"
using f14 by presburger }
moreover
{ assume "rd = 0"
then have "rd = 0 \<and> low_equal (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))))))"
using f13 f12 f6 f2 by metis
then have "rd = 0 \<and> low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s1))) rd (snd (fst (get_curr_win () s1))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s1))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (if rd = 0 then user_reg_val (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2))) else div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 
>> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2)))))))))"
using f11 f9 f7 by metis
then have "(rd \<noteq> 0 \<longrightarrow> low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) 
operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2)))))))))) \<and> (rd = 0 \<longrightarrow> low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))))))))))"
using f10 by fastforce }
ultimately show "(rd \<noteq> 0 \<longrightarrow> low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () 
s2))))) operand2)) (fst (fst (get_curr_win () s2))) rd (snd (fst (get_curr_win () s2)))))))))) \<and> (rd = 0 \<longrightarrow> low_equal (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s1))))) operand2 >> 31))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s1))))))))) (snd (fst (div_write_new_val instr (div_comp_result instr (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2)) (div_comp_temp_V instr (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 32)) (ucast (div_comp_temp_64bit instr (word_cat (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (user_reg_val (fst (fst (get_curr_win () s2))) rs1 (snd (fst (get_curr_win () s2))))) operand2 >> 31))) (snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0 (snd (fst (get_curr_win () s2))))))))))"
using f9 by fastforce
qed
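
(* Non-interference for the division instructions: div_instr preserves low_equal. *)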
lemma div_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (div_instr instr s1)) \<and> t2 = snd (fst (div_instr instr s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: div_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: return_def)
apply (auto simp add: get_operand2_low_equal)
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (auto simp add: traps_low_equal)
apply (blast intro: mod_trap_low_equal)
using div_comp_low_equal by blast
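
(* Inserting the same trap into the states returned by get_curr_win preserves low_equal. *)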
lemma get_curr_win_traps_low_equal:
assumes a1: "low_equal s1 s2"
shows "low_equal
(snd (fst (get_curr_win () s1))
\<lparr>traps := insert some_trap (traps (snd (fst (get_curr_win () s1))))\<rparr>)
(snd (fst (get_curr_win () s2))
\<lparr>traps := insert some_trap (traps (snd (fst (get_curr_win () s2))))\<rparr>)"
proof -
from a1 have f1: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using get_curr_win2_low_equal by auto
then have f2: "(traps (snd (fst (get_curr_win () s1)))) =
(traps (snd (fst (get_curr_win () s2))))"
using traps_low_equal by auto
then show ?thesis using f1 f2 mod_trap_low_equal
by fastforce
qed
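
(* Helper lemmas for the SAVE/RESTORE case: the shared state-update helper
   save_retore_sub1 preserves low_equal, and the WIM-bit tests for the
   decremented/incremented window pointer agree on low_equal states. *)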
lemma save_restore_instr_sub1_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (save_retore_sub1 result new_cwp rd s1)) \<and>
t2 = snd (fst (save_retore_sub1 result new_cwp rd s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: save_retore_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (auto simp add: cpu_reg_val_low_equal)
using write_cpu_low_equal write_reg_low_equal
by fastforce
lemma get_WIM_bit_low_equal:
assumes a1: "low_equal s1 s2"
shows "get_WIM_bit (unat (((word_of_int ((uint (fst (fst (get_curr_win () s1))) - 1) mod NWINDOWS)))::word5))
(cpu_reg_val WIM (snd (fst (get_curr_win () s1)))) =
get_WIM_bit (unat (((word_of_int ((uint (fst (fst (get_curr_win () s2))) - 1) mod NWINDOWS)))::word5))
(cpu_reg_val WIM (snd (fst (get_curr_win () s2))))"
proof -
from a1 have f1: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using get_curr_win2_low_equal by blast
then have f2: "(cpu_reg_val WIM (snd (fst (get_curr_win () s1)))) =
(cpu_reg_val WIM (snd (fst (get_curr_win () s2))))"
using cpu_reg_val_low_equal by auto
from a1 have "(fst (fst (get_curr_win () s1))) = (fst (fst (get_curr_win () s2)))"
using get_curr_win_low_equal by auto
then show ?thesis using f1 f2
by auto
qed
lemma get_WIM_bit_low_equal2:
assumes a1: "low_equal s1 s2"
shows "get_WIM_bit (unat (((word_of_int ((uint (fst (fst (get_curr_win () s1))) + 1) mod NWINDOWS)))::word5))
(cpu_reg_val WIM (snd (fst (get_curr_win () s1)))) =
get_WIM_bit (unat (((word_of_int ((uint (fst (fst (get_curr_win () s2))) + 1) mod NWINDOWS)))::word5))
(cpu_reg_val WIM (snd (fst (get_curr_win () s2))))"
proof -
from a1 have f1: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using get_curr_win2_low_equal by blast
then have f2: "(cpu_reg_val WIM (snd (fst (get_curr_win () s1)))) =
(cpu_reg_val WIM (snd (fst (get_curr_win () s2))))"
using cpu_reg_val_low_equal by auto
from a1 have "(fst (fst (get_curr_win () s1))) = (fst (fst (get_curr_win () s2)))"
using get_curr_win_low_equal by auto
then show ?thesis using f1 f2
by auto
qed
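
(* Non-interference for SAVE and RESTORE, by case analysis on which of the two
   the instruction is. *)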
lemma save_restore_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (save_restore_instr instr s1)) \<and> t2 = snd (fst (save_restore_instr instr s2))"
shows "low_equal t1 t2"
proof (cases "fst instr = ctrl_type SAVE")
case True
then have f1: "fst instr = ctrl_type SAVE" by auto
then show ?thesis using a1
apply (simp add: save_restore_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply auto
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (simp add: get_curr_win_traps_low_equal)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: get_WIM_bit_low_equal)
apply (simp add: get_WIM_bit_low_equal)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: get_curr_win_low_equal)
using get_curr_win2_low_equal save_restore_instr_sub1_low_equal get_addr2_low_equal
by metis
next
case False
then show ?thesis using a1
apply (simp add: save_restore_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply auto
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (simp add: get_curr_win_traps_low_equal)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: get_WIM_bit_low_equal2)
apply (simp add: get_WIM_bit_low_equal2)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: get_curr_win_low_equal)
using get_curr_win2_low_equal save_restore_instr_sub1_low_equal get_addr2_low_equal
by metis
qed
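
(* Non-interference for CALL. *)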
lemma call_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (call_instr instr s1)) \<and> t2 = snd (fst (call_instr instr s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: call_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (auto simp add: get_curr_win_low_equal)
using cpu_reg_val_low_equal get_curr_win2_low_equal
write_cpu_low_equal write_reg_low_equal
proof -
assume a1: "low_equal s1 s2"
assume "t1 = snd (fst (write_cpu (cpu_reg_val PC (snd (fst (get_curr_win () s1))) + (ucast (get_operand_w30 (snd instr ! 0)) << 2)) nPC (snd (fst (write_cpu (cpu_reg_val nPC (snd (fst (get_curr_win () s1)))) PC (snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 15 (snd (fst (get_curr_win () s1)))))))))))"
assume "t2 = snd (fst (write_cpu (cpu_reg_val PC (snd (fst (get_curr_win () s2))) + (ucast (get_operand_w30 (snd instr ! 0)) << 2)) nPC (snd (fst (write_cpu (cpu_reg_val nPC (snd (fst (get_curr_win () s2)))) PC (snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 15 (snd (fst (get_curr_win () s2)))))))))))"
have "\<forall>c. cpu_reg_val c (snd (fst (get_curr_win () s1))) = cpu_reg_val c (snd (fst (get_curr_win () s2)))"
using a1 by (meson cpu_reg_val_low_equal get_curr_win2_low_equal)
then show "low_equal (snd (fst (write_cpu (cpu_reg_val PC (snd (fst (get_curr_win () s1))) + (ucast (get_operand_w30 (snd instr ! 0)) << 2)) nPC (snd (fst (write_cpu (cpu_reg_val nPC (snd (fst (get_curr_win () s1)))) PC (snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 15 (snd (fst (get_curr_win () s1)))))))))))) (snd (fst (write_cpu (cpu_reg_val PC (snd (fst (get_curr_win () s2))) + (ucast (get_operand_w30 (snd instr ! 0)) << 2)) nPC (snd (fst (write_cpu (cpu_reg_val nPC (snd (fst (get_curr_win () s2)))) PC (snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 15 (snd (fst (get_curr_win () s2))))))))))))"
using a1 by (metis (no_types) get_curr_win2_low_equal write_cpu_low_equal write_reg_low_equal)
qed
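
(* Sub-cases of the JMPL proof, distinguishing a non-zero destination register
   (sub1) from destination register 0 (sub2). *)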
lemma jmpl_instr_low_equal_sub1:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (write_cpu (get_addr (snd instr) (snd (fst (get_curr_win () s2)))) nPC
(snd (fst (write_cpu (cpu_reg_val nPC
(snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1)))))))
PC (snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1))))))))))) \<and>
t2 = snd (fst (write_cpu (get_addr (snd instr) (snd (fst (get_curr_win () s2)))) nPC
(snd (fst (write_cpu (cpu_reg_val nPC
(snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2)))))))
PC (snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2)))))))))))"
shows "low_equal t1 t2"
proof -
from a1 have f1: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using get_curr_win2_low_equal by blast
then have f2: "(cpu_reg_val PC (snd (fst (get_curr_win () s1)))) =
(cpu_reg_val PC (snd (fst (get_curr_win () s2))))"
using cpu_reg_val_low_equal by blast
then have f3: "low_equal
(snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1))))))
(snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2))))))"
using f1 write_reg_low_equal by fastforce
then have "(cpu_reg_val nPC
(snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1))))))) =
(cpu_reg_val nPC
(snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2)))))))"
using cpu_reg_val_low_equal by auto
then have f4: "low_equal
(snd (fst (write_cpu (cpu_reg_val nPC
(snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1)))))))
PC (snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s1))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s1)))))))))
(snd (fst (write_cpu (cpu_reg_val nPC
(snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2)))))))
PC (snd (fst (write_reg (cpu_reg_val PC (snd (fst (get_curr_win () s2))))
(fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! 3))
(snd (fst (get_curr_win () s2)))))))))"
using f3 write_cpu_low_equal by fastforce
then show ?thesis using write_cpu_low_equal
using assms by blast
qed
lemma jmpl_instr_low_equal_sub2:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (write_cpu (get_addr (snd instr) (snd (fst (get_curr_win () s2)))) nPC
(snd (fst (write_cpu (cpu_reg_val nPC
(snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1))))))) PC (snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1))))))))))) \<and>
t2 = snd (fst (write_cpu (get_addr (snd instr) (snd (fst (get_curr_win () s2)))) nPC
(snd (fst (write_cpu (cpu_reg_val nPC (snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2))))))) PC (snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))))))))))"
shows "low_equal t1 t2"
proof -
from a1 have f1: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using get_curr_win2_low_equal by blast
then have f2: "(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1)))) =
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2))))"
using user_reg_val_low_equal by blast
then have f3: "low_equal
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1))))))
(snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2))))))"
using f1 write_reg_low_equal by fastforce
then have "(cpu_reg_val nPC
(snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1))))))) =
(cpu_reg_val nPC (snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))))))"
using cpu_reg_val_low_equal by blast
then have "low_equal
(snd (fst (write_cpu (cpu_reg_val nPC
(snd (fst (write_reg (user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1))))))) PC (snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s1)))))))))
(snd (fst (write_cpu (cpu_reg_val nPC (snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2))))))) PC (snd (fst (write_reg
(user_reg_val (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) 0
(snd (fst (get_curr_win () s2)))))))))"
using f1 f2 f3 write_cpu_low_equal by fastforce
then show ?thesis
using write_cpu_low_equal
using assms by blast
qed
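
(* Non-interference for JMPL. *)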
lemma jmpl_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (jmpl_instr instr s1)) \<and> t2 = snd (fst (jmpl_instr instr s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: jmpl_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply auto
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (simp add: get_curr_win_traps_low_equal)
apply (simp add: get_addr2_low_equal)
apply (simp add: get_addr2_low_equal)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp_all add: get_addr2_low_equal)
apply (simp_all add: get_curr_win_low_equal)
apply (case_tac "get_operand_w5 (snd instr ! 3) \<noteq> 0")
apply auto
using jmpl_instr_low_equal_sub1 apply blast
apply (simp_all add: get_curr_win_low_equal)
using jmpl_instr_low_equal_sub2 by blast
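
(* Non-interference for RETT, assuming neither execution fails and both states
   run in user mode (S = 0). *)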
lemma rett_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
\<not> snd (rett_instr instr s1) \<and>
\<not> snd (rett_instr instr s2) \<and>
((ucast (get_S (cpu_reg_val PSR s1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR s2)))::word1) = 0 \<and>
t1 = snd (fst (rett_instr instr s1)) \<and> t2 = snd (fst (rett_instr instr s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: rett_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply auto
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (simp add: return_def)
using mod_trap_low_equal traps_low_equal apply fastforce
using cpu_reg_val_low_equal apply fastforce
using cpu_reg_val_low_equal apply fastforce
apply (simp add: bind_def h1_def h2_def Let_def)
by (simp add: case_prod_unfold fail_def)
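
(* Non-interference for the state-register read instructions
   (RDY/RDASR/RDPSR/RDWIM/RDTBR) in user mode (S = 0). *)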
lemma read_state_reg_low_equal:
assumes a1: "low_equal s1 s2 \<and>
((ucast (get_S (cpu_reg_val PSR s1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR s2)))::word1) = 0 \<and>
t1 = snd (fst (read_state_reg_instr instr s1)) \<and>
t2 = snd (fst (read_state_reg_instr instr s2))"
shows "low_equal t1 t2"
proof (cases "(fst instr \<in> {sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR} \<or>
(fst instr = sreg_type RDASR \<and> privileged_ASR (get_operand_w5 ((snd instr)!0))))")
case True
then have "(fst instr \<in> {sreg_type RDPSR,sreg_type RDWIM,sreg_type RDTBR} \<or>
(fst instr = sreg_type RDASR \<and> privileged_ASR (get_operand_w5 ((snd instr)!0))))
\<and> ((ucast (get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s1))))))::word1) = 0
\<and> ((ucast (get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s2))))))::word1) = 0"
by (metis assms get_curr_win_privilege)
then show ?thesis using a1
apply (simp add: read_state_reg_instr_def)
apply (simp add: Let_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply clarsimp
using get_curr_win_traps_low_equal
by auto
next
case False
then have f1: "\<not>((fst instr = sreg_type RDPSR \<or>
fst instr = sreg_type RDWIM \<or>
fst instr = sreg_type RDTBR \<or>
fst instr = sreg_type RDASR \<and> privileged_ASR (get_operand_w5 (snd instr ! 0))))"
by blast
then show ?thesis
proof (cases "illegal_instruction_ASR (get_operand_w5 ((snd instr)!0))")
case True
then show ?thesis using a1 f1
apply read_state_reg_instr_privilege_proof
by (simp add: illegal_instruction_ASR_def)
next
case False
then have f2: "\<not>(illegal_instruction_ASR (get_operand_w5 ((snd instr)!0)))"
by auto
then show ?thesis
proof (cases "(get_operand_w5 ((snd instr)!1)) \<noteq> 0")
case True
then have f3: "(get_operand_w5 ((snd instr)!1)) \<noteq> 0"
by auto
then show ?thesis
proof (cases "fst instr = sreg_type RDY")
case True
then show ?thesis using a1 f1 f2 f3
apply (simp add: read_state_reg_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (auto simp add: get_curr_win_low_equal)
using cpu_reg_val_low_equal get_curr_win2_low_equal write_reg_low_equal
proof -
assume "low_equal s1 s2"
then have "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
by (meson get_curr_win2_low_equal)
then show "low_equal (snd (fst (write_reg (cpu_reg_val Y (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (cpu_reg_val Y (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))))))"
using cpu_reg_val_low_equal write_reg_low_equal by fastforce
qed
next
case False
then have f4: "\<not>(fst instr = sreg_type RDY)" by auto
then show ?thesis
proof (cases "fst instr = sreg_type RDASR")
case True
then show ?thesis using a1 f1 f2 f3 f4
apply read_state_reg_instr_privilege_proof
apply (clarsimp simp add: get_curr_win_low_equal)
using cpu_reg_val_low_equal get_curr_win2_low_equal write_reg_low_equal
proof -
assume a1: "low_equal s1 s2"
then have "cpu_reg_val (ASR (get_operand_w5 (snd instr ! 0))) (snd (fst (get_curr_win () s1))) = cpu_reg_val (ASR (get_operand_w5 (snd instr ! 0))) (snd (fst (get_curr_win () s2)))"
by (meson cpu_reg_val_low_equal get_curr_win2_low_equal)
then show "low_equal (snd (fst (write_reg (cpu_reg_val (ASR (get_operand_w5 (snd instr ! 0))) (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (cpu_reg_val (ASR (get_operand_w5 (snd instr ! 0))) (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))))))"
using a1 by (metis (no_types) get_curr_win2_low_equal write_reg_low_equal)
qed
next
case False
then have f5: "\<not>(fst instr = sreg_type RDASR)" by auto
then show ?thesis using a1 f1 f2 f3 f4 f5
apply read_state_reg_instr_privilege_proof
apply (clarsimp simp add: get_curr_win_low_equal)
using cpu_reg_val_low_equal get_curr_win2_low_equal write_reg_low_equal
proof -
assume a1: "low_equal s1 s2"
assume a2: "t1 = snd (fst (write_reg (cpu_reg_val TBR (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))))"
assume "t2 = snd (fst (write_reg (cpu_reg_val TBR (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2)))))"
have "\<forall>s. \<not> low_equal (snd (fst (get_curr_win () s1))) s \<or> snd (fst (write_reg (cpu_reg_val TBR s) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))))) = t1"
using a2 by (simp add: cpu_reg_val_low_equal)
then show "low_equal (snd (fst (write_reg (cpu_reg_val TBR (snd (fst (get_curr_win () s1)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1)))))) (snd (fst (write_reg (cpu_reg_val TBR (snd (fst (get_curr_win () s2)))) (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))))))"
using a2 a1 by (metis (no_types) get_curr_win2_low_equal write_reg_low_equal)
qed
qed
qed
next
case False
then show ?thesis using a1 f1 f2
apply (simp add: read_state_reg_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: return_def)
apply clarsimp
apply (simp add: case_prod_unfold)
using get_curr_win2_low_equal by auto
qed
qed
qed
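
(* The supervisor bit observed after get_curr_win is the same in low_equal states. *)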
lemma get_s_get_curr_win:
assumes a1: "low_equal s1 s2"
shows "get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))) =
get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s2))))"
proof -
from a1 have "low_equal (snd (fst (get_curr_win () s1)))
(snd (fst (get_curr_win () s2)))"
using get_curr_win2_low_equal by blast
then show ?thesis
using cpu_reg_val_low_equal
by fastforce
qed
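
(* Non-interference for the state-register write instructions
   (WRY/WRASR/WRPSR/WRWIM/WRTBR) in user mode (S = 0). *)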
lemma write_state_reg_low_equal:
assumes a1: "low_equal s1 s2 \<and>
((ucast (get_S (cpu_reg_val PSR s1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR s2)))::word1) = 0 \<and>
t1 = snd (fst (write_state_reg_instr instr s1)) \<and>
t2 = snd (fst (write_state_reg_instr instr s2))"
shows "low_equal t1 t2"
proof (cases "fst instr = sreg_type WRY")
case True
then show ?thesis using a1
apply write_state_reg_instr_privilege_proof
apply (simp add: simpler_modify_def)
apply (simp add: delayed_pool_add_def DELAYNUM_def)
apply (auto simp add: get_curr_win_low_equal)
using get_curr_win2_low_equal cpu_reg_mod_low_equal
user_reg_val_low_equal get_operand2_low_equal
proof -
assume a1: "low_equal s1 s2"
assume "t2 = cpu_reg_mod (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) XOR get_operand2 (snd instr) (snd (fst (get_curr_win () s2)))) Y (snd (fst (get_curr_win () s2)))"
assume "t1 = cpu_reg_mod (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) XOR get_operand2 (snd instr) (snd (fst (get_curr_win () s1)))) Y (snd (fst (get_curr_win () s1)))"
have f2: "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using a1 by (meson get_curr_win2_low_equal)
then have f3: "\<And>w wa. user_reg_val w wa (snd (fst (get_curr_win () s2))) = user_reg_val w wa (snd (fst (get_curr_win () s1)))"
by (simp add: user_reg_val_low_equal)
have "\<And>is. get_operand2 is (snd (fst (get_curr_win () s2))) = get_operand2 is (snd (fst (get_curr_win () s1)))"
using f2 by (simp add: get_operand2_low_equal)
then show "low_equal (cpu_reg_mod (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) XOR get_operand2 (snd instr) (snd (fst (get_curr_win () s1)))) Y (snd (fst (get_curr_win () s1)))) (cpu_reg_mod (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) XOR get_operand2 (snd instr) (snd (fst (get_curr_win () s2)))) Y (snd (fst (get_curr_win () s2))))"
using f3 f2 by (metis cpu_reg_mod_low_equal)
qed
next
case False
then have f1: "\<not>(fst instr = sreg_type WRY)" by auto
then show ?thesis
proof (cases "fst instr = sreg_type WRASR")
case True
then have f1_1: "fst instr = sreg_type WRASR" by auto
then show ?thesis
proof (cases "privileged_ASR (get_operand_w5 (snd instr ! 3)) \<and>
get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))) = 0")
case True
then show ?thesis using a1 f1 f1_1
apply write_state_reg_instr_privilege_proof
apply (clarsimp simp add: get_s_get_curr_win)
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (clarsimp simp add: get_curr_win3_low_equal)
using traps_low_equal mod_trap_low_equal get_curr_win2_low_equal
by fastforce
next
case False
then have f1_2: "\<not> (privileged_ASR (get_operand_w5 (snd instr ! 3)) \<and>
get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))) = 0)"
by auto
then show ?thesis
proof (cases "illegal_instruction_ASR (get_operand_w5 (snd instr ! 3))")
case True
then show ?thesis using a1 f1 f1_1 f1_2
apply write_state_reg_instr_privilege_proof
apply (clarsimp simp add: get_s_get_curr_win)
apply auto
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (clarsimp simp add: get_curr_win3_low_equal)
using traps_low_equal mod_trap_low_equal get_curr_win2_low_equal
apply fastforce
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (clarsimp simp add: get_curr_win3_low_equal)
using traps_low_equal mod_trap_low_equal get_curr_win2_low_equal
by fastforce
next
case False
then show ?thesis using a1 f1 f1_1 f1_2
apply write_state_reg_instr_privilege_proof
apply (clarsimp simp add: get_s_get_curr_win)
apply auto
apply (simp add: simpler_modify_def)
apply (simp add: delayed_pool_add_def DELAYNUM_def)
apply (auto simp add: get_curr_win_low_equal)
using get_curr_win2_low_equal cpu_reg_mod_low_equal
user_reg_val_low_equal get_operand2_low_equal
proof -
assume a1: "low_equal s1 s2"
assume "t2 = cpu_reg_mod (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) XOR get_operand2 (snd instr) (snd (fst (get_curr_win () s2)))) (ASR (get_operand_w5 (snd instr ! 3))) (snd (fst (get_curr_win () s2)))"
assume "t1 = cpu_reg_mod (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) XOR get_operand2 (snd instr) (snd (fst (get_curr_win () s1)))) (ASR (get_operand_w5 (snd instr ! 3))) (snd (fst (get_curr_win () s1)))"
have "low_equal (snd (fst (get_curr_win () s1))) (snd (fst (get_curr_win () s2)))"
using a1 by (meson get_curr_win2_low_equal)
then show "low_equal (cpu_reg_mod (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s1))) XOR get_operand2 (snd instr) (snd (fst (get_curr_win () s1)))) (ASR (get_operand_w5 (snd instr ! 3))) (snd (fst (get_curr_win () s1)))) (cpu_reg_mod (user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0)) (snd (fst (get_curr_win () s2))) XOR get_operand2 (snd instr) (snd (fst (get_curr_win () s2)))) (ASR (get_operand_w5 (snd instr ! 3))) (snd (fst (get_curr_win () s2))))"
using cpu_reg_mod_low_equal get_operand2_low_equal user_reg_val_low_equal by fastforce
next
assume f1: "\<not> illegal_instruction_ASR (get_operand_w5 (snd instr ! 3))"
assume f2: "fst instr = sreg_type WRASR"
assume f3: "snd (fst (write_state_reg_instr instr s1)) =
snd (fst (modify
(delayed_pool_add
(DELAYNUM,
user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) XOR
get_operand2 (snd instr) (snd (fst (get_curr_win () s1))),
ASR (get_operand_w5 (snd instr ! 3))))
(snd (fst (get_curr_win () s1)))))"
assume f4: "snd (fst (write_state_reg_instr instr s2)) =
snd (fst (modify
(delayed_pool_add
(DELAYNUM,
user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) XOR
get_operand2 (snd instr) (snd (fst (get_curr_win () s2))),
ASR (get_operand_w5 (snd instr ! 3))))
(snd (fst (get_curr_win () s2)))))"
assume f5: "low_equal s1 s2"
assume f6: "ucast (get_S (cpu_reg_val PSR s1)) = 0"
assume f7: "ucast (get_S (cpu_reg_val PSR s2)) = 0"
assume f8: "t1 = snd (fst (modify
(delayed_pool_add
(DELAYNUM,
user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) XOR
get_operand2 (snd instr) (snd (fst (get_curr_win () s1))),
ASR (get_operand_w5 (snd instr ! 3))))
(snd (fst (get_curr_win () s1)))))"
assume f9: "t2 = snd (fst (modify
(delayed_pool_add
(DELAYNUM,
user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) XOR
get_operand2 (snd instr) (snd (fst (get_curr_win () s2))),
ASR (get_operand_w5 (snd instr ! 3))))
(snd (fst (get_curr_win () s2)))))"
assume f10: "get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))) \<noteq> 0"
assume f11: "(\<And>s1 s2 t1 t2.
low_equal s1 s2 \<Longrightarrow>
t1 = snd (fst (get_curr_win () s1)) \<Longrightarrow> t2 = snd (fst (get_curr_win () s2)) \<Longrightarrow> low_equal t1 t2)"
assume f12: "(\<And>s1 s2 t1 w cr t2.
low_equal s1 s2 \<and> t1 = cpu_reg_mod w cr s1 \<and> t2 = cpu_reg_mod w cr s2 \<Longrightarrow> low_equal t1 t2)"
assume f13: "(\<And>s1 s2 win ur. low_equal s1 s2 \<Longrightarrow> user_reg_val win ur s1 = user_reg_val win ur s2)"
assume f14: "(\<And>s1 s2 op_list. low_equal s1 s2 \<Longrightarrow> get_operand2 op_list s1 = get_operand2 op_list s2)"
show "low_equal
(snd (fst (modify
(delayed_pool_add
(DELAYNUM,
user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s1))) XOR
get_operand2 (snd instr) (snd (fst (get_curr_win () s1))),
ASR (get_operand_w5 (snd instr ! 3))))
(snd (fst (get_curr_win () s1))))))
(snd (fst (modify
(delayed_pool_add
(DELAYNUM,
user_reg_val (fst (fst (get_curr_win () s2))) (get_operand_w5 (snd instr ! Suc 0))
(snd (fst (get_curr_win () s2))) XOR
get_operand2 (snd instr) (snd (fst (get_curr_win () s2))),
ASR (get_operand_w5 (snd instr ! 3))))
(snd (fst (get_curr_win () s2))))))"
using f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14
using Sparc_Properties.ucast_0 assms get_curr_win_privilege by blast
qed
qed
qed
next
case False
then have f2: "\<not>(fst instr = sreg_type WRASR)" by auto
have f3: "get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s1)))) = 0 \<and>
get_S (cpu_reg_val PSR (snd (fst (get_curr_win () s2)))) = 0"
using get_curr_win_privilege a1 by (metis ucast_id)
then show ?thesis
proof (cases "fst instr = sreg_type WRPSR")
case True
then show ?thesis using a1 f1 f2 f3
apply write_state_reg_instr_privilege_proof
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (clarsimp simp add: get_curr_win3_low_equal)
using traps_low_equal mod_trap_low_equal get_curr_win2_low_equal
by fastforce
next
case False
then have f4: "\<not>(fst instr = sreg_type WRPSR)" by auto
then show ?thesis
proof (cases "fst instr = sreg_type WRWIM")
case True
then show ?thesis using a1 f1 f2 f3 f4
apply write_state_reg_instr_privilege_proof
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (clarsimp simp add: get_curr_win3_low_equal)
using traps_low_equal mod_trap_low_equal get_curr_win2_low_equal
by fastforce
next
case False
then have f5: "\<not>(fst instr = sreg_type WRWIM)" by auto
then show ?thesis using a1 f1 f2 f3 f4 f5
apply write_state_reg_instr_privilege_proof
apply (simp add: raise_trap_def add_trap_set_def simpler_modify_def)
apply (clarsimp simp add: get_curr_win3_low_equal)
using traps_low_equal mod_trap_low_equal get_curr_win2_low_equal
by fastforce
qed
qed
qed
qed
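
(* Non-interference for FLUSH: flushing the entire cache keeps low_equal states low_equal. *)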
lemma flush_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (flush_instr instr s1)) \<and>
t2 = snd (fst (flush_instr instr s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: flush_instr_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def simpler_modify_def)
apply (simp add: flush_cache_all_def)
apply (simp add: low_equal_def)
apply (simp add: user_accessible_def)
apply (simp add: mem_equal_def)
by auto
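
(* Helpers for the branch instructions: the branch condition (branch_instr_sub1)
   evaluates identically in low_equal states, and the annul / PC / nPC updates
   preserve low_equal. *)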
lemma branch_instr_sub1_low_equal:
assumes a1: "low_equal s1 s2"
shows "branch_instr_sub1 instr_name s1 = branch_instr_sub1 instr_name s2"
using a1 apply (simp add: branch_instr_sub1_def)
by (simp add: low_equal_def)
lemma set_annul_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (set_annul True s1)) \<and>
t2 = snd (fst (set_annul True s2))"
shows "low_equal t1 t2"
using a1 apply (simp add: set_annul_def)
apply (simp add: simpler_modify_def annul_mod_def)
using state_var2_low_equal state_var_low_equal
by fastforce
lemma branch_instr_low_equal_sub0:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (write_cpu (cpu_reg_val PC s2 +
sign_ext24 (ucast (get_operand_w22 (snd instr ! Suc 0)) << 2))
nPC (snd (fst (write_cpu (cpu_reg_val nPC s2) PC s1))))) \<and>
t2 = snd (fst (write_cpu (cpu_reg_val PC s2 +
sign_ext24 (ucast (get_operand_w22 (snd instr ! Suc 0)) << 2))
nPC (snd (fst (write_cpu (cpu_reg_val nPC s2) PC s2)))))"
shows "low_equal t1 t2"
proof -
from a1 have "low_equal
(snd (fst (write_cpu (cpu_reg_val nPC s2) PC s1)))
(snd (fst (write_cpu (cpu_reg_val nPC s2) PC s2)))"
using write_cpu_low_equal by blast
then show ?thesis
using a1 write_cpu_low_equal by blast
qed
lemma branch_instr_low_equal_sub1:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (set_annul True (snd (fst (write_cpu
(cpu_reg_val PC s2 + sign_ext24
(ucast (get_operand_w22 (snd instr ! Suc 0)) << 2))
nPC (snd (fst (write_cpu (cpu_reg_val nPC s2) PC s1)))))))) \<and>
t2 = snd (fst (set_annul True (snd (fst (write_cpu
(cpu_reg_val PC s2 + sign_ext24
(ucast (get_operand_w22 (snd instr ! Suc 0)) << 2))
nPC (snd (fst (write_cpu (cpu_reg_val nPC s2) PC s2))))))))"
shows "low_equal t1 t2"
proof -
from a1 have "low_equal
(snd (fst (write_cpu
(cpu_reg_val PC s2 + sign_ext24
(ucast (get_operand_w22 (snd instr ! Suc 0)) << 2))
nPC (snd (fst (write_cpu (cpu_reg_val nPC s2) PC s1))))))
(snd (fst (write_cpu
(cpu_reg_val PC s2 + sign_ext24
(ucast (get_operand_w22 (snd instr ! Suc 0)) << 2))
nPC (snd (fst (write_cpu (cpu_reg_val nPC s2) PC s2))))))"
using branch_instr_low_equal_sub0 by blast
then show ?thesis using a1
using set_annul_low_equal by blast
qed
lemma branch_instr_low_equal_sub2:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (set_annul True
(snd (fst (write_cpu (cpu_reg_val nPC s2 + 4) nPC
(snd (fst (write_cpu (cpu_reg_val nPC s2) PC s1)))))))) \<and>
t2 = snd (fst (set_annul True
(snd (fst (write_cpu (cpu_reg_val nPC s2 + 4) nPC
(snd (fst (write_cpu (cpu_reg_val nPC s2) PC s2))))))))"
shows "low_equal t1 t2"
proof -
from a1 have "low_equal
(snd (fst (write_cpu (cpu_reg_val nPC s2) PC s1)))
(snd (fst (write_cpu (cpu_reg_val nPC s2) PC s2)))"
using write_cpu_low_equal by blast
then have "low_equal
(snd (fst (write_cpu (cpu_reg_val nPC s2 + 4) nPC
(snd (fst (write_cpu (cpu_reg_val nPC s2) PC s1))))))
(snd (fst (write_cpu (cpu_reg_val nPC s2 + 4) nPC
(snd (fst (write_cpu (cpu_reg_val nPC s2) PC s2))))))"
using write_cpu_low_equal by blast
then show ?thesis using a1
using set_annul_low_equal by blast
qed
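
(* Non-interference for the Bicc branch instructions. *)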
lemma branch_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
t1 = snd (fst (branch_instr instr s1)) \<and>
t2 = snd (fst (branch_instr instr s2))"
shows "low_equal t1 t2"
using a1
apply (simp add: branch_instr_def)
apply (simp add: Let_def simpler_gets_def bind_def h1_def h2_def)
apply (simp add: case_prod_unfold return_def)
apply clarsimp
apply (simp add: branch_instr_sub1_low_equal)
apply (simp_all add: cpu_reg_val_low_equal)
apply (cases "branch_instr_sub1 (fst instr) s2 = 1")
apply clarsimp
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp_all add: cpu_reg_val_low_equal)
apply (simp add: case_prod_unfold)
apply (cases "fst instr = bicc_type BA \<and> get_operand_flag (snd instr ! 0) = 1")
apply clarsimp
using branch_instr_low_equal_sub1 apply blast
apply clarsimp
apply (simp add: return_def)
using branch_instr_low_equal_sub0 apply fastforce
apply (simp add: bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (cases "get_operand_flag (snd instr ! 0) = 1")
apply clarsimp
apply (simp_all add: cpu_reg_val_low_equal)
using branch_instr_low_equal_sub2 apply metis
apply (simp add: return_def)
using write_cpu_low_equal by metis
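
(* Dispatching an instruction preserves low_equal in user mode, provided neither
   execution fails; the proof case-splits over every instruction class and appeals
   to the per-instruction lemmas above. *)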
lemma dispath_instr_low_equal:
assumes a1: "low_equal s1 s2 \<and>
((ucast (get_S (cpu_reg_val PSR s1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR s2)))::word1) = 0 \<and>
\<not> snd (dispatch_instruction instr s1) \<and>
\<not> snd (dispatch_instruction instr s2) \<and>
t1 = (snd (fst (dispatch_instruction instr s1))) \<and>
t2 = (snd (fst (dispatch_instruction instr s2)))"
shows "low_equal t1 t2"
proof (cases "get_trap_set s1 = {}")
case True
then have f_no_traps: "get_trap_set s1 = {} \<and> get_trap_set s2 = {}"
using a1 by (simp add: low_equal_def get_trap_set_def)
then show ?thesis
proof (cases "fst instr \<in> {load_store_type LDSB,load_store_type LDUB,
load_store_type LDUBA,load_store_type LDUH,load_store_type LD,
load_store_type LDA,load_store_type LDD}")
case True
then show ?thesis using a1 f_no_traps
apply dispath_instr_privilege_proof
by (blast intro: load_instr_low_equal)
next
case False
then have f1: "fst instr \<notin> {load_store_type LDSB, load_store_type LDUB,
load_store_type LDUBA, load_store_type LDUH,
load_store_type LD, load_store_type LDA, load_store_type LDD}"
by auto
then show ?thesis
proof (cases "fst instr \<in> {load_store_type STB,load_store_type STH,
load_store_type ST,load_store_type STA,load_store_type STD}")
case True
then show ?thesis using a1 f_no_traps f1
apply dispath_instr_privilege_proof
using store_instr_low_equal by blast
next
case False
then have f2: "\<not>(fst instr \<in> {load_store_type STB,load_store_type STH,
load_store_type ST,load_store_type STA,load_store_type STD})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {sethi_type SETHI}")
case True
then show ?thesis using a1 f_no_traps f1 f2
apply dispath_instr_privilege_proof
by (auto intro: sethi_low_equal)
next
case False
then have f3: "\<not>(fst instr \<in> {sethi_type SETHI})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {nop_type NOP}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3
apply dispath_instr_privilege_proof
by (auto intro: nop_low_equal)
next
case False
then have f4: "\<not>(fst instr \<in> {nop_type NOP})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {logic_type ANDs,logic_type ANDcc,logic_type ANDN,
logic_type ANDNcc,logic_type ORs,logic_type ORcc,logic_type ORN,
logic_type XORs,logic_type XNOR}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4
apply dispath_instr_privilege_proof
using logical_instr_low_equal by blast
next
case False
then have f5: "\<not>(fst instr \<in> {logic_type ANDs,logic_type ANDcc,logic_type ANDN,
logic_type ANDNcc,logic_type ORs,logic_type ORcc,logic_type ORN,
logic_type XORs,logic_type XNOR})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {shift_type SLL,shift_type SRL,shift_type SRA}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5
apply dispath_instr_privilege_proof
using shift_instr_low_equal by blast
next
case False
then have f6: "\<not>(fst instr \<in> {shift_type SLL,shift_type SRL,shift_type SRA})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {arith_type ADD,arith_type ADDcc,arith_type ADDX}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6
apply dispath_instr_privilege_proof
using add_instr_low_equal by blast
next
case False
then have f7: "\<not>(fst instr \<in> {arith_type ADD,arith_type ADDcc,arith_type ADDX})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {arith_type SUB,arith_type SUBcc,arith_type SUBX}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7
apply dispath_instr_privilege_proof
using sub_instr_low_equal by blast
next
case False
then have f8: "\<not>(fst instr \<in> {arith_type SUB,arith_type SUBcc,arith_type SUBX})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {arith_type UMUL,arith_type SMUL,arith_type SMULcc}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8
apply dispath_instr_privilege_proof
using mul_instr_low_equal by blast
next
case False
then have f9: "\<not>(fst instr \<in> {arith_type UMUL,arith_type SMUL,
arith_type SMULcc})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {arith_type UDIV,arith_type UDIVcc,arith_type SDIV}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8 f9
apply dispath_instr_privilege_proof
using div_instr_low_equal by blast
next
case False
then have f10: "\<not>(fst instr \<in> {arith_type UDIV,
arith_type UDIVcc,arith_type SDIV})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {ctrl_type SAVE,ctrl_type RESTORE}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
apply dispath_instr_privilege_proof
using save_restore_instr_low_equal by blast
next
case False
then have f11: "\<not>(fst instr \<in> {ctrl_type SAVE,ctrl_type RESTORE})"
by auto
then show ?thesis
proof (cases "fst instr \<in> {call_type CALL}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
apply dispath_instr_privilege_proof
using call_instr_low_equal by blast
next
case False
then have f12: "\<not>(fst instr \<in> {call_type CALL})" by auto
then show ?thesis
proof (cases "fst instr \<in> {ctrl_type JMPL}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
apply dispath_instr_privilege_proof
using jmpl_instr_low_equal by blast
next
case False
then have f13: "\<not>(fst instr \<in> {ctrl_type JMPL})" by auto
then show ?thesis
proof (cases "fst instr \<in> {ctrl_type RETT}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
f11 f12 f13
apply dispath_instr_privilege_proof
using rett_instr_low_equal by blast
next
case False
then have f14: "\<not>(fst instr \<in> {ctrl_type RETT})" by auto
then show ?thesis
proof (cases "fst instr \<in> {sreg_type RDY,sreg_type RDPSR,
sreg_type RDWIM, sreg_type RDTBR}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
f11 f12 f13 f14
apply dispath_instr_privilege_proof
using read_state_reg_low_equal by blast
next
case False
then have f15: "\<not>(fst instr \<in> {sreg_type RDY,sreg_type RDPSR,
sreg_type RDWIM, sreg_type RDTBR})" by auto
then show ?thesis
proof (cases "fst instr \<in> {sreg_type WRY,sreg_type WRPSR,
sreg_type WRWIM, sreg_type WRTBR}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8 f9
f10 f11 f12 f13 f14 f15
apply dispath_instr_privilege_proof
using write_state_reg_low_equal by blast
next
case False
then have f16: "\<not>(fst instr \<in> {sreg_type WRY,sreg_type WRPSR,
sreg_type WRWIM, sreg_type WRTBR})" by auto
then show ?thesis
proof (cases "fst instr \<in> {load_store_type FLUSH}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8 f9
f10 f11 f12 f13 f14 f15 f16
apply dispath_instr_privilege_proof
using flush_instr_low_equal by blast
next
case False
then have f17: "\<not>(fst instr \<in> {load_store_type FLUSH})" by auto
then show ?thesis
proof (cases "fst instr \<in> {bicc_type BE,bicc_type BNE,
bicc_type BGU,bicc_type BLE,bicc_type BL,bicc_type BGE,
bicc_type BNEG,bicc_type BG,bicc_type BCS,bicc_type BLEU,
bicc_type BCC,bicc_type BA,bicc_type BN}")
case True
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8
f9 f10 f11 f12 f13 f14 f15 f16 f17
apply dispath_instr_privilege_proof
using branch_instr_low_equal by blast
next
case False
then show ?thesis using a1 f_no_traps f1 f2 f3 f4 f5 f6 f7 f8
f9 f10 f11 f12 f13 f14 f15 f16 f17
apply dispath_instr_privilege_proof
by (simp add: fail_def)
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
qed
next
case False
then have "get_trap_set s1 \<noteq> {} \<and> get_trap_set s2 \<noteq> {}"
using a1 by (simp add: low_equal_def get_trap_set_def)
then show ?thesis using a1
apply (simp add: dispatch_instruction_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def)
apply (simp add: Let_def)
by (simp add: return_def)
qed
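
(* execute_instr_sub1 preserves low_equal, assuming it does not fail; the
   interesting case is the PC/nPC update performed for non-control-transfer
   instructions. *)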
lemma execute_instr_sub1_low_equal:
assumes a1: "low_equal s1 s2 \<and>
\<not> snd (execute_instr_sub1 instr s1) \<and>
\<not> snd (execute_instr_sub1 instr s2) \<and>
t1 = (snd (fst (execute_instr_sub1 instr s1))) \<and>
t2 = (snd (fst (execute_instr_sub1 instr s2)))"
shows "low_equal t1 t2"
proof (cases "get_trap_set s1 = {}")
case True
then have "get_trap_set s1 = {} \<and> get_trap_set s2 = {}"
using a1 by (simp add: low_equal_def get_trap_set_def)
then show ?thesis using a1
apply (simp add: execute_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (case_tac "fst instr \<noteq> call_type CALL \<and>
fst instr \<noteq> ctrl_type RETT \<and>
fst instr \<noteq> ctrl_type JMPL \<and>
fst instr \<noteq> bicc_type BE \<and>
fst instr \<noteq> bicc_type BNE \<and>
fst instr \<noteq> bicc_type BGU \<and>
fst instr \<noteq> bicc_type BLE \<and>
fst instr \<noteq> bicc_type BL \<and>
fst instr \<noteq> bicc_type BGE \<and>
fst instr \<noteq> bicc_type BNEG \<and>
fst instr \<noteq> bicc_type BG \<and>
fst instr \<noteq> bicc_type BCS \<and>
fst instr \<noteq> bicc_type BLEU \<and>
fst instr \<noteq> bicc_type BCC \<and>
fst instr \<noteq> bicc_type BA \<and> fst instr \<noteq> bicc_type BN")
apply clarsimp
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: low_equal_def)
apply (simp add: cpu_reg_val_def write_cpu_def cpu_reg_mod_def)
apply (simp add: simpler_modify_def return_def)
apply (simp add: user_accessible_mod_cpu_reg mem_equal_mod_cpu_reg)
apply clarsimp
by (auto simp add: return_def)
next
case False
then have "get_trap_set s1 \<noteq> {} \<and> get_trap_set s2 \<noteq> {}"
using a1 by (simp add: low_equal_def get_trap_set_def)
then show ?thesis using a1
apply (simp add: execute_instr_sub1_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (simp add: return_def)
qed
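
(* Main single-step non-interference result: from two low_equal user-mode states
   (supervisor bit S = 0) that are good contexts with empty delayed-write pools
   and no pending traps, NEXT is defined for both, both successor states remain
   in user mode, and they are again low_equal. *)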
theorem non_interference_step:
assumes a1: "((ucast (get_S (cpu_reg_val PSR s1)))::word1) = 0 \<and>
good_context s1 \<and>
get_delayed_pool s1 = [] \<and> get_trap_set s1 = {} \<and>
((ucast (get_S (cpu_reg_val PSR s2)))::word1) = 0 \<and>
get_delayed_pool s2 = [] \<and> get_trap_set s2 = {} \<and>
good_context s2 \<and>
low_equal s1 s2"
shows "\<exists>t1 t2. Some t1 = NEXT s1 \<and> Some t2 = NEXT s2 \<and>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and>
low_equal t1 t2"
proof -
from a1 have "good_context s1 \<and> good_context s2" by auto
then have "NEXT s1 = Some (snd (fst (execute_instruction () s1))) \<and>
NEXT s2 = Some (snd (fst (execute_instruction () s2)))"
by (simp add: single_step)
then have "\<exists>t1 t2. Some t1 = NEXT s1 \<and> Some t2 = NEXT s2"
by auto
then have f0: "snd (execute_instruction() s1) = False \<and>
snd (execute_instruction() s2) = False"
by (auto simp add: NEXT_def case_prod_unfold)
then have f1: "\<exists>t1 t2. Some t1 = NEXT s1 \<and>
Some t2 = NEXT s2 \<and>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0"
using a1
apply (auto simp add: NEXT_def case_prod_unfold)
by (auto simp add: safe_privilege)
then show ?thesis
proof (cases "exe_mode_val s1")
case True
then have f_exe0: "exe_mode_val s1" by auto
then have f_exe: "exe_mode_val s1 \<and> exe_mode_val s2"
proof -
have "low_equal s1 s2" using a1 by auto
then have "state_var s1 = state_var s2" by (simp add: low_equal_def)
then have "exe_mode_val s1 = exe_mode_val s2" by (simp add: exe_mode_val_def)
then show ?thesis using f_exe0 by auto
qed
then show ?thesis
proof (cases "\<exists>e. fetch_instruction (delayed_pool_write s1) = Inl e")
case True
then have f_fetch_error: "\<exists>e. fetch_instruction (delayed_pool_write s1) = Inl e" by auto
then have f_fetch_error2: "(\<exists>e. fetch_instruction (delayed_pool_write s1) = Inl e) \<and>
(\<exists>e. fetch_instruction (delayed_pool_write s2) = Inl e)"
proof -
have "cpu_reg s1 = cpu_reg s2"
using a1 by (simp add: low_equal_def)
then have "cpu_reg_val PC s1 = cpu_reg_val PC s2"
by (simp add: cpu_reg_val_def)
then have "cpu_reg_val PC s1 = cpu_reg_val PC s2 \<and>
((ucast (get_S (cpu_reg_val PSR (delayed_pool_write s1))))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR (delayed_pool_write s2))))::word1) = 0"
using a1
by (auto simp add: empty_delayed_pool_write_privilege)
then show ?thesis using a1 f_fetch_error
apply (simp add: fetch_instruction_def)
apply (simp add: Let_def ucast_def)
apply clarsimp
apply (case_tac "uint (3 AND cpu_reg_val PC (delayed_pool_write s1)) = 0")
apply auto
apply (case_tac "fst (memory_read 8 (cpu_reg_val PC (delayed_pool_write s1))
(delayed_pool_write s1)) = None")
apply auto
apply (simp add: case_prod_unfold)
using a1 apply (auto simp add: mem_read_delayed_write_low_equal)
apply (simp add: case_prod_unfold)
using a1 apply (auto simp add: mem_read_delayed_write_low_equal)
apply (simp add: delayed_pool_write_def)
by (simp add: Let_def get_delayed_write_def)
qed
then show ?thesis
proof (cases "exe_mode_val s1")
case True
then have "exe_mode_val s1 \<and> exe_mode_val s2" using exe_mode_low_equal a1 by auto
then show ?thesis using f1
apply (simp add: NEXT_def execute_instruction_def)
apply (simp add: bind_def h1_def h2_def Let_def simpler_gets_def)
using a1 apply clarsimp
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: simpler_modify_def)
using f_fetch_error2 apply clarsimp
apply (simp add: raise_trap_def simpler_modify_def return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: return_def simpler_modify_def)
apply (simp add: raise_trap_def simpler_modify_def return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: return_def)
apply (simp add: delayed_pool_write_def get_delayed_write_def Let_def)
apply (simp add: low_equal_def)
apply (simp add: add_trap_set_def)
apply (simp add: cpu_reg_val_def)
apply clarsimp
by (simp add: mem_equal_mod_trap user_accessible_mod_trap)
next
case False
then have "\<not> (exe_mode_val s1) \<and> \<not> (exe_mode_val s2)"
using exe_mode_low_equal a1 by auto
then show ?thesis using f1
apply (simp add: NEXT_def execute_instruction_def)
apply (simp add: bind_def h1_def h2_def Let_def simpler_gets_def)
using a1 apply clarsimp
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
by (simp add: return_def)
qed
next
case False
then have f_fetch_suc: "(\<exists>v. fetch_instruction (delayed_pool_write s1) = Inr v)"
using fetch_instr_result_1 by auto
then have "(\<exists>v. fetch_instruction (delayed_pool_write s1) = Inr v \<and>
fetch_instruction (delayed_pool_write s2) = Inr v)"
proof -
have "cpu_reg s1 = cpu_reg s2"
using a1 by (simp add: low_equal_def)
then have "cpu_reg_val PC s1 = cpu_reg_val PC s2"
by (simp add: cpu_reg_val_def)
then have "cpu_reg_val PC s1 = cpu_reg_val PC s2 \<and>
((ucast (get_S (cpu_reg_val PSR (delayed_pool_write s1))))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR (delayed_pool_write s2))))::word1) = 0"
using a1
by (auto simp add: empty_delayed_pool_write_privilege)
then show ?thesis using a1 f_fetch_suc
apply (simp add: fetch_instruction_def)
apply (simp add: Let_def ucast_def)
apply clarsimp
apply (case_tac "uint (3 AND cpu_reg_val PC (delayed_pool_write s1)) = 0")
apply auto
apply (case_tac "fst (memory_read 8 (cpu_reg_val PC (delayed_pool_write s1))
(delayed_pool_write s1)) = None")
apply auto
apply (simp add: case_prod_unfold)
using a1 apply (auto simp add: mem_read_delayed_write_low_equal)
apply (simp add: case_prod_unfold)
using a1 apply (auto simp add: mem_read_delayed_write_low_equal)
apply (simp add: delayed_pool_write_def)
by (simp add: Let_def get_delayed_write_def)
qed
then have "(\<exists>v. fetch_instruction (delayed_pool_write s1) = Inr v \<and>
fetch_instruction (delayed_pool_write s2) = Inr v \<and>
\<not> (\<exists>e. (decode_instruction v) = Inl e))"
using dispatch_fail f0 a1 f_exe by auto
then have f_fetch_dec: "(\<exists>v. fetch_instruction (delayed_pool_write s1) = Inr v \<and>
fetch_instruction (delayed_pool_write s2) = Inr v \<and>
(\<exists>v1. (decode_instruction v) = Inr v1))"
using decode_instr_result_4 by auto
then show ?thesis
proof (cases "annul_val (delayed_pool_write s1)")
case True
then have "annul_val (delayed_pool_write s1) \<and> annul_val (delayed_pool_write s2)"
using a1
apply (simp add: low_equal_def)
by (simp add: delayed_pool_write_def get_delayed_write_def annul_val_def)
then show ?thesis using a1 f1 f_exe f_fetch_dec
apply (simp add: NEXT_def execute_instruction_def)
apply (simp add: exec_gets return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: write_cpu_def cpu_reg_val_def set_annul_def)
apply (simp add: simpler_modify_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: write_cpu_def cpu_reg_val_def set_annul_def)
apply (simp add: simpler_modify_def)
apply (simp add: cpu_reg_mod_def annul_mod_def)
apply (simp add: delayed_pool_write_def get_delayed_write_def)
apply (simp add: write_annul_def)
apply clarsimp
apply (simp add: low_equal_def)
apply (simp add: user_accessible_annul mem_equal_annul)
by (metis)
next
case False
then have "\<not> annul_val (delayed_pool_write s1) \<and>
\<not> annul_val (delayed_pool_write s2)"
using a1 apply (simp add: low_equal_def)
apply (simp add: delayed_pool_write_def get_delayed_write_def)
by (simp add: annul_val_def)
then show ?thesis using a1 f1 f_exe f_fetch_dec
apply (simp add: NEXT_def execute_instruction_def)
apply (simp add: exec_gets return_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: simpler_modify_def)
apply clarsimp
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (case_tac "snd (execute_instr_sub1 (a, b)
(snd (fst (dispatch_instruction (a, b)
(delayed_pool_write s1))))) \<or>
snd (dispatch_instruction (a, b) (delayed_pool_write s1))")
apply auto
apply (case_tac "snd (execute_instr_sub1 (a, b)
(snd (fst (dispatch_instruction (a, b)
(delayed_pool_write s2))))) \<or>
snd (dispatch_instruction (a, b) (delayed_pool_write s2))")
apply auto
apply (simp add: simpler_modify_def)
apply (simp add: simpler_gets_def bind_def h1_def h2_def Let_def)
apply (simp add: case_prod_unfold)
apply (simp add: delayed_pool_write_def get_delayed_write_def)
by (meson dispath_instr_low_equal dispath_instr_privilege execute_instr_sub1_low_equal)
qed
qed
next
case False
then have f_non_exe: "exe_mode_val s1 = False" by auto
then have "exe_mode_val s1 = False \<and> exe_mode_val s2 = False"
proof -
have "low_equal s1 s2" using a1 by auto
then have "state_var s1 = state_var s2" by (simp add: low_equal_def)
then have "exe_mode_val s1 = exe_mode_val s2" by (simp add: exe_mode_val_def)
then show ?thesis using f_non_exe by auto
qed
then show ?thesis using f1 a1
apply (simp add: NEXT_def execute_instruction_def)
by (simp add: simpler_gets_def bind_def h1_def h2_def Let_def return_def)
qed
qed
function (sequential) SEQ:: "nat \<Rightarrow> ('a::len0) sparc_state \<Rightarrow> ('a) sparc_state option"
where "SEQ 0 s = Some s"
|"SEQ n s = (
case SEQ (n-1) s of None \<Rightarrow> None
| Some t \<Rightarrow> NEXT t
)"
by pat_completeness auto
termination by lexicographic_order
lemma SEQ_suc: "SEQ n s = Some t \<Longrightarrow> SEQ (Suc n) s = NEXT t"
apply (induction n)
apply clarsimp
by (simp add: option.case_eq_if)
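text \<open>\<open>SEQ n s\<close> executes the single-step transition \<open>NEXT\<close> \<open>n\<close> times from \<open>s\<close>,
propagating \<open>None\<close> as soon as one step is undefined. For instance, unfolding the
definition once gives \<open>SEQ 1 s = NEXT s\<close>, since \<open>SEQ 0 s = Some s\<close>.\<close>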
definition user_seq_exe:: "nat \<Rightarrow> ('a::len0) sparc_state \<Rightarrow> bool" where
"user_seq_exe n s \<equiv> \<forall>i t. (i \<le> n \<and> SEQ i s = Some t) \<longrightarrow>
(good_context t \<and> get_delayed_pool t = [] \<and> get_trap_set t = {})"
text \<open>NIA is short for non-interference assumption.\<close>
definition "NIA t1 t2 \<equiv>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and>
good_context t1 \<and> get_delayed_pool t1 = [] \<and> get_trap_set t1 = {} \<and>
good_context t2 \<and> get_delayed_pool t2 = [] \<and> get_trap_set t2 = {} \<and>
low_equal t1 t2"
text \<open>NIC is short for non-interference conclusion.\<close>
definition "NIC t1 t2 \<equiv> (\<exists>u1 u2. Some u1 = NEXT t1 \<and> Some u2 = NEXT t2 \<and>
((ucast (get_S (cpu_reg_val PSR u1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR u2)))::word1) = 0 \<and>
low_equal u1 u2)"
lemma NIS_short: "\<forall>t1 t2. NIA t1 t2 \<longrightarrow> NIC t1 t2"
apply (simp add: NIA_def NIC_def)
using non_interference_step by auto
lemma non_interference_induct_case_sub1:
assumes a1: "(\<exists>t1. Some t1 = SEQ n s1 \<and>
(\<exists>t2. Some t2 = SEQ n s2 \<and>
NIA t1 t2))"
shows "(\<exists>t1. Some t1 = SEQ n s1 \<and>
(\<exists>t2. Some t2 = SEQ n s2 \<and>
NIA t1 t2 \<and>
NIC t1 t2))"
using NIS_short
using assms by auto
lemma non_interference_induct_case:
assumes a1:
"((\<forall>i t. i \<le> n \<and> SEQ i s1 = Some t \<longrightarrow>
good_context t \<and> get_delayed_pool t = [] \<and> get_trap_set t = {}) \<and>
(\<forall>i t. i \<le> n \<and> SEQ i s2 = Some t \<longrightarrow>
good_context t \<and> get_delayed_pool t = [] \<and> get_trap_set t = {}) \<longrightarrow>
(\<exists>t1. Some t1 = SEQ n s1 \<and>
(\<exists>t2. Some t2 = SEQ n s2 \<and>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and> low_equal t1 t2))) \<and>
(\<forall>i t. i \<le> Suc n \<and> SEQ i s1 = Some t \<longrightarrow>
good_context t \<and> get_delayed_pool t = [] \<and> get_trap_set t = {}) \<and>
(\<forall>i t. i \<le> Suc n \<and> SEQ i s2 = Some t \<longrightarrow>
good_context t \<and> get_delayed_pool t = [] \<and> get_trap_set t = {})"
shows "\<exists>t1. Some t1 = (case SEQ n s1 of None \<Rightarrow> None | Some x \<Rightarrow> NEXT x) \<and>
(\<exists>t2. Some t2 = (case SEQ n s2 of None \<Rightarrow> None | Some x \<Rightarrow> NEXT x) \<and>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and> low_equal t1 t2)"
proof -
from a1 have f1: "((\<forall>i t. i \<le> n \<and> SEQ i s1 = Some t \<longrightarrow>
good_context t \<and> get_delayed_pool t = [] \<and> get_trap_set t = {}) \<and>
(\<forall>i t. i \<le> n \<and> SEQ i s2 = Some t \<longrightarrow>
good_context t \<and> get_delayed_pool t = [] \<and> get_trap_set t = {}))"
by (metis le_SucI)
then have f2: "(\<exists>t1. Some t1 = SEQ n s1 \<and>
(\<exists>t2. Some t2 = SEQ n s2 \<and>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and>
low_equal t1 t2))"
using a1 by auto
then have f3: "(\<exists>t1. Some t1 = SEQ n s1 \<and>
(\<exists>t2. Some t2 = SEQ n s2 \<and>
NIA t1 t2))"
using f1 NIA_def by (metis (full_types) dual_order.refl)
then have "(\<exists>t1. Some t1 = SEQ n s1 \<and>
(\<exists>t2. Some t2 = SEQ n s2 \<and>
NIA t1 t2 \<and>
NIC t1 t2))"
using non_interference_induct_case_sub1 by blast
then have "(\<exists>t1. Some t1 = SEQ n s1 \<and>
(\<exists>t2. Some t2 = SEQ n s2 \<and>
(((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and>
good_context t1 \<and> get_delayed_pool t1 = [] \<and> get_trap_set t1 = {} \<and>
good_context t2 \<and> get_delayed_pool t2 = [] \<and> get_trap_set t2 = {} \<and>
low_equal t1 t2) \<and>
(\<exists>u1 u2. Some u1 = NEXT t1 \<and> Some u2 = NEXT t2 \<and>
((ucast (get_S (cpu_reg_val PSR u1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR u2)))::word1) = 0 \<and>
low_equal u1 u2)))"
using NIA_def NIC_def by fastforce
then show ?thesis
by (metis option.simps(5))
qed
lemma non_interference_induct_case_sub2:
assumes a1:
"(user_seq_exe n s1 \<and>
user_seq_exe n s2 \<longrightarrow>
(\<exists>t1. Some t1 = SEQ n s1 \<and>
(\<exists>t2. Some t2 = SEQ n s2 \<and>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and> low_equal t1 t2))) \<and>
user_seq_exe (Suc n) s1 \<and>
user_seq_exe (Suc n) s2"
shows "\<exists>t1. Some t1 = (case SEQ n s1 of None \<Rightarrow> None | Some x \<Rightarrow> NEXT x) \<and>
(\<exists>t2. Some t2 = (case SEQ n s2 of None \<Rightarrow> None | Some x \<Rightarrow> NEXT x) \<and>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and> low_equal t1 t2)"
using a1
by (simp add: non_interference_induct_case user_seq_exe_def)
theorem non_interference:
assumes a1:
"((ucast (get_S (cpu_reg_val PSR s1)))::word1) = 0 \<and>
good_context s1 \<and>
get_delayed_pool s1 = [] \<and> get_trap_set s1 = {} \<and>
((ucast (get_S (cpu_reg_val PSR s2)))::word1) = 0 \<and>
get_delayed_pool s2 = [] \<and> get_trap_set s2 = {} \<and>
good_context s2 \<and>
user_seq_exe n s1 \<and> user_seq_exe n s2 \<and>
low_equal s1 s2"
shows "(\<exists>t1 t2. Some t1 = SEQ n s1 \<and> Some t2 = SEQ n s2 \<and>
((ucast (get_S (cpu_reg_val PSR t1)))::word1) = 0 \<and>
((ucast (get_S (cpu_reg_val PSR t2)))::word1) = 0 \<and>
low_equal t1 t2)"
using a1
apply (induction n)
apply (simp add: user_seq_exe_def)
apply clarsimp
by (simp add: non_interference_induct_case_sub2)
end
diff --git a/thys/SPARCv8/SparcModel_MMU/Sparc_State.thy b/thys/SPARCv8/SparcModel_MMU/Sparc_State.thy
--- a/thys/SPARCv8/SparcModel_MMU/Sparc_State.thy
+++ b/thys/SPARCv8/SparcModel_MMU/Sparc_State.thy
@@ -1,1007 +1,1007 @@
(*
* Copyright 2016, NTU
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* Author: Zhe Hou, David Sanan.
*)
section \<open>SPARC V8 state model\<close>
theory Sparc_State
imports Main Sparc_Types "../lib/wp/DetMonadLemmas" MMU
begin
section \<open>state as a function\<close>
record cpu_cache =
dcache:: cache_context
icache:: cache_context
text\<open>
The state @{term sparc_state} is a record consisting of @{term cpu_context},
@{term user_context}, @{term mem_context} and further components, modelling the CPU registers,
user registers, system registers, memory, MMU, cache, delayed write pool,
state variables, and trap set.
Additionally, a boolean indicates whether the state is
undefined or not.
\<close>
record (overloaded) ('a) sparc_state =
cpu_reg:: cpu_context
user_reg:: "('a) user_context"
sys_reg:: sys_context
mem:: mem_context
mmu:: MMU_state
cache:: cpu_cache
dwrite:: delayed_write_pool
state_var:: sparc_state_var
traps:: "Trap set"
undef:: bool
section\<open>functions for state member access\<close>
definition cpu_reg_val:: "CPU_register \<Rightarrow> ('a) sparc_state \<Rightarrow> reg_type"
where
"cpu_reg_val reg state \<equiv> (cpu_reg state) reg"
definition cpu_reg_mod :: "word32 \<Rightarrow> CPU_register \<Rightarrow> ('a) sparc_state \<Rightarrow>
('a) sparc_state"
where "cpu_reg_mod data_w32 cpu state \<equiv>
state\<lparr>cpu_reg := ((cpu_reg state)(cpu := data_w32))\<rparr>"
text \<open>r[0] = 0. Otherwise read the actual value.\<close>
definition user_reg_val:: "('a) window_size \<Rightarrow> user_reg_type \<Rightarrow> ('a) sparc_state \<Rightarrow> reg_type"
where
"user_reg_val window ur state \<equiv>
if ur = 0 then 0
else (user_reg state) window ur"
text \<open>Write a global register. win should be initialised as NWINDOWS.\<close>
fun (sequential) global_reg_mod :: "word32 \<Rightarrow> nat \<Rightarrow> user_reg_type \<Rightarrow>
('a::len0) sparc_state \<Rightarrow> ('a) sparc_state"
where
"global_reg_mod data_w32 0 ur state = state"
|
"global_reg_mod data_w32 win ur state = (
let win_word = word_of_int (int (win-1));
ns = state\<lparr>user_reg :=
(user_reg state)(win_word := ((user_reg state) win_word)(ur := data_w32))\<rparr>
in
global_reg_mod data_w32 (win-1) ur ns
)"
text \<open>Compute the next window.\<close>
definition next_window :: "('a::len0) window_size \<Rightarrow> ('a) window_size"
where
"next_window win \<equiv>
if (uint win) < (NWINDOWS - 1) then (win + 1)
else 0
"
text \<open>Compute the previous window.\<close>
definition pre_window :: "('a::len0) window_size \<Rightarrow> ('a::len0) window_size"
where
"pre_window win \<equiv>
if (uint win) > 0 then (win - 1)
else (word_of_int (NWINDOWS - 1))
"
text \<open>Write an output register.
Also write ur+16 of the previous window.\<close>
definition out_reg_mod :: "word32 \<Rightarrow> ('a::len0) window_size \<Rightarrow> user_reg_type \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"out_reg_mod data_w32 win ur state \<equiv>
let state' = state\<lparr>user_reg :=
(user_reg state)(win := ((user_reg state) win)(ur := data_w32))\<rparr>;
win' = pre_window win;
ur' = ur + 16
in
state'\<lparr>user_reg :=
(user_reg state')(win' := ((user_reg state') win')(ur' := data_w32))\<rparr>
"
text \<open>Write an input register.
Also write ur-16 of the next window.\<close>
definition in_reg_mod :: "word32 \<Rightarrow> ('a::len0) window_size \<Rightarrow> user_reg_type \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"in_reg_mod data_w32 win ur state \<equiv>
let state' = state\<lparr>user_reg :=
(user_reg state)(win := ((user_reg state) win)(ur := data_w32))\<rparr>;
win' = next_window win;
ur' = ur - 16
in
state'\<lparr>user_reg :=
(user_reg state')(win' := ((user_reg state') win')(ur' := data_w32))\<rparr>
"
text \<open>Do not modify r[0].\<close>
definition user_reg_mod :: "word32 \<Rightarrow> ('a::len0) window_size \<Rightarrow> user_reg_type \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"user_reg_mod data_w32 win ur state \<equiv>
if ur = 0 then state
else if 0 < ur \<and> ur < 8 then
global_reg_mod data_w32 (nat NWINDOWS) ur state
else if 7 < ur \<and> ur < 16 then
out_reg_mod data_w32 win ur state
else if 15 < ur \<and> ur < 24 then
state\<lparr>user_reg :=
(user_reg state)(win := ((user_reg state) win)(ur := data_w32))\<rparr>
else \<^cancel>\<open>if 23 < ur \<and> ur < 32 then\<close>
in_reg_mod data_w32 win ur state
\<^cancel>\<open>else state\<close>
"
definition sys_reg_val :: "sys_reg \<Rightarrow> ('a) sparc_state \<Rightarrow> reg_type"
where
"sys_reg_val reg state \<equiv> (sys_reg state) reg"
definition sys_reg_mod :: "word32 \<Rightarrow> sys_reg \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"sys_reg_mod data_w32 sys state \<equiv> state\<lparr>sys_reg := (sys_reg state)(sys := data_w32)\<rparr>"
text \<open>The following functions deal with physical memory.
N.B. Physical memory addresses in SPARCv8 are 36-bit.\<close>
text \<open>LEON3 doesn't distinguish ASI 8 from 9, nor 10 from 11, for read access,
for both user and supervisor.
We discovered that machine code compiled by
the sparc-elf compiler often reads asi = 10 (user data)
when the actual content is stored in asi = 8 (user instruction).
For testing purposes, we therefore don't distinguish asi = 8,9,10,11
for read access.\<close>
definition mem_val:: "asi_type \<Rightarrow> phys_address \<Rightarrow>
('a) sparc_state \<Rightarrow> mem_val_type option"
where
"mem_val asi add state \<equiv>
let asi8 = word_of_int 8;
asi9 = word_of_int 9;
asi10 = word_of_int 10;
asi11 = word_of_int 11;
r1 = (mem state) asi8 add
in
if r1 = None then
let r2 = (mem state) asi9 add in
if r2 = None then
let r3 = (mem state) asi10 add in
if r3 = None then
(mem state) asi11 add
else r3
else r2
else r1
"
text \<open>An alternative way to read values from memory.
Some implementations may use this definition.\<close>
definition mem_val_alt:: "asi_type \<Rightarrow> phys_address \<Rightarrow>
('a) sparc_state \<Rightarrow> mem_val_type option"
where
"mem_val_alt asi add state \<equiv>
let r1 = (mem state) asi add;
asi8 = word_of_int 8;
asi9 = word_of_int 9;
asi10 = word_of_int 10;
asi11 = word_of_int 11
in
if r1 = None \<and> (uint asi) = 8 then
let r2 = (mem state) asi9 add in
r2
else if r1 = None \<and> (uint asi) = 9 then
let r2 = (mem state) asi8 add in
r2
else if r1 = None \<and> (uint asi) = 10 then
let r2 = (mem state) asi11 add in
if r2 = None then
let r3 = (mem state) asi8 add in
if r3 = None then
(mem state) asi9 add
else r3
else r2
else if r1 = None \<and> (uint asi) = 11 then
let r2 = (mem state) asi10 add in
if r2 = None then
let r3 = (mem state) asi8 add in
if r3 = None then
(mem state) asi9 add
else r3
else r2
else r1"
definition mem_mod :: "asi_type \<Rightarrow> phys_address \<Rightarrow> mem_val_type \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"mem_mod asi addr val state \<equiv>
let state1 = state\<lparr>mem := (mem state)
(asi := ((mem state) asi)(addr := Some val))\<rparr>
in \<comment> \<open>Only allow one of \<open>asi\<close> 8 and 9 (10 and 11) to have value.\<close>
if (uint asi) = 8 \<or> (uint asi) = 10 then
let asi2 = word_of_int ((uint asi) + 1) in
state1\<lparr>mem := (mem state1)
(asi2 := ((mem state1) asi2)(addr := None))\<rparr>
else if (uint asi) = 9 \<or> (uint asi) = 11 then
let asi2 = word_of_int ((uint asi) - 1) in
state1\<lparr>mem := (mem state1)(asi2 := ((mem state1) asi2)(addr := None))\<rparr>
else state1
"
text \<open>An alternative way to write memory. This method insists that
each address can only hold a value under one of ASI = 8, 9, 10, 11.\<close>
definition mem_mod_alt :: "asi_type \<Rightarrow> phys_address \<Rightarrow> mem_val_type \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"mem_mod_alt asi addr val state \<equiv>
let state1 = state\<lparr>mem := (mem state)
(asi := ((mem state) asi)(addr := Some val))\<rparr>;
asi8 = word_of_int 8;
asi9 = word_of_int 9;
asi10 = word_of_int 10;
asi11 = word_of_int 11
in
\<comment> \<open>Only allow one of \<open>asi\<close> 8, 9, 10, 11 to have value.\<close>
if (uint asi) = 8 then
let state2 = state1\<lparr>mem := (mem state1)
(asi9 := ((mem state1) asi9)(addr := None))\<rparr>;
state3 = state2\<lparr>mem := (mem state2)
(asi10 := ((mem state2) asi10)(addr := None))\<rparr>;
state4 = state3\<lparr>mem := (mem state3)
(asi11 := ((mem state3) asi11)(addr := None))\<rparr>
in
state4
else if (uint asi) = 9 then
let state2 = state1\<lparr>mem := (mem state1)
(asi8 := ((mem state1) asi8)(addr := None))\<rparr>;
state3 = state2\<lparr>mem := (mem state2)
(asi10 := ((mem state2) asi10)(addr := None))\<rparr>;
state4 = state3\<lparr>mem := (mem state3)
(asi11 := ((mem state3) asi11)(addr := None))\<rparr>
in
state4
else if (uint asi) = 10 then
let state2 = state1\<lparr>mem := (mem state1)
(asi9 := ((mem state1) asi9)(addr := None))\<rparr>;
state3 = state2\<lparr>mem := (mem state2)
(asi8 := ((mem state2) asi8)(addr := None))\<rparr>;
state4 = state3\<lparr>mem := (mem state3)
(asi11 := ((mem state3) asi11)(addr := None))\<rparr>
in
state4
else if (uint asi) = 11 then
let state2 = state1\<lparr>mem := (mem state1)
(asi9 := ((mem state1) asi9)(addr := None))\<rparr>;
state3 = state2\<lparr>mem := (mem state2)
(asi10 := ((mem state2) asi10)(addr := None))\<rparr>;
state4 = state3\<lparr>mem := (mem state3)
(asi8 := ((mem state3) asi8)(addr := None))\<rparr>
in
state4
else state1
"
text \<open>Given an ASI (word8) and a physical address addr,
read the 32-bit value from the memory addresses
starting at address addr', where addr' equals addr
except that the last two bits are set to 0.
That is, read the data from
addr', addr'+1, addr'+2, addr'+3.\<close>
definition mem_val_w32 :: "asi_type \<Rightarrow> phys_address \<Rightarrow>
('a) sparc_state \<Rightarrow> word32 option"
where
"mem_val_w32 asi addr state \<equiv>
- let addr' = bitAND addr 0b111111111111111111111111111111111100;
+ let addr' = (AND) addr 0b111111111111111111111111111111111100;
addr0 = addr';
addr1 = addr' + 1;
addr2 = addr' + 2;
addr3 = addr' + 3;
r0 = mem_val_alt asi addr0 state;
r1 = mem_val_alt asi addr1 state;
r2 = mem_val_alt asi addr2 state;
r3 = mem_val_alt asi addr3 state
in
if r0 = None \<or> r1 = None \<or> r2 = None \<or> r3 = None then
None
else
let byte0 = case r0 of Some v \<Rightarrow> v;
byte1 = case r1 of Some v \<Rightarrow> v;
byte2 = case r2 of Some v \<Rightarrow> v;
byte3 = case r3 of Some v \<Rightarrow> v
in
- Some (bitOR (bitOR (bitOR ((ucast(byte0)) << 24)
+ Some ((OR) ((OR) ((OR) ((ucast(byte0)) << 24)
((ucast(byte1)) << 16))
((ucast(byte2)) << 8))
(ucast(byte3)))
"
text \<open>
Let \<open>addr'\<close> be \<open>addr\<close> with the last two bits set to 0.
Write the 32-bit data to the memory address \<open>addr'\<close>
(and the following 3 addresses).
\<open>byte_mask\<close> decides which bytes of the 32 bits are written.
\<close>
definition mem_mod_w32 :: "asi_type \<Rightarrow> phys_address \<Rightarrow> word4 \<Rightarrow> word32 \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"mem_mod_w32 asi addr byte_mask data_w32 state \<equiv>
- let addr' = bitAND addr 0b111111111111111111111111111111111100;
- addr0 = bitOR addr' 0b000000000000000000000000000000000000;
- addr1 = bitOR addr' 0b000000000000000000000000000000000001;
- addr2 = bitOR addr' 0b000000000000000000000000000000000010;
- addr3 = bitOR addr' 0b000000000000000000000000000000000011;
+ let addr' = (AND) addr 0b111111111111111111111111111111111100;
+ addr0 = (OR) addr' 0b000000000000000000000000000000000000;
+ addr1 = (OR) addr' 0b000000000000000000000000000000000001;
+ addr2 = (OR) addr' 0b000000000000000000000000000000000010;
+ addr3 = (OR) addr' 0b000000000000000000000000000000000011;
byte0 = (ucast (data_w32 >> 24))::mem_val_type;
byte1 = (ucast (data_w32 >> 16))::mem_val_type;
byte2 = (ucast (data_w32 >> 8))::mem_val_type;
byte3 = (ucast data_w32)::mem_val_type;
- s0 = if ((bitAND byte_mask (0b1000::word4)) >> 3) = 1 then
+ s0 = if (((AND) byte_mask (0b1000::word4)) >> 3) = 1 then
mem_mod asi addr0 byte0 state
else state;
- s1 = if ((bitAND byte_mask (0b0100::word4)) >> 2) = 1 then
+ s1 = if (((AND) byte_mask (0b0100::word4)) >> 2) = 1 then
mem_mod asi addr1 byte1 s0
else s0;
- s2 = if ((bitAND byte_mask (0b0010::word4)) >> 1) = 1 then
+ s2 = if (((AND) byte_mask (0b0010::word4)) >> 1) = 1 then
mem_mod asi addr2 byte2 s1
else s1;
- s3 = if (bitAND byte_mask (0b0001::word4)) = 1 then
+ s3 = if ((AND) byte_mask (0b0001::word4)) = 1 then
mem_mod asi addr3 byte3 s2
else s2
in
s3
"
text \<open>The following functions deal with virtual addresses.
These are based on functions written by David Sanan.\<close>
definition load_word_mem :: "('a) sparc_state \<Rightarrow> virtua_address \<Rightarrow> asi_type \<Rightarrow>
machine_word option"
where "load_word_mem state va asi \<equiv>
let pair = (virt_to_phys va (mmu state) (mem state)) in
case pair of
Some pair \<Rightarrow> (
if mmu_readable (get_acc_flag (snd pair)) asi then
(mem_val_w32 asi (fst pair) state)
else None)
| None \<Rightarrow> None"
definition store_word_mem ::"('a) sparc_state \<Rightarrow> virtua_address \<Rightarrow> machine_word \<Rightarrow>
word4 \<Rightarrow> asi_type \<Rightarrow> ('a) sparc_state option"
where "store_word_mem state va wd byte_mask asi \<equiv>
let pair = (virt_to_phys va (mmu state) (mem state)) in
case pair of
Some pair \<Rightarrow> (
if mmu_writable (get_acc_flag (snd pair)) asi then
Some (mem_mod_w32 asi (fst pair) byte_mask wd state)
else None)
| None \<Rightarrow> None"
definition icache_val:: "cache_type \<Rightarrow> ('a) sparc_state \<Rightarrow> mem_val_type option"
where "icache_val c state \<equiv> icache (cache state) c"
definition dcache_val:: "cache_type \<Rightarrow> ('a) sparc_state \<Rightarrow> mem_val_type option"
where "dcache_val c state \<equiv> dcache (cache state) c"
definition icache_mod :: "cache_type \<Rightarrow> mem_val_type \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "icache_mod c val state \<equiv>
state\<lparr>cache := ((cache state)
\<lparr>icache := (icache (cache state))(c := Some val)\<rparr>)\<rparr>
"
definition dcache_mod :: "cache_type \<Rightarrow> mem_val_type \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "dcache_mod c val state \<equiv>
state\<lparr>cache := ((cache state)
\<lparr>dcache := (dcache (cache state))(c := Some val)\<rparr>)\<rparr>
"
text \<open>Check if the memory address is in the instruction cache or not.\<close>
definition icache_miss :: "virtua_address \<Rightarrow> ('a) sparc_state \<Rightarrow> bool"
where
"icache_miss addr state \<equiv>
let line_len = 12;
tag = (ucast (addr >> line_len))::cache_tag;
line = (ucast (0b0::word1))::cache_line_size
in
if (icache_val (tag,line) state) = None then True
else False
"
text \<open>Check if the memory address is in the data cache or not.\<close>
definition dcache_miss :: "virtua_address \<Rightarrow> ('a) sparc_state \<Rightarrow> bool"
where
"dcache_miss addr state \<equiv>
let line_len = 12;
tag = (ucast (addr >> line_len))::cache_tag;
line = (ucast (0b0::word1))::cache_line_size
in
if (dcache_val (tag,line) state) = None then True
else False
"
definition read_data_cache:: "('a) sparc_state \<Rightarrow> virtua_address \<Rightarrow> machine_word option"
where "read_data_cache state va \<equiv>
let tag = (ucast (va >> 12))::word20;
- offset0 = bitAND ((ucast va)::word12) 0b111111111100;
- offset1 = bitOR offset0 0b000000000001;
- offset2 = bitOR offset0 0b000000000010;
- offset3 = bitOR offset0 0b000000000011;
+ offset0 = (AND) ((ucast va)::word12) 0b111111111100;
+ offset1 = (OR) offset0 0b000000000001;
+ offset2 = (OR) offset0 0b000000000010;
+ offset3 = (OR) offset0 0b000000000011;
r0 = dcache_val (tag,offset0) state;
r1 = dcache_val (tag,offset1) state;
r2 = dcache_val (tag,offset2) state;
r3 = dcache_val (tag,offset3) state
in
if r0 = None \<or> r1 = None \<or> r2 = None \<or> r3 = None then
None
else
let byte0 = case r0 of Some v \<Rightarrow> v;
byte1 = case r1 of Some v \<Rightarrow> v;
byte2 = case r2 of Some v \<Rightarrow> v;
byte3 = case r3 of Some v \<Rightarrow> v
in
- Some (bitOR (bitOR (bitOR ((ucast(byte0)) << 24)
+ Some ((OR) ((OR) ((OR) ((ucast(byte0)) << 24)
((ucast(byte1)) << 16))
((ucast(byte2)) << 8))
(ucast(byte3)))
"
definition read_instr_cache:: "('a) sparc_state \<Rightarrow> virtua_address \<Rightarrow> machine_word option"
where "read_instr_cache state va \<equiv>
let tag = (ucast (va >> 12))::word20;
- offset0 = bitAND ((ucast va)::word12) 0b111111111100;
- offset1 = bitOR offset0 0b000000000001;
- offset2 = bitOR offset0 0b000000000010;
- offset3 = bitOR offset0 0b000000000011;
+ offset0 = (AND) ((ucast va)::word12) 0b111111111100;
+ offset1 = (OR) offset0 0b000000000001;
+ offset2 = (OR) offset0 0b000000000010;
+ offset3 = (OR) offset0 0b000000000011;
r0 = icache_val (tag,offset0) state;
r1 = icache_val (tag,offset1) state;
r2 = icache_val (tag,offset2) state;
r3 = icache_val (tag,offset3) state
in
if r0 = None \<or> r1 = None \<or> r2 = None \<or> r3 = None then
None
else
let byte0 = case r0 of Some v \<Rightarrow> v;
byte1 = case r1 of Some v \<Rightarrow> v;
byte2 = case r2 of Some v \<Rightarrow> v;
byte3 = case r3 of Some v \<Rightarrow> v
in
- Some (bitOR (bitOR (bitOR ((ucast(byte0)) << 24)
+ Some ((OR) ((OR) ((OR) ((ucast(byte0)) << 24)
((ucast(byte1)) << 16))
((ucast(byte2)) << 8))
(ucast(byte3)))
"
definition add_data_cache :: "('a) sparc_state \<Rightarrow> virtua_address \<Rightarrow> machine_word \<Rightarrow>
word4 \<Rightarrow> ('a) sparc_state"
where
"add_data_cache state va word byte_mask \<equiv>
let tag = (ucast (va >> 12))::word20;
- offset0 = bitAND ((ucast va)::word12) 0b111111111100;
- offset1 = bitOR offset0 0b000000000001;
- offset2 = bitOR offset0 0b000000000010;
- offset3 = bitOR offset0 0b000000000011;
+ offset0 = (AND) ((ucast va)::word12) 0b111111111100;
+ offset1 = (OR) offset0 0b000000000001;
+ offset2 = (OR) offset0 0b000000000010;
+ offset3 = (OR) offset0 0b000000000011;
byte0 = (ucast (word >> 24))::mem_val_type;
byte1 = (ucast (word >> 16))::mem_val_type;
byte2 = (ucast (word >> 8))::mem_val_type;
byte3 = (ucast word)::mem_val_type;
- s0 = if ((bitAND byte_mask (0b1000::word4)) >> 3) = 1 then
+ s0 = if (((AND) byte_mask (0b1000::word4)) >> 3) = 1 then
dcache_mod (tag,offset0) byte0 state
else state;
- s1 = if ((bitAND byte_mask (0b0100::word4)) >> 2) = 1 then
+ s1 = if (((AND) byte_mask (0b0100::word4)) >> 2) = 1 then
dcache_mod (tag,offset1) byte1 s0
else s0;
- s2 = if ((bitAND byte_mask (0b0010::word4)) >> 1) = 1 then
+ s2 = if (((AND) byte_mask (0b0010::word4)) >> 1) = 1 then
dcache_mod (tag,offset2) byte2 s1
else s1;
- s3 = if (bitAND byte_mask (0b0001::word4)) = 1 then
+ s3 = if ((AND) byte_mask (0b0001::word4)) = 1 then
dcache_mod (tag,offset3) byte3 s2
else s2
in s3
"
definition add_instr_cache :: "('a) sparc_state \<Rightarrow> virtua_address \<Rightarrow> machine_word \<Rightarrow>
word4 \<Rightarrow> ('a) sparc_state"
where
"add_instr_cache state va word byte_mask \<equiv>
let tag = (ucast (va >> 12))::word20;
- offset0 = bitAND ((ucast va)::word12) 0b111111111100;
- offset1 = bitOR offset0 0b000000000001;
- offset2 = bitOR offset0 0b000000000010;
- offset3 = bitOR offset0 0b000000000011;
+ offset0 = (AND) ((ucast va)::word12) 0b111111111100;
+ offset1 = (OR) offset0 0b000000000001;
+ offset2 = (OR) offset0 0b000000000010;
+ offset3 = (OR) offset0 0b000000000011;
byte0 = (ucast (word >> 24))::mem_val_type;
byte1 = (ucast (word >> 16))::mem_val_type;
byte2 = (ucast (word >> 8))::mem_val_type;
byte3 = (ucast word)::mem_val_type;
- s0 = if ((bitAND byte_mask (0b1000::word4)) >> 3) = 1 then
+ s0 = if (((AND) byte_mask (0b1000::word4)) >> 3) = 1 then
icache_mod (tag,offset0) byte0 state
else state;
- s1 = if ((bitAND byte_mask (0b0100::word4)) >> 2) = 1 then
+ s1 = if (((AND) byte_mask (0b0100::word4)) >> 2) = 1 then
icache_mod (tag,offset1) byte1 s0
else s0;
- s2 = if ((bitAND byte_mask (0b0010::word4)) >> 1) = 1 then
+ s2 = if (((AND) byte_mask (0b0010::word4)) >> 1) = 1 then
icache_mod (tag,offset2) byte2 s1
else s1;
- s3 = if (bitAND byte_mask (0b0001::word4)) = 1 then
+ s3 = if ((AND) byte_mask (0b0001::word4)) = 1 then
icache_mod (tag,offset3) byte3 s2
else s2
in s3
"
definition empty_cache ::"cache_context" where "empty_cache c \<equiv> None"
definition flush_data_cache:: "('a) sparc_state \<Rightarrow> ('a) sparc_state" where
"flush_data_cache state \<equiv> state\<lparr>cache := ((cache state)\<lparr>dcache := empty_cache\<rparr>)\<rparr>"
definition flush_instr_cache:: "('a) sparc_state \<Rightarrow> ('a) sparc_state" where
"flush_instr_cache state \<equiv> state\<lparr>cache := ((cache state)\<lparr>icache := empty_cache\<rparr>)\<rparr>"
definition flush_cache_all:: "('a) sparc_state \<Rightarrow> ('a) sparc_state" where
"flush_cache_all state \<equiv> state\<lparr>cache := ((cache state)\<lparr>
icache := empty_cache, dcache := empty_cache\<rparr>)\<rparr>"
text \<open>Check if the FI or FD bit of CCR is 1.
If FI is 1 then flush instruction cache.
If FD is 1 then flush data cache.\<close>
definition ccr_flush :: "('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"ccr_flush state \<equiv>
let ccr_val = sys_reg_val CCR state;
\<comment> \<open>\<open>FI\<close> is bit 21 of \<open>CCR\<close>\<close>
- fi_val = (bitAND ccr_val (0b00000000001000000000000000000000)) >> 21;
- fd_val = (bitAND ccr_val (0b00000000010000000000000000000000)) >> 22;
+ fi_val = ((AND) ccr_val (0b00000000001000000000000000000000)) >> 21;
+ fd_val = ((AND) ccr_val (0b00000000010000000000000000000000)) >> 22;
state1 = (if fi_val = 1 then flush_instr_cache state else state)
in
if fd_val = 1 then flush_data_cache state1 else state1"
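text \<open>For illustration (CCR value assumed): if \<open>CCR = 0x00600000\<close>, i.e. both FI (bit 21)
and FD (bit 22) are set, then \<open>fi_val = fd_val = 1\<close> and both caches are flushed;
if both bits are 0, \<open>ccr_flush\<close> leaves the state unchanged.\<close>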
definition get_delayed_pool :: "('a) sparc_state \<Rightarrow> delayed_write_pool"
where "get_delayed_pool state \<equiv> dwrite state"
definition exe_pool :: "(int \<times> reg_type \<times> CPU_register) \<Rightarrow> (int \<times> reg_type \<times> CPU_register)"
where "exe_pool w \<equiv> case w of (n,v,c) \<Rightarrow> ((n-1),v,c)"
text \<open>Subtract 1 from the delay count of every member of the pool.
We assume all members have delay > 0.\<close>
primrec delayed_pool_minus :: "delayed_write_pool \<Rightarrow> delayed_write_pool"
where
"delayed_pool_minus [] = []"
|
"delayed_pool_minus (x#xs) = (exe_pool x)#(delayed_pool_minus xs)"
text \<open>Add a delayed-write to the pool.\<close>
definition delayed_pool_add :: "(int \<times> reg_type \<times> CPU_register) \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"delayed_pool_add dw s \<equiv>
let (i,v,cr) = dw in
if i = 0 then \<comment> \<open>Write the value to the register immediately.\<close>
cpu_reg_mod v cr s
else \<comment> \<open>Add to delayed write pool.\<close>
let curr_pool = get_delayed_pool s in
s\<lparr>dwrite := curr_pool@[dw]\<rparr>"
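text \<open>For example, \<open>delayed_pool_add (0,v,PSR) s\<close> performs the write immediately and
leaves the pool unchanged, whereas \<open>delayed_pool_add (2,v,PSR) s\<close> appends
\<open>(2,v,PSR)\<close> to the pool without touching \<open>PSR\<close>.\<close>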
text \<open>Remove a delayed-write from the pool.
Assume that the delayed-write to be removed has delay 0,
i.e., it has been executed.\<close>
definition delayed_pool_rm :: "(int \<times> reg_type \<times> CPU_register) \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where
"delayed_pool_rm dw s \<equiv>
let curr_pool = get_delayed_pool s in
case dw of (n,v,cr) \<Rightarrow>
(if n = 0 then
s\<lparr>dwrite := List.remove1 dw curr_pool\<rparr>
else s)
"
text \<open>Remove all the entries with delay = 0, i.e., those that are written.\<close>
primrec delayed_pool_rm_written :: "delayed_write_pool \<Rightarrow> delayed_write_pool"
where
"delayed_pool_rm_written [] = []"
|
"delayed_pool_rm_written (x#xs) =
(if fst x = 0 then delayed_pool_rm_written xs else x#(delayed_pool_rm_written xs))
"
definition annul_val :: "('a) sparc_state \<Rightarrow> bool"
where "annul_val state \<equiv> get_annul (state_var state)"
definition annul_mod :: "bool \<Rightarrow> ('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "annul_mod b s \<equiv> s\<lparr>state_var := write_annul b (state_var s)\<rparr>"
definition reset_trap_val :: "('a) sparc_state \<Rightarrow> bool"
where "reset_trap_val state \<equiv> get_reset_trap (state_var state)"
definition reset_trap_mod :: "bool \<Rightarrow> ('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "reset_trap_mod b s \<equiv> s\<lparr>state_var := write_reset_trap b (state_var s)\<rparr>"
definition exe_mode_val :: "('a) sparc_state \<Rightarrow> bool"
where "exe_mode_val state \<equiv> get_exe_mode (state_var state)"
definition exe_mode_mod :: "bool \<Rightarrow> ('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "exe_mode_mod b s \<equiv> s\<lparr>state_var := write_exe_mode b (state_var s)\<rparr>"
definition reset_mode_val :: "('a) sparc_state \<Rightarrow> bool"
where "reset_mode_val state \<equiv> get_reset_mode (state_var state)"
definition reset_mode_mod :: "bool \<Rightarrow> ('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "reset_mode_mod b s \<equiv> s\<lparr>state_var := write_reset_mode b (state_var s)\<rparr>"
definition err_mode_val :: "('a) sparc_state \<Rightarrow> bool"
where "err_mode_val state \<equiv> get_err_mode (state_var state)"
definition err_mode_mod :: "bool \<Rightarrow> ('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "err_mode_mod b s \<equiv> s\<lparr>state_var := write_err_mode b (state_var s)\<rparr>"
definition ticc_trap_type_val :: "('a) sparc_state \<Rightarrow> word7"
where "ticc_trap_type_val state \<equiv> get_ticc_trap_type (state_var state)"
definition ticc_trap_type_mod :: "word7 \<Rightarrow> ('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "ticc_trap_type_mod w s \<equiv> s\<lparr>state_var := write_ticc_trap_type w (state_var s)\<rparr>"
definition interrupt_level_val :: "('a) sparc_state \<Rightarrow> word3"
where "interrupt_level_val state \<equiv> get_interrupt_level (state_var state)"
definition interrupt_level_mod :: "word3 \<Rightarrow> ('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "interrupt_level_mod w s \<equiv> s\<lparr>state_var := write_interrupt_level w (state_var s)\<rparr>"
definition store_barrier_pending_val :: "('a) sparc_state \<Rightarrow> bool"
where "store_barrier_pending_val state \<equiv>
get_store_barrier_pending (state_var state)"
definition store_barrier_pending_mod :: "bool \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "store_barrier_pending_mod w s \<equiv>
s\<lparr>state_var := write_store_barrier_pending w (state_var s)\<rparr>"
definition pb_block_ldst_byte_val :: "virtua_address \<Rightarrow> ('a) sparc_state
\<Rightarrow> bool"
where "pb_block_ldst_byte_val add state \<equiv>
(atm_ldst_byte (state_var state)) add"
definition pb_block_ldst_byte_mod :: "virtua_address \<Rightarrow> bool \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "pb_block_ldst_byte_mod add b s \<equiv>
s\<lparr>state_var := ((state_var s)
\<lparr>atm_ldst_byte := (atm_ldst_byte (state_var s))(add := b)\<rparr>)\<rparr>"
text \<open>We only read the flag at the aligned address (add with the last two bits
cleared), which represents the whole word containing add.\<close>
definition pb_block_ldst_word_val :: "virtua_address \<Rightarrow> ('a) sparc_state
\<Rightarrow> bool"
where "pb_block_ldst_word_val add state \<equiv>
- let add0 = (bitAND add (0b11111111111111111111111111111100::word32)) in
+ let add0 = ((AND) add (0b11111111111111111111111111111100::word32)) in
(atm_ldst_word (state_var state)) add0"
text \<open>We only write the flag at the aligned address (add with the last two bits
cleared), which represents the whole word containing add.\<close>
definition pb_block_ldst_word_mod :: "virtua_address \<Rightarrow> bool \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "pb_block_ldst_word_mod add b s \<equiv>
- let add0 = (bitAND add (0b11111111111111111111111111111100::word32)) in
+ let add0 = ((AND) add (0b11111111111111111111111111111100::word32)) in
s\<lparr>state_var := ((state_var s)
\<lparr>atm_ldst_word := (atm_ldst_word (state_var s))(add0 := b)\<rparr>)\<rparr>"
definition get_trap_set :: "('a) sparc_state \<Rightarrow> Trap set"
where "get_trap_set state \<equiv> (traps state)"
definition add_trap_set :: "Trap \<Rightarrow> ('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "add_trap_set t s \<equiv> s\<lparr>traps := (traps s) \<union> {t}\<rparr>"
definition emp_trap_set :: "('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "emp_trap_set s \<equiv> s\<lparr>traps := {}\<rparr>"
definition state_undef:: "('a) sparc_state \<Rightarrow> bool"
where "state_undef state \<equiv> (undef state)"
text \<open>The \<open>memory_read\<close> interface that conforms with the SPARCv8 manual.\<close>
definition memory_read :: "asi_type \<Rightarrow> virtua_address \<Rightarrow>
('a) sparc_state \<Rightarrow>
((word32 option) \<times> ('a) sparc_state)"
where "memory_read asi addr state \<equiv>
let asi_int = uint asi in \<comment> \<open>See Page 25 and 35 for ASI usage in LEON 3FT.\<close>
if asi_int = 1 then \<comment> \<open>Forced cache miss.\<close>
\<comment> \<open>Directly read from memory.\<close>
let r1 = load_word_mem state addr (word_of_int 8) in
if r1 = None then
let r2 = load_word_mem state addr (word_of_int 10) in
if r2 = None then
(None,state)
else (r2,state)
else (r1,state)
else if asi_int = 2 then \<comment> \<open>System registers.\<close>
\<comment> \<open>See Table 19, Page 34 for System Register address map in LEON 3FT.\<close>
if uint addr = 0 then \<comment> \<open>Cache control register.\<close>
((Some (sys_reg_val CCR state)), state)
else if uint addr = 8 then \<comment> \<open>Instruction cache configuration register.\<close>
((Some (sys_reg_val ICCR state)), state)
else if uint addr = 12 then \<comment> \<open>Data cache configuration register.\<close>
((Some (sys_reg_val DCCR state)), state)
else \<comment> \<open>Invalid address.\<close>
(None, state)
else if asi_int \<in> {8,9} then \<comment> \<open>Access instruction memory.\<close>
let ccr_val = (sys_reg state) CCR in
if ccr_val AND 1 \<noteq> 0 then \<comment> \<open>Cache is enabled. Update cache.\<close>
\<comment> \<open>We don't follow the traditional approach, i.e., read from the cache first\<close>
\<comment> \<open>and fall back to memory only if the address is not cached,\<close>
\<comment> \<open>because performance is not an issue here.\<close>
\<comment> \<open>Thus we directly read from memory and update the cache.\<close>
let data = load_word_mem state addr asi in
case data of
Some w \<Rightarrow> (Some w,(add_instr_cache state addr w (0b1111::word4)))
|None \<Rightarrow> (None, state)
else \<comment> \<open>Cache is disabled. Just read from memory.\<close>
((load_word_mem state addr asi),state)
else if asi_int \<in> {10,11} then \<comment> \<open>Access data memory.\<close>
let ccr_val = (sys_reg state) CCR in
if ccr_val AND 1 \<noteq> 0 then \<comment> \<open>Cache is enabled. Update cache.\<close>
\<comment> \<open>We don't follow the traditional approach, i.e., read from the cache first\<close>
\<comment> \<open>and fall back to memory only if the address is not cached,\<close>
\<comment> \<open>because performance is not an issue here.\<close>
\<comment> \<open>Thus we directly read from memory and update the cache.\<close>
let data = load_word_mem state addr asi in
case data of
Some w \<Rightarrow> (Some w,(add_data_cache state addr w (0b1111::word4)))
|None \<Rightarrow> (None, state)
else \<comment> \<open>Cache is disabled. Just read from memory.\<close>
((load_word_mem state addr asi),state)
\<comment> \<open>We don't access instruction cache tag. i.e., \<open>asi = 12\<close>.\<close>
else if asi_int = 13 then \<comment> \<open>Read instruction cache data.\<close>
let cache_result = read_instr_cache state addr in
case cache_result of
Some w \<Rightarrow> (Some w, state)
|None \<Rightarrow> (None, state)
\<comment> \<open>We don't access data cache tag. i.e., \<open>asi = 14\<close>.\<close>
else if asi_int = 15 then \<comment> \<open>Read data cache data.\<close>
let cache_result = read_data_cache state addr in
case cache_result of
Some w \<Rightarrow> (Some w, state)
|None \<Rightarrow> (None, state)
else if asi_int \<in> {16,17} then \<comment> \<open>Flush entire instruction/data cache.\<close>
(None, state) \<comment> \<open>Has no effect for memory read.\<close>
else if asi_int \<in> {20,21} then \<comment> \<open>MMU diagnostic cache access.\<close>
(None, state) \<comment> \<open>Not considered in this model.\<close>
else if asi_int = 24 then \<comment> \<open>Flush cache and TLB in LEON3.\<close>
\<comment> \<open>But is not used for memory read.\<close>
(None, state)
else if asi_int = 25 then \<comment> \<open>MMU registers.\<close>
\<comment> \<open>Treat MMU registers as memory addresses that are not in the main memory.\<close>
((mmu_reg_val (mmu state) addr), state)
else if asi_int = 28 then \<comment> \<open>MMU bypass.\<close>
\<comment> \<open>Directly use addr as a physical address.\<close>
\<comment> \<open>Append 0000 in the front of addr.\<close>
\<comment> \<open>In this case, (ucast addr) suffices.\<close>
((mem_val_w32 asi (ucast addr) state), state)
else if asi_int = 29 then \<comment> \<open>MMU diagnostic access.\<close>
(None, state) \<comment> \<open>Not considered in this model.\<close>
else \<comment> \<open>Not considered in this model.\<close>
(None, state)
"
text \<open>Get the value of a memory address and an ASI.\<close>
definition mem_val_asi:: "asi_type \<Rightarrow> phys_address \<Rightarrow>
('a) sparc_state \<Rightarrow> mem_val_type option"
where "mem_val_asi asi add state \<equiv> (mem state) asi add"
text \<open>Check if an address is used in ASI 9 or 11.\<close>
definition sup_addr :: "phys_address \<Rightarrow> ('a) sparc_state \<Rightarrow> bool"
where
"sup_addr addr state \<equiv>
- let addr' = bitAND addr 0b111111111111111111111111111111111100;
- addr0 = bitOR addr' 0b000000000000000000000000000000000000;
- addr1 = bitOR addr' 0b000000000000000000000000000000000001;
- addr2 = bitOR addr' 0b000000000000000000000000000000000010;
- addr3 = bitOR addr' 0b000000000000000000000000000000000011;
+ let addr' = (AND) addr 0b111111111111111111111111111111111100;
+ addr0 = (OR) addr' 0b000000000000000000000000000000000000;
+ addr1 = (OR) addr' 0b000000000000000000000000000000000001;
+ addr2 = (OR) addr' 0b000000000000000000000000000000000010;
+ addr3 = (OR) addr' 0b000000000000000000000000000000000011;
r0 = mem_val_asi 9 addr0 state;
r1 = mem_val_asi 9 addr1 state;
r2 = mem_val_asi 9 addr2 state;
r3 = mem_val_asi 9 addr3 state;
r4 = mem_val_asi 11 addr0 state;
r5 = mem_val_asi 11 addr1 state;
r6 = mem_val_asi 11 addr2 state;
r7 = mem_val_asi 11 addr3 state
in
if r0 = None \<and> r1 = None \<and> r2 = None \<and> r3 = None \<and>
r4 = None \<and> r5 = None \<and> r6 = None \<and> r7 = None
then False
else True
"
text \<open>The \<open>memory_write\<close> interface that conforms with the SPARCv8 manual.\<close>
text \<open>LEON3 forbids the user from writing to an address in ASI 9 or 11.\<close>
definition memory_write_asi :: "asi_type \<Rightarrow> virtua_address \<Rightarrow> word4 \<Rightarrow> word32 \<Rightarrow>
('a) sparc_state \<Rightarrow>
('a) sparc_state option"
where
"memory_write_asi asi addr byte_mask data_w32 state \<equiv>
let asi_int = uint asi; \<comment> \<open>See Page 25 and 35 for ASI usage in LEON 3FT.\<close>
psr_val = cpu_reg_val PSR state;
s_val = get_S psr_val
in
if asi_int = 1 then \<comment> \<open>Forced cache miss.\<close>
\<comment> \<open>Directly write to memory.\<close>
\<comment> \<open>Assuming writing into \<open>asi = 10\<close>.\<close>
store_word_mem state addr data_w32 byte_mask (word_of_int 10)
else if asi_int = 2 then \<comment> \<open>System registers.\<close>
\<comment> \<open>See Table 19, Page 34 for System Register address map in LEON 3FT.\<close>
if uint addr = 0 then \<comment> \<open>Cache control register.\<close>
let s1 = (sys_reg_mod data_w32 CCR state) in
\<comment> \<open>Flush the instruction cache if FI of CCR is 1;\<close>
\<comment> \<open>flush the data cache if FD of CCR is 1.\<close>
Some (ccr_flush s1)
else if uint addr = 8 then \<comment> \<open>Instruction cache configuration register.\<close>
Some (sys_reg_mod data_w32 ICCR state)
else if uint addr = 12 then \<comment> \<open>Data cache configuration register.\<close>
Some (sys_reg_mod data_w32 DCCR state)
else \<comment> \<open>Invalid address.\<close>
None
else if asi_int \<in> {8,9} then \<comment> \<open>Access instruction memory.\<close>
\<comment> \<open>Write to memory. LEON3 does write-through. Both cache and the memory are updated.\<close>
let ns = add_instr_cache state addr data_w32 byte_mask in
store_word_mem ns addr data_w32 byte_mask asi
else if asi_int \<in> {10,11} then \<comment> \<open>Access data memory.\<close>
\<comment> \<open>Write to memory. LEON3 does write-through. Both cache and the memory are updated.\<close>
let ns = add_data_cache state addr data_w32 byte_mask in
store_word_mem ns addr data_w32 byte_mask asi
\<comment> \<open>We don't access instruction cache tag. i.e., \<open>asi = 12\<close>.\<close>
else if asi_int = 13 then \<comment> \<open>Write instruction cache data.\<close>
Some (add_instr_cache state addr data_w32 (0b1111::word4))
\<comment> \<open>We don't access data cache tag. i.e., asi = 14.\<close>
else if asi_int = 15 then \<comment> \<open>Write data cache data.\<close>
Some (add_data_cache state addr data_w32 (0b1111::word4))
else if asi_int = 16 then \<comment> \<open>Flush instruction cache.\<close>
Some (flush_instr_cache state)
else if asi_int = 17 then \<comment> \<open>Flush data cache.\<close>
Some (flush_data_cache state)
else if asi_int \<in> {20,21} then \<comment> \<open>MMU diagnostic cache access.\<close>
None \<comment> \<open>Not considered in this model.\<close>
else if asi_int = 24 then \<comment> \<open>Flush TLB and cache in LEON3.\<close>
\<comment> \<open>We don't consider TLB here.\<close>
Some (flush_cache_all state)
else if asi_int = 25 then \<comment> \<open>MMU registers.\<close>
\<comment> \<open>Treat MMU registers as memory addresses that are not in the main memory.\<close>
let mmu_state' = mmu_reg_mod (mmu state) addr data_w32 in
case mmu_state' of
Some mmus \<Rightarrow> Some (state\<lparr>mmu := mmus\<rparr>)
|None \<Rightarrow> None
else if asi_int = 28 then \<comment> \<open>MMU bypass.\<close>
\<comment> \<open>Write to virtual address as physical address.\<close>
\<comment> \<open>Append 0000 in front of addr.\<close>
Some (mem_mod_w32 asi (ucast addr) byte_mask data_w32 state)
else if asi_int = 29 then \<comment> \<open>MMU diagnostic access.\<close>
None \<comment> \<open>Not considered in this model.\<close>
else \<comment> \<open>Not considered in this model.\<close>
None
"
definition memory_write :: "asi_type \<Rightarrow> virtua_address \<Rightarrow> word4 \<Rightarrow> word32 \<Rightarrow>
('a) sparc_state \<Rightarrow>
('a) sparc_state option"
where
"memory_write asi addr byte_mask data_w32 state \<equiv>
let result = memory_write_asi asi addr byte_mask data_w32 state in
case result of
None \<Rightarrow> None
| Some s1 \<Rightarrow> Some (store_barrier_pending_mod False s1)"
text \<open>Monad for sequential operations over the register representation.\<close>
type_synonym ('a,'e) sparc_state_monad = "(('a) sparc_state,'e) det_monad"
text \<open>Given a word32 value and a CPU register,
write the value to the CPU register.\<close>
definition write_cpu :: "word32 \<Rightarrow> CPU_register \<Rightarrow> ('a,unit) sparc_state_monad"
where "write_cpu w cr \<equiv>
do
modify (\<lambda>s. (cpu_reg_mod w cr s));
return ()
od"
definition write_cpu_tt :: "word8 \<Rightarrow> ('a,unit) sparc_state_monad"
where "write_cpu_tt w \<equiv>
do
tbr_val \<leftarrow> gets (\<lambda>s. (cpu_reg_val TBR s));
new_tbr_val \<leftarrow> gets (\<lambda>s. (write_tt w tbr_val));
write_cpu new_tbr_val TBR;
return ()
od"
text \<open>Given a word32 value, a word4 window, and a user register,
write the value to the user register.
N.B. CWP is a 5-bit value, but we only use the last 4 bits,
since there are only 16 windows.\<close>
definition write_reg :: "word32 \<Rightarrow> ('a::len0) word \<Rightarrow> user_reg_type \<Rightarrow>
('a,unit) sparc_state_monad"
where "write_reg w win ur \<equiv>
do
modify (\<lambda>s.(user_reg_mod w win ur s));
return ()
od"
definition set_annul :: "bool \<Rightarrow> ('a,unit) sparc_state_monad"
where "set_annul b \<equiv>
do
modify (\<lambda>s. (annul_mod b s));
return ()
od"
definition set_reset_trap :: "bool \<Rightarrow> ('a,unit) sparc_state_monad"
where "set_reset_trap b \<equiv>
do
modify (\<lambda>s. (reset_trap_mod b s));
return ()
od"
definition set_exe_mode :: "bool \<Rightarrow> ('a,unit) sparc_state_monad"
where "set_exe_mode b \<equiv>
do
modify (\<lambda>s. (exe_mode_mod b s));
return ()
od"
definition set_reset_mode :: "bool \<Rightarrow> ('a,unit) sparc_state_monad"
where "set_reset_mode b \<equiv>
do
modify (\<lambda>s. (reset_mode_mod b s));
return ()
od"
definition set_err_mode :: "bool \<Rightarrow> ('a,unit) sparc_state_monad"
where "set_err_mode b \<equiv>
do
modify (\<lambda>s. (err_mode_mod b s));
return ()
od"
fun get_delayed_0 :: "(int \<times> reg_type \<times> CPU_register) list \<Rightarrow>
(int \<times> reg_type \<times> CPU_register) list"
where
"get_delayed_0 [] = []"
|
"get_delayed_0 (x # xs) =
(if fst x = 0 then x # (get_delayed_0 xs)
else get_delayed_0 xs)"
text \<open>Get a list of delayed-writes with delay 0.\<close>
definition get_delayed_write :: "delayed_write_pool \<Rightarrow> (int \<times> reg_type \<times> CPU_register) list"
where
"get_delayed_write dwp \<equiv> get_delayed_0 dwp"
definition delayed_write :: "(int \<times> reg_type \<times> CPU_register) \<Rightarrow> ('a) sparc_state \<Rightarrow>
('a) sparc_state"
where "delayed_write dw s \<equiv>
let (n,v,r) = dw in
if n = 0 then
cpu_reg_mod v r s
else s"
primrec delayed_write_all :: "(int \<times> reg_type \<times> CPU_register) list \<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "delayed_write_all [] s = s"
|"delayed_write_all (x # xs) s =
delayed_write_all xs (delayed_write x s)"
primrec delayed_pool_rm_list :: "(int \<times> reg_type \<times> CPU_register) list\<Rightarrow>
('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "delayed_pool_rm_list [] s = s"
|"delayed_pool_rm_list (x # xs) s =
delayed_pool_rm_list xs (delayed_pool_rm x s)"
definition delayed_pool_write :: "('a) sparc_state \<Rightarrow> ('a) sparc_state"
where "delayed_pool_write s \<equiv>
let dwp0 = get_delayed_pool s;
dwp1 = delayed_pool_minus dwp0;
wl = get_delayed_write dwp1;
s1 = delayed_write_all wl s;
s2 = delayed_pool_rm_list wl s1
in s2"
definition raise_trap :: "Trap \<Rightarrow> ('a,unit) sparc_state_monad"
where "raise_trap t \<equiv>
do
modify (\<lambda>s. (add_trap_set t s));
return ()
od"
end
diff --git a/thys/SPARCv8/SparcModel_MMU/Sparc_Types.thy b/thys/SPARCv8/SparcModel_MMU/Sparc_Types.thy
--- a/thys/SPARCv8/SparcModel_MMU/Sparc_Types.thy
+++ b/thys/SPARCv8/SparcModel_MMU/Sparc_Types.thy
@@ -1,791 +1,791 @@
(*
* Copyright 2016, NTU
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* Author: Zhe Hou, David Sanan.
*)
section \<open>SPARC V8 architecture CPU model\<close>
theory Sparc_Types
imports Main "../lib/WordDecl"
begin
text \<open>The following type definitions are taken from David Sanan's
definitions for SPARC machines.\<close>
type_synonym machine_word = word32
type_synonym byte = word8
type_synonym phys_address = word36
type_synonym virtua_address = word32
type_synonym page_address = word24
type_synonym offset = word12
type_synonym table_entry = word8
definition page_size :: "word32" where "page_size \<equiv> 4096"
type_synonym virtua_page_address = word20
type_synonym context_type = word8
type_synonym word_length_t1 = word_length8
type_synonym word_length_t2 = word_length6
type_synonym word_length_t3 = word_length6
type_synonym word_length_offset = word_length12
type_synonym word_length_page = word_length24
type_synonym word_length_phys_address = word_length36
type_synonym word_length_virtua_address = word_length32
type_synonym word_length_entry_type = word_length2
type_synonym word_length_machine_word = word_length32
definition length_machine_word :: "nat"
where "length_machine_word \<equiv> LENGTH(word_length_machine_word)"
text_raw \<open>\newpage\<close>
section \<open>CPU Register Definitions\<close>
text\<open>
The definitions below come from the SPARC Architecture Manual, Version 8.
The LEON3 processor has been certified SPARC V8 conformant (2005).
\<close>
definition leon3khz ::"word32"
where
"leon3khz \<equiv> 33000"
text \<open>The following type definitions for the MMU are taken from
David Sanan's definitions for the MMU.\<close>
text\<open>
The definitions below come from the UT699 LEON 3FT/SPARC V8 Microprocessor Functional Manual,
Aeroflex, June 20, 2012, p35.
\<close>
datatype MMU_register
= CR \<comment> \<open>Control Register\<close>
| CTP \<comment> \<open>ConText Pointer register\<close>
| CNR \<comment> \<open>Context Register\<close>
| FTSR \<comment> \<open>Fault Status Register\<close>
| FAR \<comment> \<open>Fault Address Register\<close>
lemma MMU_register_induct:
"P CR \<Longrightarrow> P CTP \<Longrightarrow> P CNR \<Longrightarrow> P FTSR \<Longrightarrow> P FAR
\<Longrightarrow> P x"
by (cases x) auto
lemma UNIV_MMU_register [no_atp]: "UNIV = {CR, CTP, CNR, FTSR, FAR}"
apply (safe)
apply (case_tac x)
apply (auto intro:MMU_register_induct)
done
instantiation MMU_register :: enum begin
definition "enum_MMU_register = [ CR, CTP, CNR, FTSR, FAR ]"
definition
"enum_all_MMU_register P \<longleftrightarrow> P CR \<and> P CTP \<and> P CNR \<and> P FTSR \<and> P FAR "
definition
"enum_ex_MMU_register P \<longleftrightarrow> P CR \<or> P CTP \<or> P CNR \<or> P FTSR \<or> P FAR"
instance proof
qed (simp_all only: enum_MMU_register_def enum_all_MMU_register_def
enum_ex_MMU_register_def UNIV_MMU_register, simp_all)
end
type_synonym MMU_context = "MMU_register \<Rightarrow> machine_word"
text \<open>\<open>PTE_flags\<close> is the last 8 bits of a PTE. See page 242 of SPARCv8 manual.
\<^item> C - bit 7
\<^item> M - bit 6
\<^item> R - bit 5
\<^item> ACC - bits 4 to 2
\<^item> ET - bits 1 to 0.\<close>
type_synonym PTE_flags = word8
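text \<open>For illustration, a field of these flags could be read with the same mask-and-shift
pattern used for the register accessors below; the following sketch of an ACC accessor
(bits 4 to 2) is hypothetical and not used elsewhere in this entry:\<close>
(*
definition get_PTE_ACC :: "PTE_flags \<Rightarrow> word3"
where "get_PTE_ACC pte \<equiv> ucast (((AND) pte 0b00011100) >> 2)"
*)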
text \<open>
The @{term CPU_register} datatype is an enumeration of the CPU registers defined in the SPARC V8
architecture.
\<close>
datatype CPU_register =
PSR \<comment> \<open>Processor State Register\<close>
| WIM \<comment> \<open>Window Invalid Mask\<close>
| TBR \<comment> \<open>Trap Base Register\<close>
| Y \<comment> \<open>Multiply/Divide Register\<close>
| PC \<comment> \<open>Program Counter\<close>
| nPC \<comment> \<open>next Program Counter\<close>
| DTQ \<comment> \<open>Deferred-Trap Queue\<close>
| FSR \<comment> \<open>Floating-Point State Register\<close>
| FQ \<comment> \<open>Floating-Point Deferred-Trap Queue\<close>
| CSR \<comment> \<open>Coprocessor State Register\<close>
| CQ \<comment> \<open>Coprocessor Deferred-Trap Queue\<close>
(*| CCR -- "Cache Control Register"*)
| ASR "word5" \<comment> \<open>Ancillary State Register\<close>
text \<open>The following two functions are dummies since we will not use
ASRs. Future formalisation may add more details to this.\<close>
definition privileged_ASR :: "word5 \<Rightarrow> bool"
where
"privileged_ASR r \<equiv> False
"
definition illegal_instruction_ASR :: "word5 \<Rightarrow> bool"
where
"illegal_instruction_ASR r \<equiv> False
"
definition get_tt :: "word32 \<Rightarrow> word8"
where
"get_tt tbr \<equiv>
- ucast ((bitAND tbr 0b00000000000000000000111111110000) >> 4)
+ ucast (((AND) tbr 0b00000000000000000000111111110000) >> 4)
"
text \<open>Write the tt field of the TBR register.
Return the new value of TBR.\<close>
definition write_tt :: "word8 \<Rightarrow> word32 \<Rightarrow> word32"
where
"write_tt new_tt_val tbr_val \<equiv>
- let tmp = bitAND tbr_val 0b111111111111111111111000000001111 in
- bitOR tmp (((ucast new_tt_val)::word32) << 4)
+ let tmp = (AND) tbr_val 0b111111111111111111111000000001111 in
+ (OR) tmp (((ucast new_tt_val)::word32) << 4)
"
-text \<open>Get the nth bit of WIM. This equals (bitAND WIM $2^n$).
+text \<open>Get the nth bit of WIM. This equals ((AND) WIM $2^n$).
N.B. the first bit of WIM is the 0th bit.\<close>
definition get_WIM_bit :: "nat \<Rightarrow> word32 \<Rightarrow> word1"
where
"get_WIM_bit n wim \<equiv>
let mask = ((ucast (0b1::word1))::word32) << n in
- ucast ((bitAND mask wim) >> n)
+ ucast (((AND) mask wim) >> n)
"
definition get_CWP :: "word32 \<Rightarrow> word5"
where
"get_CWP psr \<equiv>
- ucast (bitAND psr 0b00000000000000000000000000011111)
+ ucast ((AND) psr 0b00000000000000000000000000011111)
"
definition get_ET :: "word32 \<Rightarrow> word1"
where
"get_ET psr \<equiv>
- ucast ((bitAND psr 0b00000000000000000000000000100000) >> 5)
+ ucast (((AND) psr 0b00000000000000000000000000100000) >> 5)
"
definition get_PIL :: "word32 \<Rightarrow> word4"
where
"get_PIL psr \<equiv>
- ucast ((bitAND psr 0b00000000000000000000111100000000) >> 8)
+ ucast (((AND) psr 0b00000000000000000000111100000000) >> 8)
"
definition get_PS :: "word32 \<Rightarrow> word1"
where
"get_PS psr \<equiv>
- ucast ((bitAND psr 0b00000000000000000000000001000000) >> 6)
+ ucast (((AND) psr 0b00000000000000000000000001000000) >> 6)
"
definition get_S :: "word32 \<Rightarrow> word1"
where
"get_S psr \<equiv>
- \<^cancel>\<open>ucast ((bitAND psr 0b00000000000000000000000010000000) >> 7)\<close>
- if (bitAND psr (0b00000000000000000000000010000000::word32)) = 0 then 0
+ \<^cancel>\<open>ucast (((AND) psr 0b00000000000000000000000010000000) >> 7)\<close>
+ if ((AND) psr (0b00000000000000000000000010000000::word32)) = 0 then 0
else 1
"
definition get_icc_N :: "word32 \<Rightarrow> word1"
where
"get_icc_N psr \<equiv>
- ucast ((bitAND psr 0b00000000100000000000000000000000) >> 23)
+ ucast (((AND) psr 0b00000000100000000000000000000000) >> 23)
"
definition get_icc_Z :: "word32 \<Rightarrow> word1"
where
"get_icc_Z psr \<equiv>
- ucast ((bitAND psr 0b00000000010000000000000000000000) >> 22)
+ ucast (((AND) psr 0b00000000010000000000000000000000) >> 22)
"
definition get_icc_V :: "word32 \<Rightarrow> word1"
where
"get_icc_V psr \<equiv>
- ucast ((bitAND psr 0b00000000001000000000000000000000) >> 21)
+ ucast (((AND) psr 0b00000000001000000000000000000000) >> 21)
"
definition get_icc_C :: "word32 \<Rightarrow> word1"
where
"get_icc_C psr \<equiv>
- ucast ((bitAND psr 0b00000000000100000000000000000000) >> 20)
+ ucast (((AND) psr 0b00000000000100000000000000000000) >> 20)
"
definition update_S :: "word1 \<Rightarrow> word32 \<Rightarrow> word32"
where
"update_S s_val psr_val \<equiv>
- let tmp0 = bitAND psr_val 0b11111111111111111111111101111111 in
- bitOR tmp0 (((ucast s_val)::word32) << 7)
+ let tmp0 = (AND) psr_val 0b11111111111111111111111101111111 in
+ (OR) tmp0 (((ucast s_val)::word32) << 7)
"
text \<open>Update the CWP field of PSR.
Return the new value of PSR.\<close>
definition update_CWP :: "word5 \<Rightarrow> word32 \<Rightarrow> word32"
where
"update_CWP cwp_val psr_val \<equiv>
- let tmp0 = bitAND psr_val (0b11111111111111111111111111100000::word32);
+ let tmp0 = (AND) psr_val (0b11111111111111111111111111100000::word32);
s_val = ((ucast (get_S psr_val))::word1)
in
if s_val = 0 then
- bitAND (bitOR tmp0 ((ucast cwp_val)::word32)) (0b11111111111111111111111101111111::word32)
+ (AND) ((OR) tmp0 ((ucast cwp_val)::word32)) (0b11111111111111111111111101111111::word32)
else
- bitOR (bitOR tmp0 ((ucast cwp_val)::word32)) (0b00000000000000000000000010000000::word32)
+ (OR) ((OR) tmp0 ((ucast cwp_val)::word32)) (0b00000000000000000000000010000000::word32)
"
text \<open>Update the ET, CWP, and S fields of PSR.
Return the new value of PSR.\<close>
definition update_PSR_rett :: "word5 \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> word32 \<Rightarrow> word32"
where
"update_PSR_rett cwp_val et_val s_val psr_val \<equiv>
- let tmp0 = bitAND psr_val 0b11111111111111111111111101000000;
- tmp1 = bitOR tmp0 ((ucast cwp_val)::word32);
- tmp2 = bitOR tmp1 (((ucast et_val)::word32) << 5);
- tmp3 = bitOR tmp2 (((ucast s_val)::word32) << 7)
+ let tmp0 = (AND) psr_val 0b11111111111111111111111101000000;
+ tmp1 = (OR) tmp0 ((ucast cwp_val)::word32);
+ tmp2 = (OR) tmp1 (((ucast et_val)::word32) << 5);
+ tmp3 = (OR) tmp2 (((ucast s_val)::word32) << 7)
in
tmp3
"
definition update_PSR_exe_trap :: "word5 \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> word32 \<Rightarrow> word32"
where
"update_PSR_exe_trap cwp_val et_val ps_val psr_val \<equiv>
- let tmp0 = bitAND psr_val 0b11111111111111111111111110000000;
- tmp1 = bitOR tmp0 ((ucast cwp_val)::word32);
- tmp2 = bitOR tmp1 (((ucast et_val)::word32) << 5);
- tmp3 = bitOR tmp2 (((ucast ps_val)::word32) << 6)
+ let tmp0 = (AND) psr_val 0b11111111111111111111111110000000;
+ tmp1 = (OR) tmp0 ((ucast cwp_val)::word32);
+ tmp2 = (OR) tmp1 (((ucast et_val)::word32) << 5);
+ tmp3 = (OR) tmp2 (((ucast ps_val)::word32) << 6)
in
tmp3
"
text \<open>Update the N, Z, V, C fields of PSR.
Return the new value of PSR.\<close>
definition update_PSR_icc :: "word1 \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> word1 \<Rightarrow> word32 \<Rightarrow> word32"
where
"update_PSR_icc n_val z_val v_val c_val psr_val \<equiv>
let
n_val_32 = if n_val = 0 then 0
else (0b00000000100000000000000000000000::word32);
z_val_32 = if z_val = 0 then 0
else (0b00000000010000000000000000000000::word32);
v_val_32 = if v_val = 0 then 0
else (0b00000000001000000000000000000000::word32);
c_val_32 = if c_val = 0 then 0
else (0b00000000000100000000000000000000::word32);
- tmp0 = bitAND psr_val (0b11111111000011111111111111111111::word32);
- tmp1 = bitOR tmp0 n_val_32;
- tmp2 = bitOR tmp1 z_val_32;
- tmp3 = bitOR tmp2 v_val_32;
- tmp4 = bitOR tmp3 c_val_32
+ tmp0 = (AND) psr_val (0b11111111000011111111111111111111::word32);
+ tmp1 = (OR) tmp0 n_val_32;
+ tmp2 = (OR) tmp1 z_val_32;
+ tmp3 = (OR) tmp2 v_val_32;
+ tmp4 = (OR) tmp3 c_val_32
in
tmp4
"
text \<open>Update the ET, PIL fields of PSR.
Return the new value of PSR.\<close>
definition update_PSR_et_pil :: "word1 \<Rightarrow> word4 \<Rightarrow> word32 \<Rightarrow> word32"
where
"update_PSR_et_pil et pil psr_val \<equiv>
- let tmp0 = bitAND psr_val 0b111111111111111111111000011011111;
- tmp1 = bitOR tmp0 (((ucast et)::word32) << 5);
- tmp2 = bitOR tmp1 (((ucast pil)::word32) << 8)
+ let tmp0 = (AND) psr_val 0b111111111111111111111000011011111;
+ tmp1 = (OR) tmp0 (((ucast et)::word32) << 5);
+ tmp2 = (OR) tmp1 (((ucast pil)::word32) << 8)
in
tmp2
"
text \<open>
The SPARC V8 architecture is organized in windows of 32 user registers.
The data stored in a register is a 32-bit word @{term reg_type}:
\<close>
type_synonym reg_type = "word32"
text \<open>
The access to the value of a CPU register of type @{term CPU_register} is
defined by a total function @{term cpu_context}
\<close>
type_synonym cpu_context = "CPU_register \<Rightarrow> reg_type"
text \<open>
User registers are identified by the type @{term user_reg_type}, a 5-bit word.
\<close>
type_synonym user_reg_type = "word5"
definition PSR_S ::"reg_type"
where "PSR_S \<equiv> 6"
text \<open>
Each window context is defined by a total function @{term window_context} from @{term user_reg_type}
to @{term reg_type} (a 32-bit word storing the actual value of the register).
\<close>
type_synonym window_context = "user_reg_type \<Rightarrow> reg_type"
text \<open>
The number of windows is implementation dependent.
The LEON configuration modelled here has 8 register windows (cf. NWINDOWS below).
\<close>
definition NWINDOWS :: "int"
where "NWINDOWS \<equiv> 8"
text \<open>The maximum number of windows is 32 in SPARCv8.\<close>
type_synonym ('a) window_size = "'a word"
text \<open>
Finally the user context is defined by another total function @{term user_context} from
@{term window_size} to @{term window_context}. That is, the user context is a function taking as
argument a register set window and a register within that window, and it returns the value stored
in that user register.
\<close>
type_synonym ('a) user_context = "('a) window_size \<Rightarrow> window_context"
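text \<open>For example, for uc :: ('a) user_context, the value of user register r in window w
is simply uc w r; updating a single register only changes the function at that window
and register.\<close>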
datatype sys_reg =
CCR \<comment> \<open>Cache control register\<close>
|ICCR \<comment> \<open>Instruction cache configuration register\<close>
|DCCR \<comment> \<open>Data cache configuration register\<close>
type_synonym sys_context = "sys_reg \<Rightarrow> reg_type"
text\<open>
The memory model is defined below; memory accesses are tagged with an address space identifier (ASI), an 8-bit word:
\<close>
type_synonym asi_type = "word8"
text \<open>
The memory is modelled as a partial map from an ASI and a (36-bit) physical address
to an (8-bit) memory value:
\<close>
type_synonym mem_val_type = "word8"
type_synonym mem_context = "asi_type \<Rightarrow> phys_address \<Rightarrow> mem_val_type option"
type_synonym cache_tag = "word20"
type_synonym cache_line_size = "word12"
type_synonym cache_type = "(cache_tag \<times> cache_line_size)"
type_synonym cache_context = "cache_type \<Rightarrow> mem_val_type option"
text \<open>The delayed-write pool generated from write state register instructions.
Each entry pairs a remaining delay counter with the value and the CPU register to be
written (cf. delayed_pool_write above).\<close>
type_synonym delayed_write_pool = "(int \<times> reg_type \<times> CPU_register) list"
definition DELAYNUM :: "int"
where "DELAYNUM \<equiv> 0"
text \<open>Convert a set to a list. This is only well behaved for finite sets (see the lemma
below); for an infinite set no such list exists, so SOME yields an unspecified value.\<close>
definition list_of_set :: "'a set \<Rightarrow> 'a list"
where "list_of_set s = (SOME l. set l = s)"
lemma set_list_of_set: "finite s \<Longrightarrow> set (list_of_set s) = s"
unfolding list_of_set_def
by (metis (mono_tags) finite_list some_eq_ex)
type_synonym ANNUL = "bool"
type_synonym RESET_TRAP = "bool"
type_synonym EXECUTE_MODE = "bool"
type_synonym RESET_MODE = "bool"
type_synonym ERROR_MODE = "bool"
type_synonym TICC_TRAP_TYPE = "word7"
type_synonym INTERRUPT_LEVEL = "word3"
type_synonym STORE_BARRIER_PENDING = "bool"
text \<open>The processor asserts this signal to ensure that the
memory system will not process another SWAP or
LDSTUB operation to the same memory byte.\<close>
type_synonym pb_block_ldst_byte = "virtua_address \<Rightarrow> bool"
text\<open>The processor asserts this signal to ensure that the
memory system will not process another SWAP or
LDSTUB operation to the same memory word.\<close>
type_synonym pb_block_ldst_word = "virtua_address \<Rightarrow> bool"
record sparc_state_var =
annul:: ANNUL
resett:: RESET_TRAP
exe:: EXECUTE_MODE
reset:: RESET_MODE
err:: ERROR_MODE
ticc:: TICC_TRAP_TYPE
itrpt_lvl:: INTERRUPT_LEVEL
st_bar:: STORE_BARRIER_PENDING
atm_ldst_byte:: pb_block_ldst_byte
atm_ldst_word:: pb_block_ldst_word
definition get_annul :: "sparc_state_var \<Rightarrow> bool"
where "get_annul v \<equiv> annul v"
definition get_reset_trap :: "sparc_state_var \<Rightarrow> bool"
where "get_reset_trap v \<equiv> resett v"
definition get_exe_mode :: "sparc_state_var \<Rightarrow> bool"
where "get_exe_mode v \<equiv> exe v"
definition get_reset_mode :: "sparc_state_var \<Rightarrow> bool"
where "get_reset_mode v \<equiv> reset v"
definition get_err_mode :: "sparc_state_var \<Rightarrow> bool"
where "get_err_mode v \<equiv> err v"
definition get_ticc_trap_type :: "sparc_state_var \<Rightarrow> word7"
where "get_ticc_trap_type v \<equiv> ticc v"
definition get_interrupt_level :: "sparc_state_var \<Rightarrow> word3"
where "get_interrupt_level v \<equiv> itrpt_lvl v"
definition get_store_barrier_pending :: "sparc_state_var \<Rightarrow> bool"
where "get_store_barrier_pending v \<equiv> st_bar v"
definition write_annul :: "bool \<Rightarrow> sparc_state_var \<Rightarrow> sparc_state_var"
where "write_annul b v \<equiv> v\<lparr>annul := b\<rparr>"
definition write_reset_trap :: "bool \<Rightarrow> sparc_state_var \<Rightarrow> sparc_state_var"
where "write_reset_trap b v \<equiv> v\<lparr>resett := b\<rparr>"
definition write_exe_mode :: "bool \<Rightarrow> sparc_state_var \<Rightarrow> sparc_state_var"
where "write_exe_mode b v \<equiv> v\<lparr>exe := b\<rparr>"
definition write_reset_mode :: "bool \<Rightarrow> sparc_state_var \<Rightarrow> sparc_state_var"
where "write_reset_mode b v \<equiv> v\<lparr>reset := b\<rparr>"
definition write_err_mode :: "bool \<Rightarrow> sparc_state_var \<Rightarrow> sparc_state_var"
where "write_err_mode b v \<equiv> v\<lparr>err := b\<rparr>"
definition write_ticc_trap_type :: "word7 \<Rightarrow> sparc_state_var \<Rightarrow> sparc_state_var"
where "write_ticc_trap_type w v \<equiv> v\<lparr>ticc := w\<rparr>"
definition write_interrupt_level :: "word3 \<Rightarrow> sparc_state_var \<Rightarrow> sparc_state_var"
where "write_interrupt_level w v \<equiv> v\<lparr>itrpt_lvl := w\<rparr>"
definition write_store_barrier_pending :: "bool \<Rightarrow> sparc_state_var \<Rightarrow> sparc_state_var"
where "write_store_barrier_pending b v \<equiv> v\<lparr>st_bar := b\<rparr>"
text \<open>Given a word7 value, find the highest bit,
and fill the left bits to be the highest bit.\<close>
definition sign_ext7::"word7 \<Rightarrow> word32"
where
"sign_ext7 w \<equiv>
- let highest_bit = (bitAND w 0b1000000) >> 6 in
+ let highest_bit = ((AND) w 0b1000000) >> 6 in
if highest_bit = 0 then
(ucast w)::word32
- else bitOR ((ucast w)::word32) 0b11111111111111111111111110000000
+ else (OR) ((ucast w)::word32) 0b11111111111111111111111110000000
"
definition zero_ext8 :: "word8 \<Rightarrow> word32"
where
"zero_ext8 w \<equiv> (ucast w)::word32
"
text \<open>Given a word8 value, find the highest bit,
and fill the left bits to be the highest bit.\<close>
definition sign_ext8::"word8 \<Rightarrow> word32"
where
"sign_ext8 w \<equiv>
- let highest_bit = (bitAND w 0b10000000) >> 7 in
+ let highest_bit = ((AND) w 0b10000000) >> 7 in
if highest_bit = 0 then
(ucast w)::word32
- else bitOR ((ucast w)::word32) 0b11111111111111111111111100000000
+ else (OR) ((ucast w)::word32) 0b11111111111111111111111100000000
"
text \<open>Given a word13 value, find the highest bit,
and fill the left bits to be the highest bit.\<close>
definition sign_ext13::"word13 \<Rightarrow> word32"
where
"sign_ext13 w \<equiv>
- let highest_bit = (bitAND w 0b1000000000000) >> 12 in
+ let highest_bit = ((AND) w 0b1000000000000) >> 12 in
if highest_bit = 0 then
(ucast w)::word32
- else bitOR ((ucast w)::word32) 0b11111111111111111110000000000000
+ else (OR) ((ucast w)::word32) 0b11111111111111111110000000000000
"
definition zero_ext16 :: "word16 \<Rightarrow> word32"
where
"zero_ext16 w \<equiv> (ucast w)::word32
"
text \<open>Given a word16 value, find the highest bit,
and fill the left bits to be the highest bit.\<close>
definition sign_ext16::"word16 \<Rightarrow> word32"
where
"sign_ext16 w \<equiv>
- let highest_bit = (bitAND w 0b1000000000000000) >> 15 in
+ let highest_bit = ((AND) w 0b1000000000000000) >> 15 in
if highest_bit = 0 then
(ucast w)::word32
- else bitOR ((ucast w)::word32) 0b11111111111111110000000000000000
+ else (OR) ((ucast w)::word32) 0b11111111111111110000000000000000
"
text \<open>Given a word22 value, find the highest bit,
and fill the left bits to be the highest bit.\<close>
definition sign_ext22::"word22 \<Rightarrow> word32"
where
"sign_ext22 w \<equiv>
- let highest_bit = (bitAND w 0b1000000000000000000000) >> 21 in
+ let highest_bit = ((AND) w 0b1000000000000000000000) >> 21 in
if highest_bit = 0 then
(ucast w)::word32
- else bitOR ((ucast w)::word32) 0b11111111110000000000000000000000
+ else (OR) ((ucast w)::word32) 0b11111111110000000000000000000000
"
text \<open>Given a word24 value, find the highest bit,
and fill the left bits to be the highest bit.\<close>
definition sign_ext24::"word24 \<Rightarrow> word32"
where
"sign_ext24 w \<equiv>
- let highest_bit = (bitAND w 0b100000000000000000000000) >> 23 in
+ let highest_bit = ((AND) w 0b100000000000000000000000) >> 23 in
if highest_bit = 0 then
(ucast w)::word32
- else bitOR ((ucast w)::word32) 0b11111111000000000000000000000000
+ else (OR) ((ucast w)::word32) 0b11111111000000000000000000000000
"
text\<open>
Operations to be defined.
The SPARC V8 architecture is composed of the following set of instructions:
\<^item> Load Integer Instructions
\<^item> Load Floating-point Instructions
\<^item> Load Coprocessor Instructions
\<^item> Store Integer Instructions
\<^item> Store Floating-point Instructions
\<^item> Store Coprocessor Instructions
\<^item> Atomic Load-Store Unsigned Byte Instructions
\<^item> SWAP Register With Memory Instruction
\<^item> SETHI Instructions
\<^item> NOP Instruction
\<^item> Logical Instructions
\<^item> Shift Instructions
\<^item> Add Instructions
\<^item> Tagged Add Instructions
\<^item> Subtract Instructions
\<^item> Tagged Subtract Instructions
\<^item> Multiply Step Instruction
\<^item> Multiply Instructions
\<^item> Divide Instructions
\<^item> SAVE and RESTORE Instructions
\<^item> Branch on Integer Condition Codes Instructions
\<^item> Branch on Floating-point Condition Codes Instructions
\<^item> Branch on Coprocessor Condition Codes Instructions
\<^item> Call and Link Instruction
\<^item> Jump and Link Instruction
\<^item> Return from Trap Instruction
\<^item> Trap on Integer Condition Codes Instructions
\<^item> Read State Register Instructions
\<^item> Write State Register Instructions
\<^item> STBAR Instruction
\<^item> Unimplemented Instruction
\<^item> Flush Instruction Memory
\<^item> Floating-point Operate (FPop) Instructions
\<^item> Convert Integer to Floating point Instructions
\<^item> Convert Floating point to Integer Instructions
\<^item> Convert Between Floating-point Formats Instructions
\<^item> Floating-point Move Instructions
\<^item> Floating-point Square Root Instructions
\<^item> Floating-point Add and Subtract Instructions
\<^item> Floating-point Multiply and Divide Instructions
\<^item> Floating-point Compare Instructions
\<^item> Coprocessor Operate Instructions
\<close>
text \<open>The CALL instruction.\<close>
datatype call_type = CALL \<comment> \<open>Call and Link\<close>
text \<open>The SETHI instruction.\<close>
datatype sethi_type = SETHI \<comment> \<open>Set High 22 bits of r Register\<close>
text \<open>The NOP instruction.\<close>
datatype nop_type = NOP \<comment> \<open>No Operation\<close>
text \<open>The Branch on integer condition codes instructions.\<close>
datatype bicc_type =
BE \<comment> \<open>Branch on Equal\<close>
| BNE \<comment> \<open>Branch on Not Equal\<close>
| BGU \<comment> \<open>Branch on Greater Unsigned\<close>
| BLE \<comment> \<open>Branch on Less or Equal\<close>
| BL \<comment> \<open>Branch on Less\<close>
| BGE \<comment> \<open>Branch on Greater or Equal\<close>
| BNEG \<comment> \<open>Branch on Negative\<close>
| BG \<comment> \<open>Branch on Greater\<close>
| BCS \<comment> \<open>Branch on Carry Set (Less than, Unsigned)\<close>
| BLEU \<comment> \<open>Branch on Less or Equal Unsigned\<close>
| BCC \<comment> \<open>Branch on Carry Clear (Greater than or Equal, Unsigned)\<close>
| BA \<comment> \<open>Branch Always\<close>
| BN \<comment> \<open>Branch Never\<close> \<comment> \<open>Added for unconditional branches\<close>
| BPOS \<comment> \<open>Branch on Positive\<close>
| BVC \<comment> \<open>Branch on Overflow Clear\<close>
| BVS \<comment> \<open>Branch on Overflow Set\<close>
text \<open>Memory instructions. That is, load and store.\<close>
datatype load_store_type =
LDSB \<comment> \<open>Load Signed Byte\<close>
| LDUB \<comment> \<open>Load Unsigned Byte\<close>
| LDUBA \<comment> \<open>Load Unsigned Byte from Alternate space\<close>
| LDUH \<comment> \<open>Load Unsigned Halfword\<close>
| LD \<comment> \<open>Load Word\<close>
| LDA \<comment> \<open>Load Word from Alternate space\<close>
| LDD \<comment> \<open>Load Doubleword\<close>
| STB \<comment> \<open>Store Byte\<close>
| STH \<comment> \<open>Store Halfword\<close>
| ST \<comment> \<open>Store Word\<close>
| STA \<comment> \<open>Store Word into Alternate space\<close>
| STD \<comment> \<open>Store Doubleword\<close>
| LDSBA \<comment> \<open>Load Signed Byte from Alternate space\<close>
| LDSH \<comment> \<open>Load Signed Halfword\<close>
| LDSHA \<comment> \<open>Load Signed Halfword from Alternate space\<close>
| LDUHA \<comment> \<open>Load Unsigned Halfword from Alternate space\<close>
| LDDA \<comment> \<open>Load Doubleword from Alternate space\<close>
| STBA \<comment> \<open>Store Byte into Alternate space\<close>
| STHA \<comment> \<open>Store Halfword into Alternate space\<close>
| STDA \<comment> \<open>Store Doubleword into Alternate space\<close>
| LDSTUB \<comment> \<open>Atomic Load Store Unsigned Byte\<close>
| LDSTUBA \<comment> \<open>Atomic Load Store Unsigned Byte in Alternate space\<close>
| SWAP \<comment> \<open>Swap r Register with Memory\<close>
| SWAPA \<comment> \<open>Swap r Register with Memory in Alternate space\<close>
| FLUSH \<comment> \<open>Flush Instruction Memory\<close>
| STBAR \<comment> \<open>Store Barrier\<close>
text \<open>Arithmetic instructions.\<close>
datatype arith_type =
ADD \<comment> \<open>Add\<close>
| ADDcc \<comment> \<open>Add and modify icc\<close>
| ADDX \<comment> \<open>Add with Carry\<close>
| SUB \<comment> \<open>Subtract\<close>
| SUBcc \<comment> \<open>Subtract and modify icc\<close>
| SUBX \<comment> \<open>Subtract with Carry\<close>
| UMUL \<comment> \<open>Unsigned Integer Multiply\<close>
| SMUL \<comment> \<open>Signed Integer Multiply\<close>
| SMULcc \<comment> \<open>Signed Integer Multiply and modify icc\<close>
| UDIV \<comment> \<open>Unsigned Integer Divide\<close>
| UDIVcc \<comment> \<open>Unsigned Integer Divide and modify icc\<close>
| SDIV \<comment> \<open>Signed Integer Divide\<close>
| ADDXcc \<comment> \<open>Add with Carry and modify icc\<close>
| TADDcc \<comment> \<open>Tagged Add and modify icc\<close>
| TADDccTV \<comment> \<open>Tagged Add and modify icc and Trap on overflow\<close>
| SUBXcc \<comment> \<open>Subtract with Carry and modify icc\<close>
| TSUBcc \<comment> \<open>Tagged Subtract and modify icc\<close>
| TSUBccTV \<comment> \<open>Tagged Subtract and modify icc and Trap on overflow\<close>
| MULScc \<comment> \<open>Multiply Step and modify icc\<close>
| UMULcc \<comment> \<open>Unsigned Integer Multiply and modify icc\<close>
| SDIVcc \<comment> \<open>Signed Integer Divide and modify icc\<close>
text \<open>Logical instructions.\<close>
datatype logic_type =
ANDs \<comment> \<open>And\<close>
| ANDcc \<comment> \<open>And and modify icc\<close>
| ANDN \<comment> \<open>And Not\<close>
| ANDNcc \<comment> \<open>And Not and modify icc\<close>
| ORs \<comment> \<open>Inclusive-Or\<close>
| ORcc \<comment> \<open>Inclusive-Or and modify icc\<close>
| ORN \<comment> \<open>Inclusive Or Not\<close>
| XORs \<comment> \<open>Exclusive-Or\<close>
| XNOR \<comment> \<open>Exclusive-Nor\<close>
| ORNcc \<comment> \<open>Inclusive-Or Not and modify icc\<close>
| XORcc \<comment> \<open>Exclusive-Or and modify icc\<close>
| XNORcc \<comment> \<open>Exclusive-Nor and modify icc\<close>
text \<open>Shift instructions.\<close>
datatype shift_type =
SLL \<comment> \<open>Shift Left Logical\<close>
| SRL \<comment> \<open>Shift Right Logical\<close>
| SRA \<comment> \<open>Shift Right Arithmetic\<close>
text \<open>Other Control-transfer instructions.\<close>
datatype ctrl_type =
JMPL \<comment> \<open>Jump and Link\<close>
| RETT \<comment> \<open>Return from Trap\<close>
| SAVE \<comment> \<open>Save caller's window\<close>
| RESTORE \<comment> \<open>Restore caller's window\<close>
text \<open>Access state registers instructions.\<close>
datatype sreg_type =
RDASR \<comment> \<open>Read Ancillary State Register\<close>
| RDY \<comment> \<open>Read Y Register\<close>
| RDPSR \<comment> \<open>Read Processor State Register\<close>
| RDWIM \<comment> \<open>Read Window Invalid Mask Register\<close>
| RDTBR \<comment> \<open>Read Trap Base Register\<close>
| WRASR \<comment> \<open>Write Ancillary State Register\<close>
| WRY \<comment> \<open>Write Y Register\<close>
| WRPSR \<comment> \<open>Write Processor State Register\<close>
| WRWIM \<comment> \<open>Write Window Invalid Mask Register\<close>
| WRTBR \<comment> \<open>Write Trap Base Register\<close>
text \<open>Unimplemented instruction.\<close>
datatype uimp_type = UNIMP \<comment> \<open>Unimplemented\<close>
text \<open>Trap on integer condition code instructions.\<close>
datatype ticc_type =
TA \<comment> \<open>Trap Always\<close>
| TN \<comment> \<open>Trap Never\<close>
| TNE \<comment> \<open>Trap on Not Equal\<close>
| TE \<comment> \<open>Trap on Equal\<close>
| TG \<comment> \<open>Trap on Greater\<close>
| TLE \<comment> \<open>Trap on Less or Equal\<close>
| TGE \<comment> \<open>Trap on Greater or Equal\<close>
| TL \<comment> \<open>Trap on Less\<close>
| TGU \<comment> \<open>Trap on Greater Unsigned\<close>
| TLEU \<comment> \<open>Trap on Less or Equal Unsigned\<close>
| TCC \<comment> \<open>Trap on Carry Clear (Greater than or Equal, Unsigned)\<close>
| TCS \<comment> \<open>Trap on Carry Set (Less Than, Unsigned)\<close>
| TPOS \<comment> \<open>Trap on Positive\<close>
| TNEG \<comment> \<open>Trap on Negative\<close>
| TVC \<comment> \<open>Trap on Overflow Clear\<close>
| TVS \<comment> \<open>Trap on Overflow Set\<close>
datatype sparc_operation =
call_type call_type
| sethi_type sethi_type
| nop_type nop_type
| bicc_type bicc_type
| load_store_type load_store_type
| arith_type arith_type
| logic_type logic_type
| shift_type shift_type
| ctrl_type ctrl_type
| sreg_type sreg_type
| uimp_type uimp_type
| ticc_type ticc_type
datatype Trap =
reset
|data_store_error
|instruction_access_MMU_miss
|instruction_access_error
|r_register_access_error
|instruction_access_exception
|privileged_instruction
|illegal_instruction
|unimplemented_FLUSH
|watchpoint_detected
|fp_disabled
|cp_disabled
|window_overflow
|window_underflow
|mem_address_not_aligned
|fp_exception
|cp_exception
|data_access_error
|data_access_MMU_miss
|data_access_exception
|tag_overflow
|division_by_zero
|trap_instruction
|interrupt_level_n
datatype Exception =
\<comment> \<open>The following are processor states that are not in the instruction model,\<close>
\<comment> \<open>but we MAY want to deal with these from hardware perspective.\<close>
\<^cancel>\<open>|execute_mode\<close>
\<^cancel>\<open>|reset_mode\<close>
\<^cancel>\<open>|error_mode\<close>
\<comment> \<open>The following are self-defined exceptions.\<close>
invalid_cond_f2
|invalid_op2_f2
|illegal_instruction2 \<comment> \<open>when \<open>i = 0\<close> for load/store not from alternate space\<close>
|invalid_op3_f3_op11
|case_impossible
|invalid_op3_f3_op10
|invalid_op_f3
|unsupported_instruction
|fetch_instruction_error
|invalid_trap_cond
end
diff --git a/thys/Saturation_Framework/Calculi.thy b/thys/Saturation_Framework/Calculi.thy
--- a/thys/Saturation_Framework/Calculi.thy
+++ b/thys/Saturation_Framework/Calculi.thy
@@ -1,920 +1,950 @@
(* Title: Calculi of the Saturation Framework
* Author: Sophie Tourret <stourret at mpi-inf.mpg.de>, 2018-2020 *)
section \<open>Calculi\<close>
text \<open>This section covers Sections 2.2 to 2.4 of the report. It
includes results on calculi equipped with a redundancy criterion or with a
family of redundancy criteria, as well as a proof that various notions of
redundancy are equivalent.\<close>
theory Calculi
imports
Consequence_Relations_and_Inference_Systems
Ordered_Resolution_Prover.Lazy_List_Liminf
Ordered_Resolution_Prover.Lazy_List_Chain
begin
subsection \<open>Calculi with a Redundancy Criterion\<close>
locale calculus_with_red_crit = inference_system Inf + consequence_relation Bot entails
for
Bot :: "'f set" and
Inf :: \<open>'f inference set\<close> and
entails :: "'f set \<Rightarrow> 'f set \<Rightarrow> bool" (infix "\<Turnstile>" 50)
+ fixes
Red_Inf :: "'f set \<Rightarrow> 'f inference set" and
Red_F :: "'f set \<Rightarrow> 'f set"
assumes
Red_Inf_to_Inf: "Red_Inf N \<subseteq> Inf" and
Red_F_Bot: "B \<in> Bot \<Longrightarrow> N \<Turnstile> {B} \<Longrightarrow> N - Red_F N \<Turnstile> {B}" and
Red_F_of_subset: "N \<subseteq> N' \<Longrightarrow> Red_F N \<subseteq> Red_F N'" and
Red_Inf_of_subset: "N \<subseteq> N' \<Longrightarrow> Red_Inf N \<subseteq> Red_Inf N'" and
Red_F_of_Red_F_subset: "N' \<subseteq> Red_F N \<Longrightarrow> Red_F N \<subseteq> Red_F (N - N')" and
Red_Inf_of_Red_F_subset: "N' \<subseteq> Red_F N \<Longrightarrow> Red_Inf N \<subseteq> Red_Inf (N - N')" and
Red_Inf_of_Inf_to_N: "\<iota> \<in> Inf \<Longrightarrow> concl_of \<iota> \<in> N \<Longrightarrow> \<iota> \<in> Red_Inf N"
begin
lemma Red_Inf_of_Inf_to_N_subset: "{\<iota> \<in> Inf. concl_of \<iota> \<in> N} \<subseteq> Red_Inf N"
using Red_Inf_of_Inf_to_N by blast
(* lem:red-concl-implies-red-inf *)
lemma red_concl_to_red_inf:
assumes
i_in: "\<iota> \<in> Inf" and
concl: "concl_of \<iota> \<in> Red_F N"
shows "\<iota> \<in> Red_Inf N"
proof -
have "\<iota> \<in> Red_Inf (Red_F N)" by (simp add: Red_Inf_of_Inf_to_N i_in concl)
then have i_in_Red: "\<iota> \<in> Red_Inf (N \<union> Red_F N)" by (simp add: Red_Inf_of_Inf_to_N concl i_in)
have red_n_subs: "Red_F N \<subseteq> Red_F (N \<union> Red_F N)" by (simp add: Red_F_of_subset)
then have "\<iota> \<in> Red_Inf ((N \<union> Red_F N) - (Red_F N - N))" using Red_Inf_of_Red_F_subset i_in_Red
by (meson Diff_subset subsetCE subset_trans)
then show ?thesis by (metis Diff_cancel Diff_subset Un_Diff Un_Diff_cancel contra_subsetD
calculus_with_red_crit.Red_Inf_of_subset calculus_with_red_crit_axioms sup_bot.right_neutral)
qed
definition saturated :: "'f set \<Rightarrow> bool" where
- "saturated N \<equiv> Inf_from N \<subseteq> Red_Inf N"
+ "saturated N \<longleftrightarrow> Inf_from N \<subseteq> Red_Inf N"
definition reduc_saturated :: "'f set \<Rightarrow> bool" where
-"reduc_saturated N \<equiv> Inf_from (N - Red_F N) \<subseteq> Red_Inf N"
+ "reduc_saturated N \<longleftrightarrow> Inf_from (N - Red_F N) \<subseteq> Red_Inf N"
lemma Red_Inf_without_red_F:
"Red_Inf (N - Red_F N) = Red_Inf N"
using Red_Inf_of_subset [of "N - Red_F N" N]
and Red_Inf_of_Red_F_subset [of "Red_F N" N] by blast
lemma saturated_without_red_F:
assumes saturated: "saturated N"
shows "saturated (N - Red_F N)"
proof -
have "Inf_from (N - Red_F N) \<subseteq> Inf_from N" unfolding Inf_from_def by auto
also have "Inf_from N \<subseteq> Red_Inf N" using saturated unfolding saturated_def by auto
also have "Red_Inf N \<subseteq> Red_Inf (N - Red_F N)" using Red_Inf_of_Red_F_subset by auto
finally have "Inf_from (N - Red_F N) \<subseteq> Red_Inf (N - Red_F N)" by auto
then show ?thesis unfolding saturated_def by auto
qed
-definition Sup_Red_Inf_llist :: "'f set llist \<Rightarrow> 'f inference set" where
- "Sup_Red_Inf_llist D = (\<Union>i \<in> {i. enat i < llength D}. Red_Inf (lnth D i))"
-
-lemma Sup_Red_Inf_unit: "Sup_Red_Inf_llist (LCons X LNil) = Red_Inf X"
- using Sup_Red_Inf_llist_def enat_0_iff(1) by simp
-
definition fair :: "'f set llist \<Rightarrow> bool" where
- "fair D \<equiv> Inf_from (Liminf_llist D) \<subseteq> Sup_Red_Inf_llist D"
+ "fair D \<longleftrightarrow> Inf_from (Liminf_llist D) \<subseteq> Sup_llist (lmap Red_Inf D)"
inductive "derive" :: "'f set \<Rightarrow> 'f set \<Rightarrow> bool" (infix "\<rhd>Red" 50) where
derive: "M - N \<subseteq> Red_F N \<Longrightarrow> M \<rhd>Red N"
lemma gt_Max_notin: \<open>finite A \<Longrightarrow> A \<noteq> {} \<Longrightarrow> x > Max A \<Longrightarrow> x \<notin> A\<close> by auto
lemma equiv_Sup_Liminf:
assumes
in_Sup: "C \<in> Sup_llist D" and
not_in_Liminf: "C \<notin> Liminf_llist D"
shows
- "\<exists> i \<in> {i. enat (Suc i) < llength D}. C \<in> lnth D i - lnth D (Suc i)"
+ "\<exists>i \<in> {i. enat (Suc i) < llength D}. C \<in> lnth D i - lnth D (Suc i)"
proof -
obtain i where C_D_i: "C \<in> Sup_upto_llist D i" and "i < llength D"
using elem_Sup_llist_imp_Sup_upto_llist in_Sup by fast
then obtain j where j: "j \<ge> i \<and> enat j < llength D \<and> C \<notin> lnth D j" using not_in_Liminf
unfolding Sup_llist_def chain_def Liminf_llist_def by auto
obtain k where k: "C \<in> lnth D k" "enat k < llength D" "k \<le> i" using C_D_i
unfolding Sup_upto_llist_def by auto
let ?S = "{i. i < j \<and> i \<ge> k \<and> C \<in> lnth D i}"
- define l where "l \<equiv> Max ?S"
+ define l where "l = Max ?S"
have \<open>k \<in> {i. i < j \<and> k \<le> i \<and> C \<in> lnth D i}\<close> using k j by (auto simp: order.order_iff_strict)
then have nempty: "{i. i < j \<and> k \<le> i \<and> C \<in> lnth D i} \<noteq> {}" by auto
- then have l_prop: "l < j \<and> l \<ge> k \<and> C \<in> (lnth D l)" using Max_in[of ?S, OF _ nempty] unfolding l_def by auto
+ then have l_prop: "l < j \<and> l \<ge> k \<and> C \<in> lnth D l" using Max_in[of ?S, OF _ nempty] unfolding l_def by auto
then have "C \<in> lnth D l - lnth D (Suc l)" using j gt_Max_notin[OF _ nempty, of "Suc l"]
unfolding l_def[symmetric] by (auto intro: Suc_lessI)
then show ?thesis
proof (rule bexI[of _ l])
show "l \<in> {i. enat (Suc i) < llength D}"
using l_prop j by (clarify, metis Suc_leI dual_order.order_iff_strict enat_ord_simps(2) less_trans)
qed
qed
(* lem:nonpersistent-is-redundant *)
lemma Red_in_Sup:
assumes deriv: "chain (\<rhd>Red) D"
shows "Sup_llist D - Liminf_llist D \<subseteq> Red_F (Sup_llist D)"
proof
fix C
assume C_in_subset: "C \<in> Sup_llist D - Liminf_llist D"
{
fix C i
assume
in_ith_elem: "C \<in> lnth D i - lnth D (Suc i)" and
i: "enat (Suc i) < llength D"
have "lnth D i \<rhd>Red lnth D (Suc i)" using i deriv in_ith_elem chain_lnth_rel by auto
then have "C \<in> Red_F (lnth D (Suc i))" using in_ith_elem derive.cases by blast
then have "C \<in> Red_F (Sup_llist D)" using Red_F_of_subset
by (meson contra_subsetD i lnth_subset_Sup_llist)
}
then show "C \<in> Red_F (Sup_llist D)" using equiv_Sup_Liminf[of C] C_in_subset by fast
qed
(* lem:redundant-remains-redundant-during-run 1/2 *)
lemma Red_Inf_subset_Liminf:
assumes deriv: \<open>chain (\<rhd>Red) D\<close> and
i: \<open>enat i < llength D\<close>
shows \<open>Red_Inf (lnth D i) \<subseteq> Red_Inf (Liminf_llist D)\<close>
proof -
have Sup_in_diff: \<open>Red_Inf (Sup_llist D) \<subseteq> Red_Inf (Sup_llist D - (Sup_llist D - Liminf_llist D))\<close>
using Red_Inf_of_Red_F_subset[OF Red_in_Sup] deriv by auto
also have \<open>Sup_llist D - (Sup_llist D - Liminf_llist D) = Liminf_llist D\<close>
by (simp add: Liminf_llist_subset_Sup_llist double_diff)
then have Red_Inf_Sup_in_Liminf: \<open>Red_Inf (Sup_llist D) \<subseteq> Red_Inf (Liminf_llist D)\<close> using Sup_in_diff by auto
have \<open>lnth D i \<subseteq> Sup_llist D\<close> unfolding Sup_llist_def using i by blast
then have "Red_Inf (lnth D i) \<subseteq> Red_Inf (Sup_llist D)" using Red_Inf_of_subset
unfolding Sup_llist_def by auto
then show ?thesis using Red_Inf_Sup_in_Liminf by auto
qed
(* lem:redundant-remains-redundant-during-run 2/2 *)
lemma Red_F_subset_Liminf:
assumes deriv: \<open>chain (\<rhd>Red) D\<close> and
i: \<open>enat i < llength D\<close>
shows \<open>Red_F (lnth D i) \<subseteq> Red_F (Liminf_llist D)\<close>
proof -
have Sup_in_diff: \<open>Red_F (Sup_llist D) \<subseteq> Red_F (Sup_llist D - (Sup_llist D - Liminf_llist D))\<close>
using Red_F_of_Red_F_subset[OF Red_in_Sup] deriv by auto
also have \<open>Sup_llist D - (Sup_llist D - Liminf_llist D) = Liminf_llist D\<close>
by (simp add: Liminf_llist_subset_Sup_llist double_diff)
then have Red_F_Sup_in_Liminf: \<open>Red_F (Sup_llist D) \<subseteq> Red_F (Liminf_llist D)\<close>
using Sup_in_diff by auto
have \<open>lnth D i \<subseteq> Sup_llist D\<close> unfolding Sup_llist_def using i by blast
then have "Red_F (lnth D i) \<subseteq> Red_F (Sup_llist D)" using Red_F_of_subset
unfolding Sup_llist_def by auto
then show ?thesis using Red_F_Sup_in_Liminf by auto
qed
(* lem:N-i-is-persistent-or-redundant *)
lemma i_in_Liminf_or_Red_F:
assumes
deriv: \<open>chain (\<rhd>Red) D\<close> and
i: \<open>enat i < llength D\<close>
shows \<open>lnth D i \<subseteq> Red_F (Liminf_llist D) \<union> Liminf_llist D\<close>
proof (rule,rule)
fix C
assume C: \<open>C \<in> lnth D i\<close>
and C_not_Liminf: \<open>C \<notin> Liminf_llist D\<close>
have \<open>C \<in> Sup_llist D\<close> unfolding Sup_llist_def using C i by auto
then obtain j where j: \<open>C \<in> lnth D j - lnth D (Suc j)\<close> \<open>enat (Suc j) < llength D\<close>
using equiv_Sup_Liminf[of C D] C_not_Liminf by auto
then have \<open>C \<in> Red_F (lnth D (Suc j))\<close>
using deriv by (meson chain_lnth_rel contra_subsetD derive.cases)
then show \<open>C \<in> Red_F (Liminf_llist D)\<close> using Red_F_subset_Liminf[of D "Suc j"] deriv j(2) by blast
qed
(* lem:fairness-implies-saturation *)
lemma fair_implies_Liminf_saturated:
assumes
deriv: \<open>chain (\<rhd>Red) D\<close> and
fair: \<open>fair D\<close>
shows \<open>saturated (Liminf_llist D)\<close>
unfolding saturated_def
proof
fix \<iota>
assume \<iota>: \<open>\<iota> \<in> Inf_from (Liminf_llist D)\<close>
- have \<open>\<iota> \<in> Sup_Red_Inf_llist D\<close> using fair \<iota> unfolding fair_def by auto
+ have \<open>\<iota> \<in> Sup_llist (lmap Red_Inf D)\<close> using fair \<iota> unfolding fair_def by auto
then obtain i where i: \<open>enat i < llength D\<close> \<open>\<iota> \<in> Red_Inf (lnth D i)\<close>
- unfolding Sup_Red_Inf_llist_def by auto
+ unfolding Sup_llist_def by auto
then show \<open>\<iota> \<in> Red_Inf (Liminf_llist D)\<close>
using deriv i_in_Liminf_or_Red_F[of D i] Red_Inf_subset_Liminf by blast
qed
end
locale static_refutational_complete_calculus = calculus_with_red_crit +
assumes static_refutational_complete: "B \<in> Bot \<Longrightarrow> saturated N \<Longrightarrow> N \<Turnstile> {B} \<Longrightarrow> \<exists>B'\<in>Bot. B' \<in> N"
+begin
+
+lemma dynamic_refutational_complete_Liminf:
+ fixes B D
+ assumes
+ bot_elem: \<open>B \<in> Bot\<close> and
+ deriv: \<open>chain (\<rhd>Red) D\<close> and
+ fair: \<open>fair D\<close> and
+ unsat: \<open>lnth D 0 \<Turnstile> {B}\<close>
+ shows \<open>\<exists>B'\<in>Bot. B' \<in> Liminf_llist D\<close>
+proof -
+ have non_empty: \<open>\<not> lnull D\<close> using chain_not_lnull[OF deriv] .
+ have subs: \<open>lnth D 0 \<subseteq> Sup_llist D\<close>
+ using lhd_subset_Sup_llist[of D] non_empty by (simp add: lhd_conv_lnth)
+ have \<open>Sup_llist D \<Turnstile> {B}\<close>
+ using unsat subset_entailed[OF subs] entails_trans[of "Sup_llist D" "lnth D 0"] by auto
+ then have Sup_no_Red: \<open>Sup_llist D - Red_F (Sup_llist D) \<Turnstile> {B}\<close>
+ using bot_elem Red_F_Bot by auto
+ have Sup_no_Red_in_Liminf: \<open>Sup_llist D - Red_F (Sup_llist D) \<subseteq> Liminf_llist D\<close>
+ using deriv Red_in_Sup by auto
+ have Liminf_entails_Bot: \<open>Liminf_llist D \<Turnstile> {B}\<close>
+ using Sup_no_Red subset_entailed[OF Sup_no_Red_in_Liminf] entails_trans by blast
+ have \<open>saturated (Liminf_llist D)\<close>
+ using deriv fair fair_implies_Liminf_saturated unfolding saturated_def by auto
+ then show ?thesis
+ using bot_elem static_refutational_complete Liminf_entails_Bot by auto
+qed
+
+end
locale dynamic_refutational_complete_calculus = calculus_with_red_crit +
assumes
- dynamic_refutational_complete: "B \<in> Bot \<Longrightarrow> chain (\<rhd>Red) D \<Longrightarrow> fair D
- \<Longrightarrow> lnth D 0 \<Turnstile> {B} \<Longrightarrow> \<exists>i \<in> {i. enat i < llength D}. \<exists>B'\<in>Bot. B' \<in> lnth D i"
+ dynamic_refutational_complete: "B \<in> Bot \<Longrightarrow> chain (\<rhd>Red) D \<Longrightarrow> fair D \<Longrightarrow> lnth D 0 \<Turnstile> {B} \<Longrightarrow>
+ \<exists>i \<in> {i. enat i < llength D}. \<exists>B'\<in>Bot. B' \<in> lnth D i"
begin
(* lem:dynamic-ref-compl-implies-static *)
sublocale static_refutational_complete_calculus
proof
fix B N
assume
bot_elem: \<open>B \<in> Bot\<close> and
saturated_N: "saturated N" and
refut_N: "N \<Turnstile> {B}"
define D where "D = LCons N LNil"
have[simp]: \<open>\<not> lnull D\<close> by (auto simp: D_def)
have deriv_D: \<open>chain (\<rhd>Red) D\<close> by (simp add: chain.chain_singleton D_def)
have liminf_is_N: "Liminf_llist D = N" by (simp add: D_def Liminf_llist_LCons)
have head_D: "N = lnth D 0" by (simp add: D_def)
- have "Sup_Red_Inf_llist D = Red_Inf N" by (simp add: D_def Sup_Red_Inf_unit)
+ have "Sup_llist (lmap Red_Inf D) = Red_Inf N" by (simp add: D_def)
then have fair_D: "fair D" using saturated_N by (simp add: fair_def saturated_def liminf_is_N)
- obtain i B' where B'_is_bot: \<open>B' \<in> Bot\<close> and B'_in: "B' \<in> (lnth D i)" and \<open>i < llength D\<close>
+ obtain i B' where B'_is_bot: \<open>B' \<in> Bot\<close> and B'_in: "B' \<in> lnth D i" and \<open>i < llength D\<close>
using dynamic_refutational_complete[of B D] bot_elem fair_D head_D saturated_N deriv_D refut_N
by auto
then have "i = 0"
by (auto simp: D_def enat_0_iff)
show \<open>\<exists>B'\<in>Bot. B' \<in> N\<close>
using B'_is_bot B'_in unfolding \<open>i = 0\<close> head_D[symmetric] by auto
qed
end
(* lem:static-ref-compl-implies-dynamic *)
sublocale static_refutational_complete_calculus \<subseteq> dynamic_refutational_complete_calculus
proof
fix B D
assume
- bot_elem: \<open>B \<in> Bot\<close> and
- deriv: \<open>chain (\<rhd>Red) D\<close> and
- fair: \<open>fair D\<close> and
- unsat: \<open>lnth D 0 \<Turnstile> {B}\<close>
- have non_empty: \<open>\<not> lnull D\<close> using chain_not_lnull[OF deriv] .
- have subs: \<open>lnth D 0 \<subseteq> Sup_llist D\<close>
- using lhd_subset_Sup_llist[of D] non_empty by (simp add: lhd_conv_lnth)
- have \<open>Sup_llist D \<Turnstile> {B}\<close>
- using unsat subset_entailed[OF subs] entails_trans[of "Sup_llist D" "lnth D 0"] by auto
- then have Sup_no_Red: \<open>Sup_llist D - Red_F (Sup_llist D) \<Turnstile> {B}\<close>
- using bot_elem Red_F_Bot by auto
- have Sup_no_Red_in_Liminf: \<open>Sup_llist D - Red_F (Sup_llist D) \<subseteq> Liminf_llist D\<close>
- using deriv Red_in_Sup by auto
- have Liminf_entails_Bot: \<open>Liminf_llist D \<Turnstile> {B}\<close>
- using Sup_no_Red subset_entailed[OF Sup_no_Red_in_Liminf] entails_trans by blast
- have \<open>saturated (Liminf_llist D)\<close>
- using deriv fair fair_implies_Liminf_saturated unfolding saturated_def by auto
-
- then have \<open>\<exists>B'\<in>Bot. B' \<in> (Liminf_llist D)\<close>
- using bot_elem static_refutational_complete Liminf_entails_Bot by auto
- then show \<open>\<exists>i\<in>{i. enat i < llength D}. \<exists>B'\<in>Bot. B' \<in> lnth D i\<close>
- unfolding Liminf_llist_def by auto
+ \<open>B \<in> Bot\<close> and
+ \<open>chain (\<rhd>Red) D\<close> and
+ \<open>fair D\<close> and
+ \<open>lnth D 0 \<Turnstile> {B}\<close>
+ then have \<open>\<exists>B'\<in>Bot. B' \<in> Liminf_llist D\<close>
+ by (rule dynamic_refutational_complete_Liminf)
+ then show \<open>\<exists>i\<in>{i. enat i < llength D}. \<exists>B'\<in>Bot. B' \<in> lnth D i\<close>
+ unfolding Liminf_llist_def by auto
qed
subsection \<open>Calculi with a Family of Redundancy Criteria\<close>
-locale calculus_with_red_crit_family = inference_system Inf + consequence_relation_family Bot Q entails_q
+locale calculus_with_red_crit_family =
+ inference_system Inf + consequence_relation_family Bot Q entails_q
for
Bot :: "'f set" and
Inf :: \<open>'f inference set\<close> and
- Q :: "'q itself" and
- entails_q :: "'q \<Rightarrow> ('f set \<Rightarrow> 'f set \<Rightarrow> bool)"
+ Q :: "'q set" and
+ entails_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set \<Rightarrow> bool"
+ fixes
- Red_Inf_q :: "'q \<Rightarrow> ('f set \<Rightarrow> 'f inference set)" and
- Red_F_q :: "'q \<Rightarrow> ('f set \<Rightarrow> 'f set)"
+ Red_Inf_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f inference set" and
+ Red_F_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set"
assumes
- all_red_crit: "calculus_with_red_crit Bot Inf (entails_q q) (Red_Inf_q q) (Red_F_q q)"
+ Q_nonempty: "Q \<noteq> {}" and
+ all_red_crit: "\<forall>q \<in> Q. calculus_with_red_crit Bot Inf (entails_q q) (Red_Inf_q q) (Red_F_q q)"
begin
definition Red_Inf_Q :: "'f set \<Rightarrow> 'f inference set" where
- "Red_Inf_Q N = \<Inter> {X N |X. X \<in> (Red_Inf_q ` UNIV)}"
+ "Red_Inf_Q N = (\<Inter>q \<in> Q. Red_Inf_q q N)"
definition Red_F_Q :: "'f set \<Rightarrow> 'f set" where
- "Red_F_Q N = \<Inter> {X N |X. X \<in> (Red_F_q ` UNIV)}"
+ "Red_F_Q N = (\<Inter>q \<in> Q. Red_F_q q N)"
(* lem:intersection-of-red-crit *)
-lemma inter_red_crit: "calculus_with_red_crit Bot Inf entails_Q Red_Inf_Q Red_F_Q"
+sublocale calculus_with_red_crit Bot Inf entails_Q Red_Inf_Q Red_F_Q
unfolding calculus_with_red_crit_def calculus_with_red_crit_axioms_def
proof (intro conjI)
show "consequence_relation Bot entails_Q"
using intersect_cons_rel_family .
next
show "\<forall>N. Red_Inf_Q N \<subseteq> Inf"
unfolding Red_Inf_Q_def
proof
fix N
- show "\<Inter> {X N |X. X \<in> Red_Inf_q ` UNIV} \<subseteq> Inf"
+ show "(\<Inter>q \<in> Q. Red_Inf_q q N) \<subseteq> Inf"
proof (intro Inter_subset)
fix Red_Infs
- assume one_red_inf: "Red_Infs \<in> {X N |X. X \<in> Red_Inf_q ` UNIV}"
- show "Red_Infs \<subseteq> Inf" using one_red_inf
- proof
- assume "\<exists>Red_Inf_qi. Red_Infs = Red_Inf_qi N \<and> Red_Inf_qi \<in> Red_Inf_q ` UNIV"
- then obtain Red_Inf_qi where
- red_infs_def: "Red_Infs = Red_Inf_qi N" and red_inf_qi_in: "Red_Inf_qi \<in> Red_Inf_q ` UNIV"
- by blast
- obtain qi where red_inf_qi_def: "Red_Inf_qi = Red_Inf_q qi" and qi_in: "qi \<in> UNIV"
- using red_inf_qi_in by blast
- show "Red_Infs \<subseteq> Inf"
- using all_red_crit calculus_with_red_crit.Red_Inf_to_Inf red_inf_qi_def
- red_infs_def by blast
- qed
+ assume one_red_inf: "Red_Infs \<in> (\<lambda>q. Red_Inf_q q N) ` Q"
+ show "Red_Infs \<subseteq> Inf"
+ using one_red_inf all_red_crit calculus_with_red_crit.Red_Inf_to_Inf by blast
next
- show "{X N |X. X \<in> Red_Inf_q ` UNIV} \<noteq> {}" by blast
+ show "(\<lambda>q. Red_Inf_q q N) ` Q \<noteq> {}"
+ using Q_nonempty by blast
qed
qed
next
show "\<forall>B N. B \<in> Bot \<longrightarrow> N \<Turnstile>Q {B} \<longrightarrow> N - Red_F_Q N \<Turnstile>Q {B}"
proof (intro allI impI)
fix B N
assume
B_in: "B \<in> Bot" and
N_unsat: "N \<Turnstile>Q {B}"
show "N - Red_F_Q N \<Turnstile>Q {B}" unfolding entails_Q_def Red_F_Q_def
proof
fix qi
+ assume qi_in: "qi \<in> Q"
define entails_qi (infix "\<Turnstile>qi" 50) where "entails_qi = entails_q qi"
have cons_rel_qi: "consequence_relation Bot entails_qi"
- unfolding entails_qi_def using all_red_crit calculus_with_red_crit.axioms(1) by blast
+ unfolding entails_qi_def using qi_in all_red_crit calculus_with_red_crit.axioms(1) by blast
define Red_F_qi where "Red_F_qi = Red_F_q qi"
have red_qi_in_Q: "Red_F_Q N \<subseteq> Red_F_qi N"
- unfolding Red_F_Q_def Red_F_qi_def using image_iff by blast
- then have "N - (Red_F_qi N) \<subseteq> N - (Red_F_Q N)" by blast
- then have entails_1: "(N - Red_F_Q N) \<Turnstile>qi (N - Red_F_qi N)"
- using all_red_crit
+ unfolding Red_F_Q_def Red_F_qi_def using qi_in image_iff by blast
+ then have "N - Red_F_qi N \<subseteq> N - Red_F_Q N" by blast
+ then have entails_1: "N - Red_F_Q N \<Turnstile>qi N - Red_F_qi N"
+ using qi_in all_red_crit
unfolding calculus_with_red_crit_def consequence_relation_def entails_qi_def by metis
- have N_unsat_qi: "N \<Turnstile>qi {B}" using N_unsat unfolding entails_qi_def entails_Q_def by simp
- then have N_unsat_qi: "(N - Red_F_qi N) \<Turnstile>qi {B}"
- using all_red_crit Red_F_qi_def calculus_with_red_crit.Red_F_Bot[OF _ B_in] entails_qi_def
+ have N_unsat_qi: "N \<Turnstile>qi {B}" using qi_in N_unsat unfolding entails_qi_def entails_Q_def
+ by simp
+ then have N_unsat_qi: "N - Red_F_qi N \<Turnstile>qi {B}"
+ using qi_in all_red_crit Red_F_qi_def calculus_with_red_crit.Red_F_Bot[OF _ B_in]
+ entails_qi_def
by fastforce
- show "(N - \<Inter> {X N |X. X \<in> Red_F_q ` UNIV}) \<Turnstile>qi {B}"
+ show "N - (\<Inter>q \<in> Q. Red_F_q q N) \<Turnstile>qi {B}"
using consequence_relation.entails_trans[OF cons_rel_qi entails_1 N_unsat_qi]
unfolding Red_F_Q_def .
qed
qed
next
show "\<forall>N1 N2. N1 \<subseteq> N2 \<longrightarrow> Red_F_Q N1 \<subseteq> Red_F_Q N2"
proof (intro allI impI)
fix N1 :: "'f set"
and N2 :: "'f set"
assume
N1_in_N2: "N1 \<subseteq> N2"
show "Red_F_Q N1 \<subseteq> Red_F_Q N2"
proof
- fix x
- assume x_in: "x \<in> Red_F_Q N1"
- then have "\<forall>qi. x \<in> Red_F_q qi N1" unfolding Red_F_Q_def by blast
- then have "\<forall>qi. x \<in> Red_F_q qi N2"
- using N1_in_N2 all_red_crit calculus_with_red_crit.axioms(2) calculus_with_red_crit.Red_F_of_subset by blast
- then show "x \<in> Red_F_Q N2" unfolding Red_F_Q_def by blast
+ fix C
+ assume "C \<in> Red_F_Q N1"
+ then have "\<forall>qi \<in> Q. C \<in> Red_F_q qi N1" unfolding Red_F_Q_def by blast
+ then have "\<forall>qi \<in> Q. C \<in> Red_F_q qi N2"
+ using N1_in_N2 all_red_crit calculus_with_red_crit.axioms(2)
+ calculus_with_red_crit.Red_F_of_subset by blast
+ then show "C \<in> Red_F_Q N2" unfolding Red_F_Q_def by blast
qed
qed
next
show "\<forall>N1 N2. N1 \<subseteq> N2 \<longrightarrow> Red_Inf_Q N1 \<subseteq> Red_Inf_Q N2"
proof (intro allI impI)
fix N1 :: "'f set"
and N2 :: "'f set"
assume
N1_in_N2: "N1 \<subseteq> N2"
show "Red_Inf_Q N1 \<subseteq> Red_Inf_Q N2"
proof
- fix x
- assume x_in: "x \<in> Red_Inf_Q N1"
- then have "\<forall>qi. x \<in> Red_Inf_q qi N1" unfolding Red_Inf_Q_def by blast
- then have "\<forall>qi. x \<in> Red_Inf_q qi N2"
- using N1_in_N2 all_red_crit calculus_with_red_crit.axioms(2) calculus_with_red_crit.Red_Inf_of_subset by blast
- then show "x \<in> Red_Inf_Q N2" unfolding Red_Inf_Q_def by blast
+ fix \<iota>
+ assume "\<iota> \<in> Red_Inf_Q N1"
+ then have "\<forall>qi \<in> Q. \<iota> \<in> Red_Inf_q qi N1" unfolding Red_Inf_Q_def by blast
+ then have "\<forall>qi \<in> Q. \<iota> \<in> Red_Inf_q qi N2"
+ using N1_in_N2 all_red_crit calculus_with_red_crit.axioms(2)
+ calculus_with_red_crit.Red_Inf_of_subset by blast
+ then show "\<iota> \<in> Red_Inf_Q N2" unfolding Red_Inf_Q_def by blast
qed
qed
next
show "\<forall>N2 N1. N2 \<subseteq> Red_F_Q N1 \<longrightarrow> Red_F_Q N1 \<subseteq> Red_F_Q (N1 - N2)"
proof (intro allI impI)
fix N2 N1
assume N2_in_Red_N1: "N2 \<subseteq> Red_F_Q N1"
show "Red_F_Q N1 \<subseteq> Red_F_Q (N1 - N2)"
proof
- fix x
- assume x_in: "x \<in> Red_F_Q N1"
- then have "\<forall>qi. x \<in> Red_F_q qi N1" unfolding Red_F_Q_def by blast
- moreover have "\<forall>qi. N2 \<subseteq> Red_F_q qi N1" using N2_in_Red_N1 unfolding Red_F_Q_def by blast
- ultimately have "\<forall>qi. x \<in> Red_F_q qi (N1 - N2)"
- using all_red_crit calculus_with_red_crit.axioms(2) calculus_with_red_crit.Red_F_of_Red_F_subset by blast
- then show "x \<in> Red_F_Q (N1 - N2)" unfolding Red_F_Q_def by blast
+ fix C
+ assume "C \<in> Red_F_Q N1"
+ then have "\<forall>qi \<in> Q. C \<in> Red_F_q qi N1" unfolding Red_F_Q_def by blast
+ moreover have "\<forall>qi \<in> Q. N2 \<subseteq> Red_F_q qi N1" using N2_in_Red_N1 unfolding Red_F_Q_def by blast
+ ultimately have "\<forall>qi \<in> Q. C \<in> Red_F_q qi (N1 - N2)"
+ using all_red_crit calculus_with_red_crit.axioms(2)
+ calculus_with_red_crit.Red_F_of_Red_F_subset
+ by blast
+ then show "C \<in> Red_F_Q (N1 - N2)" unfolding Red_F_Q_def by blast
qed
qed
next
show "\<forall>N2 N1. N2 \<subseteq> Red_F_Q N1 \<longrightarrow> Red_Inf_Q N1 \<subseteq> Red_Inf_Q (N1 - N2)"
proof (intro allI impI)
fix N2 N1
assume N2_in_Red_N1: "N2 \<subseteq> Red_F_Q N1"
show "Red_Inf_Q N1 \<subseteq> Red_Inf_Q (N1 - N2)"
proof
- fix x
- assume x_in: "x \<in> Red_Inf_Q N1"
- then have "\<forall>qi. x \<in> Red_Inf_q qi N1" unfolding Red_Inf_Q_def by blast
- moreover have "\<forall>qi. N2 \<subseteq> Red_F_q qi N1" using N2_in_Red_N1 unfolding Red_F_Q_def by blast
- ultimately have "\<forall>qi. x \<in> Red_Inf_q qi (N1 - N2)"
- using all_red_crit calculus_with_red_crit.axioms(2) calculus_with_red_crit.Red_Inf_of_Red_F_subset by blast
- then show "x \<in> Red_Inf_Q (N1 - N2)" unfolding Red_Inf_Q_def by blast
+ fix \<iota>
+ assume "\<iota> \<in> Red_Inf_Q N1"
+ then have "\<forall>qi \<in> Q. \<iota> \<in> Red_Inf_q qi N1" unfolding Red_Inf_Q_def by blast
+ moreover have "\<forall>qi \<in> Q. N2 \<subseteq> Red_F_q qi N1" using N2_in_Red_N1 unfolding Red_F_Q_def by blast
+ ultimately have "\<forall>qi \<in> Q. \<iota> \<in> Red_Inf_q qi (N1 - N2)"
+ using all_red_crit calculus_with_red_crit.axioms(2)
+ calculus_with_red_crit.Red_Inf_of_Red_F_subset by blast
+ then show "\<iota> \<in> Red_Inf_Q (N1 - N2)" unfolding Red_Inf_Q_def by blast
qed
qed
next
show "\<forall>\<iota> N. \<iota> \<in> Inf \<longrightarrow> concl_of \<iota> \<in> N \<longrightarrow> \<iota> \<in> Red_Inf_Q N"
proof (intro allI impI)
fix \<iota> N
assume
i_in: "\<iota> \<in> Inf" and
concl_in: "concl_of \<iota> \<in> N"
- then have "\<forall>qi. \<iota> \<in> Red_Inf_q qi N"
+ then have "\<forall>qi \<in> Q. \<iota> \<in> Red_Inf_q qi N"
using all_red_crit calculus_with_red_crit.axioms(2) calculus_with_red_crit.Red_Inf_of_Inf_to_N by blast
then show "\<iota> \<in> Red_Inf_Q N" unfolding Red_Inf_Q_def by blast
qed
qed
-sublocale inter_red_crit_calculus: calculus_with_red_crit
- where Bot=Bot
- and Inf=Inf
- and entails=entails_Q
- and Red_Inf=Red_Inf_Q
- and Red_F=Red_F_Q
- using inter_red_crit .
-
(* lem:satur-wrt-intersection-of-red *)
lemma sat_int_to_sat_q: "calculus_with_red_crit.saturated Inf Red_Inf_Q N \<longleftrightarrow>
- (\<forall>qi. calculus_with_red_crit.saturated Inf (Red_Inf_q qi) N)" for N
+ (\<forall>qi \<in> Q. calculus_with_red_crit.saturated Inf (Red_Inf_q qi) N)" for N
proof
fix N
assume inter_sat: "calculus_with_red_crit.saturated Inf Red_Inf_Q N"
- show "\<forall>qi. calculus_with_red_crit.saturated Inf (Red_Inf_q qi) N"
+ show "\<forall>qi \<in> Q. calculus_with_red_crit.saturated Inf (Red_Inf_q qi) N"
proof
fix qi
- interpret one: calculus_with_red_crit Bot Inf "entails_q qi" "Red_Inf_q qi" "Red_F_q qi"
- by (rule all_red_crit)
+ assume qi_in: "qi \<in> Q"
+ then interpret one: calculus_with_red_crit Bot Inf "entails_q qi" "Red_Inf_q qi" "Red_F_q qi"
+ by (metis all_red_crit)
show "one.saturated N"
- using inter_sat unfolding one.saturated_def inter_red_crit_calculus.saturated_def Red_Inf_Q_def
- by blast
+ using qi_in inter_sat
+ unfolding one.saturated_def saturated_def Red_Inf_Q_def by blast
qed
next
fix N
- assume all_sat: "\<forall>qi. calculus_with_red_crit.saturated Inf (Red_Inf_q qi) N"
- show "inter_red_crit_calculus.saturated N" unfolding inter_red_crit_calculus.saturated_def Red_Inf_Q_def
+ assume all_sat: "\<forall>qi \<in> Q. calculus_with_red_crit.saturated Inf (Red_Inf_q qi) N"
+ show "saturated N"
+ unfolding saturated_def Red_Inf_Q_def
proof
- fix x
- assume x_in: "x \<in> Inf_from N"
- have "\<forall>Red_Inf_qi \<in> Red_Inf_q ` UNIV. x \<in> Red_Inf_qi N"
+ fix \<iota>
+ assume \<iota>_in: "\<iota> \<in> Inf_from N"
+ have "\<forall>Red_Inf_qi \<in> Red_Inf_q ` Q. \<iota> \<in> Red_Inf_qi N"
proof
fix Red_Inf_qi
- assume red_inf_in: "Red_Inf_qi \<in> Red_Inf_q ` UNIV"
- then obtain qi where red_inf_qi_def: "Red_Inf_qi = Red_Inf_q qi" by blast
- interpret one: calculus_with_red_crit Bot Inf "entails_q qi" "Red_Inf_q qi" "Red_F_q qi"
- by (rule all_red_crit)
- have "one.saturated N" using all_sat red_inf_qi_def by blast
- then show "x \<in> Red_Inf_qi N" unfolding one.saturated_def using x_in red_inf_qi_def by blast
+ assume red_inf_in: "Red_Inf_qi \<in> Red_Inf_q ` Q"
+ then obtain qi where
+ qi_in: "qi \<in> Q" and
+ red_inf_qi_def: "Red_Inf_qi = Red_Inf_q qi" by blast
+ then interpret one: calculus_with_red_crit Bot Inf "entails_q qi" "Red_Inf_q qi" "Red_F_q qi"
+ by (metis all_red_crit)
+ have "one.saturated N" using qi_in all_sat red_inf_qi_def by blast
+ then show "\<iota> \<in> Red_Inf_qi N" unfolding one.saturated_def using \<iota>_in red_inf_qi_def by blast
qed
- then show "x \<in> \<Inter> {X N |X. X \<in> Red_Inf_q ` UNIV}" by blast
+ then show "\<iota> \<in> (\<Inter>q \<in> Q. Red_Inf_q q N)" by blast
qed
qed
(* lem:checking-static-ref-compl-for-intersections *)
lemma stat_ref_comp_from_bot_in_sat:
- "\<forall>N. (calculus_with_red_crit.saturated Inf Red_Inf_Q N \<and> (\<forall>B \<in> Bot. B \<notin> N)) \<longrightarrow>
- (\<exists>B \<in> Bot. \<exists>qi. \<not> entails_q qi N {B})
- \<Longrightarrow> static_refutational_complete_calculus Bot Inf entails_Q Red_Inf_Q Red_F_Q"
+ "(\<forall>N. calculus_with_red_crit.saturated Inf Red_Inf_Q N \<and> (\<forall>B \<in> Bot. B \<notin> N) \<longrightarrow>
+ (\<exists>B \<in> Bot. \<exists>qi \<in> Q. \<not> entails_q qi N {B})) \<Longrightarrow>
+ static_refutational_complete_calculus Bot Inf entails_Q Red_Inf_Q Red_F_Q"
proof (rule ccontr)
assume
N_saturated: "\<forall>N. (calculus_with_red_crit.saturated Inf Red_Inf_Q N \<and> (\<forall>B \<in> Bot. B \<notin> N)) \<longrightarrow>
- (\<exists>B \<in> Bot. \<exists>qi. \<not> entails_q qi N {B})" and
+ (\<exists>B \<in> Bot. \<exists>qi \<in> Q. \<not> entails_q qi N {B})" and
no_stat_ref_comp: "\<not> static_refutational_complete_calculus Bot Inf (\<Turnstile>Q) Red_Inf_Q Red_F_Q"
obtain N1 B1 where B1_in:
"B1 \<in> Bot" and N1_saturated: "calculus_with_red_crit.saturated Inf Red_Inf_Q N1" and
N1_unsat: "N1 \<Turnstile>Q {B1}" and no_B_in_N1: "\<forall>B \<in> Bot. B \<notin> N1"
- using no_stat_ref_comp by (metis inter_red_crit static_refutational_complete_calculus.intro
- static_refutational_complete_calculus_axioms.intro)
- obtain B2 qi where no_qi:"\<not> entails_q qi N1 {B2}" using N_saturated N1_saturated no_B_in_N1 by blast
+ using no_stat_ref_comp by (metis calculus_with_red_crit_axioms
+ static_refutational_complete_calculus.intro
+ static_refutational_complete_calculus_axioms.intro)
+ obtain B2 qi where
+ qi_in: "qi \<in> Q" and
+ no_qi: "\<not> entails_q qi N1 {B2}"
+ using N_saturated N1_saturated no_B_in_N1 by auto
have "N1 \<Turnstile>Q {B2}" using N1_unsat B1_in intersect_cons_rel_family
unfolding consequence_relation_def by metis
- then have "entails_q qi N1 {B2}" unfolding entails_Q_def by blast
- then show "False" using no_qi by simp
+ then have "entails_q qi N1 {B2}" unfolding entails_Q_def using qi_in by blast
+ then show False using no_qi by simp
qed
end
+subsection \<open>Families of Calculi with a Family of Redundancy Criteria\<close>
+
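+(* A family of calculi indexed by a nonempty set Q: each q \<in> Q comes with its own inference
+   system Inf_q q, consequence relation entails_q q, and redundancy criterion Red_Inf_q q /
+   Red_F_q q, all sharing the same set Bot of bottom elements. *)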
+locale calculus_family_with_red_crit_family =
+ inference_system_family Q Inf_q + consequence_relation_family Bot Q entails_q
+ for
+ Bot :: "'f set" and
+ Q :: "'q set" and
+ Inf_q :: \<open>'q \<Rightarrow> 'f inference set\<close> and
+ entails_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set \<Rightarrow> bool"
+ + fixes
+ Red_Inf_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f inference set" and
+ Red_F_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set"
+ assumes
+ Q_nonempty: "Q \<noteq> {}" and
+ all_red_crit:
+ "\<forall>q \<in> Q. calculus_with_red_crit Bot (Inf_q q) (entails_q q) (Red_Inf_q q) (Red_F_q q)"
+
subsection \<open>Variations on a Theme\<close>
locale calculus_with_reduced_red_crit = calculus_with_red_crit Bot Inf entails Red_Inf Red_F
for
Bot :: "'f set" and
Inf :: \<open>'f inference set\<close> and
entails :: "'f set \<Rightarrow> 'f set \<Rightarrow> bool" (infix "\<Turnstile>" 50) and
Red_Inf :: "'f set \<Rightarrow> 'f inference set" and
Red_F :: "'f set \<Rightarrow> 'f set"
+ assumes
inf_in_red_inf: "Inf_from2 UNIV (Red_F N) \<subseteq> Red_Inf N"
begin
(* lem:reduced-rc-implies-sat-equiv-reduced-sat *)
lemma sat_eq_reduc_sat: "saturated N \<longleftrightarrow> reduc_saturated N"
proof
fix N
assume "saturated N"
then show "reduc_saturated N"
using Red_Inf_without_red_F saturated_without_red_F
unfolding saturated_def reduc_saturated_def
by blast
next
fix N
assume red_sat_n: "reduc_saturated N"
show "saturated N" unfolding saturated_def
- proof
- fix \<iota>
- assume i_in: "\<iota> \<in> Inf_from N"
- show "\<iota> \<in> Red_Inf N"
- using i_in red_sat_n inf_in_red_inf unfolding reduc_saturated_def Inf_from_def Inf_from2_def by blast
- qed
+ using red_sat_n inf_in_red_inf unfolding reduc_saturated_def Inf_from_def Inf_from2_def
+ by blast
qed
end
-locale reduc_static_refutational_complete_calculus = calculus_with_red_crit +
- assumes reduc_static_refutational_complete: "B \<in> Bot \<Longrightarrow> reduc_saturated N \<Longrightarrow> N \<Turnstile> {B} \<Longrightarrow> \<exists>B'\<in>Bot. B' \<in> N"
+locale reduc_static_refutational_complete_calculus = calculus_with_red_crit +
+ assumes reduc_static_refutational_complete:
+ "B \<in> Bot \<Longrightarrow> reduc_saturated N \<Longrightarrow> N \<Turnstile> {B} \<Longrightarrow> \<exists>B'\<in>Bot. B' \<in> N"
locale reduc_static_refutational_complete_reduc_calculus = calculus_with_reduced_red_crit +
- assumes reduc_static_refutational_complete: "B \<in> Bot \<Longrightarrow> reduc_saturated N \<Longrightarrow> N \<Turnstile> {B} \<Longrightarrow> \<exists>B'\<in>Bot. B' \<in> N"
+ assumes reduc_static_refutational_complete:
+ "B \<in> Bot \<Longrightarrow> reduc_saturated N \<Longrightarrow> N \<Turnstile> {B} \<Longrightarrow> \<exists>B'\<in>Bot. B' \<in> N"
begin
sublocale reduc_static_refutational_complete_calculus
by (simp add: calculus_with_red_crit_axioms reduc_static_refutational_complete
- reduc_static_refutational_complete_calculus_axioms.intro reduc_static_refutational_complete_calculus_def)
+ reduc_static_refutational_complete_calculus_axioms.intro
+ reduc_static_refutational_complete_calculus_def)
(* cor:reduced-rc-implies-st-ref-comp-equiv-reduced-st-ref-comp 1/2 *)
sublocale static_refutational_complete_calculus
proof
fix B N
assume
bot_elem: \<open>B \<in> Bot\<close> and
saturated_N: "saturated N" and
refut_N: "N \<Turnstile> {B}"
have reduc_saturated_N: "reduc_saturated N" using saturated_N sat_eq_reduc_sat by blast
show "\<exists>B'\<in>Bot. B' \<in> N" using reduc_static_refutational_complete[OF bot_elem reduc_saturated_N refut_N] .
qed
+
end
context calculus_with_reduced_red_crit
begin
(* cor:reduced-rc-implies-st-ref-comp-equiv-reduced-st-ref-comp 2/2 *)
-lemma stat_ref_comp_imp_red_stat_ref_comp: "static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F \<Longrightarrow>
- reduc_static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
+lemma stat_ref_comp_imp_red_stat_ref_comp:
+ "static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F \<Longrightarrow>
+ reduc_static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
proof
fix B N
assume
stat_ref_comp: "static_refutational_complete_calculus Bot Inf (\<Turnstile>) Red_Inf Red_F" and
bot_elem: \<open>B \<in> Bot\<close> and
saturated_N: "reduc_saturated N" and
refut_N: "N \<Turnstile> {B}"
have reduc_saturated_N: "saturated N" using saturated_N sat_eq_reduc_sat by blast
show "\<exists>B'\<in>Bot. B' \<in> N"
- using Calculi.static_refutational_complete_calculus.static_refutational_complete[OF stat_ref_comp
+ using static_refutational_complete_calculus.static_refutational_complete[OF stat_ref_comp
bot_elem reduc_saturated_N refut_N] .
qed
+
end
context calculus_with_red_crit
begin
definition Red_Red_Inf :: "'f set \<Rightarrow> 'f inference set" where
"Red_Red_Inf N = Red_Inf N \<union> Inf_from2 UNIV (Red_F N)"
lemma reduced_calc_is_calc: "calculus_with_red_crit Bot Inf entails Red_Red_Inf Red_F"
proof
fix N
show "Red_Red_Inf N \<subseteq> Inf"
unfolding Red_Red_Inf_def Inf_from2_def Inf_from_def using Red_Inf_to_Inf by auto
next
fix B N
assume
b_in: "B \<in> Bot" and
n_entails: "N \<Turnstile> {B}"
show "N - Red_F N \<Turnstile> {B}"
by (simp add: Red_F_Bot b_in n_entails)
next
fix N N' :: "'f set"
assume "N \<subseteq> N'"
then show "Red_F N \<subseteq> Red_F N'" by (simp add: Red_F_of_subset)
next
fix N N' :: "'f set"
assume n_in: "N \<subseteq> N'"
then have "Inf_from (UNIV - (Red_F N')) \<subseteq> Inf_from (UNIV - (Red_F N))"
using Red_F_of_subset[OF n_in] unfolding Inf_from_def by auto
then have "Inf_from2 UNIV (Red_F N) \<subseteq> Inf_from2 UNIV (Red_F N')"
unfolding Inf_from2_def by auto
then show "Red_Red_Inf N \<subseteq> Red_Red_Inf N'"
unfolding Red_Red_Inf_def using Red_Inf_of_subset[OF n_in] by blast
next
fix N N' :: "'f set"
assume "N' \<subseteq> Red_F N"
then show "Red_F N \<subseteq> Red_F (N - N')" by (simp add: Red_F_of_Red_F_subset)
next
fix N N' :: "'f set"
assume np_subs: "N' \<subseteq> Red_F N"
have "Red_F N \<subseteq> Red_F (N - N')" by (simp add: Red_F_of_Red_F_subset np_subs)
then have "Inf_from (UNIV - (Red_F (N - N'))) \<subseteq> Inf_from (UNIV - (Red_F N))"
by (metis Diff_subset Red_F_of_subset eq_iff)
then have "Inf_from2 UNIV (Red_F N) \<subseteq> Inf_from2 UNIV (Red_F (N - N'))"
unfolding Inf_from2_def by auto
then show "Red_Red_Inf N \<subseteq> Red_Red_Inf (N - N')"
unfolding Red_Red_Inf_def using Red_Inf_of_Red_F_subset[OF np_subs] by blast
next
fix \<iota> N
assume "\<iota> \<in> Inf"
"concl_of \<iota> \<in> N"
then show "\<iota> \<in> Red_Red_Inf N"
by (simp add: Red_Inf_of_Inf_to_N Red_Red_Inf_def)
qed
lemma inf_subs_reduced_red_inf: "Inf_from2 UNIV (Red_F N) \<subseteq> Red_Red_Inf N"
unfolding Red_Red_Inf_def by simp
(* lem:red'-is-reduced-redcrit *)
text \<open>The following result is stated as a lemma rather than as a sublocale, unlike in
similar cases above: a sublocale cannot be used here, because it would create an infinitely
descending chain of sublocales.\<close>
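(* Informally: such a sublocale would turn every calculus_with_red_crit into a
   calculus_with_reduced_red_crit for Red_Red_Inf, whose calculus_with_red_crit part would in
   turn be reduced again with respect to its own Red_Red_Inf, and so on without end. *)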
lemma reduc_calc: "calculus_with_reduced_red_crit Bot Inf entails Red_Red_Inf Red_F"
using inf_subs_reduced_red_inf reduced_calc_is_calc
by (simp add: calculus_with_reduced_red_crit.intro calculus_with_reduced_red_crit_axioms_def)
-interpretation reduc_calc : calculus_with_reduced_red_crit Bot Inf entails Red_Red_Inf Red_F
- using reduc_calc by simp
+interpretation reduc_calc: calculus_with_reduced_red_crit Bot Inf entails Red_Red_Inf Red_F
+ by (fact reduc_calc)
(* lem:saturation-red-vs-red'-1 *)
lemma sat_imp_red_calc_sat: "saturated N \<Longrightarrow> reduc_calc.saturated N"
unfolding saturated_def reduc_calc.saturated_def Red_Red_Inf_def by blast
(* lem:saturation-red-vs-red'-2 1/2 (i) \<longleftrightarrow> (ii) *)
lemma red_sat_eq_red_calc_sat: "reduc_saturated N \<longleftrightarrow> reduc_calc.saturated N"
proof
assume red_sat_n: "reduc_saturated N"
show "reduc_calc.saturated N"
unfolding reduc_calc.saturated_def
proof
fix \<iota>
assume i_in: "\<iota> \<in> Inf_from N"
show "\<iota> \<in> Red_Red_Inf N"
using i_in red_sat_n unfolding reduc_saturated_def Inf_from2_def Inf_from_def Red_Red_Inf_def by blast
qed
next
assume red_sat_n: "reduc_calc.saturated N"
show "reduc_saturated N"
unfolding reduc_saturated_def
proof
fix \<iota>
assume i_in: "\<iota> \<in> Inf_from (N - Red_F N)"
show "\<iota> \<in> Red_Inf N"
using i_in red_sat_n unfolding Inf_from_def reduc_calc.saturated_def Red_Red_Inf_def Inf_from2_def by blast
qed
qed
(* lem:saturation-red-vs-red'-2 2/2 (i) \<longleftrightarrow> (iii) *)
lemma red_sat_eq_sat: "reduc_saturated N \<longleftrightarrow> saturated (N - Red_F N)"
unfolding reduc_saturated_def saturated_def by (simp add: Red_Inf_without_red_F)
(* thm:reduced-stat-ref-compl 1/3 (i) \<longleftrightarrow> (iii) *)
theorem stat_is_stat_red: "static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F \<longleftrightarrow>
static_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
proof
assume
stat_ref1: "static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
show "static_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
using reduc_calc.calculus_with_red_crit_axioms
unfolding static_refutational_complete_calculus_def static_refutational_complete_calculus_axioms_def
proof
show "\<forall>B N. B \<in> Bot \<longrightarrow> reduc_calc.saturated N \<longrightarrow> N \<Turnstile> {B} \<longrightarrow> (\<exists>B'\<in>Bot. B' \<in> N)"
proof (clarify)
fix B N
assume
b_in: "B \<in> Bot" and
n_sat: "reduc_calc.saturated N" and
n_imp_b: "N \<Turnstile> {B}"
have "saturated (N - Red_F N)" using red_sat_eq_red_calc_sat[of N] red_sat_eq_sat[of N] n_sat by blast
moreover have "(N - Red_F N) \<Turnstile> {B}" using n_imp_b b_in by (simp add: reduc_calc.Red_F_Bot)
ultimately show "\<exists>B'\<in>Bot. B'\<in> N"
using stat_ref1 by (meson DiffD1 b_in static_refutational_complete_calculus.static_refutational_complete)
qed
qed
next
assume
stat_ref3: "static_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
show "static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
unfolding static_refutational_complete_calculus_def static_refutational_complete_calculus_axioms_def
using calculus_with_red_crit_axioms
proof
show "\<forall>B N. B \<in> Bot \<longrightarrow> saturated N \<longrightarrow> N \<Turnstile> {B} \<longrightarrow> (\<exists>B'\<in>Bot. B' \<in> N)"
proof clarify
fix B N
assume
b_in: "B \<in> Bot" and
n_sat: "saturated N" and
n_imp_b: "N \<Turnstile> {B}"
then show "\<exists>B'\<in> Bot. B' \<in> N"
using stat_ref3 sat_imp_red_calc_sat[OF n_sat]
by (meson static_refutational_complete_calculus.static_refutational_complete)
qed
qed
qed
(* thm:reduced-stat-ref-compl 2/3 (iv) \<longleftrightarrow> (iii) *)
theorem red_stat_red_is_stat_red: "reduc_static_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F \<longleftrightarrow>
static_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
using reduc_calc.stat_ref_comp_imp_red_stat_ref_comp
by (metis reduc_calc.sat_eq_reduc_sat reduc_static_refutational_complete_calculus.axioms(2)
reduc_static_refutational_complete_calculus_axioms_def reduced_calc_is_calc
static_refutational_complete_calculus.intro static_refutational_complete_calculus_axioms.intro)
(* thm:reduced-stat-ref-compl 3/3 (ii) \<longleftrightarrow> (iii) *)
theorem red_stat_is_stat_red: "reduc_static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F \<longleftrightarrow>
static_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
using reduc_calc.calculus_with_red_crit_axioms calculus_with_red_crit_axioms red_sat_eq_red_calc_sat
unfolding static_refutational_complete_calculus_def static_refutational_complete_calculus_axioms_def
reduc_static_refutational_complete_calculus_def reduc_static_refutational_complete_calculus_axioms_def
by blast
-definition Sup_Red_F_llist :: "'f set llist \<Rightarrow> 'f set" where
-"Sup_Red_F_llist D = (\<Union>i \<in> {i. enat i < llength D}. Red_F (lnth D i))"
-
-lemma Sup_Red_F_unit: "Sup_Red_F_llist (LCons X LNil) = Red_F X"
-using Sup_Red_F_llist_def enat_0_iff(1) by simp
-
-lemma sup_red_f_in_red_liminf: "chain derive D \<Longrightarrow> Sup_Red_F_llist D \<subseteq> Red_F (Liminf_llist D)"
+lemma sup_red_f_in_red_liminf:
+ "chain derive D \<Longrightarrow> Sup_llist (lmap Red_F D) \<subseteq> Red_F (Liminf_llist D)"
proof
fix N
assume
deriv: "chain derive D" and
- n_in_sup: "N \<in> Sup_Red_F_llist D"
+ n_in_sup: "N \<in> Sup_llist (lmap Red_F D)"
obtain i0 where i_smaller: "enat i0 < llength D" and n_in: "N \<in> Red_F (lnth D i0)"
- using n_in_sup unfolding Sup_Red_F_llist_def by blast
+ using n_in_sup by (metis Sup_llist_imp_exists_index llength_lmap lnth_lmap)
have "Red_F (lnth D i0) \<subseteq> Red_F (Liminf_llist D)"
using i_smaller by (simp add: deriv Red_F_subset_Liminf)
then show "N \<in> Red_F (Liminf_llist D)"
using n_in by fast
qed
-lemma sup_red_inf_in_red_liminf: "chain derive D \<Longrightarrow> Sup_Red_Inf_llist D \<subseteq> Red_Inf (Liminf_llist D)"
+lemma sup_red_inf_in_red_liminf:
+ "chain derive D \<Longrightarrow> Sup_llist (lmap Red_Inf D) \<subseteq> Red_Inf (Liminf_llist D)"
proof
fix \<iota>
assume
deriv: "chain derive D" and
- i_in_sup: "\<iota> \<in> Sup_Red_Inf_llist D"
+ i_in_sup: "\<iota> \<in> Sup_llist (lmap Red_Inf D)"
obtain i0 where i_smaller: "enat i0 < llength D" and n_in: "\<iota> \<in> Red_Inf (lnth D i0)"
- using i_in_sup unfolding Sup_Red_Inf_llist_def by blast
+ using i_in_sup unfolding Sup_llist_def by auto
have "Red_Inf (lnth D i0) \<subseteq> Red_Inf (Liminf_llist D)"
using i_smaller by (simp add: deriv Red_Inf_subset_Liminf)
then show "\<iota> \<in> Red_Inf (Liminf_llist D)"
using n_in by fast
qed
definition reduc_fair :: "'f set llist \<Rightarrow> bool" where
-"reduc_fair D \<equiv> Inf_from (Liminf_llist D - (Sup_Red_F_llist D)) \<subseteq> Sup_Red_Inf_llist D"
+ "reduc_fair D \<longleftrightarrow>
+ Inf_from (Liminf_llist D - Sup_llist (lmap Red_F D)) \<subseteq> Sup_llist (lmap Red_Inf D)"
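+(* Informally: every inference whose premises survive in the limit and are never made redundant
+   along the derivation must itself become redundant at some point of the derivation. *)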
(* lem:red-fairness-implies-red-saturation *)
-lemma reduc_fair_imp_Liminf_reduc_sat: "chain derive D \<Longrightarrow> reduc_fair D \<Longrightarrow> reduc_saturated (Liminf_llist D)"
+lemma reduc_fair_imp_Liminf_reduc_sat:
+ "chain derive D \<Longrightarrow> reduc_fair D \<Longrightarrow> reduc_saturated (Liminf_llist D)"
unfolding reduc_saturated_def
proof -
fix D
assume
deriv: "chain derive D" and
red_fair: "reduc_fair D"
- have "Inf_from (Liminf_llist D - Red_F (Liminf_llist D)) \<subseteq> Inf_from (Liminf_llist D - Sup_Red_F_llist D)"
+ have "Inf_from (Liminf_llist D - Red_F (Liminf_llist D))
+ \<subseteq> Inf_from (Liminf_llist D - Sup_llist (lmap Red_F D))"
using sup_red_f_in_red_liminf[OF deriv] unfolding Inf_from_def by blast
- then have "Inf_from (Liminf_llist D - Red_F (Liminf_llist D)) \<subseteq> Sup_Red_Inf_llist D"
+ then have "Inf_from (Liminf_llist D - Red_F (Liminf_llist D)) \<subseteq> Sup_llist (lmap Red_Inf D)"
using red_fair unfolding reduc_fair_def by simp
then show "Inf_from (Liminf_llist D - Red_F (Liminf_llist D)) \<subseteq> Red_Inf (Liminf_llist D)"
using sup_red_inf_in_red_liminf[OF deriv] by fast
qed
end
locale reduc_dynamic_refutational_complete_calculus = calculus_with_red_crit +
assumes
reduc_dynamic_refutational_complete: "B \<in> Bot \<Longrightarrow> chain derive D \<Longrightarrow> reduc_fair D
\<Longrightarrow> lnth D 0 \<Turnstile> {B} \<Longrightarrow> \<exists>i \<in> {i. enat i < llength D}. \<exists> B'\<in>Bot. B' \<in> lnth D i"
begin
sublocale reduc_static_refutational_complete_calculus
proof
fix B N
assume
bot_elem: \<open>B \<in> Bot\<close> and
saturated_N: "reduc_saturated N" and
refut_N: "N \<Turnstile> {B}"
define D where "D = LCons N LNil"
have[simp]: \<open>\<not> lnull D\<close> by (auto simp: D_def)
have deriv_D: \<open>chain (\<rhd>Red) D\<close> by (simp add: chain.chain_singleton D_def)
have liminf_is_N: "Liminf_llist D = N" by (simp add: D_def Liminf_llist_LCons)
have head_D: "N = lnth D 0" by (simp add: D_def)
- have "Sup_Red_F_llist D = Red_F N" by (simp add: D_def Sup_Red_F_unit)
- moreover have "Sup_Red_Inf_llist D = Red_Inf N" by (simp add: D_def Sup_Red_Inf_unit)
+ have "Sup_llist (lmap Red_F D) = Red_F N" by (simp add: D_def)
+ moreover have "Sup_llist (lmap Red_Inf D) = Red_Inf N" by (simp add: D_def)
ultimately have fair_D: "reduc_fair D"
using saturated_N liminf_is_N unfolding reduc_fair_def reduc_saturated_def
by (simp add: reduc_fair_def reduc_saturated_def liminf_is_N)
- obtain i B' where B'_is_bot: \<open>B' \<in> Bot\<close> and B'_in: "B' \<in> (lnth D i)" and \<open>i < llength D\<close>
+ obtain i B' where B'_is_bot: \<open>B' \<in> Bot\<close> and B'_in: "B' \<in> lnth D i" and \<open>i < llength D\<close>
using reduc_dynamic_refutational_complete[of B D] bot_elem fair_D head_D saturated_N deriv_D refut_N
by auto
then have "i = 0"
by (auto simp: D_def enat_0_iff)
show \<open>\<exists>B'\<in>Bot. B' \<in> N\<close>
using B'_is_bot B'_in unfolding \<open>i = 0\<close> head_D[symmetric] by auto
qed
end
sublocale reduc_static_refutational_complete_calculus \<subseteq> reduc_dynamic_refutational_complete_calculus
proof
fix B D
assume
bot_elem: \<open>B \<in> Bot\<close> and
deriv: \<open>chain (\<rhd>Red) D\<close> and
fair: \<open>reduc_fair D\<close> and
- unsat: \<open>(lnth D 0) \<Turnstile> {B}\<close>
+ unsat: \<open>lnth D 0 \<Turnstile> {B}\<close>
have non_empty: \<open>\<not> lnull D\<close> using chain_not_lnull[OF deriv] .
- have subs: \<open>(lnth D 0) \<subseteq> Sup_llist D\<close>
+ have subs: \<open>lnth D 0 \<subseteq> Sup_llist D\<close>
using lhd_subset_Sup_llist[of D] non_empty by (simp add: lhd_conv_lnth)
have \<open>Sup_llist D \<Turnstile> {B}\<close>
using unsat subset_entailed[OF subs] entails_trans[of "Sup_llist D" "lnth D 0"] by auto
then have Sup_no_Red: \<open>Sup_llist D - Red_F (Sup_llist D) \<Turnstile> {B}\<close>
using bot_elem Red_F_Bot by auto
have Sup_no_Red_in_Liminf: \<open>Sup_llist D - Red_F (Sup_llist D) \<subseteq> Liminf_llist D\<close>
using deriv Red_in_Sup by auto
have Liminf_entails_Bot: \<open>Liminf_llist D \<Turnstile> {B}\<close>
using Sup_no_Red subset_entailed[OF Sup_no_Red_in_Liminf] entails_trans by blast
have \<open>reduc_saturated (Liminf_llist D)\<close>
using deriv fair reduc_fair_imp_Liminf_reduc_sat unfolding reduc_saturated_def
by auto
- then have \<open>\<exists>B'\<in>Bot. B' \<in> (Liminf_llist D)\<close>
+ then have \<open>\<exists>B'\<in>Bot. B' \<in> Liminf_llist D\<close>
using bot_elem reduc_static_refutational_complete Liminf_entails_Bot
by auto
then show \<open>\<exists>i\<in>{i. enat i < llength D}. \<exists>B'\<in>Bot. B' \<in> lnth D i\<close>
unfolding Liminf_llist_def by auto
qed
context calculus_with_red_crit
begin
lemma dyn_equiv_stat: "dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F =
static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
proof
assume "dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
then interpret dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F
by simp
show "static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
by (simp add: static_refutational_complete_calculus_axioms)
next
assume "static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
then interpret static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F
by simp
show "dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
by (simp add: dynamic_refutational_complete_calculus_axioms)
qed
lemma red_dyn_equiv_red_stat: "reduc_dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F =
reduc_static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
proof
assume "reduc_dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
then interpret reduc_dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F
by simp
show "reduc_static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
by (simp add: reduc_static_refutational_complete_calculus_axioms)
next
assume "reduc_static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
then interpret reduc_static_refutational_complete_calculus Bot Inf entails Red_Inf Red_F
by simp
show "reduc_dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F"
by (simp add: reduc_dynamic_refutational_complete_calculus_axioms)
qed
-interpretation reduc_calc : calculus_with_reduced_red_crit Bot Inf entails Red_Red_Inf Red_F
-using reduc_calc by simp
+interpretation reduc_calc: calculus_with_reduced_red_crit Bot Inf entails Red_Red_Inf Red_F
+ by (fact reduc_calc)
(* thm:reduced-dyn-ref-compl 1/3 (v) \<longleftrightarrow> (vii) *)
-theorem dyn_ref_eq_dyn_ref_red: "dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F \<longleftrightarrow>
- dynamic_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
+theorem dyn_ref_eq_dyn_ref_red:
+ "dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F \<longleftrightarrow>
+ dynamic_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
using dyn_equiv_stat stat_is_stat_red reduc_calc.dyn_equiv_stat by meson
(* thm:reduced-dyn-ref-compl 2/3 (viii) \<longleftrightarrow> (vii) *)
-theorem red_dyn_ref_red_eq_dyn_ref_red: "reduc_dynamic_refutational_complete_calculus Bot Inf
- entails Red_Red_Inf Red_F \<longleftrightarrow>
- dynamic_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
+theorem red_dyn_ref_red_eq_dyn_ref_red:
+ "reduc_dynamic_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F \<longleftrightarrow>
+ dynamic_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
using red_dyn_equiv_red_stat dyn_equiv_stat red_stat_red_is_stat_red
by (simp add: reduc_calc.dyn_equiv_stat reduc_calc.red_dyn_equiv_red_stat)
(* thm:reduced-dyn-ref-compl 3/3 (vi) \<longleftrightarrow> (vii) *)
-theorem red_dyn_ref_eq_dyn_ref_red: "reduc_dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F \<longleftrightarrow>
- dynamic_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
+theorem red_dyn_ref_eq_dyn_ref_red:
+ "reduc_dynamic_refutational_complete_calculus Bot Inf entails Red_Inf Red_F \<longleftrightarrow>
+ dynamic_refutational_complete_calculus Bot Inf entails Red_Red_Inf Red_F"
using red_dyn_equiv_red_stat dyn_equiv_stat red_stat_is_stat_red
reduc_calc.dyn_equiv_stat reduc_calc.red_dyn_equiv_red_stat
by blast
end
end
diff --git a/thys/Saturation_Framework/Consequence_Relations_and_Inference_Systems.thy b/thys/Saturation_Framework/Consequence_Relations_and_Inference_Systems.thy
--- a/thys/Saturation_Framework/Consequence_Relations_and_Inference_Systems.thy
+++ b/thys/Saturation_Framework/Consequence_Relations_and_Inference_Systems.thy
@@ -1,102 +1,130 @@
(* Title: Consequence Relations and Inference Systems of the Saturation Framework
* Author: Sophie Tourret <stourret at mpi-inf.mpg.de>, 2018-2020 *)
section \<open>Consequence Relations and Inference Systems\<close>
text \<open>This section introduces the most basic notions upon which the framework is
built: consequence relations and inference systems. It also defines the notion
of a family of consequence relations. This corresponds to section 2.1 of the report.\<close>
theory Consequence_Relations_and_Inference_Systems
imports Main
begin
subsection \<open>Consequence Relations\<close>
locale consequence_relation =
fixes
Bot :: "'f set" and
entails :: "'f set \<Rightarrow> 'f set \<Rightarrow> bool" (infix "\<Turnstile>" 50)
assumes
bot_not_empty: "Bot \<noteq> {}" and
- bot_implies_all: "B \<in> Bot \<Longrightarrow> {B} \<Turnstile> N1" and
+ bot_entails_all: "B \<in> Bot \<Longrightarrow> {B} \<Turnstile> N1" and
subset_entailed: "N2 \<subseteq> N1 \<Longrightarrow> N1 \<Turnstile> N2" and
all_formulas_entailed: "(\<forall>C \<in> N2. N1 \<Turnstile> {C}) \<Longrightarrow> N1 \<Turnstile> N2" and
- entails_trans [trans]: "N1 \<Turnstile> N2 \<Longrightarrow> N2 \<Turnstile> N3 \<Longrightarrow> N1 \<Turnstile> N3"
+ entails_trans[trans]: "N1 \<Turnstile> N2 \<Longrightarrow> N2 \<Turnstile> N3 \<Longrightarrow> N1 \<Turnstile> N3"
begin
lemma entail_set_all_formulas: "N1 \<Turnstile> N2 \<longleftrightarrow> (\<forall>C \<in> N2. N1 \<Turnstile> {C})"
by (meson all_formulas_entailed empty_subsetI insert_subset subset_entailed entails_trans)
lemma entail_union: "N \<Turnstile> N1 \<and> N \<Turnstile> N2 \<longleftrightarrow> N \<Turnstile> N1 \<union> N2"
using entail_set_all_formulas[of N N1] entail_set_all_formulas[of N N2]
entail_set_all_formulas[of N "N1 \<union> N2"] by blast
lemma entail_unions: "(\<forall>i \<in> I. N \<Turnstile> Ni i) \<longleftrightarrow> N \<Turnstile> \<Union> (Ni ` I)"
using entail_set_all_formulas[of N "\<Union> (Ni ` I)"] entail_set_all_formulas[of N]
Complete_Lattices.UN_ball_bex_simps(2)[of Ni I "\<lambda>C. N \<Turnstile> {C}", symmetric]
by meson
lemma entail_all_bot: "(\<exists>B \<in> Bot. N \<Turnstile> {B}) \<Longrightarrow> (\<forall>B' \<in> Bot. N \<Turnstile> {B'})"
- using bot_implies_all entails_trans by blast
+ using bot_entails_all entails_trans by blast
-lemma subset_entailed_strong: "N1 \<Turnstile> N2 \<Longrightarrow> N1 \<union> N2 \<Turnstile> N3 \<Longrightarrow> N1 \<Turnstile> N3"
+lemma entails_trans_strong: "N1 \<Turnstile> N2 \<Longrightarrow> N1 \<union> N2 \<Turnstile> N3 \<Longrightarrow> N1 \<Turnstile> N3"
by (meson entail_union entails_trans order_refl subset_entailed)
end
subsection \<open>Families of Consequence Relations\<close>
locale consequence_relation_family =
fixes
Bot :: "'f set" and
- Q :: "'q itself" and
+ Q :: "'q set" and
entails_q :: "'q \<Rightarrow> ('f set \<Rightarrow> 'f set \<Rightarrow> bool)"
assumes
- Bot_not_empty: "Bot \<noteq> {}" and
- q_cons_rel: "consequence_relation Bot (entails_q q)"
+ Q_nonempty: "Q \<noteq> {}" and
+ q_cons_rel: "\<forall>q \<in> Q. consequence_relation Bot (entails_q q)"
begin
+lemma bot_not_empty: "Bot \<noteq> {}"
+ using Q_nonempty consequence_relation.bot_not_empty consequence_relation_family.q_cons_rel
+ consequence_relation_family_axioms by blast
+
definition entails_Q :: "'f set \<Rightarrow> 'f set \<Rightarrow> bool" (infix "\<Turnstile>Q" 50) where
- "(N1 \<Turnstile>Q N2) = (\<forall>q. entails_q q N1 N2)"
+ "N1 \<Turnstile>Q N2 \<longleftrightarrow> (\<forall>q \<in> Q. entails_q q N1 N2)"
(* lem:intersection-of-conseq-rel *)
lemma intersect_cons_rel_family: "consequence_relation Bot entails_Q"
- unfolding consequence_relation_def
-proof (intro conjI)
- show \<open>Bot \<noteq> {}\<close> using Bot_not_empty .
-next
- show "\<forall>B N. B \<in> Bot \<longrightarrow> {B} \<Turnstile>Q N"
- unfolding entails_Q_def by (metis consequence_relation_def q_cons_rel)
-next
- show "\<forall>N2 N1. N2 \<subseteq> N1 \<longrightarrow> N1 \<Turnstile>Q N2"
- unfolding entails_Q_def by (metis consequence_relation_def q_cons_rel)
-next
- show "\<forall>N2 N1. (\<forall>C\<in>N2. N1 \<Turnstile>Q {C}) \<longrightarrow> N1 \<Turnstile>Q N2"
- unfolding entails_Q_def by (metis consequence_relation_def q_cons_rel)
-next
- show "\<forall>N1 N2 N3. N1 \<Turnstile>Q N2 \<longrightarrow> N2 \<Turnstile>Q N3 \<longrightarrow> N1 \<Turnstile>Q N3"
- unfolding entails_Q_def by (metis consequence_relation_def q_cons_rel)
-qed
+ unfolding consequence_relation_def entails_Q_def
+ by (intro conjI bot_not_empty) (metis consequence_relation_def q_cons_rel)+
end
subsection \<open>Inference Systems\<close>
datatype 'f inference =
Infer (prems_of: "'f list") (concl_of: "'f")
locale inference_system =
fixes
Inf :: \<open>'f inference set\<close>
begin
-definition Inf_from :: "'f set \<Rightarrow> 'f inference set" where
+definition Inf_from :: "'f set \<Rightarrow> 'f inference set" where
"Inf_from N = {\<iota> \<in> Inf. set (prems_of \<iota>) \<subseteq> N}"
definition Inf_from2 :: "'f set \<Rightarrow> 'f set \<Rightarrow> 'f inference set" where
"Inf_from2 N M = Inf_from (N \<union> M) - Inf_from (N - M)"
+lemma Inf_if_Inf_from: "\<iota> \<in> Inf_from N \<Longrightarrow> \<iota> \<in> Inf"
+ unfolding Inf_from_def by simp
+
+lemma Inf_if_Inf_from2: "\<iota> \<in> Inf_from2 N M \<Longrightarrow> \<iota> \<in> Inf"
+ unfolding Inf_from2_def Inf_from_def by simp
+
+lemma Inf_from2_alt:
+ "Inf_from2 N M = {\<iota> \<in> Inf. \<iota> \<in> Inf_from (N \<union> M) \<and> set (prems_of \<iota>) \<inter> M \<noteq> {}}"
+ unfolding Inf_from_def Inf_from2_def by auto
+
+lemma Inf_from_mono: "N \<subseteq> N' \<Longrightarrow> Inf_from N \<subseteq> Inf_from N'"
+ unfolding Inf_from_def by auto
+
+lemma Inf_from2_mono: "N \<subseteq> N' \<Longrightarrow> M \<subseteq> M' \<Longrightarrow> Inf_from2 N M \<subseteq> Inf_from2 N' M'"
+ unfolding Inf_from2_alt using Inf_from_mono[of "N \<union> M" "N' \<union> M'"] by auto
+
+end
+
+subsection \<open>Families of Inference Systems\<close>
+
+locale inference_system_family =
+ fixes
+ Q :: "'q set" and
+ Inf_q :: "'q \<Rightarrow> 'f inference set"
+ assumes
+ Q_nonempty: "Q \<noteq> {}"
+begin
+
+definition Inf_from_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f inference set" where
+ "Inf_from_q q = inference_system.Inf_from (Inf_q q)"
+
+definition Inf_from2_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set \<Rightarrow> 'f inference set" where
+ "Inf_from2_q q = inference_system.Inf_from2 (Inf_q q)"
+
+lemma Inf_from2_q_alt:
+ "Inf_from2_q q N M = {\<iota> \<in> Inf_q q. \<iota> \<in> Inf_from_q q (N \<union> M) \<and> set (prems_of \<iota>) \<inter> M \<noteq> {}}"
+ unfolding Inf_from_q_def Inf_from2_q_def inference_system.Inf_from2_alt by auto
+
end
end
diff --git a/thys/Saturation_Framework/Labeled_Lifting_to_Non_Ground_Calculi.thy b/thys/Saturation_Framework/Labeled_Lifting_to_Non_Ground_Calculi.thy
--- a/thys/Saturation_Framework/Labeled_Lifting_to_Non_Ground_Calculi.thy
+++ b/thys/Saturation_Framework/Labeled_Lifting_to_Non_Ground_Calculi.thy
@@ -1,434 +1,356 @@
(* Title: Labeled Lifting to Non-Ground Calculi of the Saturation Framework
* Author: Sophie Tourret <stourret at mpi-inf.mpg.de>, 2019-2020 *)
section \<open>Labeled Liftings\<close>
text \<open>This section formalizes the extension of the lifting results to labeled
calculi. This corresponds to section 3.4 of the report.\<close>
theory Labeled_Lifting_to_Non_Ground_Calculi
imports Lifting_to_Non_Ground_Calculi
begin
subsection \<open>Labeled Lifting with a Family of Well-founded Orderings\<close>
-locale labeled_lifting_w_wf_ord_family =
- lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G Red_F_G \<G>_F \<G>_Inf Prec_F
+locale labeled_lifting_w_wf_ord_family = no_labels: lifting_with_wf_ordering_family Bot_F Inf_F
+ Bot_G entails_G Inf_G Red_Inf_G Red_F_G \<G>_F \<G>_Inf Prec_F
for
Bot_F :: "'f set" and
Inf_F :: "'f inference set" and
Bot_G :: "'g set" and
entails_G :: "'g set \<Rightarrow> 'g set \<Rightarrow> bool" (infix "\<Turnstile>G" 50) and
Inf_G :: "'g inference set" and
Red_Inf_G :: "'g set \<Rightarrow> 'g inference set" and
Red_F_G :: "'g set \<Rightarrow> 'g set" and
\<G>_F :: "'f \<Rightarrow> 'g set" and
\<G>_Inf :: "'f inference \<Rightarrow> 'g inference set option" and
- Prec_F :: "'g \<Rightarrow> 'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<sqsubset>" 50)
+ Prec_F :: "'g \<Rightarrow> 'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<sqsubset>" 50)
+ fixes
- l :: \<open>'l itself\<close> and
Inf_FL :: \<open>('f \<times> 'l) inference set\<close>
assumes
Inf_F_to_Inf_FL: \<open>\<iota>\<^sub>F \<in> Inf_F \<Longrightarrow> length (Ll :: 'l list) = length (prems_of \<iota>\<^sub>F) \<Longrightarrow>
\<exists>L0. Infer (zip (prems_of \<iota>\<^sub>F) Ll) (concl_of \<iota>\<^sub>F, L0) \<in> Inf_FL\<close> and
Inf_FL_to_Inf_F: \<open>\<iota>\<^sub>F\<^sub>L \<in> Inf_FL \<Longrightarrow> Infer (map fst (prems_of \<iota>\<^sub>F\<^sub>L)) (fst (concl_of \<iota>\<^sub>F\<^sub>L)) \<in> Inf_F\<close>
begin
definition to_F :: \<open>('f \<times> 'l) inference \<Rightarrow> 'f inference\<close> where
\<open>to_F \<iota>\<^sub>F\<^sub>L = Infer (map fst (prems_of \<iota>\<^sub>F\<^sub>L)) (fst (concl_of \<iota>\<^sub>F\<^sub>L))\<close>
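(* to_F maps a labeled inference to its unlabeled counterpart by erasing the labels of the
   premises and of the conclusion. *)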
-definition Bot_FL :: \<open>('f \<times> 'l) set\<close> where \<open>Bot_FL = Bot_F \<times> UNIV\<close>
+abbreviation Bot_FL :: \<open>('f \<times> 'l) set\<close> where
+ \<open>Bot_FL \<equiv> Bot_F \<times> UNIV\<close>
-definition \<G>_F_L :: \<open>('f \<times> 'l) \<Rightarrow> 'g set\<close> where \<open>\<G>_F_L CL = \<G>_F (fst CL)\<close>
+abbreviation \<G>_F_L :: \<open>('f \<times> 'l) \<Rightarrow> 'g set\<close> where
+ \<open>\<G>_F_L CL \<equiv> \<G>_F (fst CL)\<close>
-definition \<G>_Inf_L :: \<open>('f \<times> 'l) inference \<Rightarrow> 'g inference set option\<close> where \<open>\<G>_Inf_L \<iota>\<^sub>F\<^sub>L = \<G>_Inf (to_F \<iota>\<^sub>F\<^sub>L)\<close>
+abbreviation \<G>_Inf_L :: \<open>('f \<times> 'l) inference \<Rightarrow> 'g inference set option\<close> where
+ \<open>\<G>_Inf_L \<iota>\<^sub>F\<^sub>L \<equiv> \<G>_Inf (to_F \<iota>\<^sub>F\<^sub>L)\<close>
(* lem:labeled-grounding-function *)
-sublocale labeled_standard_lifting: standard_lifting
- where
- Bot_F = Bot_FL and
- Inf_F = Inf_FL and
- \<G>_F = \<G>_F_L and
- \<G>_Inf = \<G>_Inf_L
+sublocale standard_lifting Bot_FL Inf_FL Bot_G Inf_G "(\<Turnstile>G)" Red_Inf_G Red_F_G \<G>_F_L \<G>_Inf_L
proof
show "Bot_FL \<noteq> {}"
- unfolding Bot_FL_def using Bot_F_not_empty by simp
+ using no_labels.Bot_F_not_empty by simp
next
- show "B\<in>Bot_FL \<Longrightarrow> \<G>_F_L B \<noteq> {}" for B
- unfolding \<G>_F_L_def Bot_FL_def using Bot_map_not_empty by auto
+ show "B \<in> Bot_FL \<Longrightarrow> \<G>_F_L B \<noteq> {}" for B
+ using no_labels.Bot_map_not_empty by auto
next
- show "B\<in>Bot_FL \<Longrightarrow> \<G>_F_L B \<subseteq> Bot_G" for B
- unfolding \<G>_F_L_def Bot_FL_def using Bot_map by force
+ show "B \<in> Bot_FL \<Longrightarrow> \<G>_F_L B \<subseteq> Bot_G" for B
+ using no_labels.Bot_map by force
next
fix CL
show "\<G>_F_L CL \<inter> Bot_G \<noteq> {} \<longrightarrow> CL \<in> Bot_FL"
- unfolding \<G>_F_L_def Bot_FL_def using Bot_cond
- by (metis SigmaE UNIV_I UNIV_Times_UNIV mem_Sigma_iff prod.sel(1))
+ using no_labels.Bot_cond by (metis SigmaE UNIV_I UNIV_Times_UNIV mem_Sigma_iff prod.sel(1))
next
fix \<iota>
assume
i_in: \<open>\<iota> \<in> Inf_FL\<close> and
ground_not_none: \<open>\<G>_Inf_L \<iota> \<noteq> None\<close>
then show "the (\<G>_Inf_L \<iota>) \<subseteq> Red_Inf_G (\<G>_F_L (concl_of \<iota>))"
- unfolding \<G>_Inf_L_def \<G>_F_L_def to_F_def using inf_map Inf_FL_to_Inf_F by fastforce
+ unfolding to_F_def using no_labels.inf_map Inf_FL_to_Inf_F by fastforce
qed
-abbreviation Labeled_Empty_Order :: \<open> ('f \<times> 'l) \<Rightarrow> ('f \<times> 'l) \<Rightarrow> bool\<close> where
- "Labeled_Empty_Order C1 C2 \<equiv> False"
+sublocale lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G entails_G Inf_G Red_Inf_G Red_F_G
+ \<G>_F_L \<G>_Inf_L "\<lambda>g Cl Cl'. False"
+ by unfold_locales simp+
-sublocale labeled_lifting_w_empty_ord_family :
- lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G entails_G Inf_G Red_Inf_G Red_F_G
- \<G>_F_L \<G>_Inf_L "\<lambda>g. Labeled_Empty_Order"
-proof
- show "po_on Labeled_Empty_Order UNIV"
- unfolding po_on_def by (simp add: transp_onI wfp_on_imp_irreflp_on)
- show "wfp_on Labeled_Empty_Order UNIV"
- unfolding wfp_on_def by simp
-qed
-
-notation "labeled_standard_lifting.entails_\<G>" (infix "\<Turnstile>\<G>L" 50)
+notation entails_\<G> (infix "\<Turnstile>\<G>L" 50)
(* lem:labeled-consequence *)
lemma labeled_entailment_lifting: "NL1 \<Turnstile>\<G>L NL2 \<longleftrightarrow> fst ` NL1 \<Turnstile>\<G> fst ` NL2"
- unfolding labeled_standard_lifting.entails_\<G>_def \<G>_F_L_def entails_\<G>_def by auto
+ by simp
-lemma (in-) subset_fst: "A \<subseteq> fst ` AB \<Longrightarrow> \<forall>x \<in> A. \<exists>y. (x,y) \<in> AB" by fastforce
-
-lemma red_inf_impl: "\<iota> \<in> labeled_lifting_w_empty_ord_family.Red_Inf_\<G> NL \<Longrightarrow> to_F \<iota> \<in> Red_Inf_\<G> (fst ` NL)"
- unfolding labeled_lifting_w_empty_ord_family.Red_Inf_\<G>_def Red_Inf_\<G>_def \<G>_Inf_L_def \<G>_F_L_def to_F_def
- using Inf_FL_to_Inf_F by auto
+lemma red_inf_impl: "\<iota> \<in> Red_Inf_\<G> NL \<Longrightarrow> to_F \<iota> \<in> no_labels.Red_Inf_\<G> (fst ` NL)"
+ unfolding Red_Inf_\<G>_def no_labels.Red_Inf_\<G>_def using Inf_FL_to_Inf_F by (auto simp: to_F_def)
(* lem:labeled-saturation *)
-lemma labeled_saturation_lifting:
- "labeled_lifting_w_empty_ord_family.lifted_calculus_with_red_crit.saturated NL \<Longrightarrow>
- empty_order_lifting.lifted_calculus_with_red_crit.saturated (fst ` NL)"
- unfolding labeled_lifting_w_empty_ord_family.lifted_calculus_with_red_crit.saturated_def
- empty_order_lifting.lifted_calculus_with_red_crit.saturated_def
- labeled_standard_lifting.Non_ground.Inf_from_def Non_ground.Inf_from_def
+lemma labeled_saturation_lifting: "saturated NL \<Longrightarrow> no_labels.saturated (fst ` NL)"
+ unfolding saturated_def no_labels.saturated_def Inf_from_def no_labels.Inf_from_def
proof clarify
fix \<iota>
assume
- subs_Red_Inf: "{\<iota> \<in> Inf_FL. set (prems_of \<iota>) \<subseteq> NL} \<subseteq> labeled_lifting_w_empty_ord_family.Red_Inf_\<G> NL" and
+ subs_Red_Inf: "{\<iota> \<in> Inf_FL. set (prems_of \<iota>) \<subseteq> NL} \<subseteq> Red_Inf_\<G> NL" and
i_in: "\<iota> \<in> Inf_F" and
i_prems: "set (prems_of \<iota>) \<subseteq> fst ` NL"
- define Lli where "Lli i \<equiv> (SOME x. ((prems_of \<iota>)!i,x) \<in> NL)" for i
+ define Lli where "Lli i = (SOME x. ((prems_of \<iota>)!i,x) \<in> NL)" for i
have [simp]:"((prems_of \<iota>)!i,Lli i) \<in> NL" if "i < length (prems_of \<iota>)" for i
- using that subset_fst[OF i_prems] unfolding Lli_def by (meson nth_mem someI_ex)
- define Ll where "Ll \<equiv> map Lli [0..<length (prems_of \<iota>)]"
+ using that i_prems unfolding Lli_def by (metis nth_mem someI_ex DomainE Domain_fst subset_eq)
+ define Ll where "Ll = map Lli [0..<length (prems_of \<iota>)]"
have Ll_length: "length Ll = length (prems_of \<iota>)" unfolding Ll_def by auto
have subs_NL: "set (zip (prems_of \<iota>) Ll) \<subseteq> NL" unfolding Ll_def by (auto simp:in_set_zip)
obtain L0 where L0: "Infer (zip (prems_of \<iota>) Ll) (concl_of \<iota>, L0) \<in> Inf_FL"
using Inf_F_to_Inf_FL[OF i_in Ll_length] ..
define \<iota>_FL where "\<iota>_FL = Infer (zip (prems_of \<iota>) Ll) (concl_of \<iota>, L0)"
then have "set (prems_of \<iota>_FL) \<subseteq> NL" using subs_NL by simp
then have "\<iota>_FL \<in> {\<iota> \<in> Inf_FL. set (prems_of \<iota>) \<subseteq> NL}" unfolding \<iota>_FL_def using L0 by blast
- then have "\<iota>_FL \<in> labeled_lifting_w_empty_ord_family.Red_Inf_\<G> NL" using subs_Red_Inf by fast
+ then have "\<iota>_FL \<in> Red_Inf_\<G> NL" using subs_Red_Inf by fast
moreover have "\<iota> = to_F \<iota>_FL" unfolding to_F_def \<iota>_FL_def using Ll_length by (cases \<iota>) auto
- ultimately show "\<iota> \<in> Red_Inf_\<G> (fst ` NL)" by (auto intro:red_inf_impl)
+ ultimately show "\<iota> \<in> no_labels.Red_Inf_\<G> (fst ` NL)" by (auto intro: red_inf_impl)
qed
(* lem:labeled-static-ref-compl *)
-lemma stat_ref_comp_to_labeled_sta_ref_comp: "static_refutational_complete_calculus Bot_F Inf_F (\<Turnstile>\<G>) Red_Inf_\<G> Red_F_\<G> \<Longrightarrow>
- static_refutational_complete_calculus Bot_FL Inf_FL (\<Turnstile>\<G>L)
- labeled_lifting_w_empty_ord_family.Red_Inf_\<G> labeled_lifting_w_empty_ord_family.Red_F_\<G>"
- unfolding static_refutational_complete_calculus_def
-proof (rule conjI impI; clarify)
- interpret calculus_with_red_crit Bot_FL Inf_FL labeled_standard_lifting.entails_\<G>
- labeled_lifting_w_empty_ord_family.Red_Inf_\<G> labeled_lifting_w_empty_ord_family.Red_F_\<G>
- by (simp add:
- labeled_lifting_w_empty_ord_family.lifted_calculus_with_red_crit.calculus_with_red_crit_axioms)
- show "calculus_with_red_crit Bot_FL Inf_FL (\<Turnstile>\<G>L) labeled_lifting_w_empty_ord_family.Red_Inf_\<G>
- labeled_lifting_w_empty_ord_family.Red_F_\<G>"
- by standard
-next
+lemma stat_ref_comp_to_labeled_sta_ref_comp:
+ assumes static:
+ "static_refutational_complete_calculus Bot_F Inf_F (\<Turnstile>\<G>) no_labels.Red_Inf_\<G> no_labels.Red_F_\<G>"
+ shows "static_refutational_complete_calculus Bot_FL Inf_FL (\<Turnstile>\<G>L) Red_Inf_\<G> Red_F_\<G>"
+proof
+ fix Bl :: \<open>'f \<times> 'l\<close> and Nl :: \<open>('f \<times> 'l) set\<close>
assume
- calc: "calculus_with_red_crit Bot_F Inf_F (\<Turnstile>\<G>) Red_Inf_\<G> Red_F_\<G>" and
- static: "static_refutational_complete_calculus_axioms Bot_F Inf_F (\<Turnstile>\<G>) Red_Inf_\<G>"
- show "static_refutational_complete_calculus_axioms Bot_FL Inf_FL (\<Turnstile>\<G>L)
- labeled_lifting_w_empty_ord_family.Red_Inf_\<G>"
- unfolding static_refutational_complete_calculus_axioms_def
- proof (intro conjI impI allI)
- fix Bl :: \<open>'f \<times> 'l\<close> and Nl :: \<open>('f \<times> 'l) set\<close>
- assume
- Bl_in: \<open>Bl \<in> Bot_FL\<close> and
- Nl_sat: \<open>labeled_lifting_w_empty_ord_family.lifted_calculus_with_red_crit.saturated Nl\<close> and
- Nl_entails_Bl: \<open>Nl \<Turnstile>\<G>L {Bl}\<close>
- have static_axioms: "B \<in> Bot_F \<longrightarrow> empty_order_lifting.lifted_calculus_with_red_crit.saturated N \<longrightarrow>
- N \<Turnstile>\<G> {B} \<longrightarrow> (\<exists>B'\<in>Bot_F. B' \<in> N)" for B N
- using static[unfolded static_refutational_complete_calculus_axioms_def] by fast
- define B where "B = fst Bl"
- have B_in: "B \<in> Bot_F" using Bl_in Bot_FL_def B_def SigmaE by force
- define N where "N = fst ` Nl"
- have N_sat: "empty_order_lifting.lifted_calculus_with_red_crit.saturated N"
- using N_def Nl_sat labeled_saturation_lifting by blast
- have N_entails_B: "N \<Turnstile>\<G> {B}"
- using Nl_entails_Bl unfolding labeled_entailment_lifting N_def B_def by force
- have "\<exists>B' \<in> Bot_F. B' \<in> N" using B_in N_sat N_entails_B static_axioms[of B N] by blast
- then obtain B' where in_Bot: "B' \<in> Bot_F" and in_N: "B' \<in> N" by force
- then have "B' \<in> fst ` Bot_FL" unfolding Bot_FL_def by fastforce
- obtain Bl' where in_Nl: "Bl' \<in> Nl" and fst_Bl': "fst Bl' = B'"
- using in_N unfolding N_def by blast
- have "Bl' \<in> Bot_FL" unfolding Bot_FL_def using fst_Bl' in_Bot vimage_fst by fastforce
- then show \<open>\<exists>Bl'\<in>Bot_FL. Bl' \<in> Nl\<close> using in_Nl by blast
- qed
+ Bl_in: \<open>Bl \<in> Bot_FL\<close> and
+ Nl_sat: \<open>saturated Nl\<close> and
+ Nl_entails_Bl: \<open>Nl \<Turnstile>\<G>L {Bl}\<close>
+ define B where "B = fst Bl"
+ have B_in: "B \<in> Bot_F" using Bl_in B_def SigmaE by force
+ define N where "N = fst ` Nl"
+ have N_sat: "no_labels.saturated N"
+ using N_def Nl_sat labeled_saturation_lifting by blast
+ have N_entails_B: "N \<Turnstile>\<G> {B}"
+ using Nl_entails_Bl unfolding labeled_entailment_lifting N_def B_def by force
+ have "\<exists>B' \<in> Bot_F. B' \<in> N" using B_in N_sat N_entails_B
+ using static[unfolded static_refutational_complete_calculus_def
+ static_refutational_complete_calculus_axioms_def] by blast
+ then obtain B' where in_Bot: "B' \<in> Bot_F" and in_N: "B' \<in> N" by force
+ then have "B' \<in> fst ` Bot_FL" by fastforce
+ obtain Bl' where in_Nl: "Bl' \<in> Nl" and fst_Bl': "fst Bl' = B'"
+ using in_N unfolding N_def by blast
+ have "Bl' \<in> Bot_FL" using fst_Bl' in_Bot vimage_fst by fastforce
+ then show \<open>\<exists>Bl'\<in>Bot_FL. Bl' \<in> Nl\<close> using in_Nl by blast
qed
end
subsection \<open>Labeled Lifting with a Family of Redundancy Criteria\<close>
locale labeled_lifting_with_red_crit_family = no_labels: standard_lifting_with_red_crit_family Inf_F
- Bot_G Inf_G Q entails_q Red_Inf_q Red_F_q Bot_F \<G>_F_q \<G>_Inf_q "\<lambda>g. Empty_Order"
+ Bot_G Q Inf_G_q entails_q Red_Inf_q Red_F_q Bot_F \<G>_F_q \<G>_Inf_q "\<lambda>g Cl Cl'. False"
for
Bot_F :: "'f set" and
Inf_F :: "'f inference set" and
Bot_G :: "'g set" and
- Q :: "'q itself" and
+ Q :: "'q set" and
entails_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set \<Rightarrow> bool" and
- Inf_G :: "'g inference set" and
+ Inf_G_q :: "'q \<Rightarrow> 'g inference set" and
Red_Inf_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g inference set" and
Red_F_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set" and
\<G>_F_q :: "'q \<Rightarrow> 'f \<Rightarrow> 'g set" and
\<G>_Inf_q :: "'q \<Rightarrow> 'f inference \<Rightarrow> 'g inference set option"
+ fixes
- l :: "'l itself" and
Inf_FL :: \<open>('f \<times> 'l) inference set\<close>
assumes
- Inf_F_to_Inf_FL: \<open>\<iota>\<^sub>F \<in> Inf_F \<Longrightarrow> length (Ll :: 'l list) = length (prems_of \<iota>\<^sub>F) \<Longrightarrow> \<exists>L0. Infer (zip (prems_of \<iota>\<^sub>F) Ll) (concl_of \<iota>\<^sub>F, L0) \<in> Inf_FL\<close> and
+ Inf_F_to_Inf_FL:
+ \<open>\<iota>\<^sub>F \<in> Inf_F \<Longrightarrow> length (Ll :: 'l list) = length (prems_of \<iota>\<^sub>F) \<Longrightarrow>
+ \<exists>L0. Infer (zip (prems_of \<iota>\<^sub>F) Ll) (concl_of \<iota>\<^sub>F, L0) \<in> Inf_FL\<close> and
Inf_FL_to_Inf_F: \<open>\<iota>\<^sub>F\<^sub>L \<in> Inf_FL \<Longrightarrow> Infer (map fst (prems_of \<iota>\<^sub>F\<^sub>L)) (fst (concl_of \<iota>\<^sub>F\<^sub>L)) \<in> Inf_F\<close>
begin
definition to_F :: \<open>('f \<times> 'l) inference \<Rightarrow> 'f inference\<close> where
\<open>to_F \<iota>\<^sub>F\<^sub>L = Infer (map fst (prems_of \<iota>\<^sub>F\<^sub>L)) (fst (concl_of \<iota>\<^sub>F\<^sub>L))\<close>
-definition Bot_FL :: \<open>('f \<times> 'l) set\<close> where \<open>Bot_FL = Bot_F \<times> UNIV\<close>
-
-definition \<G>_F_L_q :: \<open>'q \<Rightarrow> ('f \<times> 'l) \<Rightarrow> 'g set\<close> where \<open>\<G>_F_L_q q CL = \<G>_F_q q (fst CL)\<close>
+abbreviation Bot_FL :: \<open>('f \<times> 'l) set\<close> where
+ \<open>Bot_FL \<equiv> Bot_F \<times> UNIV\<close>
-definition \<G>_Inf_L_q :: \<open>'q \<Rightarrow> ('f \<times> 'l) inference \<Rightarrow> 'g inference set option\<close> where
- \<open>\<G>_Inf_L_q q \<iota>\<^sub>F\<^sub>L = \<G>_Inf_q q (to_F \<iota>\<^sub>F\<^sub>L)\<close>
+abbreviation \<G>_F_L_q :: \<open>'q \<Rightarrow> ('f \<times> 'l) \<Rightarrow> 'g set\<close> where
+ \<open>\<G>_F_L_q q CL \<equiv> \<G>_F_q q (fst CL)\<close>
-definition \<G>_set_L_q :: "'q \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> 'g set" where
+abbreviation \<G>_Inf_L_q :: \<open>'q \<Rightarrow> ('f \<times> 'l) inference \<Rightarrow> 'g inference set option\<close> where
+ \<open>\<G>_Inf_L_q q \<iota>\<^sub>F\<^sub>L \<equiv> \<G>_Inf_q q (to_F \<iota>\<^sub>F\<^sub>L)\<close>
+
+abbreviation \<G>_set_L_q :: "'q \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> 'g set" where
"\<G>_set_L_q q N \<equiv> \<Union> (\<G>_F_L_q q ` N)"
definition Red_Inf_\<G>_L_q :: "'q \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) inference set" where
- "Red_Inf_\<G>_L_q q N = {\<iota> \<in> Inf_FL. ((\<G>_Inf_L_q q \<iota>) \<noteq> None \<and> the (\<G>_Inf_L_q q \<iota>) \<subseteq> Red_Inf_q q (\<G>_set_L_q q N))
- \<or> ((\<G>_Inf_L_q q \<iota> = None) \<and> \<G>_F_L_q q (concl_of \<iota>) \<subseteq> (\<G>_set_L_q q N \<union> Red_F_q q (\<G>_set_L_q q N)))}"
+ "Red_Inf_\<G>_L_q q N = {\<iota> \<in> Inf_FL. (\<G>_Inf_L_q q \<iota> \<noteq> None \<and> the (\<G>_Inf_L_q q \<iota>) \<subseteq> Red_Inf_q q (\<G>_set_L_q q N))
+ \<or> (\<G>_Inf_L_q q \<iota> = None \<and> \<G>_F_L_q q (concl_of \<iota>) \<subseteq> \<G>_set_L_q q N \<union> Red_F_q q (\<G>_set_L_q q N))}"
-definition Red_Inf_\<G>_L_Q :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) inference set" where
- "Red_Inf_\<G>_L_Q N = \<Inter> {X N |X. X \<in> (Red_Inf_\<G>_L_q ` UNIV)}"
+abbreviation Red_Inf_\<G>_L_Q :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) inference set" where
+ "Red_Inf_\<G>_L_Q N \<equiv> (\<Inter>q \<in> Q. Red_Inf_\<G>_L_q q N)"
-definition Labeled_Empty_Order :: \<open> ('f \<times> 'l) \<Rightarrow> ('f \<times> 'l) \<Rightarrow> bool\<close> where
- "Labeled_Empty_Order C1 C2 \<equiv> False"
+abbreviation entails_\<G>_L_q :: "'q \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> bool" where
+ "entails_\<G>_L_q q N1 N2 \<equiv> entails_q q (\<G>_set_L_q q N1) (\<G>_set_L_q q N2)"
+
+lemma lifting_q:
+ assumes "q \<in> Q"
+ shows "labeled_lifting_w_wf_ord_family Bot_F Inf_F Bot_G (entails_q q) (Inf_G_q q) (Red_Inf_q q)
+ (Red_F_q q) (\<G>_F_q q) (\<G>_Inf_q q) (\<lambda>g Cl Cl'. False) Inf_FL"
+ using assms no_labels.standard_lifting_family Inf_F_to_Inf_FL Inf_FL_to_Inf_F
+ by (simp add: labeled_lifting_w_wf_ord_family_axioms_def labeled_lifting_w_wf_ord_family_def)
+
+lemma lifted_q:
+ assumes q_in: "q \<in> Q"
+ shows "standard_lifting Bot_FL Inf_FL Bot_G (Inf_G_q q) (entails_q q) (Red_Inf_q q) (Red_F_q q)
+ (\<G>_F_L_q q) (\<G>_Inf_L_q q)"
+proof -
+ interpret q_lifting: labeled_lifting_w_wf_ord_family Bot_F Inf_F Bot_G "entails_q q" "Inf_G_q q"
+ "Red_Inf_q q" "Red_F_q q" "\<G>_F_q q" "\<G>_Inf_q q" "\<lambda>g Cl Cl'. False" Inf_FL
+ using lifting_q[OF q_in] .
+ have "\<G>_Inf_L_q q = q_lifting.\<G>_Inf_L"
+ unfolding to_F_def q_lifting.to_F_def by simp
+ then show ?thesis
+ using q_lifting.standard_lifting_axioms by simp
+qed
+
+lemma ord_fam_lifted_q:
+ assumes q_in: "q \<in> Q"
+ shows "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G (entails_q q) (Inf_G_q q) (Red_Inf_q q)
+ (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g Cl Cl'. False)"
+proof -
+ interpret standard_q_lifting: standard_lifting Bot_FL Inf_FL Bot_G "Inf_G_q q" "entails_q q"
+ "Red_Inf_q q" "Red_F_q q" "\<G>_F_L_q q" "\<G>_Inf_L_q q"
+ using lifted_q[OF q_in] .
+ have "minimal_element (\<lambda>Cl Cl'. False) UNIV"
+ by (simp add: minimal_element.intro po_on_def transp_onI wfp_on_imp_irreflp_on)
+ then show ?thesis
+ using standard_q_lifting.standard_lifting_axioms
+ by (simp add: lifting_with_wf_ordering_family_axioms_def lifting_with_wf_ordering_family_def)
+qed
definition Red_F_\<G>_empty_L_q :: "'q \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
"Red_F_\<G>_empty_L_q q N = {C. \<forall>D \<in> \<G>_F_L_q q C. D \<in> Red_F_q q (\<G>_set_L_q q N) \<or>
- (\<exists>E \<in> N. Labeled_Empty_Order E C \<and> D \<in> \<G>_F_L_q q E)}"
-
-definition Red_F_\<G>_empty_L :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
- "Red_F_\<G>_empty_L N = \<Inter> {X N |X. X \<in> (Red_F_\<G>_empty_L_q ` UNIV)}"
+ (\<exists>E \<in> N. False \<and> D \<in> \<G>_F_L_q q E)}"
-definition entails_\<G>_L_q :: "'q \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> bool" where
- "entails_\<G>_L_q q N1 N2 \<equiv> entails_q q (\<G>_set_L_q q N1) (\<G>_set_L_q q N2)"
+abbreviation Red_F_\<G>_empty_L :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
+ "Red_F_\<G>_empty_L N \<equiv> (\<Inter>q \<in> Q. Red_F_\<G>_empty_L_q q N)"
-definition entails_\<G>_L_Q :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> bool" (infix "\<Turnstile>\<inter>L" 50) where
- "entails_\<G>_L_Q N1 N2 \<equiv> \<forall>q. entails_\<G>_L_q q N1 N2"
-
-lemma lifting_q: "labeled_lifting_w_wf_ord_family Bot_F Inf_F Bot_G (entails_q q) Inf_G (Red_Inf_q q)
- (Red_F_q q) (\<G>_F_q q) (\<G>_Inf_q q) (\<lambda>g. Empty_Order) Inf_FL"
+lemma all_lifted_red_crit:
+ assumes q_in: "q \<in> Q"
+ shows "calculus_with_red_crit Bot_FL Inf_FL (entails_\<G>_L_q q) (Red_Inf_\<G>_L_q q)
+ (Red_F_\<G>_empty_L_q q)"
proof -
- fix q
- show "labeled_lifting_w_wf_ord_family Bot_F Inf_F Bot_G (entails_q q) Inf_G (Red_Inf_q q)
- (Red_F_q q) (\<G>_F_q q) (\<G>_Inf_q q) (\<lambda>g. Empty_Order) Inf_FL"
- using no_labels.standard_lifting_family Inf_F_to_Inf_FL Inf_FL_to_Inf_F
- by (simp add: labeled_lifting_w_wf_ord_family_axioms_def labeled_lifting_w_wf_ord_family_def)
-qed
-
-lemma lifted_q: "standard_lifting Bot_FL Inf_FL Bot_G Inf_G (entails_q q) (Red_Inf_q q)
- (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q)"
-proof -
- fix q
- interpret q_lifting: labeled_lifting_w_wf_ord_family Bot_F Inf_F Bot_G "entails_q q" Inf_G
- "Red_Inf_q q" "Red_F_q q" "\<G>_F_q q" "\<G>_Inf_q q" "\<lambda>g. Empty_Order" l Inf_FL
- using lifting_q .
- have "\<G>_F_L_q q = q_lifting.\<G>_F_L"
- unfolding \<G>_F_L_q_def q_lifting.\<G>_F_L_def by simp
- moreover have "\<G>_Inf_L_q q = q_lifting.\<G>_Inf_L"
- unfolding \<G>_Inf_L_q_def q_lifting.\<G>_Inf_L_def to_F_def q_lifting.to_F_def by simp
- moreover have "Bot_FL = q_lifting.Bot_FL"
- unfolding Bot_FL_def q_lifting.Bot_FL_def by simp
- ultimately show "standard_lifting Bot_FL Inf_FL Bot_G Inf_G (entails_q q) (Red_Inf_q q) (Red_F_q q)
- (\<G>_F_L_q q) (\<G>_Inf_L_q q)"
- using q_lifting.labeled_standard_lifting.standard_lifting_axioms by simp
+ interpret ord_q_lifting: lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G "entails_q q"
+ "Inf_G_q q" "Red_Inf_q q" "Red_F_q q" "\<G>_F_L_q q" "\<G>_Inf_L_q q" "\<lambda>g Cl Cl'. False"
+ using ord_fam_lifted_q[OF q_in] .
+ have "Red_Inf_\<G>_L_q q = ord_q_lifting.Red_Inf_\<G>"
+ unfolding Red_Inf_\<G>_L_q_def ord_q_lifting.Red_Inf_\<G>_def by simp
+ moreover have "Red_F_\<G>_empty_L_q q = ord_q_lifting.Red_F_\<G>"
+ unfolding Red_F_\<G>_empty_L_q_def ord_q_lifting.Red_F_\<G>_def by simp
+ ultimately show ?thesis
+ using ord_q_lifting.calculus_with_red_crit_axioms by argo
qed
-lemma ord_fam_lifted_q: "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G (entails_q q) Inf_G
- (Red_Inf_q q) (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g. Labeled_Empty_Order)"
-proof -
- fix q
- interpret standard_q_lifting: standard_lifting Bot_FL Inf_FL Bot_G Inf_G "entails_q q"
- "Red_Inf_q q" "Red_F_q q" "\<G>_F_L_q q" "\<G>_Inf_L_q q"
- using lifted_q .
- have "minimal_element Labeled_Empty_Order UNIV"
- unfolding Labeled_Empty_Order_def
- by (simp add: minimal_element.intro po_on_def transp_onI wfp_on_imp_irreflp_on)
- then show "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G (entails_q q) Inf_G
- (Red_Inf_q q) (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g. Labeled_Empty_Order)"
- using standard_q_lifting.standard_lifting_axioms
- by (simp add: lifting_with_wf_ordering_family_axioms.intro lifting_with_wf_ordering_family_def)
-qed
+lemma all_lifted_cons_rel:
+ assumes q_in: "q \<in> Q"
+ shows "consequence_relation Bot_FL (entails_\<G>_L_q q)"
+ using all_lifted_red_crit calculus_with_red_crit_def q_in by blast
-lemma all_lifted_red_crit: "calculus_with_red_crit Bot_FL Inf_FL (entails_\<G>_L_q q) (Red_Inf_\<G>_L_q q)
- (Red_F_\<G>_empty_L_q q)"
-proof -
- fix q
- interpret ord_q_lifting: lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G "entails_q q" Inf_G
- "Red_Inf_q q" "Red_F_q q" "\<G>_F_L_q q" "\<G>_Inf_L_q q" "\<lambda>g. Labeled_Empty_Order"
- using ord_fam_lifted_q .
- have "entails_\<G>_L_q q = ord_q_lifting.entails_\<G>"
- unfolding entails_\<G>_L_q_def \<G>_set_L_q_def ord_q_lifting.entails_\<G>_def by simp
- moreover have "Red_Inf_\<G>_L_q q = ord_q_lifting.Red_Inf_\<G>"
- unfolding Red_Inf_\<G>_L_q_def ord_q_lifting.Red_Inf_\<G>_def \<G>_set_L_q_def by simp
- moreover have "Red_F_\<G>_empty_L_q q = ord_q_lifting.Red_F_\<G>"
- unfolding Red_F_\<G>_empty_L_q_def ord_q_lifting.Red_F_\<G>_def \<G>_set_L_q_def by simp
- ultimately show "calculus_with_red_crit Bot_FL Inf_FL (entails_\<G>_L_q q) (Red_Inf_\<G>_L_q q)
- (Red_F_\<G>_empty_L_q q)"
- using ord_q_lifting.lifted_calculus_with_red_crit.calculus_with_red_crit_axioms by argo
-qed
+sublocale consequence_relation_family Bot_FL Q entails_\<G>_L_q
+ using all_lifted_cons_rel by (simp add: consequence_relation_family.intro no_labels.Q_nonempty)
-lemma all_lifted_cons_rel: "consequence_relation Bot_FL (entails_\<G>_L_q q)"
-proof -
- fix q
- interpret q_red_crit: calculus_with_red_crit Bot_FL Inf_FL "entails_\<G>_L_q q" "Red_Inf_\<G>_L_q q"
- "Red_F_\<G>_empty_L_q q"
- using all_lifted_red_crit .
- show "consequence_relation Bot_FL (entails_\<G>_L_q q)"
- using q_red_crit.consequence_relation_axioms .
-qed
+sublocale calculus_with_red_crit_family Bot_FL Inf_FL Q entails_\<G>_L_q Red_Inf_\<G>_L_q Red_F_\<G>_empty_L_q
+ using calculus_with_red_crit_family.intro[OF consequence_relation_family_axioms]
+ by (simp add: all_lifted_red_crit calculus_with_red_crit_family_axioms_def no_labels.Q_nonempty)
-sublocale labeled_cons_rel_family: consequence_relation_family Bot_FL Q entails_\<G>_L_q
- using all_lifted_cons_rel no_labels.lifted_calc_w_red_crit_family.Bot_not_empty
- unfolding Bot_FL_def
- by (simp add: consequence_relation_family.intro)
+notation no_labels.entails_\<G>_Q (infix "\<Turnstile>\<inter>\<G>" 50)
-sublocale with_labels: calculus_with_red_crit_family Bot_FL Inf_FL Q entails_\<G>_L_q Red_Inf_\<G>_L_q
- Red_F_\<G>_empty_L_q
- using calculus_with_red_crit_family.intro[OF labeled_cons_rel_family.consequence_relation_family_axioms]
- all_lifted_cons_rel
- by (simp add: all_lifted_red_crit calculus_with_red_crit_family_axioms_def)
+abbreviation entails_\<G>_L_Q :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> bool" (infix "\<Turnstile>\<inter>\<G>L" 50) where
+ "(\<Turnstile>\<inter>\<G>L) \<equiv> entails_Q"
-notation "no_labels.entails_\<G>_Q" (infix "\<Turnstile>\<inter>" 50)
+lemmas entails_\<G>_L_Q_def = entails_Q_def
(* lem:labeled-consequence-intersection *)
-lemma labeled_entailment_lifting: "NL1 \<Turnstile>\<inter>L NL2 \<longleftrightarrow> fst ` NL1 \<Turnstile>\<inter> fst ` NL2"
- unfolding no_labels.entails_\<G>_Q_def no_labels.entails_\<G>_q_def no_labels.\<G>_set_q_def
- entails_\<G>_L_Q_def entails_\<G>_L_q_def \<G>_set_L_q_def \<G>_F_L_q_def
- by force
+lemma labeled_entailment_lifting: "NL1 \<Turnstile>\<inter>\<G>L NL2 \<longleftrightarrow> fst ` NL1 \<Turnstile>\<inter>\<G> fst ` NL2"
+ unfolding no_labels.entails_\<G>_Q_def entails_\<G>_L_Q_def by force
-lemma subset_fst: "A \<subseteq> fst ` AB \<Longrightarrow> \<forall>x \<in> A. \<exists>y. (x,y) \<in> AB" by fastforce
-
-lemma red_inf_impl: "\<iota> \<in> with_labels.Red_Inf_Q NL \<Longrightarrow>
- to_F \<iota> \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` NL)"
- unfolding no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q_def with_labels.Red_Inf_Q_def
+lemma red_inf_impl: "\<iota> \<in> Red_Inf_Q NL \<Longrightarrow> to_F \<iota> \<in> no_labels.Red_Inf_\<G>_Q (fst ` NL)"
+ unfolding no_labels.Red_Inf_\<G>_Q_def Red_Inf_Q_def
proof clarify
fix X Xa q
assume
- i_in_inter: "\<iota> \<in> \<Inter> {X NL |X. X \<in> Red_Inf_\<G>_L_q ` UNIV}"
- have i_in_q: "\<iota> \<in> Red_Inf_\<G>_L_q q NL" using i_in_inter image_eqI by blast
+ q_in: "q \<in> Q" and
+ i_in_inter: "\<iota> \<in> (\<Inter>q \<in> Q. Red_Inf_\<G>_L_q q NL)"
+ have i_in_q: "\<iota> \<in> Red_Inf_\<G>_L_q q NL" using q_in i_in_inter image_eqI by blast
then have i_in: "\<iota> \<in> Inf_FL" unfolding Red_Inf_\<G>_L_q_def by blast
have to_F_in: "to_F \<iota> \<in> Inf_F" unfolding to_F_def using Inf_FL_to_Inf_F[OF i_in] .
have rephrase1: "(\<Union>CL\<in>NL. \<G>_F_q q (fst CL)) = (\<Union> (\<G>_F_q q ` fst ` NL))" by blast
have rephrase2: "fst (concl_of \<iota>) = concl_of (to_F \<iota>)"
unfolding concl_of_def to_F_def by simp
- have subs_red: "((\<G>_Inf_L_q q \<iota>) \<noteq> None \<and> the (\<G>_Inf_L_q q \<iota>) \<subseteq> Red_Inf_q q (\<G>_set_L_q q NL))
- \<or> ((\<G>_Inf_L_q q \<iota> = None) \<and> \<G>_F_L_q q (concl_of \<iota>) \<subseteq> (\<G>_set_L_q q NL \<union> Red_F_q q (\<G>_set_L_q q NL)))"
+ have subs_red: "(\<G>_Inf_L_q q \<iota> \<noteq> None \<and> the (\<G>_Inf_L_q q \<iota>) \<subseteq> Red_Inf_q q (\<G>_set_L_q q NL))
+ \<or> (\<G>_Inf_L_q q \<iota> = None \<and> \<G>_F_L_q q (concl_of \<iota>) \<subseteq> \<G>_set_L_q q NL \<union> Red_F_q q (\<G>_set_L_q q NL))"
using i_in_q unfolding Red_Inf_\<G>_L_q_def by blast
- then have to_F_subs_red: "((\<G>_Inf_q q (to_F \<iota>)) \<noteq> None \<and>
+ then have to_F_subs_red: "(\<G>_Inf_q q (to_F \<iota>) \<noteq> None \<and>
the (\<G>_Inf_q q (to_F \<iota>)) \<subseteq> Red_Inf_q q (no_labels.\<G>_set_q q (fst ` NL)))
- \<or> ((\<G>_Inf_q q (to_F \<iota>) = None) \<and>
- \<G>_F_q q (concl_of (to_F \<iota>)) \<subseteq> (no_labels.\<G>_set_q q (fst ` NL) \<union> Red_F_q q (no_labels.\<G>_set_q q (fst ` NL))))"
- unfolding \<G>_Inf_L_q_def \<G>_set_L_q_def no_labels.\<G>_set_q_def \<G>_F_L_q_def
+ \<or> (\<G>_Inf_q q (to_F \<iota>) = None \<and>
+ \<G>_F_q q (concl_of (to_F \<iota>))
+ \<subseteq> no_labels.\<G>_set_q q (fst ` NL) \<union> Red_F_q q (no_labels.\<G>_set_q q (fst ` NL)))"
using rephrase1 rephrase2 by metis
then show "to_F \<iota> \<in> no_labels.Red_Inf_\<G>_q q (fst ` NL)"
using to_F_in unfolding no_labels.Red_Inf_\<G>_q_def by simp
qed
(* lem:labeled-saturation-intersection *)
-lemma labeled_family_saturation_lifting: "with_labels.inter_red_crit_calculus.saturated NL \<Longrightarrow>
- no_labels.lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated (fst ` NL)"
- unfolding with_labels.inter_red_crit_calculus.saturated_def
- no_labels.lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated_def
- with_labels.Inf_from_def no_labels.Non_ground.Inf_from_def
+lemma labeled_family_saturation_lifting: "saturated NL \<Longrightarrow> no_labels.saturated (fst ` NL)"
+ unfolding saturated_def no_labels.saturated_def Inf_from_def no_labels.Inf_from_def
proof clarify
fix \<iota>F
assume
- labeled_sat: "{\<iota> \<in> Inf_FL. set (prems_of \<iota>) \<subseteq> NL} \<subseteq> with_labels.Red_Inf_Q NL" and
+ labeled_sat: "{\<iota> \<in> Inf_FL. set (prems_of \<iota>) \<subseteq> NL} \<subseteq> Red_Inf_Q NL" and
iF_in: "\<iota>F \<in> Inf_F" and
iF_prems: "set (prems_of \<iota>F) \<subseteq> fst ` NL"
- define Lli where "Lli i \<equiv> (SOME x. ((prems_of \<iota>F)!i,x) \<in> NL)" for i
+ define Lli where "Lli i = (SOME x. ((prems_of \<iota>F)!i,x) \<in> NL)" for i
have [simp]:"((prems_of \<iota>F)!i,Lli i) \<in> NL" if "i < length (prems_of \<iota>F)" for i
- using that subset_fst[OF iF_prems] nth_mem someI_ex unfolding Lli_def
- by metis
- define Ll where "Ll \<equiv> map Lli [0..<length (prems_of \<iota>F)]"
+ using that iF_prems nth_mem someI_ex unfolding Lli_def by (metis DomainE Domain_fst subset_eq)
+ define Ll where "Ll = map Lli [0..<length (prems_of \<iota>F)]"
have Ll_length: "length Ll = length (prems_of \<iota>F)" unfolding Ll_def by auto
have subs_NL: "set (zip (prems_of \<iota>F) Ll) \<subseteq> NL" unfolding Ll_def by (auto simp:in_set_zip)
obtain L0 where L0: "Infer (zip (prems_of \<iota>F) Ll) (concl_of \<iota>F, L0) \<in> Inf_FL"
using Inf_F_to_Inf_FL[OF iF_in Ll_length] ..
define \<iota>FL where "\<iota>FL = Infer (zip (prems_of \<iota>F) Ll) (concl_of \<iota>F, L0)"
then have "set (prems_of \<iota>FL) \<subseteq> NL" using subs_NL by simp
then have "\<iota>FL \<in> {\<iota> \<in> Inf_FL. set (prems_of \<iota>) \<subseteq> NL}" unfolding \<iota>FL_def using L0 by blast
- then have "\<iota>FL \<in> with_labels.Red_Inf_Q NL" using labeled_sat by fast
+ then have "\<iota>FL \<in> Red_Inf_Q NL" using labeled_sat by fast
moreover have "\<iota>F = to_F \<iota>FL" unfolding to_F_def \<iota>FL_def using Ll_length by (cases \<iota>F) auto
- ultimately show "\<iota>F \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` NL)"
- by (auto intro:red_inf_impl)
+ ultimately show "\<iota>F \<in> no_labels.Red_Inf_\<G>_Q (fst ` NL)"
+ by (auto intro: red_inf_impl)
qed
(* thm:labeled-static-ref-compl-intersection *)
-theorem labeled_static_ref: "static_refutational_complete_calculus Bot_F Inf_F (\<Turnstile>\<inter>)
- no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q
- no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_F_Q
- \<Longrightarrow> static_refutational_complete_calculus Bot_FL Inf_FL (\<Turnstile>\<inter>L) with_labels.Red_Inf_Q with_labels.Red_F_Q"
- unfolding static_refutational_complete_calculus_def
-proof (rule conjI impI; clarify)
- show "calculus_with_red_crit Bot_FL Inf_FL (\<Turnstile>\<inter>L) with_labels.Red_Inf_Q with_labels.Red_F_Q"
- using with_labels.inter_red_crit_calculus.calculus_with_red_crit_axioms
- unfolding labeled_cons_rel_family.entails_Q_def entails_\<G>_L_Q_def .
-next
+theorem labeled_static_ref:
+ assumes calc: "static_refutational_complete_calculus Bot_F Inf_F (\<Turnstile>\<inter>\<G>) no_labels.Red_Inf_\<G>_Q
+ no_labels.Red_F_\<G>_empty"
+ shows "static_refutational_complete_calculus Bot_FL Inf_FL (\<Turnstile>\<inter>\<G>L) Red_Inf_Q Red_F_Q"
+proof
+ fix Bl :: \<open>'f \<times> 'l\<close> and Nl :: \<open>('f \<times> 'l) set\<close>
assume
- calc: "calculus_with_red_crit Bot_F Inf_F (\<Turnstile>\<inter>)
- no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q
- no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_F_Q" and
- static: "static_refutational_complete_calculus_axioms Bot_F Inf_F (\<Turnstile>\<inter>)
- no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q"
- show "static_refutational_complete_calculus_axioms Bot_FL Inf_FL (\<Turnstile>\<inter>L) with_labels.Red_Inf_Q"
- unfolding static_refutational_complete_calculus_axioms_def
- proof (intro conjI impI allI)
- fix Bl :: \<open>'f \<times> 'l\<close> and Nl :: \<open>('f \<times> 'l) set\<close>
- assume
- Bl_in: \<open>Bl \<in> Bot_FL\<close> and
- Nl_sat: \<open>with_labels.inter_red_crit_calculus.saturated Nl\<close> and
- Nl_entails_Bl: \<open>Nl \<Turnstile>\<inter>L {Bl}\<close>
- have static_axioms: "B \<in> Bot_F \<longrightarrow>
- no_labels.lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated N \<longrightarrow>
- N \<Turnstile>\<inter> {B} \<longrightarrow> (\<exists>B'\<in>Bot_F. B' \<in> N)" for B N
- using static[unfolded static_refutational_complete_calculus_axioms_def] by fast
- define B where "B = fst Bl"
- have B_in: "B \<in> Bot_F" using Bl_in Bot_FL_def B_def SigmaE by force
- define N where "N = fst ` Nl"
- have N_sat: "no_labels.lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated N"
- using N_def Nl_sat labeled_family_saturation_lifting by blast
- have N_entails_B: "N \<Turnstile>\<inter> {B}"
- using Nl_entails_Bl unfolding labeled_entailment_lifting N_def B_def by force
- have "\<exists>B' \<in> Bot_F. B' \<in> N" using B_in N_sat N_entails_B static_axioms[of B N] by blast
- then obtain B' where in_Bot: "B' \<in> Bot_F" and in_N: "B' \<in> N" by force
- then have "B' \<in> fst ` Bot_FL" unfolding Bot_FL_def by fastforce
- obtain Bl' where in_Nl: "Bl' \<in> Nl" and fst_Bl': "fst Bl' = B'"
- using in_N unfolding N_def by blast
- have "Bl' \<in> Bot_FL" unfolding Bot_FL_def using fst_Bl' in_Bot vimage_fst by fastforce
- then show \<open>\<exists>Bl'\<in>Bot_FL. Bl' \<in> Nl\<close> using in_Nl by blast
- qed
+ Bl_in: \<open>Bl \<in> Bot_FL\<close> and
+ Nl_sat: \<open>saturated Nl\<close> and
+ Nl_entails_Bl: \<open>Nl \<Turnstile>\<inter>\<G>L {Bl}\<close>
+ define B where "B = fst Bl"
+ have B_in: "B \<in> Bot_F" using Bl_in B_def SigmaE by force
+ define N where "N = fst ` Nl"
+ have N_sat: "no_labels.saturated N"
+ using N_def Nl_sat labeled_family_saturation_lifting by blast
+ have N_entails_B: "N \<Turnstile>\<inter>\<G> {B}"
+ using Nl_entails_Bl unfolding labeled_entailment_lifting N_def B_def by force
+ have "\<exists>B' \<in> Bot_F. B' \<in> N" using B_in N_sat N_entails_B
+ calc[unfolded static_refutational_complete_calculus_def
+ static_refutational_complete_calculus_axioms_def]
+ by blast
+ then obtain B' where in_Bot: "B' \<in> Bot_F" and in_N: "B' \<in> N" by force
+ then have "B' \<in> fst ` Bot_FL" by fastforce
+ obtain Bl' where in_Nl: "Bl' \<in> Nl" and fst_Bl': "fst Bl' = B'"
+ using in_N unfolding N_def by blast
+ have "Bl' \<in> Bot_FL" using fst_Bl' in_Bot vimage_fst by fastforce
+ then show \<open>\<exists>Bl'\<in>Bot_FL. Bl' \<in> Nl\<close> using in_Nl by blast
qed
end
end
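
For orientation, the two main results of the labeled theory above can be restated informally in LaTeX notation (a summary sketch only, not Isabelle source; here $\models^{\cap\mathcal{G}}_L$ and $\models^{\cap\mathcal{G}}$ stand for the Isabelle relations entails_\<G>_L_Q and no_labels.entails_\<G>_Q, and $\mathrm{fst}[\,\cdot\,]$ denotes the image under fst, written fst ` _ in the sources):

\[
  NL_1 \models^{\cap\mathcal{G}}_L NL_2 \;\longleftrightarrow\; \mathrm{fst}[NL_1] \models^{\cap\mathcal{G}} \mathrm{fst}[NL_2]
\]
\[
  \text{static-ref-complete}(Bot_F,\, Inf_F,\, \models^{\cap\mathcal{G}},\, \mathit{Red}_{Inf},\, \mathit{Red}_F)
  \;\Longrightarrow\;
  \text{static-ref-complete}(Bot_{FL},\, Inf_{FL},\, \models^{\cap\mathcal{G}}_L,\, \mathit{Red}^Q_{Inf},\, \mathit{Red}^Q_F)
\]

The first line is lemma labeled_entailment_lifting, the second is theorem labeled_static_ref from the theory above.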
diff --git a/thys/Saturation_Framework/Lifting_to_Non_Ground_Calculi.thy b/thys/Saturation_Framework/Lifting_to_Non_Ground_Calculi.thy
--- a/thys/Saturation_Framework/Lifting_to_Non_Ground_Calculi.thy
+++ b/thys/Saturation_Framework/Lifting_to_Non_Ground_Calculi.thy
@@ -1,865 +1,752 @@
(* Title: Lifting to Non-Ground Calculi of the Saturation Framework
* Author: Sophie Tourret <stourret at mpi-inf.mpg.de>, 2018-2020 *)
section \<open>Lifting to Non-ground Calculi\<close>
text \<open>Sections 3.1 to 3.3 of the report are covered by the current section.
Various forms of lifting are proven correct. They make it possible to obtain the dynamic
refutational completeness of a non-ground calculus from the static refutational
completeness of its ground counterpart.\<close>
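
As a reading aid, the central lifted notions defined in this theory can be summarized in informal LaTeX notation (a sketch only; $\mathcal{G}_F$, $\mathcal{G}_{Inf}$, $\models_G$, $\mathit{Red}^G$ and $\prec_D$ abbreviate the Isabelle constants \<G>_F, \<G>_Inf, entails_G, Red_Inf_G/Red_F_G and Prec_F_g):

\[
  N_1 \models_{\mathcal{G}} N_2 \;\longleftrightarrow\; \mathcal{G}(N_1) \models_G \mathcal{G}(N_2),
  \qquad \mathcal{G}(N) = \bigcup_{C \in N} \mathcal{G}_F(C)
\]
\[
  \mathit{Red}^{\mathcal{G}}_{Inf}(N) = \{\iota \in Inf_F \mid
    \mathcal{G}_{Inf}(\iota) \neq \mathit{None} \wedge \mathit{the}(\mathcal{G}_{Inf}(\iota)) \subseteq \mathit{Red}^G_{Inf}(\mathcal{G}(N))
    \;\vee\;
    \mathcal{G}_{Inf}(\iota) = \mathit{None} \wedge \mathcal{G}_F(\mathit{concl}(\iota)) \subseteq \mathcal{G}(N) \cup \mathit{Red}^G_F(\mathcal{G}(N))\}
\]
\[
  \mathit{Red}^{\mathcal{G}}_F(N) = \{C \mid \forall D \in \mathcal{G}_F(C).\;
    D \in \mathit{Red}^G_F(\mathcal{G}(N)) \;\vee\; \exists E \in N.\; E \prec_D C \wedge D \in \mathcal{G}_F(E)\}
\]

These correspond to the abbreviation entails_\<G> and the definitions Red_Inf_\<G> and Red_F_\<G> below.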
theory Lifting_to_Non_Ground_Calculi
imports
Calculi
Well_Quasi_Orders.Minimal_Elements
begin
subsection \<open>Standard Lifting\<close>
-locale standard_lifting = Non_ground: inference_system Inf_F +
- Ground: calculus_with_red_crit Bot_G Inf_G entails_G Red_Inf_G Red_F_G
+locale standard_lifting = inference_system Inf_F +
+ ground: calculus_with_red_crit Bot_G Inf_G entails_G Red_Inf_G Red_F_G
for
Bot_F :: \<open>'f set\<close> and
Inf_F :: \<open>'f inference set\<close> and
Bot_G :: \<open>'g set\<close> and
Inf_G :: \<open>'g inference set\<close> and
entails_G :: \<open>'g set \<Rightarrow> 'g set \<Rightarrow> bool\<close> (infix "\<Turnstile>G" 50) and
Red_Inf_G :: \<open>'g set \<Rightarrow> 'g inference set\<close> and
Red_F_G :: \<open>'g set \<Rightarrow> 'g set\<close>
+ fixes
\<G>_F :: \<open>'f \<Rightarrow> 'g set\<close> and
\<G>_Inf :: \<open>'f inference \<Rightarrow> 'g inference set option\<close>
assumes
Bot_F_not_empty: "Bot_F \<noteq> {}" and
Bot_map_not_empty: \<open>B \<in> Bot_F \<Longrightarrow> \<G>_F B \<noteq> {}\<close> and
Bot_map: \<open>B \<in> Bot_F \<Longrightarrow> \<G>_F B \<subseteq> Bot_G\<close> and
Bot_cond: \<open>\<G>_F C \<inter> Bot_G \<noteq> {} \<longrightarrow> C \<in> Bot_F\<close> and
inf_map: \<open>\<iota> \<in> Inf_F \<Longrightarrow> \<G>_Inf \<iota> \<noteq> None \<Longrightarrow> the (\<G>_Inf \<iota>) \<subseteq> Red_Inf_G (\<G>_F (concl_of \<iota>))\<close>
begin
abbreviation \<G>_set :: \<open>'f set \<Rightarrow> 'g set\<close> where
\<open>\<G>_set N \<equiv> \<Union> (\<G>_F ` N)\<close>
lemma \<G>_subset: \<open>N1 \<subseteq> N2 \<Longrightarrow> \<G>_set N1 \<subseteq> \<G>_set N2\<close> by auto
-definition entails_\<G> :: \<open>'f set \<Rightarrow> 'f set \<Rightarrow> bool\<close> (infix "\<Turnstile>\<G>" 50) where
+abbreviation entails_\<G> :: \<open>'f set \<Rightarrow> 'f set \<Rightarrow> bool\<close> (infix "\<Turnstile>\<G>" 50) where
\<open>N1 \<Turnstile>\<G> N2 \<equiv> \<G>_set N1 \<Turnstile>G \<G>_set N2\<close>
lemma subs_Bot_G_entails:
assumes
not_empty: \<open>sB \<noteq> {}\<close> and
in_bot: \<open>sB \<subseteq> Bot_G\<close>
shows \<open>sB \<Turnstile>G N\<close>
proof -
have \<open>\<exists>B. B \<in> sB\<close> using not_empty by auto
then obtain B where B_in: \<open>B \<in> sB\<close> by auto
- then have r_trans: \<open>{B} \<Turnstile>G N\<close> using Ground.bot_implies_all in_bot by auto
- have l_trans: \<open>sB \<Turnstile>G {B}\<close> using B_in Ground.subset_entailed by auto
- then show ?thesis using r_trans Ground.entails_trans[of sB "{B}"] by auto
+ then have r_trans: \<open>{B} \<Turnstile>G N\<close> using ground.bot_entails_all in_bot by auto
+ have l_trans: \<open>sB \<Turnstile>G {B}\<close> using B_in ground.subset_entailed by auto
+ then show ?thesis using r_trans ground.entails_trans[of sB "{B}"] by auto
qed
(* lem:derived-consequence-relation *)
-sublocale lifted_consequence_relation: consequence_relation
- where Bot=Bot_F and entails=entails_\<G>
+sublocale consequence_relation Bot_F entails_\<G>
proof
show "Bot_F \<noteq> {}" using Bot_F_not_empty .
next
show \<open>B\<in>Bot_F \<Longrightarrow> {B} \<Turnstile>\<G> N\<close> for B N
proof -
assume \<open>B \<in> Bot_F\<close>
then show \<open>{B} \<Turnstile>\<G> N\<close>
- using Bot_map Ground.bot_implies_all[of _ "\<G>_set N"] subs_Bot_G_entails Bot_map_not_empty
- unfolding entails_\<G>_def
+ using Bot_map ground.bot_entails_all[of _ "\<G>_set N"] subs_Bot_G_entails Bot_map_not_empty
by auto
qed
next
fix N1 N2 :: \<open>'f set\<close>
assume
\<open>N2 \<subseteq> N1\<close>
- then show \<open>N1 \<Turnstile>\<G> N2\<close> using entails_\<G>_def \<G>_subset Ground.subset_entailed by auto
+ then show \<open>N1 \<Turnstile>\<G> N2\<close> using \<G>_subset ground.subset_entailed by auto
next
fix N1 N2
assume
N1_entails_C: \<open>\<forall>C \<in> N2. N1 \<Turnstile>\<G> {C}\<close>
- show \<open>N1 \<Turnstile>\<G> N2\<close> using Ground.all_formulas_entailed N1_entails_C entails_\<G>_def
- by (smt UN_E UN_I Ground.entail_set_all_formulas singletonI)
+ show \<open>N1 \<Turnstile>\<G> N2\<close> using ground.all_formulas_entailed N1_entails_C
+ by (smt UN_E UN_I ground.entail_set_all_formulas singletonI)
next
fix N1 N2 N3
assume
\<open>N1 \<Turnstile>\<G> N2\<close> and \<open>N2 \<Turnstile>\<G> N3\<close>
- then show \<open>N1 \<Turnstile>\<G> N3\<close> using entails_\<G>_def Ground.entails_trans by blast
+ then show \<open>N1 \<Turnstile>\<G> N3\<close> using ground.entails_trans by blast
qed
end
subsection \<open>Strong Standard Lifting\<close>
(* rmk:strong-standard-lifting *)
-locale strong_standard_lifting = Non_ground: inference_system Inf_F +
- Ground: calculus_with_red_crit Bot_G Inf_G entails_G Red_Inf_G Red_F_G
+locale strong_standard_lifting = inference_system Inf_F +
+ ground: calculus_with_red_crit Bot_G Inf_G entails_G Red_Inf_G Red_F_G
for
Bot_F :: \<open>'f set\<close> and
Inf_F :: \<open>'f inference set\<close> and
Bot_G :: \<open>'g set\<close> and
Inf_G :: \<open>'g inference set\<close> and
entails_G :: \<open>'g set \<Rightarrow> 'g set \<Rightarrow> bool\<close> (infix "\<Turnstile>G" 50) and
Red_Inf_G :: \<open>'g set \<Rightarrow> 'g inference set\<close> and
Red_F_G :: \<open>'g set \<Rightarrow> 'g set\<close>
+ fixes
\<G>_F :: \<open>'f \<Rightarrow> 'g set\<close> and
\<G>_Inf :: \<open>'f inference \<Rightarrow> 'g inference set option\<close>
assumes
Bot_F_not_empty: "Bot_F \<noteq> {}" and
Bot_map_not_empty: \<open>B \<in> Bot_F \<Longrightarrow> \<G>_F B \<noteq> {}\<close> and
Bot_map: \<open>B \<in> Bot_F \<Longrightarrow> \<G>_F B \<subseteq> Bot_G\<close> and
Bot_cond: \<open>\<G>_F C \<inter> Bot_G \<noteq> {} \<longrightarrow> C \<in> Bot_F\<close> and
strong_inf_map: \<open>\<iota> \<in> Inf_F \<Longrightarrow> \<G>_Inf \<iota> \<noteq> None \<Longrightarrow> concl_of ` (the (\<G>_Inf \<iota>)) \<subseteq> (\<G>_F (concl_of \<iota>))\<close> and
inf_map_in_Inf: \<open>\<iota> \<in> Inf_F \<Longrightarrow> \<G>_Inf \<iota> \<noteq> None \<Longrightarrow> the (\<G>_Inf \<iota>) \<subseteq> Inf_G\<close>
begin
-sublocale standard_lifting
+sublocale standard_lifting Bot_F Inf_F Bot_G Inf_G "(\<Turnstile>G)" Red_Inf_G Red_F_G \<G>_F \<G>_Inf
proof
show "Bot_F \<noteq> {}" using Bot_F_not_empty .
next
fix B
assume b_in: "B \<in> Bot_F"
show "\<G>_F B \<noteq> {}" using Bot_map_not_empty[OF b_in] .
next
fix B
assume b_in: "B \<in> Bot_F"
show "\<G>_F B \<subseteq> Bot_G" using Bot_map[OF b_in] .
next
show "\<And>C. \<G>_F C \<inter> Bot_G \<noteq> {} \<longrightarrow> C \<in> Bot_F" using Bot_cond .
next
fix \<iota>
assume i_in: "\<iota> \<in> Inf_F" and
some_g: "\<G>_Inf \<iota> \<noteq> None"
show "the (\<G>_Inf \<iota>) \<subseteq> Red_Inf_G (\<G>_F (concl_of \<iota>))"
proof
fix \<iota>G
assume ig_in1: "\<iota>G \<in> the (\<G>_Inf \<iota>)"
then have ig_in2: "\<iota>G \<in> Inf_G" using inf_map_in_Inf[OF i_in some_g] by blast
show "\<iota>G \<in> Red_Inf_G (\<G>_F (concl_of \<iota>))"
- using strong_inf_map[OF i_in some_g] Ground.Red_Inf_of_Inf_to_N[OF ig_in2]
+ using strong_inf_map[OF i_in some_g] ground.Red_Inf_of_Inf_to_N[OF ig_in2]
ig_in1 by blast
qed
qed
end
subsection \<open>Lifting with a Family of Well-founded Orderings\<close>
locale lifting_with_wf_ordering_family =
standard_lifting Bot_F Inf_F Bot_G Inf_G entails_G Red_Inf_G Red_F_G \<G>_F \<G>_Inf
for
Bot_F :: \<open>'f set\<close> and
Inf_F :: \<open>'f inference set\<close> and
Bot_G :: \<open>'g set\<close> and
entails_G :: \<open>'g set \<Rightarrow> 'g set \<Rightarrow> bool\<close> (infix "\<Turnstile>G" 50) and
Inf_G :: \<open>'g inference set\<close> and
Red_Inf_G :: \<open>'g set \<Rightarrow> 'g inference set\<close> and
Red_F_G :: \<open>'g set \<Rightarrow> 'g set\<close> and
\<G>_F :: "'f \<Rightarrow> 'g set" and
\<G>_Inf :: "'f inference \<Rightarrow> 'g inference set option"
+ fixes
Prec_F_g :: \<open>'g \<Rightarrow> 'f \<Rightarrow> 'f \<Rightarrow> bool\<close>
assumes
all_wf: "minimal_element (Prec_F_g g) UNIV"
begin
definition Red_Inf_\<G> :: "'f set \<Rightarrow> 'f inference set" where
\<open>Red_Inf_\<G> N = {\<iota> \<in> Inf_F. (\<G>_Inf \<iota> \<noteq> None \<and> the (\<G>_Inf \<iota>) \<subseteq> Red_Inf_G (\<G>_set N))
- \<or> (\<G>_Inf \<iota> = None \<and> \<G>_F (concl_of \<iota>) \<subseteq> (\<G>_set N \<union> Red_F_G (\<G>_set N)))}\<close>
+ \<or> (\<G>_Inf \<iota> = None \<and> \<G>_F (concl_of \<iota>) \<subseteq> \<G>_set N \<union> Red_F_G (\<G>_set N))}\<close>
definition Red_F_\<G> :: "'f set \<Rightarrow> 'f set" where
\<open>Red_F_\<G> N = {C. \<forall>D \<in> \<G>_F C. D \<in> Red_F_G (\<G>_set N) \<or> (\<exists>E \<in> N. Prec_F_g D E C \<and> D \<in> \<G>_F E)}\<close>
lemma Prec_trans:
assumes
\<open>Prec_F_g D A B\<close> and
\<open>Prec_F_g D B C\<close>
shows
\<open>Prec_F_g D A C\<close>
using minimal_element.po assms unfolding po_on_def transp_on_def by (smt UNIV_I all_wf)
lemma prop_nested_in_set: "D \<in> P C \<Longrightarrow> C \<in> {C. \<forall>D \<in> P C. A D \<or> B C D} \<Longrightarrow> A D \<or> B C D"
by blast
(* lem:wolog-C'-nonredundant *)
lemma Red_F_\<G>_equiv_def:
\<open>Red_F_\<G> N = {C. \<forall>Di \<in> \<G>_F C. Di \<in> Red_F_G (\<G>_set N) \<or>
(\<exists>E \<in> (N - Red_F_\<G> N). Prec_F_g Di E C \<and> Di \<in> \<G>_F E)}\<close>
-proof (rule;clarsimp)
+proof (rule; clarsimp)
fix C D
assume
C_in: \<open>C \<in> Red_F_\<G> N\<close> and
D_in: \<open>D \<in> \<G>_F C\<close> and
not_sec_case: \<open>\<forall>E \<in> N - Red_F_\<G> N. Prec_F_g D E C \<longrightarrow> D \<notin> \<G>_F E\<close>
have C_in_unfolded: "C \<in> {C. \<forall>Di \<in> \<G>_F C. Di \<in> Red_F_G (\<G>_set N) \<or>
(\<exists>E\<in>N. Prec_F_g Di E C \<and> Di \<in> \<G>_F E)}"
using C_in unfolding Red_F_\<G>_def .
have neg_not_sec_case: \<open>\<not> (\<exists>E\<in>N - Red_F_\<G> N. Prec_F_g D E C \<and> D \<in> \<G>_F E)\<close>
using not_sec_case by clarsimp
have unfol_C_D: \<open>D \<in> Red_F_G (\<G>_set N) \<or> (\<exists>E\<in>N. Prec_F_g D E C \<and> D \<in> \<G>_F E)\<close>
using prop_nested_in_set[of D \<G>_F C "\<lambda>x. x \<in> Red_F_G (\<Union> (\<G>_F ` N))"
"\<lambda>x y. \<exists>E \<in> N. Prec_F_g y E x \<and> y \<in> \<G>_F E", OF D_in C_in_unfolded] by blast
show \<open>D \<in> Red_F_G (\<G>_set N)\<close>
proof (rule ccontr)
assume contrad: \<open>D \<notin> Red_F_G (\<G>_set N)\<close>
have non_empty: \<open>\<exists>E\<in>N. Prec_F_g D E C \<and> D \<in> \<G>_F E\<close> using contrad unfol_C_D by auto
define B where \<open>B = {E \<in> N. Prec_F_g D E C \<and> D \<in> \<G>_F E}\<close>
then have B_non_empty: \<open>B \<noteq> {}\<close> using non_empty by auto
interpret minimal_element "Prec_F_g D" UNIV using all_wf[of D] .
obtain F :: 'f where F: \<open>F = min_elt B\<close> by auto
then have D_in_F: \<open>D \<in> \<G>_F F\<close> unfolding B_def using non_empty
by (smt Sup_UNIV Sup_upper UNIV_I contra_subsetD empty_iff empty_subsetI mem_Collect_eq
min_elt_mem unfol_C_D)
have F_prec: \<open>Prec_F_g D F C\<close> using F min_elt_mem[of B, OF _ B_non_empty] unfolding B_def by auto
have F_not_in: \<open>F \<notin> Red_F_\<G> N\<close>
proof
assume F_in: \<open>F \<in> Red_F_\<G> N\<close>
have unfol_F_D: \<open>D \<in> Red_F_G (\<G>_set N) \<or> (\<exists>G\<in>N. Prec_F_g D G F \<and> D \<in> \<G>_F G)\<close>
using F_in D_in_F unfolding Red_F_\<G>_def by auto
then have \<open>\<exists>G\<in>N. Prec_F_g D G F \<and> D \<in> \<G>_F G\<close> using contrad D_in unfolding Red_F_\<G>_def by auto
then obtain G where G_in: \<open>G \<in> N\<close> and G_prec: \<open>Prec_F_g D G F\<close> and G_map: \<open>D \<in> \<G>_F G\<close> by auto
have \<open>Prec_F_g D G C\<close> using G_prec F_prec Prec_trans by blast
then have \<open>G \<in> B\<close> unfolding B_def using G_in G_map by auto
then show \<open>False\<close> using F G_prec min_elt_minimal[of B G, OF _ B_non_empty] by auto
qed
have \<open>F \<in> N\<close> using F by (metis B_def B_non_empty mem_Collect_eq min_elt_mem top_greatest)
then have \<open>F \<in> N - Red_F_\<G> N\<close> using F_not_in by auto
then show \<open>False\<close>
using D_in_F neg_not_sec_case F_prec by blast
qed
next
fix C
assume only_if: \<open>\<forall>D\<in>\<G>_F C. D \<in> Red_F_G (\<G>_set N) \<or> (\<exists>E\<in>N - Red_F_\<G> N. Prec_F_g D E C \<and> D \<in> \<G>_F E)\<close>
show \<open>C \<in> Red_F_\<G> N\<close> unfolding Red_F_\<G>_def using only_if by auto
qed
(* lem:lifting-main-technical *)
lemma not_red_map_in_map_not_red: \<open>\<G>_set N - Red_F_G (\<G>_set N) \<subseteq> \<G>_set (N - Red_F_\<G> N)\<close>
proof
fix D
assume
D_hyp: \<open>D \<in> \<G>_set N - Red_F_G (\<G>_set N)\<close>
interpret minimal_element "Prec_F_g D" UNIV using all_wf[of D] .
have D_in: \<open>D \<in> \<G>_set N\<close> using D_hyp by blast
have D_not_in: \<open>D \<notin> Red_F_G (\<G>_set N)\<close> using D_hyp by blast
have exist_C: \<open>\<exists>C. C \<in> N \<and> D \<in> \<G>_F C\<close> using D_in by auto
define B where \<open>B = {C \<in> N. D \<in> \<G>_F C}\<close>
obtain C where C: \<open>C = min_elt B\<close> by auto
have C_in_N: \<open>C \<in> N\<close>
using exist_C by (metis B_def C empty_iff mem_Collect_eq min_elt_mem top_greatest)
have D_in_C: \<open>D \<in> \<G>_F C\<close>
using exist_C by (metis B_def C empty_iff mem_Collect_eq min_elt_mem top_greatest)
have C_not_in: \<open>C \<notin> Red_F_\<G> N\<close>
proof
assume C_in: \<open>C \<in> Red_F_\<G> N\<close>
have \<open>D \<in> Red_F_G (\<G>_set N) \<or> (\<exists>E\<in>N. Prec_F_g D E C \<and> D \<in> \<G>_F E)\<close>
using C_in D_in_C unfolding Red_F_\<G>_def by auto
then show \<open>False\<close>
proof
assume \<open>D \<in> Red_F_G (\<G>_set N)\<close>
then show \<open>False\<close> using D_not_in by simp
next
assume \<open>\<exists>E\<in>N. Prec_F_g D E C \<and> D \<in> \<G>_F E\<close>
then show \<open>False\<close>
using C by (metis (no_types, lifting) B_def UNIV_I empty_iff mem_Collect_eq
min_elt_minimal top_greatest)
qed
qed
show \<open>D \<in> \<G>_set (N - Red_F_\<G> N)\<close> using D_in_C C_not_in C_in_N by blast
qed
(* lem:nonredundant-entails-redundant *)
lemma Red_F_Bot_F: \<open>B \<in> Bot_F \<Longrightarrow> N \<Turnstile>\<G> {B} \<Longrightarrow> N - Red_F_\<G> N \<Turnstile>\<G> {B}\<close>
proof -
fix B N
assume
B_in: \<open>B \<in> Bot_F\<close> and
N_entails: \<open>N \<Turnstile>\<G> {B}\<close>
then have to_bot: \<open>\<G>_set N - Red_F_G (\<G>_set N) \<Turnstile>G \<G>_F B\<close>
- using Ground.Red_F_Bot Bot_map unfolding entails_\<G>_def
- by (smt cSup_singleton Ground.entail_set_all_formulas image_insert image_is_empty subsetCE)
+ using ground.Red_F_Bot Bot_map
+ by (smt cSup_singleton ground.entail_set_all_formulas image_insert image_is_empty subsetCE)
have from_f: \<open>\<G>_set (N - Red_F_\<G> N) \<Turnstile>G \<G>_set N - Red_F_G (\<G>_set N)\<close>
- using Ground.subset_entailed[OF not_red_map_in_map_not_red] by blast
- then have \<open>\<G>_set (N - Red_F_\<G> N) \<Turnstile>G \<G>_F B\<close> using to_bot Ground.entails_trans by blast
- then show \<open>N - Red_F_\<G> N \<Turnstile>\<G> {B}\<close> using Bot_map unfolding entails_\<G>_def by simp
+ using ground.subset_entailed[OF not_red_map_in_map_not_red] by blast
+ then have \<open>\<G>_set (N - Red_F_\<G> N) \<Turnstile>G \<G>_F B\<close> using to_bot ground.entails_trans by blast
+ then show \<open>N - Red_F_\<G> N \<Turnstile>\<G> {B}\<close> using Bot_map by simp
qed
(* lem:redundancy-monotonic-addition 1/2 *)
lemma Red_F_of_subset_F: \<open>N \<subseteq> N' \<Longrightarrow> Red_F_\<G> N \<subseteq> Red_F_\<G> N'\<close>
- using Ground.Red_F_of_subset unfolding Red_F_\<G>_def by (smt Collect_mono \<G>_subset subset_iff)
+ using ground.Red_F_of_subset unfolding Red_F_\<G>_def by clarsimp (meson \<G>_subset subsetD)
(* lem:redundancy-monotonic-addition 2/2 *)
lemma Red_Inf_of_subset_F: \<open>N \<subseteq> N' \<Longrightarrow> Red_Inf_\<G> N \<subseteq> Red_Inf_\<G> N'\<close>
- using Collect_mono \<G>_subset subset_iff Ground.Red_Inf_of_subset unfolding Red_Inf_\<G>_def
- by (smt Ground.Red_F_of_subset Un_iff)
+ using Collect_mono \<G>_subset subset_iff ground.Red_Inf_of_subset unfolding Red_Inf_\<G>_def
+ by (smt ground.Red_F_of_subset Un_iff)
(* lem:redundancy-monotonic-deletion-forms *)
lemma Red_F_of_Red_F_subset_F: \<open>N' \<subseteq> Red_F_\<G> N \<Longrightarrow> Red_F_\<G> N \<subseteq> Red_F_\<G> (N - N')\<close>
proof
fix N N' C
assume
N'_in_Red_F_N: \<open>N' \<subseteq> Red_F_\<G> N\<close> and
C_in_red_F_N: \<open>C \<in> Red_F_\<G> N\<close>
have lem8: \<open>\<forall>D \<in> \<G>_F C. D \<in> Red_F_G (\<G>_set N) \<or> (\<exists>E \<in> (N - Red_F_\<G> N). Prec_F_g D E C \<and> D \<in> \<G>_F E)\<close>
using Red_F_\<G>_equiv_def C_in_red_F_N by blast
show \<open>C \<in> Red_F_\<G> (N - N')\<close> unfolding Red_F_\<G>_def
proof (rule,rule)
fix D
assume \<open>D \<in> \<G>_F C\<close>
then have \<open>D \<in> Red_F_G (\<G>_set N) \<or> (\<exists>E \<in> (N - Red_F_\<G> N). Prec_F_g D E C \<and> D \<in> \<G>_F E)\<close>
using lem8 by auto
then show \<open>D \<in> Red_F_G (\<G>_set (N - N')) \<or> (\<exists>E\<in>N - N'. Prec_F_g D E C \<and> D \<in> \<G>_F E)\<close>
proof
assume \<open>D \<in> Red_F_G (\<G>_set N)\<close>
then have \<open>D \<in> Red_F_G (\<G>_set N - Red_F_G (\<G>_set N))\<close>
- using Ground.Red_F_of_Red_F_subset[of "Red_F_G (\<G>_set N)" "\<G>_set N"] by auto
+ using ground.Red_F_of_Red_F_subset[of "Red_F_G (\<G>_set N)" "\<G>_set N"] by auto
then have \<open>D \<in> Red_F_G (\<G>_set (N - Red_F_\<G> N))\<close>
- using Ground.Red_F_of_subset[OF not_red_map_in_map_not_red[of N]] by auto
+ using ground.Red_F_of_subset[OF not_red_map_in_map_not_red[of N]] by auto
then have \<open>D \<in> Red_F_G (\<G>_set (N - N'))\<close>
using N'_in_Red_F_N \<G>_subset[of "N - Red_F_\<G> N" "N - N'"]
- by (smt DiffE DiffI Ground.Red_F_of_subset subsetCE subsetI)
+ by (smt DiffE DiffI ground.Red_F_of_subset subsetCE subsetI)
then show ?thesis by blast
next
assume \<open>\<exists>E\<in>N - Red_F_\<G> N. Prec_F_g D E C \<and> D \<in> \<G>_F E\<close>
then obtain E where
E_in: \<open>E\<in>N - Red_F_\<G> N\<close> and
E_prec_C: \<open>Prec_F_g D E C\<close> and
D_in: \<open>D \<in> \<G>_F E\<close>
by auto
have \<open>E \<in> N - N'\<close> using E_in N'_in_Red_F_N by blast
then show ?thesis using E_prec_C D_in by blast
qed
qed
qed
(* lem:redundancy-monotonic-deletion-infs *)
lemma Red_Inf_of_Red_F_subset_F: \<open>N' \<subseteq> Red_F_\<G> N \<Longrightarrow> Red_Inf_\<G> N \<subseteq> Red_Inf_\<G> (N - N') \<close>
proof
fix N N' \<iota>
assume
N'_in_Red_F_N: \<open>N' \<subseteq> Red_F_\<G> N\<close> and
i_in_Red_Inf_N: \<open>\<iota> \<in> Red_Inf_\<G> N\<close>
have i_in: \<open>\<iota> \<in> Inf_F\<close> using i_in_Red_Inf_N unfolding Red_Inf_\<G>_def by blast
{
assume not_none: "\<G>_Inf \<iota> \<noteq> None"
have \<open>\<forall>\<iota>' \<in> the (\<G>_Inf \<iota>). \<iota>' \<in> Red_Inf_G (\<G>_set N)\<close>
using not_none i_in_Red_Inf_N unfolding Red_Inf_\<G>_def by auto
then have \<open>\<forall>\<iota>' \<in> the (\<G>_Inf \<iota>). \<iota>' \<in> Red_Inf_G (\<G>_set N - Red_F_G (\<G>_set N))\<close>
- using not_none Ground.Red_Inf_of_Red_F_subset by blast
+ using not_none ground.Red_Inf_of_Red_F_subset by blast
then have ip_in_Red_Inf_G: \<open>\<forall>\<iota>' \<in> the (\<G>_Inf \<iota>). \<iota>' \<in> Red_Inf_G (\<G>_set (N - Red_F_\<G> N))\<close>
- using not_none Ground.Red_Inf_of_subset[OF not_red_map_in_map_not_red[of N]] by auto
+ using not_none ground.Red_Inf_of_subset[OF not_red_map_in_map_not_red[of N]] by auto
then have not_none_in: \<open>\<forall>\<iota>' \<in> the (\<G>_Inf \<iota>). \<iota>' \<in> Red_Inf_G (\<G>_set (N - N'))\<close>
using not_none N'_in_Red_F_N
- by (meson Diff_mono Ground.Red_Inf_of_subset \<G>_subset subset_iff subset_refl)
+ by (meson Diff_mono ground.Red_Inf_of_subset \<G>_subset subset_iff subset_refl)
then have "the (\<G>_Inf \<iota>) \<subseteq> Red_Inf_G (\<G>_set (N - N'))" by blast
}
moreover {
assume none: "\<G>_Inf \<iota> = None"
have ground_concl_subs: "\<G>_F (concl_of \<iota>) \<subseteq> (\<G>_set N \<union> Red_F_G (\<G>_set N))"
using none i_in_Red_Inf_N unfolding Red_Inf_\<G>_def by blast
then have d_in_imp12: "D \<in> \<G>_F (concl_of \<iota>) \<Longrightarrow> D \<in> \<G>_set N - Red_F_G (\<G>_set N) \<or> D \<in> Red_F_G (\<G>_set N)"
by blast
have d_in_imp1: "D \<in> \<G>_set N - Red_F_G (\<G>_set N) \<Longrightarrow> D \<in> \<G>_set (N - N')"
using not_red_map_in_map_not_red N'_in_Red_F_N by blast
have d_in_imp_d_in: "D \<in> Red_F_G (\<G>_set N) \<Longrightarrow> D \<in> Red_F_G (\<G>_set N - Red_F_G (\<G>_set N))"
- using Ground.Red_F_of_Red_F_subset[of "Red_F_G (\<G>_set N)" "\<G>_set N"] by blast
+ using ground.Red_F_of_Red_F_subset[of "Red_F_G (\<G>_set N)" "\<G>_set N"] by blast
have g_subs1: "\<G>_set N - Red_F_G (\<G>_set N) \<subseteq> \<G>_set (N - Red_F_\<G> N)"
using not_red_map_in_map_not_red unfolding Red_F_\<G>_def by auto
have g_subs2: "\<G>_set (N - Red_F_\<G> N) \<subseteq> \<G>_set (N - N')"
using N'_in_Red_F_N by blast
have d_in_imp2: "D \<in> Red_F_G (\<G>_set N) \<Longrightarrow> D \<in> Red_F_G (\<G>_set (N - N'))"
- using Ground.Red_F_of_subset Ground.Red_F_of_subset[OF g_subs1]
- Ground.Red_F_of_subset[OF g_subs2] d_in_imp_d_in by blast
+ using ground.Red_F_of_subset ground.Red_F_of_subset[OF g_subs1]
+ ground.Red_F_of_subset[OF g_subs2] d_in_imp_d_in by blast
have "\<G>_F (concl_of \<iota>) \<subseteq> (\<G>_set (N - N') \<union> Red_F_G (\<G>_set (N - N')))"
using d_in_imp12 d_in_imp1 d_in_imp2
- by (smt Ground.Red_F_of_Red_F_subset Ground.Red_F_of_subset UnCI UnE Un_Diff_cancel2
+ by (smt ground.Red_F_of_Red_F_subset ground.Red_F_of_subset UnCI UnE Un_Diff_cancel2
ground_concl_subs g_subs1 g_subs2 subset_iff)
}
ultimately show \<open>\<iota> \<in> Red_Inf_\<G> (N - N')\<close> using i_in unfolding Red_Inf_\<G>_def by auto
qed
(* lem:concl-contained-implies-red-inf *)
lemma Red_Inf_of_Inf_to_N_F:
assumes
i_in: \<open>\<iota> \<in> Inf_F\<close> and
concl_i_in: \<open>concl_of \<iota> \<in> N\<close>
shows
\<open>\<iota> \<in> Red_Inf_\<G> N \<close>
proof -
have \<open>\<iota> \<in> Inf_F \<Longrightarrow> \<G>_Inf \<iota> \<noteq> None \<Longrightarrow> the (\<G>_Inf \<iota>) \<subseteq> Red_Inf_G (\<G>_F (concl_of \<iota>))\<close> using inf_map by simp
moreover have \<open>Red_Inf_G (\<G>_F (concl_of \<iota>)) \<subseteq> Red_Inf_G (\<G>_set N)\<close>
- using concl_i_in Ground.Red_Inf_of_subset by blast
+ using concl_i_in ground.Red_Inf_of_subset by blast
moreover have "\<iota> \<in> Inf_F \<Longrightarrow> \<G>_Inf \<iota> = None \<Longrightarrow> concl_of \<iota> \<in> N \<Longrightarrow> \<G>_F (concl_of \<iota>) \<subseteq> \<G>_set N"
by blast
ultimately show ?thesis using i_in concl_i_in unfolding Red_Inf_\<G>_def by auto
qed
(* thm:FRedsqsubset-is-red-crit and also thm:lifted-red-crit if ordering empty *)
-sublocale lifted_calculus_with_red_crit: calculus_with_red_crit
- where
- Bot = Bot_F and Inf = Inf_F and entails = entails_\<G> and
- Red_Inf = Red_Inf_\<G> and Red_F = Red_F_\<G>
+sublocale calculus_with_red_crit Bot_F Inf_F entails_\<G> Red_Inf_\<G> Red_F_\<G>
proof
fix B N N' \<iota>
show \<open>Red_Inf_\<G> N \<subseteq> Inf_F\<close> unfolding Red_Inf_\<G>_def by blast
show \<open>B \<in> Bot_F \<Longrightarrow> N \<Turnstile>\<G> {B} \<Longrightarrow> N - Red_F_\<G> N \<Turnstile>\<G> {B}\<close> using Red_F_Bot_F by simp
show \<open>N \<subseteq> N' \<Longrightarrow> Red_F_\<G> N \<subseteq> Red_F_\<G> N'\<close> using Red_F_of_subset_F by simp
show \<open>N \<subseteq> N' \<Longrightarrow> Red_Inf_\<G> N \<subseteq> Red_Inf_\<G> N'\<close> using Red_Inf_of_subset_F by simp
show \<open>N' \<subseteq> Red_F_\<G> N \<Longrightarrow> Red_F_\<G> N \<subseteq> Red_F_\<G> (N - N')\<close> using Red_F_of_Red_F_subset_F by simp
show \<open>N' \<subseteq> Red_F_\<G> N \<Longrightarrow> Red_Inf_\<G> N \<subseteq> Red_Inf_\<G> (N - N')\<close> using Red_Inf_of_Red_F_subset_F by simp
show \<open>\<iota> \<in> Inf_F \<Longrightarrow> concl_of \<iota> \<in> N \<Longrightarrow> \<iota> \<in> Red_Inf_\<G> N\<close> using Red_Inf_of_Inf_to_N_F by simp
qed
-lemma lifted_calc_is_calc: "calculus_with_red_crit Bot_F Inf_F entails_\<G> Red_Inf_\<G> Red_F_\<G>"
- using lifted_calculus_with_red_crit.calculus_with_red_crit_axioms .
+lemma grounded_inf_in_ground_inf: "\<iota> \<in> Inf_F \<Longrightarrow> \<G>_Inf \<iota> \<noteq> None \<Longrightarrow> the (\<G>_Inf \<iota>) \<subseteq> Inf_G"
+ using inf_map ground.Red_Inf_to_Inf by blast
-lemma grounded_inf_in_ground_inf: "\<iota> \<in> Inf_F \<Longrightarrow> \<G>_Inf \<iota> \<noteq> None \<Longrightarrow> the (\<G>_Inf \<iota>) \<subseteq> Inf_G"
- using inf_map Ground.Red_Inf_to_Inf by blast
+abbreviation ground_Inf_redundant :: "'f set \<Rightarrow> bool" where
+ "ground_Inf_redundant N \<equiv>
+ ground.Inf_from (\<G>_set N)
+ \<subseteq> {\<iota>. \<exists>\<iota>'\<in> Inf_from N. \<G>_Inf \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf \<iota>')} \<union> Red_Inf_G (\<G>_set N)"
+
+lemma sat_inf_imp_ground_red:
+ assumes
+ "saturated N" and
+ "\<iota>' \<in> Inf_from N" and
+ "\<G>_Inf \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf \<iota>')"
+ shows "\<iota> \<in> Red_Inf_G (\<G>_set N)"
+ using assms Red_Inf_\<G>_def unfolding saturated_def by auto
(* lem:sat-wrt-finf *)
-lemma sat_imp_ground_sat: "lifted_calculus_with_red_crit.saturated N \<Longrightarrow> Ground.Inf_from (\<G>_set N) \<subseteq>
- ({\<iota>. \<exists>\<iota>'\<in> Non_ground.Inf_from N. \<G>_Inf \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf \<iota>')} \<union> Red_Inf_G (\<G>_set N)) \<Longrightarrow>
- Ground.saturated (\<G>_set N)"
-proof -
- fix N
- assume
- sat_n: "lifted_calculus_with_red_crit.saturated N" and
- inf_grounded_in: "Ground.Inf_from (\<G>_set N) \<subseteq>
- ({\<iota>. \<exists>\<iota>'\<in> Non_ground.Inf_from N. \<G>_Inf \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf \<iota>')} \<union> Red_Inf_G (\<G>_set N))"
- show "Ground.saturated (\<G>_set N)" unfolding Ground.saturated_def
- proof
- fix \<iota>
- assume i_in: "\<iota> \<in> Ground.Inf_from (\<G>_set N)"
- {
- assume "\<iota> \<in> {\<iota>. \<exists>\<iota>'\<in> Non_ground.Inf_from N. \<G>_Inf \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf \<iota>')}"
- then obtain \<iota>' where "\<iota>'\<in> Non_ground.Inf_from N" "\<G>_Inf \<iota>' \<noteq> None" "\<iota> \<in> the (\<G>_Inf \<iota>')" by blast
- then have "\<iota> \<in> Red_Inf_G (\<G>_set N)"
- using Red_Inf_\<G>_def sat_n unfolding lifted_calculus_with_red_crit.saturated_def by auto
- }
- then show "\<iota> \<in> Red_Inf_G (\<G>_set N)" using inf_grounded_in i_in by blast
- qed
-qed
+lemma sat_imp_ground_sat: "saturated N \<Longrightarrow> ground_Inf_redundant N \<Longrightarrow> ground.saturated (\<G>_set N)"
+ unfolding ground.saturated_def using sat_inf_imp_ground_red by auto
(* thm:finf-complete *)
theorem stat_ref_comp_to_non_ground:
assumes
stat_ref_G: "static_refutational_complete_calculus Bot_G Inf_G entails_G Red_Inf_G Red_F_G" and
- sat_n_imp: "\<And>N. (lifted_calculus_with_red_crit.saturated N \<Longrightarrow> Ground.Inf_from (\<G>_set N) \<subseteq>
- ({\<iota>. \<exists>\<iota>'\<in> Non_ground.Inf_from N. \<G>_Inf \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf \<iota>')} \<union> Red_Inf_G (\<G>_set N)))"
+ sat_n_imp: "\<And>N. saturated N \<Longrightarrow> ground_Inf_redundant N"
shows
"static_refutational_complete_calculus Bot_F Inf_F entails_\<G> Red_Inf_\<G> Red_F_\<G>"
proof
fix B N
assume
b_in: "B \<in> Bot_F" and
- sat_n: "lifted_calculus_with_red_crit.saturated N" and
+ sat_n: "saturated N" and
n_entails_bot: "N \<Turnstile>\<G> {B}"
have ground_n_entails: "\<G>_set N \<Turnstile>G \<G>_F B"
- using n_entails_bot unfolding entails_\<G>_def by simp
+ using n_entails_bot by simp
then obtain BG where bg_in1: "BG \<in> \<G>_F B"
using Bot_map_not_empty[OF b_in] by blast
then have bg_in: "BG \<in> Bot_G"
using Bot_map[OF b_in] by blast
have ground_n_entails_bot: "\<G>_set N \<Turnstile>G {BG}"
- using ground_n_entails bg_in1 Ground.entail_set_all_formulas by blast
- have "Ground.Inf_from (\<G>_set N) \<subseteq>
- ({\<iota>. \<exists>\<iota>'\<in> Non_ground.Inf_from N. \<G>_Inf \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf \<iota>')} \<union> Red_Inf_G (\<G>_set N))"
+ using ground_n_entails bg_in1 ground.entail_set_all_formulas by blast
+ have "ground.Inf_from (\<G>_set N) \<subseteq>
+ {\<iota>. \<exists>\<iota>'\<in> Inf_from N. \<G>_Inf \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf \<iota>')} \<union> Red_Inf_G (\<G>_set N)"
using sat_n_imp[OF sat_n] .
- have "Ground.saturated (\<G>_set N)"
+ have "ground.saturated (\<G>_set N)"
using sat_imp_ground_sat[OF sat_n sat_n_imp[OF sat_n]] .
then have "\<exists>BG'\<in>Bot_G. BG' \<in> (\<G>_set N)"
- using stat_ref_G Ground.calculus_with_red_crit_axioms bg_in ground_n_entails_bot
+ using stat_ref_G ground.calculus_with_red_crit_axioms bg_in ground_n_entails_bot
unfolding static_refutational_complete_calculus_def static_refutational_complete_calculus_axioms_def
by blast
then show "\<exists>B'\<in> Bot_F. B' \<in> N"
using bg_in Bot_cond Bot_map_not_empty Bot_cond by blast
qed
end
-abbreviation Empty_Order where
- "Empty_Order C1 C2 \<equiv> False"
+lemma wf_empty_rel: "minimal_element (\<lambda>_ _. False) UNIV"
+ by (simp add: minimal_element.intro po_on_def transp_onI wfp_on_imp_irreflp_on)
lemma any_to_empty_order_lifting:
"lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G Red_F_G \<G>_F
\<G>_Inf Prec_F_g \<Longrightarrow> lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G
- Red_F_G \<G>_F \<G>_Inf (\<lambda>g. Empty_Order)"
+ Red_F_G \<G>_F \<G>_Inf (\<lambda>g C C'. False)"
proof -
fix Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G Red_F_G \<G>_F \<G>_Inf Prec_F_g
assume lift: "lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G
Red_F_G \<G>_F \<G>_Inf Prec_F_g"
then interpret lift_g:
lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G Red_F_G \<G>_F
\<G>_Inf Prec_F_g
by auto
- have empty_wf: "minimal_element ((\<lambda>g. Empty_Order) g) UNIV"
- by (simp add: lift_g.all_wf minimal_element.intro po_on_def transp_on_def wfp_on_def
- wfp_on_imp_irreflp_on)
- then show "lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G Red_F_G
- \<G>_F \<G>_Inf (\<lambda>g. Empty_Order)"
- by (simp add: empty_wf lift_g.standard_lifting_axioms
+ show "lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G Red_F_G
+ \<G>_F \<G>_Inf (\<lambda>g C C'. False)"
+ by (simp add: wf_empty_rel lift_g.standard_lifting_axioms
lifting_with_wf_ordering_family_axioms.intro lifting_with_wf_ordering_family_def)
qed
+lemma po_on_empty_rel[simp]: "po_on (\<lambda>_ _. False) UNIV"
+ unfolding po_on_def irreflp_on_def transp_on_def by auto
+
locale lifting_equivalence_with_empty_order =
any_order_lifting: lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G
Red_F_G \<G>_F \<G>_Inf Prec_F_g +
empty_order_lifting: lifting_with_wf_ordering_family Bot_F Inf_F Bot_G entails_G Inf_G Red_Inf_G
- Red_F_G \<G>_F \<G>_Inf "\<lambda>g. Empty_Order"
+ Red_F_G \<G>_F \<G>_Inf "\<lambda>g C C'. False"
for
\<G>_F :: \<open>'f \<Rightarrow> 'g set\<close> and
\<G>_Inf :: \<open>'f inference \<Rightarrow> 'g inference set option\<close> and
Bot_F :: \<open>'f set\<close> and
Inf_F :: \<open>'f inference set\<close> and
Bot_G :: \<open>'g set\<close> and
Inf_G :: \<open>'g inference set\<close> and
entails_G :: \<open>'g set \<Rightarrow> 'g set \<Rightarrow> bool\<close> (infix "\<Turnstile>G" 50) and
Red_Inf_G :: \<open>'g set \<Rightarrow> 'g inference set\<close> and
Red_F_G :: \<open>'g set \<Rightarrow> 'g set\<close> and
Prec_F_g :: \<open>'g \<Rightarrow> 'f \<Rightarrow> 'f \<Rightarrow> bool\<close>
sublocale lifting_with_wf_ordering_family \<subseteq> lifting_equivalence_with_empty_order
-proof
- show "po_on Empty_Order UNIV"
- unfolding po_on_def by (simp add: transp_onI wfp_on_imp_irreflp_on)
- show "wfp_on Empty_Order UNIV"
- unfolding wfp_on_def by simp
-qed
+ by unfold_locales simp+
context lifting_equivalence_with_empty_order
begin
(* lem:saturation-indep-of-sqsubset *)
lemma saturated_empty_order_equiv_saturated:
- "any_order_lifting.lifted_calculus_with_red_crit.saturated N =
- empty_order_lifting.lifted_calculus_with_red_crit.saturated N" by standard
+ "any_order_lifting.saturated N = empty_order_lifting.saturated N"
+ by (rule refl)
(* lem:static-ref-compl-indep-of-sqsubset *)
lemma static_empty_order_equiv_static:
- "static_refutational_complete_calculus Bot_F Inf_F
- any_order_lifting.entails_\<G> empty_order_lifting.Red_Inf_\<G> empty_order_lifting.Red_F_\<G> =
- static_refutational_complete_calculus Bot_F Inf_F any_order_lifting.entails_\<G>
- any_order_lifting.Red_Inf_\<G> any_order_lifting.Red_F_\<G>"
+ "static_refutational_complete_calculus Bot_F Inf_F any_order_lifting.entails_\<G>
+ empty_order_lifting.Red_Inf_\<G> empty_order_lifting.Red_F_\<G> =
+ static_refutational_complete_calculus Bot_F Inf_F any_order_lifting.entails_\<G>
+ any_order_lifting.Red_Inf_\<G> any_order_lifting.Red_F_\<G>"
unfolding static_refutational_complete_calculus_def
by (rule iffI) (standard,(standard)[],simp)+
(* thm:FRedsqsubset-is-dyn-ref-compl *)
theorem static_to_dynamic:
"static_refutational_complete_calculus Bot_F Inf_F
any_order_lifting.entails_\<G> empty_order_lifting.Red_Inf_\<G> empty_order_lifting.Red_F_\<G> =
dynamic_refutational_complete_calculus Bot_F Inf_F
any_order_lifting.entails_\<G> any_order_lifting.Red_Inf_\<G> any_order_lifting.Red_F_\<G> "
- (is "?static=?dynamic")
-proof
- assume ?static
- then have static_general:
- "static_refutational_complete_calculus Bot_F Inf_F any_order_lifting.entails_\<G>
- any_order_lifting.Red_Inf_\<G> any_order_lifting.Red_F_\<G>" (is "?static_gen")
- using static_empty_order_equiv_static by simp
- interpret static_refutational_complete_calculus Bot_F Inf_F any_order_lifting.entails_\<G>
- any_order_lifting.Red_Inf_\<G> any_order_lifting.Red_F_\<G>
- using static_general .
- show "?dynamic" by standard
-next
- assume dynamic_gen: ?dynamic
- interpret dynamic_refutational_complete_calculus Bot_F Inf_F any_order_lifting.entails_\<G>
- any_order_lifting.Red_Inf_\<G> any_order_lifting.Red_F_\<G>
- using dynamic_gen .
- have "static_refutational_complete_calculus Bot_F Inf_F any_order_lifting.entails_\<G>
- any_order_lifting.Red_Inf_\<G> any_order_lifting.Red_F_\<G>"
- by standard
- then show "?static" using static_empty_order_equiv_static by simp
-qed
+ using any_order_lifting.dyn_equiv_stat static_empty_order_equiv_static by blast
end
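
Informally, the equivalence established by theorem static_to_dynamic just above can be read as follows (a sketch in LaTeX notation; $\mathit{Red}^{\mathcal{G},\emptyset}_F$ is the empty-order redundancy criterion empty_order_lifting.Red_F_\<G>):

\[
  \text{static-ref-complete}(Bot_F,\, Inf_F,\, \models_{\mathcal{G}},\, \mathit{Red}^{\mathcal{G}}_{Inf},\, \mathit{Red}^{\mathcal{G},\emptyset}_F)
  \;\longleftrightarrow\;
  \text{dynamic-ref-complete}(Bot_F,\, Inf_F,\, \models_{\mathcal{G}},\, \mathit{Red}^{\mathcal{G}}_{Inf},\, \mathit{Red}^{\mathcal{G}}_F)
\]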
subsection \<open>Lifting with a Family of Redundancy Criteria\<close>
-locale standard_lifting_with_red_crit_family = Non_ground: inference_system Inf_F
- + Ground_family: calculus_with_red_crit_family Bot_G Inf_G Q entails_q Red_Inf_q Red_F_q
+locale standard_lifting_with_red_crit_family = inference_system Inf_F
+ + ground: calculus_family_with_red_crit_family Bot_G Q Inf_G_q entails_q Red_Inf_q Red_F_q
for
Inf_F :: "'f inference set" and
Bot_G :: "'g set" and
- Inf_G :: \<open>'g inference set\<close> and
- Q :: "'q itself" and
- entails_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g set \<Rightarrow> bool)" and
- Red_Inf_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g inference set)" and
- Red_F_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g set)"
+ Q :: "'q set" and
+ Inf_G_q :: \<open>'q \<Rightarrow> 'g inference set\<close> and
+ entails_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set \<Rightarrow> bool" and
+ Red_Inf_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g inference set" and
+ Red_F_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set"
+ fixes
Bot_F :: "'f set" and
\<G>_F_q :: "'q \<Rightarrow> 'f \<Rightarrow> 'g set" and
\<G>_Inf_q :: "'q \<Rightarrow> 'f inference \<Rightarrow> 'g inference set option" and
Prec_F_g :: "'g \<Rightarrow> 'f \<Rightarrow> 'f \<Rightarrow> bool"
assumes
- standard_lifting_family: "lifting_with_wf_ordering_family Bot_F Inf_F Bot_G (entails_q q)
- Inf_G (Red_Inf_q q) (Red_F_q q) (\<G>_F_q q) (\<G>_Inf_q q) Prec_F_g"
+ standard_lifting_family:
+ "\<forall>q \<in> Q. lifting_with_wf_ordering_family Bot_F Inf_F Bot_G (entails_q q) (Inf_G_q q)
+ (Red_Inf_q q) (Red_F_q q) (\<G>_F_q q) (\<G>_Inf_q q) Prec_F_g"
begin
-definition \<G>_set_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'g set" where
+abbreviation \<G>_set_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'g set" where
"\<G>_set_q q N \<equiv> \<Union> (\<G>_F_q q ` N)"
definition Red_Inf_\<G>_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f inference set" where
"Red_Inf_\<G>_q q N = {\<iota> \<in> Inf_F. (\<G>_Inf_q q \<iota> \<noteq> None \<and> the (\<G>_Inf_q q \<iota>) \<subseteq> Red_Inf_q q (\<G>_set_q q N))
- \<or> (\<G>_Inf_q q \<iota> = None \<and> \<G>_F_q q (concl_of \<iota>) \<subseteq> (\<G>_set_q q N \<union> Red_F_q q (\<G>_set_q q N)))}"
-
-definition Red_Inf_\<G>_Q :: "'f set \<Rightarrow> 'f inference set" where
- "Red_Inf_\<G>_Q N = \<Inter> {X N |X. X \<in> (Red_Inf_\<G>_q ` UNIV)}"
+ \<or> (\<G>_Inf_q q \<iota> = None \<and> \<G>_F_q q (concl_of \<iota>) \<subseteq> (\<G>_set_q q N \<union> Red_F_q q (\<G>_set_q q N)))}"
definition Red_F_\<G>_empty_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set" where
- "Red_F_\<G>_empty_q q N = {C. \<forall>D \<in> \<G>_F_q q C. D \<in> Red_F_q q (\<G>_set_q q N) \<or>
- (\<exists>E \<in> N. Empty_Order E C \<and> D \<in> \<G>_F_q q E)}"
-
-definition Red_F_\<G>_empty :: "'f set \<Rightarrow> 'f set" where
- "Red_F_\<G>_empty N = \<Inter> {X N |X. X \<in> (Red_F_\<G>_empty_q ` UNIV)}"
+ "Red_F_\<G>_empty_q q N = {C. \<forall>D \<in> \<G>_F_q q C. D \<in> Red_F_q q (\<G>_set_q q N)}"
definition Red_F_\<G>_q_g :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set" where
- "Red_F_\<G>_q_g q N = {C. \<forall>D \<in> \<G>_F_q q C. D \<in> Red_F_q q (\<G>_set_q q N) \<or> (\<exists>E \<in> N. Prec_F_g D E C \<and> D \<in> \<G>_F_q q E)}"
-
-definition Red_F_\<G>_g :: "'f set \<Rightarrow> 'f set" where
- "Red_F_\<G>_g N = \<Inter> {X N |X. X \<in> (Red_F_\<G>_q_g ` UNIV)}"
+ "Red_F_\<G>_q_g q N =
+ {C. \<forall>D \<in> \<G>_F_q q C. D \<in> Red_F_q q (\<G>_set_q q N) \<or> (\<exists>E \<in> N. Prec_F_g D E C \<and> D \<in> \<G>_F_q q E)}"
-definition entails_\<G>_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set \<Rightarrow> bool" where
+abbreviation entails_\<G>_q :: "'q \<Rightarrow> 'f set \<Rightarrow> 'f set \<Rightarrow> bool" where
"entails_\<G>_q q N1 N2 \<equiv> entails_q q (\<G>_set_q q N1) (\<G>_set_q q N2)"
-definition entails_\<G>_Q :: "'f set \<Rightarrow> 'f set \<Rightarrow> bool" (infix "\<Turnstile>\<inter>" 50) where
- "entails_\<G>_Q N1 N2 \<equiv> \<forall>q. entails_\<G>_q q N1 N2"
-
lemma red_crit_lifting_family:
- "calculus_with_red_crit Bot_F Inf_F (entails_\<G>_q q) (Red_Inf_\<G>_q q) (Red_F_\<G>_q_g q)"
+ assumes q_in: "q \<in> Q"
+ shows "calculus_with_red_crit Bot_F Inf_F (entails_\<G>_q q) (Red_Inf_\<G>_q q) (Red_F_\<G>_q_g q)"
proof -
- fix q
interpret wf_lift:
- lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q q" Inf_G "Red_Inf_q q" "Red_F_q q"
- "\<G>_F_q q" "\<G>_Inf_q q" Prec_F_g
- using standard_lifting_family .
- have "entails_\<G>_q q = wf_lift.entails_\<G>"
- unfolding entails_\<G>_q_def wf_lift.entails_\<G>_def \<G>_set_q_def by blast
- moreover have "Red_Inf_\<G>_q q = wf_lift.Red_Inf_\<G>"
- unfolding Red_Inf_\<G>_q_def \<G>_set_q_def wf_lift.Red_Inf_\<G>_def by blast
+ lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q q" "Inf_G_q q" "Red_Inf_q q"
+ "Red_F_q q" "\<G>_F_q q" "\<G>_Inf_q q" Prec_F_g
+ using standard_lifting_family q_in by metis
+ have "Red_Inf_\<G>_q q = wf_lift.Red_Inf_\<G>"
+ unfolding Red_Inf_\<G>_q_def wf_lift.Red_Inf_\<G>_def by blast
moreover have "Red_F_\<G>_q_g q = wf_lift.Red_F_\<G>"
- unfolding Red_F_\<G>_q_g_def \<G>_set_q_def wf_lift.Red_F_\<G>_def by blast
- ultimately show "calculus_with_red_crit Bot_F Inf_F (entails_\<G>_q q) (Red_Inf_\<G>_q q) (Red_F_\<G>_q_g q)"
- using wf_lift.lifted_calculus_with_red_crit.calculus_with_red_crit_axioms by simp
+ unfolding Red_F_\<G>_q_g_def wf_lift.Red_F_\<G>_def by blast
+ ultimately show ?thesis
+ using wf_lift.calculus_with_red_crit_axioms by simp
qed
lemma red_crit_lifting_family_empty_ord:
- "calculus_with_red_crit Bot_F Inf_F (entails_\<G>_q q) (Red_Inf_\<G>_q q) (Red_F_\<G>_empty_q q)"
+ assumes q_in: "q \<in> Q"
+ shows "calculus_with_red_crit Bot_F Inf_F (entails_\<G>_q q) (Red_Inf_\<G>_q q) (Red_F_\<G>_empty_q q)"
proof -
- fix q
interpret wf_lift:
- lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q q" Inf_G "Red_Inf_q q" "Red_F_q q"
- "\<G>_F_q q" "\<G>_Inf_q q" Prec_F_g
- using standard_lifting_family .
- have "entails_\<G>_q q = wf_lift.entails_\<G>"
- unfolding entails_\<G>_q_def wf_lift.entails_\<G>_def \<G>_set_q_def by blast
- moreover have "Red_Inf_\<G>_q q = wf_lift.Red_Inf_\<G>"
- unfolding Red_Inf_\<G>_q_def \<G>_set_q_def wf_lift.Red_Inf_\<G>_def by blast
+ lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q q" "Inf_G_q q" "Red_Inf_q q"
+ "Red_F_q q" "\<G>_F_q q" "\<G>_Inf_q q" Prec_F_g
+ using standard_lifting_family q_in by metis
+ have "Red_Inf_\<G>_q q = wf_lift.Red_Inf_\<G>"
+ unfolding Red_Inf_\<G>_q_def wf_lift.Red_Inf_\<G>_def by blast
moreover have "Red_F_\<G>_empty_q q = wf_lift.empty_order_lifting.Red_F_\<G>"
- unfolding Red_F_\<G>_empty_q_def \<G>_set_q_def wf_lift.empty_order_lifting.Red_F_\<G>_def by blast
- ultimately show "calculus_with_red_crit Bot_F Inf_F (entails_\<G>_q q) (Red_Inf_\<G>_q q) (Red_F_\<G>_empty_q q)"
- using wf_lift.empty_order_lifting.lifted_calculus_with_red_crit.calculus_with_red_crit_axioms
- by simp
+ unfolding Red_F_\<G>_empty_q_def wf_lift.empty_order_lifting.Red_F_\<G>_def by blast
+ ultimately show ?thesis
+ using wf_lift.empty_order_lifting.calculus_with_red_crit_axioms by simp
qed
-lemma cons_rel_fam_Q_lem: \<open>consequence_relation_family Bot_F entails_\<G>_q\<close>
-proof
- show "Bot_F \<noteq> {}"
- using standard_lifting_family
- by (meson ex_in_conv lifting_with_wf_ordering_family.axioms(1) standard_lifting.Bot_F_not_empty)
+sublocale consequence_relation_family Bot_F Q entails_\<G>_q
+proof (unfold_locales; (intro ballI)?)
+ show "Q \<noteq> {}"
+ by (rule ground.Q_nonempty)
next
fix qi
- show "Bot_F \<noteq> {}"
- using standard_lifting_family
- by (meson ex_in_conv lifting_with_wf_ordering_family.axioms(1) standard_lifting.Bot_F_not_empty)
-next
- fix qi B N1
- assume
- B_in: "B \<in> Bot_F"
- interpret lift: lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q qi" Inf_G "Red_Inf_q qi"
- "Red_F_q qi" "\<G>_F_q qi" "\<G>_Inf_q qi" Prec_F_g
- by (rule standard_lifting_family)
- have "(entails_\<G>_q qi) = lift.entails_\<G>"
- unfolding entails_\<G>_q_def lift.entails_\<G>_def \<G>_set_q_def by simp
- then show "entails_\<G>_q qi {B} N1"
- using B_in lift.lifted_consequence_relation.bot_implies_all by auto
-next
- fix qi and N2 N1::"'f set"
- assume
- N_incl: "N2 \<subseteq> N1"
- interpret lift: lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q qi" Inf_G "Red_Inf_q qi"
- "Red_F_q qi" "\<G>_F_q qi" "\<G>_Inf_q qi" Prec_F_g
- by (rule standard_lifting_family)
- have "(entails_\<G>_q qi) = lift.entails_\<G>"
- unfolding entails_\<G>_q_def lift.entails_\<G>_def \<G>_set_q_def by simp
- then show "entails_\<G>_q qi N1 N2"
- using N_incl by (simp add: lift.lifted_consequence_relation.subset_entailed)
-next
- fix qi N1 N2
- assume
- all_C: "\<forall>C\<in> N2. entails_\<G>_q qi N1 {C}"
- interpret lift: lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q qi" Inf_G "Red_Inf_q qi"
- "Red_F_q qi" "\<G>_F_q qi" "\<G>_Inf_q qi" Prec_F_g
- by (rule standard_lifting_family)
- have "(entails_\<G>_q qi) = lift.entails_\<G>"
- unfolding entails_\<G>_q_def lift.entails_\<G>_def \<G>_set_q_def by simp
- then show "entails_\<G>_q qi N1 N2"
- using all_C lift.lifted_consequence_relation.all_formulas_entailed by presburger
-next
- fix qi N1 N2 N3
- assume
- entails12: "entails_\<G>_q qi N1 N2" and
- entails23: "entails_\<G>_q qi N2 N3"
- interpret lift: lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q qi" Inf_G "Red_Inf_q qi"
- "Red_F_q qi" "\<G>_F_q qi" "\<G>_Inf_q qi" Prec_F_g
- by (rule standard_lifting_family)
- have "(entails_\<G>_q qi) = lift.entails_\<G>"
- unfolding entails_\<G>_q_def lift.entails_\<G>_def \<G>_set_q_def by simp
- then show "entails_\<G>_q qi N1 N3"
- using entails12 entails23 lift.lifted_consequence_relation.entails_trans by presburger
+ assume qi_in: "qi \<in> Q"
+
+ interpret lift: lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q qi" "Inf_G_q qi"
+ "Red_Inf_q qi" "Red_F_q qi" "\<G>_F_q qi" "\<G>_Inf_q qi" Prec_F_g
+ using qi_in by (metis standard_lifting_family)
+
+ show "consequence_relation Bot_F (entails_\<G>_q qi)"
+ by unfold_locales
qed
-interpretation cons_rel_Q: consequence_relation Bot_F entails_\<G>_Q
+sublocale calculus_with_red_crit_family Bot_F Inf_F Q entails_\<G>_q Red_Inf_\<G>_q Red_F_\<G>_q_g
+ by unfold_locales (auto simp: Q_nonempty red_crit_lifting_family)
+
+abbreviation entails_\<G>_Q :: "'f set \<Rightarrow> 'f set \<Rightarrow> bool" (infix "\<Turnstile>\<inter>\<G>" 50) where
+ "(\<Turnstile>\<inter>\<G>) \<equiv> entails_Q"
+
+abbreviation Red_Inf_\<G>_Q :: "'f set \<Rightarrow> 'f inference set" where
+ "Red_Inf_\<G>_Q \<equiv> Red_Inf_Q"
+
+abbreviation Red_F_\<G>_Q :: "'f set \<Rightarrow> 'f set" where
+ "Red_F_\<G>_Q \<equiv> Red_F_Q"
+
+lemmas entails_\<G>_Q_def = entails_Q_def
+lemmas Red_Inf_\<G>_Q_def = Red_Inf_Q_def
+lemmas Red_F_\<G>_Q_def = Red_F_Q_def
+
+sublocale empty_ord: calculus_with_red_crit_family Bot_F Inf_F Q entails_\<G>_q Red_Inf_\<G>_q
+ Red_F_\<G>_empty_q
+ by unfold_locales (auto simp: Q_nonempty red_crit_lifting_family_empty_ord)
+
+abbreviation Red_F_\<G>_empty :: "'f set \<Rightarrow> 'f set" where
+ "Red_F_\<G>_empty \<equiv> empty_ord.Red_F_Q"
+
+lemmas Red_F_\<G>_empty_def = empty_ord.Red_F_Q_def
+
+lemma sat_inf_imp_ground_red_fam_inter:
+ assumes
+ sat_n: "saturated N" and
+ i'_in: "\<iota>' \<in> Inf_from N" and
+ q_in: "q \<in> Q" and
+ grounding: "\<G>_Inf_q q \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf_q q \<iota>')"
+ shows "\<iota> \<in> Red_Inf_q q (\<G>_set_q q N)"
proof -
- interpret cons_rel_fam: consequence_relation_family Bot_F Q entails_\<G>_q
- by (rule cons_rel_fam_Q_lem)
- have "consequence_relation_family.entails_Q entails_\<G>_q = entails_\<G>_Q"
- unfolding entails_\<G>_Q_def cons_rel_fam.entails_Q_def by (simp add: entails_\<G>_q_def)
- then show "consequence_relation Bot_F entails_\<G>_Q"
- using consequence_relation_family.intersect_cons_rel_family[OF cons_rel_fam_Q_lem] by simp
+ have "\<iota>' \<in> Red_Inf_\<G>_q q N"
+ using sat_n i'_in q_in all_red_crit calculus_with_red_crit.saturated_def sat_int_to_sat_q
+ by blast
+ then have "the (\<G>_Inf_q q \<iota>') \<subseteq> Red_Inf_q q (\<G>_set_q q N)"
+ by (simp add: Red_Inf_\<G>_q_def grounding)
+ then show ?thesis
+ using grounding by blast
qed
-sublocale lifted_calc_w_red_crit_family:
- calculus_with_red_crit_family Bot_F Inf_F Q entails_\<G>_q Red_Inf_\<G>_q Red_F_\<G>_q_g
- using cons_rel_fam_Q_lem red_crit_lifting_family
- by (simp add: calculus_with_red_crit_family.intro calculus_with_red_crit_family_axioms_def)
+abbreviation ground_Inf_redundant :: "'q \<Rightarrow> 'f set \<Rightarrow> bool" where
+ "ground_Inf_redundant q N \<equiv>
+ ground.Inf_from_q q (\<G>_set_q q N)
+ \<subseteq> {\<iota>. \<exists>\<iota>'\<in> Inf_from N. \<G>_Inf_q q \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf_q q \<iota>')} \<union> Red_Inf_q q (\<G>_set_q q N)"
-lemma lifted_calc_family_is_calc: "calculus_with_red_crit Bot_F Inf_F entails_\<G>_Q Red_Inf_\<G>_Q Red_F_\<G>_g"
-proof -
- have "lifted_calc_w_red_crit_family.entails_Q = entails_\<G>_Q"
- unfolding entails_\<G>_Q_def lifted_calc_w_red_crit_family.entails_Q_def by simp
- moreover have "lifted_calc_w_red_crit_family.Red_Inf_Q = Red_Inf_\<G>_Q"
- unfolding Red_Inf_\<G>_Q_def lifted_calc_w_red_crit_family.Red_Inf_Q_def by simp
- moreover have "lifted_calc_w_red_crit_family.Red_F_Q = Red_F_\<G>_g"
- unfolding Red_F_\<G>_g_def lifted_calc_w_red_crit_family.Red_F_Q_def by simp
- ultimately show "calculus_with_red_crit Bot_F Inf_F entails_\<G>_Q Red_Inf_\<G>_Q Red_F_\<G>_g"
- using lifted_calc_w_red_crit_family.inter_red_crit by simp
-qed
+abbreviation ground_saturated :: "'q \<Rightarrow> 'f set \<Rightarrow> bool" where
+ "ground_saturated q N \<equiv> ground.Inf_from_q q (\<G>_set_q q N) \<subseteq> Red_Inf_q q (\<G>_set_q q N)"
-sublocale empty_ord_lifted_calc_w_red_crit_family:
- calculus_with_red_crit_family Bot_F Inf_F Q entails_\<G>_q Red_Inf_\<G>_q Red_F_\<G>_empty_q
- using cons_rel_fam_Q_lem red_crit_lifting_family_empty_ord
- by (simp add: calculus_with_red_crit_family.intro calculus_with_red_crit_family_axioms_def)
-
-lemma inter_calc: "calculus_with_red_crit Bot_F Inf_F entails_\<G>_Q Red_Inf_\<G>_Q Red_F_\<G>_empty"
-proof -
- have "lifted_calc_w_red_crit_family.entails_Q = entails_\<G>_Q"
- unfolding entails_\<G>_Q_def lifted_calc_w_red_crit_family.entails_Q_def by simp
- moreover have "empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q = Red_Inf_\<G>_Q"
- unfolding Red_Inf_\<G>_Q_def lifted_calc_w_red_crit_family.Red_Inf_Q_def by simp
- moreover have "empty_ord_lifted_calc_w_red_crit_family.Red_F_Q = Red_F_\<G>_empty"
- unfolding Red_F_\<G>_empty_def empty_ord_lifted_calc_w_red_crit_family.Red_F_Q_def by simp
- ultimately show "calculus_with_red_crit Bot_F Inf_F entails_\<G>_Q Red_Inf_\<G>_Q Red_F_\<G>_empty"
- using empty_ord_lifted_calc_w_red_crit_family.inter_red_crit by simp
-qed
+lemma sat_imp_ground_sat_fam_inter:
+ "saturated N \<Longrightarrow> q \<in> Q \<Longrightarrow> ground_Inf_redundant q N \<Longrightarrow> ground_saturated q N"
+ using sat_inf_imp_ground_red_fam_inter by auto
(* thm:intersect-finf-complete *)
theorem stat_ref_comp_to_non_ground_fam_inter:
assumes
- stat_ref_G: "\<And>q. static_refutational_complete_calculus Bot_G Inf_G (entails_q q) (Red_Inf_q q) (Red_F_q q)" and
- sat_n_imp: "\<And>N. (empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated N \<Longrightarrow>
- \<exists>q. Ground_family.Inf_from (\<G>_set_q q N) \<subseteq>
- ({\<iota>. \<exists>\<iota>'\<in> Non_ground.Inf_from N. \<G>_Inf_q q \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf_q q \<iota>')} \<union> Red_Inf_q q (\<G>_set_q q N)))"
+ stat_ref_G:
+ "\<forall>q \<in> Q. static_refutational_complete_calculus Bot_G (Inf_G_q q) (entails_q q) (Red_Inf_q q)
+ (Red_F_q q)" and
+ sat_n_imp: "\<And>N. saturated N \<Longrightarrow> \<exists>q \<in> Q. ground_Inf_redundant q N"
shows
"static_refutational_complete_calculus Bot_F Inf_F entails_\<G>_Q Red_Inf_\<G>_Q Red_F_\<G>_empty"
- using inter_calc
- unfolding static_refutational_complete_calculus_def static_refutational_complete_calculus_axioms_def
+ using empty_ord.calculus_with_red_crit_axioms unfolding static_refutational_complete_calculus_def
+ static_refutational_complete_calculus_axioms_def
proof (standard, clarify)
fix B N
assume
b_in: "B \<in> Bot_F" and
- sat_n: "calculus_with_red_crit.saturated Inf_F Red_Inf_\<G>_Q N" and
- entails_bot: "N \<Turnstile>\<inter> {B}"
- interpret calculus_with_red_crit Bot_F Inf_F entails_\<G>_Q Red_Inf_\<G>_Q Red_F_\<G>_empty
- using inter_calc by blast
- have "empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q = Red_Inf_\<G>_Q"
- unfolding Red_Inf_\<G>_Q_def lifted_calc_w_red_crit_family.Red_Inf_Q_def by simp
- then have empty_ord_sat_n: "empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated N"
- using sat_n
- unfolding saturated_def empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated_def
- by simp
- then obtain q where inf_subs: "Ground_family.Inf_from (\<G>_set_q q N) \<subseteq>
- ({\<iota>. \<exists>\<iota>'\<in> Non_ground.Inf_from N. \<G>_Inf_q q \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf_q q \<iota>')} \<union> Red_Inf_q q (\<G>_set_q q N))"
+ sat_n: "saturated N" and
+ entails_bot: "N \<Turnstile>\<inter>\<G> {B}"
+ then obtain q where
+ q_in: "q \<in> Q" and
+ inf_subs: "ground.Inf_from_q q (\<G>_set_q q N) \<subseteq>
+ {\<iota>. \<exists>\<iota>'\<in> Inf_from N. \<G>_Inf_q q \<iota>' \<noteq> None \<and> \<iota> \<in> the (\<G>_Inf_q q \<iota>')}
+ \<union> Red_Inf_q q (\<G>_set_q q N)"
using sat_n_imp[of N] by blast
interpret q_calc: calculus_with_red_crit Bot_F Inf_F "entails_\<G>_q q" "Red_Inf_\<G>_q q" "Red_F_\<G>_q_g q"
- using lifted_calc_w_red_crit_family.all_red_crit[of q] .
- have n_q_sat: "q_calc.saturated N" using lifted_calc_w_red_crit_family.sat_int_to_sat_q empty_ord_sat_n by simp
- interpret lifted_q_calc: lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q q" Inf_G "Red_Inf_q q" "Red_F_q q" "\<G>_F_q q" "\<G>_Inf_q q"
- by (simp add: standard_lifting_family)
- have "lifted_q_calc.empty_order_lifting.lifted_calculus_with_red_crit.saturated N"
- using n_q_sat unfolding Red_Inf_\<G>_q_def \<G>_set_q_def lifted_q_calc.empty_order_lifting.Red_Inf_\<G>_def
- lifted_q_calc.lifted_calculus_with_red_crit.saturated_def q_calc.saturated_def by auto
- then have ground_sat_n: "lifted_q_calc.Ground.saturated (\<G>_set_q q N)"
- using lifted_q_calc.sat_imp_ground_sat[of N] inf_subs unfolding \<G>_set_q_def by blast
- have "entails_\<G>_q q N {B}" using entails_bot unfolding entails_\<G>_Q_def by simp
- then have ground_n_entails_bot: "entails_q q (\<G>_set_q q N) (\<G>_set_q q {B})" unfolding entails_\<G>_q_def .
- interpret static_refutational_complete_calculus Bot_G Inf_G "entails_q q" "Red_Inf_q q" "Red_F_q q"
- using stat_ref_G[of q] .
+ using all_red_crit[rule_format, OF q_in] .
+ have n_q_sat: "q_calc.saturated N"
+ using q_in sat_int_to_sat_q sat_n by simp
+ interpret lifted_q_calc:
+ lifting_with_wf_ordering_family Bot_F Inf_F Bot_G "entails_q q" "Inf_G_q q" "Red_Inf_q q"
+ "Red_F_q q" "\<G>_F_q q" "\<G>_Inf_q q"
+ using q_in by (simp add: standard_lifting_family)
+ have n_lift_sat: "lifted_q_calc.empty_order_lifting.saturated N"
+ using n_q_sat unfolding Red_Inf_\<G>_q_def lifted_q_calc.empty_order_lifting.Red_Inf_\<G>_def
+ lifted_q_calc.saturated_def q_calc.saturated_def by auto
+ have ground_sat_n: "lifted_q_calc.ground.saturated (\<G>_set_q q N)"
+ by (rule lifted_q_calc.sat_imp_ground_sat[OF n_lift_sat])
+ (use n_lift_sat inf_subs ground.Inf_from_q_def in auto)
+ have ground_n_entails_bot: "entails_\<G>_q q N {B}"
+ using q_in entails_bot unfolding entails_\<G>_Q_def by simp
+ interpret static_refutational_complete_calculus Bot_G "Inf_G_q q" "entails_q q" "Red_Inf_q q"
+ "Red_F_q q"
+ using stat_ref_G[rule_format, OF q_in] .
obtain BG where bg_in: "BG \<in> \<G>_F_q q B"
using lifted_q_calc.Bot_map_not_empty[OF b_in] by blast
then have "BG \<in> Bot_G" using lifted_q_calc.Bot_map[OF b_in] by blast
then have "\<exists>BG'\<in>Bot_G. BG' \<in> \<G>_set_q q N"
using ground_sat_n ground_n_entails_bot static_refutational_complete[of BG, OF _ ground_sat_n]
- bg_in lifted_q_calc.Ground.entail_set_all_formulas[of "\<G>_set_q q N" "\<G>_set_q q {B}"] unfolding \<G>_set_q_def
+ bg_in lifted_q_calc.ground.entail_set_all_formulas[of "\<G>_set_q q N" "\<G>_set_q q {B}"]
by simp
- then show "\<exists>B'\<in> Bot_F. B' \<in> N" using lifted_q_calc.Bot_cond unfolding \<G>_set_q_def by blast
+ then show "\<exists>B'\<in> Bot_F. B' \<in> N" using lifted_q_calc.Bot_cond by blast
qed
(* lem:intersect-saturation-indep-of-sqsubset *)
-lemma sat_eq_sat_empty_order: "lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated N =
- empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.saturated N "
- by simp
+lemma sat_eq_sat_empty_order: "saturated N = empty_ord.saturated N"
+ by (rule refl)
(* lem:intersect-static-ref-compl-indep-of-sqsubset *)
lemma static_empty_ord_inter_equiv_static_inter:
- "static_refutational_complete_calculus Bot_F Inf_F lifted_calc_w_red_crit_family.entails_Q
- lifted_calc_w_red_crit_family.Red_Inf_Q lifted_calc_w_red_crit_family.Red_F_Q =
- static_refutational_complete_calculus Bot_F Inf_F lifted_calc_w_red_crit_family.entails_Q
- empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q empty_ord_lifted_calc_w_red_crit_family.Red_F_Q"
+ "static_refutational_complete_calculus Bot_F Inf_F entails_Q Red_Inf_Q Red_F_Q =
+ static_refutational_complete_calculus Bot_F Inf_F entails_Q Red_Inf_Q Red_F_\<G>_empty"
unfolding static_refutational_complete_calculus_def
- by (simp add: empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.calculus_with_red_crit_axioms
- lifted_calc_w_red_crit_family.inter_red_crit_calculus.calculus_with_red_crit_axioms)
+ by (simp add: empty_ord.calculus_with_red_crit_axioms calculus_with_red_crit_axioms)
(* thm:intersect-static-ref-compl-is-dyn-ref-compl-with-order *)
-theorem stat_eq_dyn_ref_comp_fam_inter: "static_refutational_complete_calculus Bot_F Inf_F lifted_calc_w_red_crit_family.entails_Q
- empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q empty_ord_lifted_calc_w_red_crit_family.Red_F_Q =
- dynamic_refutational_complete_calculus Bot_F Inf_F lifted_calc_w_red_crit_family.entails_Q
- lifted_calc_w_red_crit_family.Red_Inf_Q lifted_calc_w_red_crit_family.Red_F_Q" (is "?static=?dynamic")
-proof
- assume ?static
- then have static_general: "static_refutational_complete_calculus Bot_F Inf_F
- lifted_calc_w_red_crit_family.entails_Q lifted_calc_w_red_crit_family.Red_Inf_Q
- lifted_calc_w_red_crit_family.Red_F_Q" (is "?static_gen")
- using static_empty_ord_inter_equiv_static_inter
- by simp
- interpret static_refutational_complete_calculus Bot_F Inf_F lifted_calc_w_red_crit_family.entails_Q
- lifted_calc_w_red_crit_family.Red_Inf_Q lifted_calc_w_red_crit_family.Red_F_Q
- using static_general .
- show "?dynamic" by standard
-next
- assume dynamic_gen: ?dynamic
- interpret dynamic_refutational_complete_calculus Bot_F Inf_F lifted_calc_w_red_crit_family.entails_Q
- lifted_calc_w_red_crit_family.Red_Inf_Q lifted_calc_w_red_crit_family.Red_F_Q
- using dynamic_gen .
- have "static_refutational_complete_calculus Bot_F Inf_F lifted_calc_w_red_crit_family.entails_Q
- lifted_calc_w_red_crit_family.Red_Inf_Q lifted_calc_w_red_crit_family.Red_F_Q"
- by standard
- then show "?static" using static_empty_ord_inter_equiv_static_inter by simp
-qed
+theorem stat_eq_dyn_ref_comp_fam_inter: "static_refutational_complete_calculus Bot_F Inf_F
+ entails_Q Red_Inf_Q Red_F_\<G>_empty =
+ dynamic_refutational_complete_calculus Bot_F Inf_F entails_Q Red_Inf_Q Red_F_Q"
+ using dyn_equiv_stat static_empty_ord_inter_equiv_static_inter by blast
end
end
diff --git a/thys/Saturation_Framework/Prover_Architectures.thy b/thys/Saturation_Framework/Prover_Architectures.thy
--- a/thys/Saturation_Framework/Prover_Architectures.thy
+++ b/thys/Saturation_Framework/Prover_Architectures.thy
@@ -1,1339 +1,1167 @@
(* Title: Prover Architectures of the Saturation Framework
* Author: Sophie Tourret <stourret at mpi-inf.mpg.de>, 2019-2020 *)
section \<open>Prover Architectures\<close>
text \<open>This section covers all the results presented in Section 4 of the report.
This is where abstract architectures of provers are defined and proven
dynamically refutationally complete.\<close>
theory Prover_Architectures
- imports Labeled_Lifting_to_Non_Ground_Calculi
+ imports
+ Lambda_Free_RPOs.Lambda_Free_Util
+ Labeled_Lifting_to_Non_Ground_Calculi
begin
subsection \<open>Basis of the Prover Architectures\<close>
-locale Prover_Architecture_Basis = labeled_lifting_with_red_crit_family Bot_F Inf_F Bot_G Q entails_q Inf_G
- Red_Inf_q Red_F_q \<G>_F_q \<G>_Inf_q l Inf_FL
+locale prover_architecture_basis = std?: labeled_lifting_with_red_crit_family Bot_F Inf_F Bot_G Q
+ entails_q Inf_G_q Red_Inf_q Red_F_q \<G>_F_q \<G>_Inf_q Inf_FL
for
Bot_F :: "'f set"
and Inf_F :: "'f inference set"
and Bot_G :: "'g set"
- and Q :: "'q itself"
- and entails_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g set \<Rightarrow> bool)"
- and Inf_G :: \<open>'g inference set\<close>
- and Red_Inf_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g inference set)"
- and Red_F_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g set)"
+ and Q :: "'q set"
+ and entails_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set \<Rightarrow> bool"
+ and Inf_G_q :: \<open>'q \<Rightarrow> 'g inference set\<close>
+ and Red_Inf_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g inference set"
+ and Red_F_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set"
and \<G>_F_q :: "'q \<Rightarrow> 'f \<Rightarrow> 'g set"
and \<G>_Inf_q :: "'q \<Rightarrow> 'f inference \<Rightarrow> 'g inference set option"
- and l :: "'l itself"
and Inf_FL :: \<open>('f \<times> 'l) inference set\<close>
+ fixes
- Equiv_F :: "('f \<times> 'f) set" and
- Prec_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<cdot>\<succ>" 50) and
- Prec_l :: "'l \<Rightarrow> 'l \<Rightarrow> bool" (infix "\<sqsubset>l" 50)
+ Equiv_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<doteq>" 50) and
+ Prec_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<prec>\<cdot>" 50) and
+ Prec_l :: "'l \<Rightarrow> 'l \<Rightarrow> bool" (infix "\<sqsubset>l" 50) and
+ active :: "'l"
assumes
- equiv_F_is_equiv_rel: "equiv UNIV Equiv_F" and
- wf_prec_F: "minimal_element (Prec_F) UNIV" and
- wf_prec_l: "minimal_element (Prec_l) UNIV" and
- compat_equiv_prec: "(C1,D1) \<in> equiv_F \<Longrightarrow> (C2,D2) \<in> equiv_F \<Longrightarrow> C1 \<cdot>\<succ> C2 \<Longrightarrow> D1 \<cdot>\<succ> D2" and
- equiv_F_grounding: "(C1,C2) \<in> equiv_F \<Longrightarrow> \<G>_F_q q C1 = \<G>_F_q q C2" and
- prec_F_grounding: "C1 \<cdot>\<succ> C2 \<Longrightarrow> \<G>_F_q q C1 \<subseteq> \<G>_F_q q C2" and
- static_ref_comp: "static_refutational_complete_calculus Bot_F Inf_F (\<Turnstile>\<inter>)
- no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q
- no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_F_Q"
+ equiv_equiv_F: "equivp (\<doteq>)" and
+ wf_prec_F: "minimal_element (\<prec>\<cdot>) UNIV" and
+ wf_prec_l: "minimal_element (\<sqsubset>l) UNIV" and
+ compat_equiv_prec: "C1 \<doteq> D1 \<Longrightarrow> C2 \<doteq> D2 \<Longrightarrow> C1 \<prec>\<cdot> C2 \<Longrightarrow> D1 \<prec>\<cdot> D2" and
+ equiv_F_grounding: "q \<in> Q \<Longrightarrow> C1 \<doteq> C2 \<Longrightarrow> \<G>_F_q q C1 \<subseteq> \<G>_F_q q C2" and
+ prec_F_grounding: "q \<in> Q \<Longrightarrow> C2 \<prec>\<cdot> C1 \<Longrightarrow> \<G>_F_q q C1 \<subseteq> \<G>_F_q q C2" and
+ active_minimal: "l2 \<noteq> active \<Longrightarrow> active \<sqsubset>l l2" and
+ at_least_two_labels: "\<exists>l2. active \<sqsubset>l l2" and
+ inf_never_active: "\<iota> \<in> Inf_FL \<Longrightarrow> snd (concl_of \<iota>) \<noteq> active" and
+ static_ref_comp: "static_refutational_complete_calculus Bot_F Inf_F (\<Turnstile>\<inter>\<G>)
+ no_labels.Red_Inf_\<G>_Q no_labels.Red_F_\<G>_empty"
begin
-definition equiv_F_fun :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<doteq>" 50) where
- "equiv_F_fun C D \<equiv> (C,D) \<in> Equiv_F"
-
-definition Prec_eq_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<cdot>\<succeq>" 50) where
- "Prec_eq_F C D \<equiv> ((C,D) \<in> Equiv_F \<or> C \<cdot>\<succ> D)"
+abbreviation Prec_eq_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<preceq>\<cdot>" 50) where
+ "C \<preceq>\<cdot> D \<equiv> C \<doteq> D \<or> C \<prec>\<cdot> D"
definition Prec_FL :: "('f \<times> 'l) \<Rightarrow> ('f \<times> 'l) \<Rightarrow> bool" (infix "\<sqsubset>" 50) where
- "Prec_FL Cl1 Cl2 \<equiv> (fst Cl1 \<cdot>\<succ> fst Cl2) \<or> (fst Cl1 \<doteq> fst Cl2 \<and> snd Cl1 \<sqsubset>l snd Cl2)"
+ "Cl1 \<sqsubset> Cl2 \<longleftrightarrow> fst Cl1 \<prec>\<cdot> fst Cl2 \<or> (fst Cl1 \<doteq> fst Cl2 \<and> snd Cl1 \<sqsubset>l snd Cl2)"
+
+lemma irrefl_prec_F: "\<not> C \<prec>\<cdot> C"
+ by (simp add: minimal_element.po[OF wf_prec_F, unfolded po_on_def irreflp_on_def])
+
+lemma trans_prec_F: "C1 \<prec>\<cdot> C2 \<Longrightarrow> C2 \<prec>\<cdot> C3 \<Longrightarrow> C1 \<prec>\<cdot> C3"
+ by (auto intro: minimal_element.po[OF wf_prec_F, unfolded po_on_def transp_on_def, THEN conjunct2,
+ simplified, rule_format])
lemma wf_prec_FL: "minimal_element (\<sqsubset>) UNIV"
proof
show "po_on (\<sqsubset>) UNIV" unfolding po_on_def
proof
show "irreflp_on (\<sqsubset>) UNIV" unfolding irreflp_on_def Prec_FL_def
proof
- fix a
- assume a_in: "a \<in> (UNIV::('f \<times> 'l) set)"
- have "\<not> (fst a \<cdot>\<succ> fst a)" using wf_prec_F minimal_element.min_elt_ex by force
- moreover have "\<not> (snd a \<sqsubset>l snd a)" using wf_prec_l minimal_element.min_elt_ex by force
- ultimately show "\<not> (fst a \<cdot>\<succ> fst a \<or> fst a \<doteq> fst a \<and> snd a \<sqsubset>l snd a)" by blast
+ fix Cl
+ assume a_in: "Cl \<in> (UNIV::('f \<times> 'l) set)"
+ have "\<not> (fst Cl \<prec>\<cdot> fst Cl)" using wf_prec_F minimal_element.min_elt_ex by force
+ moreover have "\<not> (snd Cl \<sqsubset>l snd Cl)" using wf_prec_l minimal_element.min_elt_ex by force
+ ultimately show "\<not> (fst Cl \<prec>\<cdot> fst Cl \<or> fst Cl \<doteq> fst Cl \<and> snd Cl \<sqsubset>l snd Cl)" by blast
qed
next
show "transp_on (\<sqsubset>) UNIV" unfolding transp_on_def Prec_FL_def
proof (simp, intro allI impI)
- fix a1 b1 a2 b2 a3 b3
- assume trans_hyp:"(a1 \<cdot>\<succ> a2 \<or> a1 \<doteq> a2 \<and> b1 \<sqsubset>l b2) \<and> (a2 \<cdot>\<succ> a3 \<or> a2 \<doteq> a3 \<and> b2 \<sqsubset>l b3)"
- have "a1 \<cdot>\<succ> a2 \<Longrightarrow> a2 \<cdot>\<succ> a3 \<Longrightarrow> a1 \<cdot>\<succ> a3" using wf_prec_F compat_equiv_prec by blast
- moreover have "a1 \<cdot>\<succ> a2 \<Longrightarrow> a2 \<doteq> a3 \<Longrightarrow> a1 \<cdot>\<succ> a3" using wf_prec_F compat_equiv_prec by blast
- moreover have "a1 \<doteq> a2 \<Longrightarrow> a2 \<cdot>\<succ> a3 \<Longrightarrow> a1 \<cdot>\<succ> a3" using wf_prec_F compat_equiv_prec by blast
- moreover have "b1 \<sqsubset>l b2 \<Longrightarrow> b2 \<sqsubset>l b3 \<Longrightarrow> b1 \<sqsubset>l b3"
+ fix C1 l1 C2 l2 C3 l3
+ assume trans_hyp: "(C1 \<prec>\<cdot> C2 \<or> C1 \<doteq> C2 \<and> l1 \<sqsubset>l l2) \<and> (C2 \<prec>\<cdot> C3 \<or> C2 \<doteq> C3 \<and> l2 \<sqsubset>l l3)"
+ have "C1 \<prec>\<cdot> C2 \<Longrightarrow> C2 \<doteq> C3 \<Longrightarrow> C1 \<prec>\<cdot> C3"
+ using compat_equiv_prec by (metis equiv_equiv_F equivp_def)
+ moreover have "C1 \<doteq> C2 \<Longrightarrow> C2 \<prec>\<cdot> C3 \<Longrightarrow> C1 \<prec>\<cdot> C3"
+ using compat_equiv_prec by (metis equiv_equiv_F equivp_def)
+ moreover have "l1 \<sqsubset>l l2 \<Longrightarrow> l2 \<sqsubset>l l3 \<Longrightarrow> l1 \<sqsubset>l l3"
using wf_prec_l unfolding minimal_element_def po_on_def transp_on_def by (meson UNIV_I)
- moreover have "a1 \<doteq> a2 \<Longrightarrow> a2 \<doteq> a3 \<Longrightarrow> a1 \<doteq> a3"
- using equiv_F_is_equiv_rel equiv_class_eq unfolding equiv_F_fun_def by fastforce
- ultimately show "(a1 \<cdot>\<succ> a3 \<or> a1 \<doteq> a3 \<and> b1 \<sqsubset>l b3)" using trans_hyp by blast
+ moreover have "C1 \<doteq> C2 \<Longrightarrow> C2 \<doteq> C3 \<Longrightarrow> C1 \<doteq> C3"
+ using equiv_equiv_F by (meson equivp_transp)
+ ultimately show "C1 \<prec>\<cdot> C3 \<or> C1 \<doteq> C3 \<and> l1 \<sqsubset>l l3" using trans_hyp
+ using trans_prec_F by blast
qed
qed
next
show "wfp_on (\<sqsubset>) UNIV" unfolding wfp_on_def
proof
assume contra: "\<exists>f. \<forall>i. f i \<in> UNIV \<and> f (Suc i) \<sqsubset> f i"
- then obtain f where f_in: "\<forall>i. f i \<in> UNIV" and f_suc: "\<forall>i. f (Suc i) \<sqsubset> f i" by blast
- define f_F where "f_F = (\<lambda>i. fst (f i))"
- define f_L where "f_L = (\<lambda>i. snd (f i))"
- have uni_F: "\<forall>i. f_F i \<in> UNIV" using f_in by simp
- have uni_L: "\<forall>i. f_L i \<in> UNIV" using f_in by simp
- have decomp: "\<forall>i. f_F (Suc i) \<cdot>\<succ> f_F i \<or> f_L (Suc i) \<sqsubset>l f_L i"
- using f_suc unfolding Prec_FL_def f_F_def f_L_def by blast
- define I_F where "I_F = { i |i. f_F (Suc i) \<cdot>\<succ> f_F i}"
- define I_L where "I_L = { i |i. f_L (Suc i) \<sqsubset>l f_L i}"
- have "I_F \<union> I_L = UNIV" using decomp unfolding I_F_def I_L_def by blast
- then have "finite I_F \<Longrightarrow> \<not> finite I_L" by (metis finite_UnI infinite_UNIV_nat)
- moreover have "infinite I_F \<Longrightarrow> \<exists>f. \<forall>i. f i \<in> UNIV \<and> f (Suc i) \<cdot>\<succ> f i"
- using uni_F unfolding I_F_def by (meson compat_equiv_prec iso_tuple_UNIV_I not_finite_existsD)
- moreover have "infinite I_L \<Longrightarrow> \<exists>f. \<forall>i. f i \<in> UNIV \<and> f (Suc i) \<sqsubset>l f i"
- using uni_L unfolding I_L_def
- by (metis UNIV_I compat_equiv_prec decomp minimal_element_def wf_prec_F wfp_on_def)
- ultimately show False using wf_prec_F wf_prec_l by (metis minimal_element_def wfp_on_def)
+ then obtain f where
+ f_suc: "\<forall>i. f (Suc i) \<sqsubset> f i"
+ by blast
+
+ define R :: "(('f \<times> 'l) \<times> ('f \<times> 'l)) set" where
+ "R = {(Cl1, Cl2). fst Cl1 \<prec>\<cdot> fst Cl2}"
+ define S :: "(('f \<times> 'l) \<times> ('f \<times> 'l)) set" where
+ "S = {(Cl1, Cl2). fst Cl1 \<doteq> fst Cl2 \<and> snd Cl1 \<sqsubset>l snd Cl2}"
+
+ obtain k where
+ f_chain: "\<forall>i. (f (Suc (i + k)), f (i + k)) \<in> S"
+ proof (atomize_elim, rule wf_infinite_down_chain_compatible[of R f S])
+ show "wf R"
+ unfolding R_def using wf_app[OF wf_prec_F[unfolded minimal_element_def, THEN conjunct2,
+ unfolded wfp_on_UNIV wfP_def]]
+ by force
+ next
+ show "\<forall>i. (f (Suc i), f i) \<in> R \<union> S"
+ using f_suc unfolding R_def S_def Prec_FL_def by blast
+ next
+ show "R O S \<subseteq> R"
+ unfolding R_def S_def using compat_equiv_prec equiv_equiv_F equivp_reflp by fastforce
+ qed
+
+ define g where
+ "\<And>i. g i = f (i + k)"
+
+ have g_chain: "\<forall>i. (g (Suc i), g i) \<in> S"
+ unfolding g_def using f_chain by simp
+ have wf_s: "wf S"
+ unfolding S_def
+ by (rule wf_subset[OF wf_app[OF wf_prec_l[unfolded minimal_element_def, THEN conjunct2,
+ unfolded wfp_on_UNIV wfP_def], of snd]])
+ fast
+ show False
+ using g_chain[unfolded S_def]
+ wf_s[unfolded S_def, folded wfP_def wfp_on_UNIV, unfolded wfp_on_def]
+ by auto
qed
qed
-lemma labeled_static_ref_comp:
- "static_refutational_complete_calculus Bot_FL Inf_FL (\<Turnstile>\<inter>L) with_labels.Red_Inf_Q with_labels.Red_F_Q"
+definition active_subset :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
+ "active_subset M = {CL \<in> M. snd CL = active}"
+
+definition passive_subset :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
+ "passive_subset M = {CL \<in> M. snd CL \<noteq> active}"
+
+lemma active_subset_insert[simp]:
+ "active_subset (insert Cl N) = (if snd Cl = active then {Cl} else {}) \<union> active_subset N"
+ unfolding active_subset_def by auto
+
+lemma active_subset_union[simp]: "active_subset (M \<union> N) = active_subset M \<union> active_subset N"
+ unfolding active_subset_def by auto
+
+lemma passive_subset_insert[simp]:
+ "passive_subset (insert Cl N) = (if snd Cl \<noteq> active then {Cl} else {}) \<union> passive_subset N"
+ unfolding passive_subset_def by auto
+
+lemma passive_subset_union[simp]: "passive_subset (M \<union> N) = passive_subset M \<union> passive_subset N"
+ unfolding passive_subset_def by auto
+
+sublocale std?: static_refutational_complete_calculus Bot_FL Inf_FL "(\<Turnstile>\<inter>\<G>L)" Red_Inf_Q Red_F_Q
using labeled_static_ref[OF static_ref_comp] .
-lemma standard_labeled_lifting_family: "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G
- (entails_q q) Inf_G (Red_Inf_q q) (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g. Prec_FL)"
+lemma standard_labeled_lifting_family:
+ assumes q_in: "q \<in> Q"
+ shows "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G (entails_q q) (Inf_G_q q)
+ (Red_Inf_q q) (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g. Prec_FL)"
proof -
- fix q
- have "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G (entails_q q) Inf_G
- (Red_Inf_q q) (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g. Labeled_Empty_Order)"
- using ord_fam_lifted_q .
- then have "standard_lifting Bot_FL Inf_FL Bot_G Inf_G (entails_q q) (Red_Inf_q q) (Red_F_q q)
- (\<G>_F_L_q q) (\<G>_Inf_L_q q)"
- using lifted_q by blast
- then show "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G (entails_q q) Inf_G (Red_Inf_q q)
- (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g. Prec_FL)"
+ have "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G (entails_q q) (Inf_G_q q)
+ (Red_Inf_q q) (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g Cl Cl'. False)"
+ using ord_fam_lifted_q[OF q_in] .
+ then have "standard_lifting Bot_FL Inf_FL Bot_G (Inf_G_q q) (entails_q q) (Red_Inf_q q)
+ (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q)"
+ using lifted_q[OF q_in] by blast
+ then show "lifting_with_wf_ordering_family Bot_FL Inf_FL Bot_G (entails_q q) (Inf_G_q q)
+ (Red_Inf_q q) (Red_F_q q) (\<G>_F_L_q q) (\<G>_Inf_L_q q) (\<lambda>g. Prec_FL)"
using wf_prec_FL
by (simp add: lifting_with_wf_ordering_family.intro lifting_with_wf_ordering_family_axioms.intro)
qed
-sublocale labeled_ord_red_crit_fam: standard_lifting_with_red_crit_family Inf_FL Bot_G Inf_G Q
- entails_q Red_Inf_q Red_F_q
+sublocale standard_lifting_with_red_crit_family Inf_FL Bot_G Q Inf_G_q entails_q Red_Inf_q Red_F_q
Bot_FL \<G>_F_L_q \<G>_Inf_L_q "\<lambda>g. Prec_FL"
- using standard_labeled_lifting_family
- no_labels.Ground_family.calculus_with_red_crit_family_axioms
+ using standard_labeled_lifting_family no_labels.ground.calculus_family_with_red_crit_family_axioms
by (simp add: standard_lifting_with_red_crit_family.intro
standard_lifting_with_red_crit_family_axioms.intro)
-lemma entail_equiv:
- "labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.entails_Q N1 N2 = (N1 \<Turnstile>\<inter>L N2)"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.entails_Q_def
- entails_\<G>_L_Q_def entails_\<G>_L_q_def labeled_ord_red_crit_fam.entails_\<G>_q_def
- labeled_ord_red_crit_fam.\<G>_set_q_def \<G>_set_L_q_def
- by simp
-
-lemma entail_equiv2: "labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.entails_Q = (\<Turnstile>\<inter>L)"
- using entail_equiv by auto
-
-lemma red_inf_equiv: "labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q N =
- with_labels.Red_Inf_Q N"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q_def
- with_labels.Red_Inf_Q_def labeled_ord_red_crit_fam.Red_Inf_\<G>_q_def Red_Inf_\<G>_L_q_def
- labeled_ord_red_crit_fam.\<G>_set_q_def \<G>_set_L_q_def
- by simp
-
-lemma red_inf_equiv2: "labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q =
- with_labels.Red_Inf_Q"
- using red_inf_equiv by auto
+notation derive (infix "\<rhd>RedL" 50)
-lemma empty_red_f_equiv: "labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.Red_F_Q N =
- with_labels.Red_F_Q N"
- unfolding labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.Red_F_Q_def
- with_labels.Red_F_Q_def labeled_ord_red_crit_fam.Red_F_\<G>_empty_q_def Red_F_\<G>_empty_L_q_def
- labeled_ord_red_crit_fam.\<G>_set_q_def \<G>_set_L_q_def Labeled_Empty_Order_def
- by simp
-
-lemma empty_red_f_equiv2: "labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.Red_F_Q =
- with_labels.Red_F_Q"
- using empty_red_f_equiv by auto
+lemma std_Red_Inf_Q_eq: "std.Red_Inf_Q = Red_Inf_\<G>_Q"
+ unfolding Red_Inf_\<G>_q_def Red_Inf_\<G>_L_q_def by simp
-lemma labeled_ordered_static_ref_comp:
- "static_refutational_complete_calculus Bot_FL Inf_FL
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.entails_Q
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q"
- using labeled_ord_red_crit_fam.static_empty_ord_inter_equiv_static_inter empty_red_f_equiv2
- red_inf_equiv2 entail_equiv2 labeled_static_ref_comp
- by argo
+lemma std_Red_F_Q_eq: "std.Red_F_Q = Red_F_\<G>_empty"
+ unfolding Red_F_\<G>_empty_q_def Red_F_\<G>_empty_L_q_def by simp
-interpretation stat_ref_calc: static_refutational_complete_calculus Bot_FL Inf_FL
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.entails_Q
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q
- by (rule labeled_ordered_static_ref_comp)
-
-lemma labeled_ordered_dynamic_ref_comp:
- "dynamic_refutational_complete_calculus Bot_FL Inf_FL
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.entails_Q
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q
- labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q"
- by (rule stat_ref_calc.dynamic_refutational_complete_calculus_axioms)
+sublocale static_refutational_complete_calculus Bot_FL Inf_FL "(\<Turnstile>\<inter>\<G>L)" Red_Inf_Q Red_F_Q
+ by unfold_locales (use static_refutational_complete std_Red_Inf_Q_eq in auto)
(* lem:redundant-labeled-inferences *)
-lemma labeled_red_inf_eq_red_inf: "\<iota> \<in> Inf_FL \<Longrightarrow>
- \<iota> \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q N \<equiv>
- (to_F \<iota>) \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` N)" for \<iota>
-proof -
- fix \<iota>
- assume i_in: "\<iota> \<in> Inf_FL"
- have "\<iota> \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q N \<Longrightarrow>
- (to_F \<iota>) \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` N)"
- proof -
- assume i_in2: "\<iota> \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q N"
- then have "X \<in> labeled_ord_red_crit_fam.Red_Inf_\<G>_q ` UNIV \<Longrightarrow> \<iota> \<in> X N" for X
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q_def by blast
- obtain X0 where "X0 \<in> labeled_ord_red_crit_fam.Red_Inf_\<G>_q ` UNIV" by blast
- then obtain q0 where x0_is: "X0 N = labeled_ord_red_crit_fam.Red_Inf_\<G>_q q0 N" by blast
- then obtain Y0 where y0_is: "Y0 (fst ` N) = to_F ` (X0 N)" by auto
- have "Y0 (fst ` N) = no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
- unfolding y0_is
+lemma labeled_red_inf_eq_red_inf:
+ assumes i_in: "\<iota> \<in> Inf_FL"
+ shows "\<iota> \<in> Red_Inf_Q N \<longleftrightarrow> to_F \<iota> \<in> no_labels.Red_Inf_\<G>_Q (fst ` N)"
+proof
+ assume i_in2: "\<iota> \<in> Red_Inf_Q N"
+ then have "X \<in> Red_Inf_\<G>_q ` Q \<Longrightarrow> \<iota> \<in> X N" for X
+ unfolding Red_Inf_Q_def by blast
+ obtain X0 where "X0 \<in> Red_Inf_\<G>_q ` Q"
+ using Q_nonempty by blast
+ then obtain q0 where x0_is: "X0 N = Red_Inf_\<G>_q q0 N" by blast
+ then obtain Y0 where y0_is: "Y0 (fst ` N) = to_F ` (X0 N)" by auto
+ have "Y0 (fst ` N) = no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
+ unfolding y0_is
+ proof
+ show "to_F ` X0 N \<subseteq> no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
proof
- show "to_F ` X0 N \<subseteq> no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
- proof
- fix \<iota>0
- assume i0_in: "\<iota>0 \<in> to_F ` X0 N"
- then have i0_in2: "\<iota>0 \<in> to_F ` (labeled_ord_red_crit_fam.Red_Inf_\<G>_q q0 N)"
- using x0_is by argo
- then obtain \<iota>0_FL where i0_FL_in: "\<iota>0_FL \<in> Inf_FL" and i0_to_i0_FL: "\<iota>0 = to_F \<iota>0_FL" and
- subs1: "((\<G>_Inf_L_q q0 \<iota>0_FL) \<noteq> None \<and>
- the (\<G>_Inf_L_q q0 \<iota>0_FL) \<subseteq> Red_Inf_q q0 (labeled_ord_red_crit_fam.\<G>_set_q q0 N))
+ fix \<iota>0
+ assume i0_in: "\<iota>0 \<in> to_F ` X0 N"
+ then have i0_in2: "\<iota>0 \<in> to_F ` Red_Inf_\<G>_q q0 N"
+ using x0_is by argo
+ then obtain \<iota>0_FL where i0_FL_in: "\<iota>0_FL \<in> Inf_FL" and i0_to_i0_FL: "\<iota>0 = to_F \<iota>0_FL" and
+ subs1: "((\<G>_Inf_L_q q0 \<iota>0_FL) \<noteq> None \<and>
+ the (\<G>_Inf_L_q q0 \<iota>0_FL) \<subseteq> Red_Inf_q q0 (\<G>_set_q q0 N))
\<or> ((\<G>_Inf_L_q q0 \<iota>0_FL = None) \<and>
- \<G>_F_L_q q0 (concl_of \<iota>0_FL) \<subseteq> (labeled_ord_red_crit_fam.\<G>_set_q q0 N \<union>
- Red_F_q q0 (labeled_ord_red_crit_fam.\<G>_set_q q0 N)))"
- unfolding labeled_ord_red_crit_fam.Red_Inf_\<G>_q_def by blast
- have concl_swap: "fst (concl_of \<iota>0_FL) = concl_of \<iota>0"
- unfolding concl_of_def i0_to_i0_FL to_F_def by simp
- have i0_in3: "\<iota>0 \<in> Inf_F"
- using i0_to_i0_FL Inf_FL_to_Inf_F[OF i0_FL_in] unfolding to_F_def by blast
- {
- assume
- not_none: "\<G>_Inf_q q0 \<iota>0 \<noteq> None" and
- "the (\<G>_Inf_q q0 \<iota>0) \<noteq> {}"
- then obtain \<iota>1 where i1_in: "\<iota>1 \<in> the (\<G>_Inf_q q0 \<iota>0)" by blast
- have "the (\<G>_Inf_q q0 \<iota>0) \<subseteq> Red_Inf_q q0 (no_labels.\<G>_set_q q0 (fst ` N))"
- using subs1 i0_to_i0_FL not_none
- unfolding no_labels.\<G>_set_q_def labeled_ord_red_crit_fam.\<G>_set_q_def
- \<G>_Inf_L_q_def \<G>_F_L_q_def by auto
- }
- moreover {
- assume
- is_none: "\<G>_Inf_q q0 \<iota>0 = None"
- then have "\<G>_F_q q0 (concl_of \<iota>0) \<subseteq> no_labels.\<G>_set_q q0 (fst ` N)
+ \<G>_F_L_q q0 (concl_of \<iota>0_FL) \<subseteq> \<G>_set_q q0 N \<union> Red_F_q q0 (\<G>_set_q q0 N))"
+ unfolding Red_Inf_\<G>_q_def by blast
+ have concl_swap: "fst (concl_of \<iota>0_FL) = concl_of \<iota>0"
+ unfolding concl_of_def i0_to_i0_FL to_F_def by simp
+ have i0_in3: "\<iota>0 \<in> Inf_F"
+ using i0_to_i0_FL Inf_FL_to_Inf_F[OF i0_FL_in] unfolding to_F_def by blast
+ {
+ assume
+ not_none: "\<G>_Inf_q q0 \<iota>0 \<noteq> None" and
+ "the (\<G>_Inf_q q0 \<iota>0) \<noteq> {}"
+ then obtain \<iota>1 where i1_in: "\<iota>1 \<in> the (\<G>_Inf_q q0 \<iota>0)" by blast
+ have "the (\<G>_Inf_q q0 \<iota>0) \<subseteq> Red_Inf_q q0 (no_labels.\<G>_set_q q0 (fst ` N))"
+ using subs1 i0_to_i0_FL not_none by auto
+ }
+ moreover {
+ assume
+ is_none: "\<G>_Inf_q q0 \<iota>0 = None"
+ then have "\<G>_F_q q0 (concl_of \<iota>0) \<subseteq> no_labels.\<G>_set_q q0 (fst ` N)
\<union> Red_F_q q0 (no_labels.\<G>_set_q q0 (fst ` N))"
- using subs1 i0_to_i0_FL concl_swap
- unfolding no_labels.\<G>_set_q_def labeled_ord_red_crit_fam.\<G>_set_q_def
- \<G>_Inf_L_q_def \<G>_F_L_q_def by simp
- }
- ultimately show "\<iota>0 \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
- unfolding no_labels.Red_Inf_\<G>_q_def using i0_in3 by auto
- qed
- next
- show "no_labels.Red_Inf_\<G>_q q0 (fst ` N) \<subseteq> to_F ` X0 N"
- proof
- fix \<iota>0
- assume i0_in: "\<iota>0 \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
- then have i0_in2: "\<iota>0 \<in> Inf_F"
- unfolding no_labels.Red_Inf_\<G>_q_def by blast
- obtain \<iota>0_FL where i0_FL_in: "\<iota>0_FL \<in> Inf_FL" and i0_to_i0_FL: "\<iota>0 = to_F \<iota>0_FL"
- using Inf_F_to_Inf_FL[OF i0_in2] unfolding to_F_def
- by (metis Ex_list_of_length fst_conv inference.exhaust_sel inference.inject map_fst_zip)
- have concl_swap: "fst (concl_of \<iota>0_FL) = concl_of \<iota>0"
- unfolding concl_of_def i0_to_i0_FL to_F_def by simp
- have subs1: "((\<G>_Inf_L_q q0 \<iota>0_FL) \<noteq> None \<and>
- the (\<G>_Inf_L_q q0 \<iota>0_FL) \<subseteq> Red_Inf_q q0 (labeled_ord_red_crit_fam.\<G>_set_q q0 N))
+ using subs1 i0_to_i0_FL concl_swap by simp
+ }
+ ultimately show "\<iota>0 \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
+ unfolding no_labels.Red_Inf_\<G>_q_def using i0_in3 by auto
+ qed
+ next
+ show "no_labels.Red_Inf_\<G>_q q0 (fst ` N) \<subseteq> to_F ` X0 N"
+ proof
+ fix \<iota>0
+ assume i0_in: "\<iota>0 \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
+ then have i0_in2: "\<iota>0 \<in> Inf_F"
+ unfolding no_labels.Red_Inf_\<G>_q_def by blast
+ obtain \<iota>0_FL where i0_FL_in: "\<iota>0_FL \<in> Inf_FL" and i0_to_i0_FL: "\<iota>0 = to_F \<iota>0_FL"
+ using Inf_F_to_Inf_FL[OF i0_in2] unfolding to_F_def
+ by (metis Ex_list_of_length fst_conv inference.exhaust_sel inference.inject map_fst_zip)
+ have concl_swap: "fst (concl_of \<iota>0_FL) = concl_of \<iota>0"
+ unfolding concl_of_def i0_to_i0_FL to_F_def by simp
+ have subs1: "((\<G>_Inf_L_q q0 \<iota>0_FL) \<noteq> None \<and>
+ the (\<G>_Inf_L_q q0 \<iota>0_FL) \<subseteq> Red_Inf_q q0 (\<G>_set_q q0 N))
\<or> ((\<G>_Inf_L_q q0 \<iota>0_FL = None) \<and>
- \<G>_F_L_q q0 (concl_of \<iota>0_FL) \<subseteq> (labeled_ord_red_crit_fam.\<G>_set_q q0 N \<union>
- Red_F_q q0 (labeled_ord_red_crit_fam.\<G>_set_q q0 N)))"
- using i0_in i0_to_i0_FL concl_swap
- unfolding no_labels.Red_Inf_\<G>_q_def \<G>_Inf_L_q_def no_labels.\<G>_set_q_def
- labeled_ord_red_crit_fam.\<G>_set_q_def \<G>_F_L_q_def
- by simp
- then have "\<iota>0_FL \<in> labeled_ord_red_crit_fam.Red_Inf_\<G>_q q0 N"
- using i0_FL_in unfolding labeled_ord_red_crit_fam.Red_Inf_\<G>_q_def
- by simp
- then show "\<iota>0 \<in> to_F ` X0 N"
- using x0_is i0_to_i0_FL i0_in2 by blast
- qed
- qed
- then have "Y \<in> no_labels.Red_Inf_\<G>_q ` UNIV \<Longrightarrow> (to_F \<iota>) \<in> Y (fst ` N)" for Y
- using i_in2 no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q_def
- red_inf_equiv2 red_inf_impl by fastforce
- then show "(to_F \<iota>) \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` N)"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q_def
- no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q_def
- by blast
+ \<G>_F_L_q q0 (concl_of \<iota>0_FL) \<subseteq> (\<G>_set_q q0 N \<union> Red_F_q q0 (\<G>_set_q q0 N)))"
+ using i0_in i0_to_i0_FL concl_swap unfolding no_labels.Red_Inf_\<G>_q_def by simp
+ then have "\<iota>0_FL \<in> Red_Inf_\<G>_q q0 N"
+ using i0_FL_in unfolding Red_Inf_\<G>_q_def by simp
+ then show "\<iota>0 \<in> to_F ` X0 N"
+ using x0_is i0_to_i0_FL i0_in2 by blast
qed
- moreover have "(to_F \<iota>) \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` N) \<Longrightarrow>
- \<iota> \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q N"
- proof -
- assume to_F_in: "to_F \<iota> \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` N)"
- have imp_to_F: "X \<in> no_labels.Red_Inf_\<G>_q ` UNIV \<Longrightarrow> to_F \<iota> \<in> X (fst ` N)" for X
- using to_F_in unfolding no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q_def
- by blast
- then have to_F_in2: "to_F \<iota> \<in> no_labels.Red_Inf_\<G>_q q (fst ` N)" for q
- by fast
- have "labeled_ord_red_crit_fam.Red_Inf_\<G>_q q N =
- {\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q (fst ` N)}" for q
+ qed
+ then have "Y \<in> no_labels.Red_Inf_\<G>_q ` Q \<Longrightarrow> to_F \<iota> \<in> Y (fst ` N)" for Y
+ using i_in2 no_labels.Red_Inf_Q_def std_Red_Inf_Q_eq red_inf_impl by force
+ then show "to_F \<iota> \<in> no_labels.Red_Inf_\<G>_Q (fst ` N)"
+ unfolding Red_Inf_Q_def no_labels.Red_Inf_\<G>_Q_def by blast
+next
+ assume to_F_in: "to_F \<iota> \<in> no_labels.Red_Inf_\<G>_Q (fst ` N)"
+ have imp_to_F: "X \<in> no_labels.Red_Inf_\<G>_q ` Q \<Longrightarrow> to_F \<iota> \<in> X (fst ` N)" for X
+ using to_F_in unfolding no_labels.Red_Inf_\<G>_Q_def by blast
+ then have to_F_in2: "to_F \<iota> \<in> no_labels.Red_Inf_\<G>_q q (fst ` N)" if "q \<in> Q" for q
+ using that by auto
+ have "Red_Inf_\<G>_q q N = {\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q (fst ` N)}" for q
+ proof
+ show "Red_Inf_\<G>_q q N \<subseteq> {\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q (fst ` N)}"
proof
- show "labeled_ord_red_crit_fam.Red_Inf_\<G>_q q N \<subseteq>
- {\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q (fst ` N)}"
- proof
- fix q0 \<iota>1
- assume
- i1_in: "\<iota>1 \<in> labeled_ord_red_crit_fam.Red_Inf_\<G>_q q0 N"
- have i1_in2: "\<iota>1 \<in> Inf_FL"
- using i1_in unfolding labeled_ord_red_crit_fam.Red_Inf_\<G>_q_def by blast
- then have to_F_i1_in: "to_F \<iota>1 \<in> Inf_F"
- using Inf_FL_to_Inf_F unfolding to_F_def by simp
- have concl_swap: "fst (concl_of \<iota>1) = concl_of (to_F \<iota>1)"
- unfolding concl_of_def to_F_def by simp
- then have i1_to_F_in: "to_F \<iota>1 \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
- using i1_in to_F_i1_in
- unfolding labeled_ord_red_crit_fam.Red_Inf_\<G>_q_def no_labels.Red_Inf_\<G>_q_def
- \<G>_Inf_L_q_def labeled_ord_red_crit_fam.\<G>_set_q_def no_labels.\<G>_set_q_def \<G>_F_L_q_def
- by force
- show "\<iota>1 \<in> {\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)}"
- using i1_in2 i1_to_F_in by blast
- qed
- next
- show "{\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q (fst ` N)} \<subseteq>
- labeled_ord_red_crit_fam.Red_Inf_\<G>_q q N"
- proof
- fix q0 \<iota>1
- assume
- i1_in: "\<iota>1 \<in> {\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)}"
- then have i1_in2: "\<iota>1 \<in> Inf_FL" by blast
- then have to_F_i1_in: "to_F \<iota>1 \<in> Inf_F"
- using Inf_FL_to_Inf_F unfolding to_F_def by simp
- have concl_swap: "fst (concl_of \<iota>1) = concl_of (to_F \<iota>1)"
- unfolding concl_of_def to_F_def by simp
- then have "((\<G>_Inf_L_q q0 \<iota>1) \<noteq> None \<and>
- the (\<G>_Inf_L_q q0 \<iota>1) \<subseteq> Red_Inf_q q0 (labeled_ord_red_crit_fam.\<G>_set_q q0 N))
- \<or> ((\<G>_Inf_L_q q0 \<iota>1 = None) \<and>
- \<G>_F_L_q q0 (concl_of \<iota>1) \<subseteq> (labeled_ord_red_crit_fam.\<G>_set_q q0 N \<union>
- Red_F_q q0 (labeled_ord_red_crit_fam.\<G>_set_q q0 N)))"
- using i1_in unfolding no_labels.Red_Inf_\<G>_q_def \<G>_Inf_L_q_def
- labeled_ord_red_crit_fam.\<G>_set_q_def no_labels.\<G>_set_q_def \<G>_F_L_q_def
- by auto
- then show "\<iota>1 \<in> labeled_ord_red_crit_fam.Red_Inf_\<G>_q q0 N"
- using i1_in2 unfolding labeled_ord_red_crit_fam.Red_Inf_\<G>_q_def
- by blast
- qed
+ fix q0 \<iota>1
+ assume
+ i1_in: "\<iota>1 \<in> Red_Inf_\<G>_q q0 N"
+ have i1_in2: "\<iota>1 \<in> Inf_FL"
+ using i1_in unfolding Red_Inf_\<G>_q_def by blast
+ then have to_F_i1_in: "to_F \<iota>1 \<in> Inf_F"
+ using Inf_FL_to_Inf_F unfolding to_F_def by simp
+ have concl_swap: "fst (concl_of \<iota>1) = concl_of (to_F \<iota>1)"
+ unfolding concl_of_def to_F_def by simp
+ then have i1_to_F_in: "to_F \<iota>1 \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)"
+ using i1_in to_F_i1_in unfolding Red_Inf_\<G>_q_def no_labels.Red_Inf_\<G>_q_def by force
+ show "\<iota>1 \<in> {\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)}"
+ using i1_in2 i1_to_F_in by blast
qed
- then have "\<iota> \<in> labeled_ord_red_crit_fam.Red_Inf_\<G>_q q N" for q
- using to_F_in2 i_in
- unfolding labeled_ord_red_crit_fam.Red_Inf_\<G>_q_def
- no_labels.Red_Inf_\<G>_q_def \<G>_Inf_L_q_def labeled_ord_red_crit_fam.\<G>_set_q_def
- no_labels.\<G>_set_q_def \<G>_F_L_q_def
- by auto
- then show "\<iota> \<in> labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q N"
- unfolding labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q_def
- by blast
+ next
+ show "{\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q (fst ` N)} \<subseteq> Red_Inf_\<G>_q q N"
+ proof
+ fix q0 \<iota>1
+ assume
+ i1_in: "\<iota>1 \<in> {\<iota>0_FL \<in> Inf_FL. to_F \<iota>0_FL \<in> no_labels.Red_Inf_\<G>_q q0 (fst ` N)}"
+ then have i1_in2: "\<iota>1 \<in> Inf_FL" by blast
+ then have to_F_i1_in: "to_F \<iota>1 \<in> Inf_F"
+ using Inf_FL_to_Inf_F unfolding to_F_def by simp
+ have concl_swap: "fst (concl_of \<iota>1) = concl_of (to_F \<iota>1)"
+ unfolding concl_of_def to_F_def by simp
+ then have "((\<G>_Inf_L_q q0 \<iota>1) \<noteq> None \<and> the (\<G>_Inf_L_q q0 \<iota>1) \<subseteq> Red_Inf_q q0 (\<G>_set_q q0 N))
+ \<or> (\<G>_Inf_L_q q0 \<iota>1 = None \<and>
+ \<G>_F_L_q q0 (concl_of \<iota>1) \<subseteq> \<G>_set_q q0 N \<union> Red_F_q q0 (\<G>_set_q q0 N))"
+ using i1_in unfolding no_labels.Red_Inf_\<G>_q_def by auto
+ then show "\<iota>1 \<in> Red_Inf_\<G>_q q0 N"
+ using i1_in2 unfolding Red_Inf_\<G>_q_def by blast
+ qed
qed
- ultimately show "\<iota> \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_Inf_Q N \<equiv>
- (to_F \<iota>) \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` N)"
- by argo
+ then have "\<iota> \<in> Red_Inf_\<G>_q q N" if "q \<in> Q" for q
+ using that to_F_in2 i_in unfolding Red_Inf_\<G>_q_def no_labels.Red_Inf_\<G>_q_def by auto
+ then show "\<iota> \<in> Red_Inf_\<G>_Q N"
+ unfolding Red_Inf_\<G>_Q_def by blast
qed
(* lem:redundant-labeled-formulas *)
-lemma red_labeled_clauses: \<open>C \<in> no_labels.Red_F_\<G>_empty (fst ` N) \<or> (\<exists>C' \<in> (fst ` N). C \<cdot>\<succ> C') \<or>
- (\<exists>(C',L') \<in> N. (L' \<sqsubset>l L \<and> C \<cdot>\<succeq> C')) \<Longrightarrow>
- (C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N\<close>
+lemma red_labeled_clauses:
+ assumes \<open>C \<in> no_labels.Red_F_\<G>_empty (fst ` N) \<or>
+ (\<exists>C' \<in> fst ` N. C' \<prec>\<cdot> C) \<or> (\<exists>(C', L') \<in> N. L' \<sqsubset>l L \<and> C' \<preceq>\<cdot> C)\<close>
+ shows \<open>(C, L) \<in> Red_F_Q N\<close>
proof -
- assume \<open>C \<in> no_labels.Red_F_\<G>_empty (fst ` N) \<or>
- (\<exists>C' \<in> (fst ` N). C \<cdot>\<succ> C') \<or> (\<exists>(C',L') \<in> N. (L' \<sqsubset>l L \<and> C \<cdot>\<succeq> C'))\<close>
- moreover have i: \<open>C \<in> no_labels.Red_F_\<G>_empty (fst ` N) \<Longrightarrow>
- (C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N\<close>
+ note assms
+ moreover have i: \<open>C \<in> no_labels.Red_F_\<G>_empty (fst ` N) \<Longrightarrow> (C, L) \<in> Red_F_Q N\<close>
proof -
assume "C \<in> no_labels.Red_F_\<G>_empty (fst ` N)"
- then have "C \<in> no_labels.Red_F_\<G>_empty_q q (fst ` N)" for q
- unfolding no_labels.Red_F_\<G>_empty_def by fast
- then have g_in_red: "\<G>_F_q q C \<subseteq> Red_F_q q (no_labels.\<G>_set_q q (fst ` N))" for q
- unfolding no_labels.Red_F_\<G>_empty_q_def by blast
- have "no_labels.\<G>_set_q q (fst ` N) = labeled_ord_red_crit_fam.\<G>_set_q q N" for q
- unfolding no_labels.\<G>_set_q_def labeled_ord_red_crit_fam.\<G>_set_q_def \<G>_F_L_q_def by simp
- then have "\<G>_F_L_q q (C,L) \<subseteq> Red_F_q q (labeled_ord_red_crit_fam.\<G>_set_q q N)" for q
- using g_in_red unfolding \<G>_F_L_q_def by simp
- then show "(C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q_def
- labeled_ord_red_crit_fam.Red_F_\<G>_q_g_def by blast
+ then have "C \<in> no_labels.Red_F_\<G>_empty_q q (fst ` N)" if "q \<in> Q" for q
+ unfolding no_labels.Red_F_\<G>_empty_def using that by fast
+ then have g_in_red: "\<G>_F_q q C \<subseteq> Red_F_q q (no_labels.\<G>_set_q q (fst ` N))" if "q \<in> Q" for q
+ unfolding no_labels.Red_F_\<G>_empty_q_def using that by blast
+ have "\<G>_F_L_q q (C, L) \<subseteq> Red_F_q q (\<G>_set_q q N)" if "q \<in> Q" for q
+ using that g_in_red by simp
+ then show ?thesis
+ unfolding Red_F_Q_def Red_F_\<G>_q_g_def by blast
qed
- moreover have ii: \<open>\<exists>C' \<in> (fst ` N). C \<cdot>\<succ> C' \<Longrightarrow>
- (C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N\<close>
+ moreover have ii: \<open>\<exists>C' \<in> fst ` N. C' \<prec>\<cdot> C \<Longrightarrow> (C, L) \<in> Red_F_Q N\<close>
proof -
- assume "\<exists>C' \<in> (fst ` N). C \<cdot>\<succ> C'"
- then obtain C' where c'_in: "C' \<in> (fst ` N)" and c_prec_c': "C \<cdot>\<succ> C'" by blast
- obtain L' where c'_l'_in: "(C',L') \<in> N" using c'_in by auto
- have c'_l'_prec: "(C',L') \<sqsubset> (C,L)"
- using c_prec_c' unfolding Prec_FL_def by (meson UNIV_I compat_equiv_prec)
- have c_in_c'_g: "\<G>_F_q q C \<subseteq> \<G>_F_q q C'" for q
- using prec_F_grounding[OF c_prec_c'] by presburger
- then have "\<G>_F_L_q q (C,L) \<subseteq> \<G>_F_L_q q (C',L')" for q
- unfolding no_labels.\<G>_set_q_def labeled_ord_red_crit_fam.\<G>_set_q_def \<G>_F_L_q_def by auto
- then have "(C,L) \<in> labeled_ord_red_crit_fam.Red_F_\<G>_q_g q N" for q
- unfolding labeled_ord_red_crit_fam.Red_F_\<G>_q_g_def using c'_l'_in c'_l'_prec by blast
- then show "(C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q_def by blast
+ assume "\<exists>C' \<in> fst ` N. C' \<prec>\<cdot> C"
+ then obtain C' where c'_in: "C' \<in> fst ` N" and c_prec_c': "C' \<prec>\<cdot> C" by blast
+ obtain L' where c'_l'_in: "(C', L') \<in> N" using c'_in by auto
+ have c'_l'_prec: "(C', L') \<sqsubset> (C, L)"
+ using c_prec_c' unfolding Prec_FL_def by simp
+ have c_in_c'_g: "\<G>_F_q q C \<subseteq> \<G>_F_q q C'" if "q \<in> Q" for q
+ using prec_F_grounding[OF that c_prec_c'] by presburger
+ then have "\<G>_F_L_q q (C, L) \<subseteq> \<G>_F_L_q q (C', L')" if "q \<in> Q" for q
+ using that by auto
+ then have "(C, L) \<in> Red_F_\<G>_q_g q N" if "q \<in> Q" for q
+ unfolding Red_F_\<G>_q_g_def using that c'_l'_in c'_l'_prec by blast
+ then show ?thesis
+ unfolding Red_F_Q_def by blast
qed
- moreover have iii: \<open>\<exists>(C',L') \<in> N. (L' \<sqsubset>l L \<and> C \<cdot>\<succeq> C') \<Longrightarrow>
- (C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N\<close>
+ moreover have iii: \<open>\<exists>(C', L') \<in> N. L' \<sqsubset>l L \<and> C' \<preceq>\<cdot> C \<Longrightarrow> (C, L) \<in> Red_F_Q N\<close>
proof -
- assume "\<exists>(C',L') \<in> N. (L' \<sqsubset>l L \<and> C \<cdot>\<succeq> C')"
- then obtain C' L' where c'_l'_in: "(C',L') \<in> N" and l'_sub_l: "L' \<sqsubset>l L" and c'_sub_c: "C \<cdot>\<succeq> C'"
+ assume "\<exists>(C', L') \<in> N. L' \<sqsubset>l L \<and> C' \<preceq>\<cdot> C"
+ then obtain C' L' where c'_l'_in: "(C', L') \<in> N" and l'_sub_l: "L' \<sqsubset>l L" and c'_sub_c: "C' \<preceq>\<cdot> C"
by fast
- have "(C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N" if "C \<cdot>\<succ> C'"
+ have "(C, L) \<in> Red_F_Q N" if "C' \<prec>\<cdot> C"
using that c'_l'_in ii by fastforce
moreover {
assume equiv_c_c': "C \<doteq> C'"
then have equiv_c'_c: "C' \<doteq> C"
- using equiv_F_is_equiv_rel equiv_F_fun_def equiv_class_eq_iff by fastforce
- then have c'_l'_prec: "(C',L') \<sqsubset> (C,L)"
+ using equiv_equiv_F by (simp add: equivp_symp)
+ then have c'_l'_prec: "(C', L') \<sqsubset> (C, L)"
using l'_sub_l unfolding Prec_FL_def by simp
- have "\<G>_F_q q C = \<G>_F_q q C'" for q
- using equiv_F_grounding equiv_c'_c by blast
- then have "\<G>_F_L_q q (C,L) = \<G>_F_L_q q (C',L')" for q
- unfolding no_labels.\<G>_set_q_def labeled_ord_red_crit_fam.\<G>_set_q_def \<G>_F_L_q_def by auto
- then have "(C,L) \<in> labeled_ord_red_crit_fam.Red_F_\<G>_q_g q N" for q
- unfolding labeled_ord_red_crit_fam.Red_F_\<G>_q_g_def using c'_l'_in c'_l'_prec by blast
- then have "(C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q_def by blast
+ have "\<G>_F_q q C = \<G>_F_q q C'" if "q \<in> Q" for q
+ using that equiv_F_grounding equiv_c_c' equiv_c'_c by (simp add: set_eq_subset)
+ then have "\<G>_F_L_q q (C, L) = \<G>_F_L_q q (C', L')" if "q \<in> Q" for q
+ using that by auto
+ then have "(C, L) \<in> Red_F_\<G>_q_g q N" if "q \<in> Q" for q
+ unfolding Red_F_\<G>_q_g_def using that c'_l'_in c'_l'_prec by blast
+ then have ?thesis
+ unfolding Red_F_Q_def by blast
}
- ultimately show "(C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N"
- using c'_sub_c unfolding Prec_eq_F_def equiv_F_fun_def equiv_F_is_equiv_rel by blast
+ ultimately show ?thesis
+ using c'_sub_c equiv_equiv_F equivp_symp by fastforce
qed
- ultimately show \<open>(C,L) \<in> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N\<close>
+ ultimately show ?thesis
by blast
qed
end
subsection \<open>Given Clause Architecture\<close>
-locale Given_Clause = Prover_Architecture_Basis Bot_F Inf_F Bot_G Q entails_q Inf_G Red_Inf_q
- Red_F_q \<G>_F_q \<G>_Inf_q l Inf_FL Equiv_F Prec_F Prec_l
+locale given_clause = prover_architecture_basis Bot_F Inf_F Bot_G Q entails_q Inf_G_q Red_Inf_q
+ Red_F_q \<G>_F_q \<G>_Inf_q Inf_FL Equiv_F Prec_F Prec_l active
for
Bot_F :: "'f set" and
Inf_F :: "'f inference set" and
Bot_G :: "'g set" and
- Q :: "'q itself" and
- entails_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g set \<Rightarrow> bool)" and
- Inf_G :: \<open>'g inference set\<close> and
- Red_Inf_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g inference set)" and
- Red_F_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g set)" and
+ Q :: "'q set" and
+ entails_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set \<Rightarrow> bool" and
+ Inf_G_q :: \<open>'q \<Rightarrow> 'g inference set\<close> and
+ Red_Inf_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g inference set" and
+ Red_F_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set" and
\<G>_F_q :: "'q \<Rightarrow> 'f \<Rightarrow> 'g set" and
\<G>_Inf_q :: "'q \<Rightarrow> 'f inference \<Rightarrow> 'g inference set option" and
- l :: "'l itself" and
Inf_FL :: \<open>('f \<times> 'l) inference set\<close> and
- Equiv_F :: "('f \<times> 'f) set" and
- Prec_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<cdot>\<succ>" 50) and
- Prec_l :: "'l \<Rightarrow> 'l \<Rightarrow> bool" (infix "\<sqsubset>l" 50)
- + fixes
- active :: "'l"
+ Equiv_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<doteq>" 50) and
+ Prec_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<prec>\<cdot>" 50) and
+ Prec_l :: "'l \<Rightarrow> 'l \<Rightarrow> bool" (infix "\<sqsubset>l" 50) and
+ active :: 'l +
assumes
- inf_have_premises: "\<iota>F \<in> Inf_F \<Longrightarrow> length (prems_of \<iota>F) > 0" and
- active_minimal: "l2 \<noteq> active \<Longrightarrow> active \<sqsubset>l l2" and
- at_least_two_labels: "\<exists>l2. active \<sqsubset>l l2" and
- inf_never_active: "\<iota> \<in> Inf_FL \<Longrightarrow> snd (concl_of \<iota>) \<noteq> active"
+ inf_have_prems: "\<iota>F \<in> Inf_F \<Longrightarrow> prems_of \<iota>F \<noteq> []"
begin
-lemma labeled_inf_have_premises: "\<iota> \<in> Inf_FL \<Longrightarrow> set (prems_of \<iota>) \<noteq> {}"
- using inf_have_premises Inf_FL_to_Inf_F by fastforce
-
-definition active_subset :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
- "active_subset M = {CL \<in> M. snd CL = active}"
+lemma labeled_inf_have_prems: "\<iota> \<in> Inf_FL \<Longrightarrow> prems_of \<iota> \<noteq> []"
+ using inf_have_prems Inf_FL_to_Inf_F by fastforce
-definition non_active_subset :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
- "non_active_subset M = {CL \<in> M. snd CL \<noteq> active}"
-
-inductive Given_Clause_step :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> bool" (infix "\<Longrightarrow>GC" 50) where
- process: "N1 = N \<union> M \<Longrightarrow> N2 = N \<union> M' \<Longrightarrow> N \<inter> M = {} \<Longrightarrow>
- M \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q (N \<union> M') \<Longrightarrow>
- active_subset M' = {} \<Longrightarrow> N1 \<Longrightarrow>GC N2" |
- infer: "N1 = N \<union> {(C,L)} \<Longrightarrow> {(C,L)} \<inter> N = {} \<Longrightarrow> N2 = N \<union> {(C,active)} \<union> M \<Longrightarrow> L \<noteq> active \<Longrightarrow>
+inductive step :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> bool" (infix "\<Longrightarrow>GC" 50) where
+ process: "N1 = N \<union> M \<Longrightarrow> N2 = N \<union> M' \<Longrightarrow> M \<subseteq> Red_F_Q (N \<union> M') \<Longrightarrow>
+ active_subset M' = {} \<Longrightarrow> N1 \<Longrightarrow>GC N2"
+| infer: "N1 = N \<union> {(C, L)} \<Longrightarrow> N2 = N \<union> {(C, active)} \<union> M \<Longrightarrow> L \<noteq> active \<Longrightarrow>
active_subset M = {} \<Longrightarrow>
- no_labels.Non_ground.Inf_from2 (fst ` (active_subset N)) {C} \<subseteq>
- no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N \<union> {(C,active)} \<union> M)) \<Longrightarrow>
+ no_labels.Inf_from2 (fst ` (active_subset N)) {C}
+ \<subseteq> no_labels.Red_Inf_Q (fst ` (N \<union> {(C, active)} \<union> M)) \<Longrightarrow>
N1 \<Longrightarrow>GC N2"
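As a reading aid, the two rules of the \<Longrightarrow>GC relation just defined can be sketched in conventional notation. The symbols Red_F and Red_I below are expository shorthand for Red_F_Q and no_labels.Red_Inf_Q, and "passive" means "carrying a label different from active"; this paraphrase introduces no formal content beyond the inductive definition above.

\[
\begin{array}{ll}
\textsc{Process:} & N \cup M \;\Longrightarrow_{\mathrm{GC}}\; N \cup M'
  \quad\text{if } M \subseteq \mathit{Red}_F(N \cup M') \text{ and all clauses in } M' \text{ are passive} \\[2pt]
\textsc{Infer:} & N \cup \{(C, L)\} \;\Longrightarrow_{\mathrm{GC}}\; N \cup \{(C, \mathit{active})\} \cup M
  \quad\text{if } L \neq \mathit{active}, \text{ all clauses in } M \text{ are passive, and every} \\
& \text{inference between } C \text{ and the active clauses of } N \text{ is redundant w.r.t.\ the formulas of } N \cup \{(C, \mathit{active})\} \cup M
\end{array}
\]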
-abbreviation derive :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> bool" (infix "\<rhd>RedL" 50) where
- "derive \<equiv> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive"
-
lemma one_step_equiv: "N1 \<Longrightarrow>GC N2 \<Longrightarrow> N1 \<rhd>RedL N2"
-proof (cases N1 N2 rule: Given_Clause_step.cases)
+proof (cases N1 N2 rule: step.cases)
show "N1 \<Longrightarrow>GC N2 \<Longrightarrow> N1 \<Longrightarrow>GC N2" by blast
next
fix N M M'
assume
gc_step: "N1 \<Longrightarrow>GC N2" and
n1_is: "N1 = N \<union> M" and
n2_is: "N2 = N \<union> M'" and
- empty_inter: "N \<inter> M = {}" and
- m_red: "M \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q (N \<union> M')" and
+ m_red: "M \<subseteq> Red_F_Q (N \<union> M')" and
active_empty: "active_subset M' = {}"
- have "N1 - N2 \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N2"
- using n1_is n2_is empty_inter m_red by auto
- then show "labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive N1 N2"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive.simps by blast
+ have "N1 - N2 \<subseteq> Red_F_Q N2"
+ using n1_is n2_is m_red by auto
+ then show "N1 \<rhd>RedL N2"
+ unfolding derive.simps by blast
next
fix N C L M
assume
gc_step: "N1 \<Longrightarrow>GC N2" and
- n1_is: "N1 = N \<union> {(C,L)}" and
+ n1_is: "N1 = N \<union> {(C, L)}" and
not_active: "L \<noteq> active" and
n2_is: "N2 = N \<union> {(C, active)} \<union> M" and
- empty_inter: "{(C,L)} \<inter> N = {}" and
active_empty: "active_subset M = {}"
have "(C, active) \<in> N2" using n2_is by auto
- moreover have "C \<cdot>\<succeq> C" using Prec_eq_F_def equiv_F_is_equiv_rel equiv_class_eq_iff by fastforce
+ moreover have "C \<preceq>\<cdot> C" using equiv_equiv_F by (metis equivp_def)
moreover have "active \<sqsubset>l L" using active_minimal[OF not_active] .
- ultimately have "{(C,L)} \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N2"
+ ultimately have "{(C, L)} \<subseteq> Red_F_Q N2"
using red_labeled_clauses by blast
- moreover have "(C,L) \<notin> M \<Longrightarrow> N1 - N2 = {(C,L)}" using n1_is n2_is empty_inter not_active by auto
- moreover have "(C,L) \<in> M \<Longrightarrow> N1 - N2 = {}" using n1_is n2_is by auto
- ultimately have "N1 - N2 \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N2"
- using empty_red_f_equiv[of N2] by blast
- then show "labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive N1 N2"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive.simps
- by blast
+ moreover have "N1 - N2 = {} \<or> N1 - N2 = {(C, L)}" using n1_is n2_is by blast
+ ultimately have "N1 - N2 \<subseteq> Red_F_Q N2"
+ using std_Red_F_Q_eq by blast
+ then show "N1 \<rhd>RedL N2"
+ unfolding derive.simps by blast
qed
-abbreviation fair :: "('f \<times> 'l) set llist \<Rightarrow> bool" where
- "fair \<equiv> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.fair"
-
(* lem:gc-derivations-are-red-derivations *)
lemma gc_to_red: "chain (\<Longrightarrow>GC) D \<Longrightarrow> chain (\<rhd>RedL) D"
using one_step_equiv Lazy_List_Chain.chain_mono by blast
lemma (in -) all_ex_finite_set: "(\<forall>(j::nat)\<in>{0..<m}. \<exists>(n::nat). P j n) \<Longrightarrow>
(\<forall>n1 n2. \<forall>j\<in>{0..<m}. P j n1 \<longrightarrow> P j n2 \<longrightarrow> n1 = n2) \<Longrightarrow> finite {n. \<exists>j \<in> {0..<m}. P j n}" for m P
proof -
fix m::nat and P:: "nat \<Rightarrow> nat \<Rightarrow> bool"
assume
allj_exn: "\<forall>j\<in>{0..<m}. \<exists>n. P j n" and
uniq_n: "\<forall>n1 n2. \<forall>j\<in>{0..<m}. P j n1 \<longrightarrow> P j n2 \<longrightarrow> n1 = n2"
have "{n. \<exists>j \<in> {0..<m}. P j n} = (\<Union>((\<lambda>j. {n. P j n}) ` {0..<m}))" by blast
then have imp_finite: "(\<forall>j\<in>{0..<m}. finite {n. P j n}) \<Longrightarrow> finite {n. \<exists>j \<in> {0..<m}. P j n}"
using finite_UN[of "{0..<m}" "\<lambda>j. {n. P j n}"] by simp
have "\<forall>j\<in>{0..<m}. \<exists>!n. P j n" using allj_exn uniq_n by blast
then have "\<forall>j\<in>{0..<m}. finite {n. P j n}" by (metis bounded_nat_set_is_finite lessI mem_Collect_eq)
then show "finite {n. \<exists>j \<in> {0..<m}. P j n}" using imp_finite by simp
qed
(* lem:fair-gc-derivations *)
-lemma gc_fair: "chain (\<Longrightarrow>GC) D \<Longrightarrow> llength D > 0 \<Longrightarrow> active_subset (lnth D 0) = {} \<Longrightarrow>
- non_active_subset (Liminf_llist D) = {} \<Longrightarrow> fair D"
-proof -
- assume
+lemma gc_fair:
+ assumes
deriv: "chain (\<Longrightarrow>GC) D" and
- non_empty: "llength D > 0" and
init_state: "active_subset (lnth D 0) = {}" and
- final_state: "non_active_subset (Liminf_llist D) = {}"
- show "fair D"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.fair_def
- proof
- fix \<iota>
- assume i_in: "\<iota> \<in> with_labels.Inf_from (Liminf_llist D)"
- have i_in_inf_fl: "\<iota> \<in> Inf_FL" using i_in unfolding with_labels.Inf_from_def by blast
- have "Liminf_llist D = active_subset (Liminf_llist D)"
- using final_state unfolding non_active_subset_def active_subset_def by blast
- then have i_in2: "\<iota> \<in> with_labels.Inf_from (active_subset (Liminf_llist D))" using i_in by simp
- define m where "m = length (prems_of \<iota>)"
- then have m_def_F: "m = length (prems_of (to_F \<iota>))" unfolding to_F_def by simp
- have i_in_F: "to_F \<iota> \<in> Inf_F"
- using i_in Inf_FL_to_Inf_F unfolding with_labels.Inf_from_def to_F_def by blast
- then have m_pos: "m > 0" using m_def_F using inf_have_premises by blast
- have exist_nj: "\<forall>j \<in> {0..<m}. (\<exists>nj. enat (Suc nj) < llength D \<and>
- (prems_of \<iota>)!j \<notin> active_subset (lnth D nj) \<and>
- (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (lnth D k)))"
- proof clarify
- fix j
- assume j_in: "j \<in> {0..<m}"
- then obtain C where c_is: "(C,active) = (prems_of \<iota>)!j"
- using i_in2 unfolding m_def with_labels.Inf_from_def active_subset_def
- by (smt Collect_mem_eq Collect_mono_iff atLeastLessThan_iff nth_mem old.prod.exhaust snd_conv)
- then have "(C,active) \<in> Liminf_llist D"
- using j_in i_in unfolding m_def with_labels.Inf_from_def by force
- then obtain nj where nj_is: "enat nj < llength D" and
- c_in2: "(C,active) \<in> \<Inter> (lnth D ` {k. nj \<le> k \<and> enat k < llength D})"
- unfolding Liminf_llist_def using init_state by blast
- then have c_in3: "\<forall>k. k \<ge> nj \<longrightarrow> enat k < llength D \<longrightarrow> (C,active) \<in> (lnth D k)" by blast
- have nj_pos: "nj > 0" using init_state c_in2 nj_is unfolding active_subset_def by fastforce
- obtain nj_min where nj_min_is: "nj_min = (LEAST nj. enat nj < llength D \<and>
- (C,active) \<in> \<Inter> (lnth D ` {k. nj \<le> k \<and> enat k < llength D}))" by blast
- then have in_allk: "\<forall>k. k \<ge> nj_min \<longrightarrow> enat k < llength D \<longrightarrow> (C,active) \<in> (lnth D k)"
- using c_in3 nj_is c_in2
- by (metis (mono_tags, lifting) INT_E LeastI_ex mem_Collect_eq)
- have njm_smaller_D: "enat nj_min < llength D"
- using nj_min_is
- by (smt LeastI_ex \<open>\<And>thesis. (\<And>nj. \<lbrakk>enat nj < llength D;
+ final_state: "passive_subset (Liminf_llist D) = {}"
+ shows "fair D"
+ unfolding fair_def
+proof
+ fix \<iota>
+ assume i_in: "\<iota> \<in> Inf_from (Liminf_llist D)"
+ have i_in_inf_fl: "\<iota> \<in> Inf_FL" using i_in unfolding Inf_from_def by blast
+ have "Liminf_llist D = active_subset (Liminf_llist D)"
+ using final_state unfolding passive_subset_def active_subset_def by blast
+ then have i_in2: "\<iota> \<in> Inf_from (active_subset (Liminf_llist D))" using i_in by simp
+ define m where "m = length (prems_of \<iota>)"
+ then have m_def_F: "m = length (prems_of (to_F \<iota>))" unfolding to_F_def by simp
+ have i_in_F: "to_F \<iota> \<in> Inf_F"
+ using i_in Inf_FL_to_Inf_F unfolding Inf_from_def to_F_def by blast
+ then have m_pos: "m > 0" using m_def_F using inf_have_prems by blast
+ have exist_nj: "\<forall>j \<in> {0..<m}. (\<exists>nj. enat (Suc nj) < llength D \<and>
+ prems_of \<iota> ! j \<notin> active_subset (lnth D nj) \<and>
+ (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (lnth D k)))"
+ proof clarify
+ fix j
+ assume j_in: "j \<in> {0..<m}"
+ then obtain C where c_is: "(C, active) = prems_of \<iota> ! j"
+ using i_in2 unfolding m_def Inf_from_def active_subset_def
+ by (smt Collect_mem_eq Collect_mono_iff atLeastLessThan_iff nth_mem old.prod.exhaust snd_conv)
+ then have "(C, active) \<in> Liminf_llist D"
+ using j_in i_in unfolding m_def Inf_from_def by force
+ then obtain nj where nj_is: "enat nj < llength D" and
+ c_in2: "(C, active) \<in> \<Inter> (lnth D ` {k. nj \<le> k \<and> enat k < llength D})"
+ unfolding Liminf_llist_def using init_state by blast
+ then have c_in3: "\<forall>k. k \<ge> nj \<longrightarrow> enat k < llength D \<longrightarrow> (C, active) \<in> (lnth D k)" by blast
+ have nj_pos: "nj > 0" using init_state c_in2 nj_is unfolding active_subset_def by fastforce
+ obtain nj_min where nj_min_is: "nj_min = (LEAST nj. enat nj < llength D \<and>
+ (C, active) \<in> \<Inter> (lnth D ` {k. nj \<le> k \<and> enat k < llength D}))" by blast
+ then have in_allk: "\<forall>k. k \<ge> nj_min \<longrightarrow> enat k < llength D \<longrightarrow> (C, active) \<in> (lnth D k)"
+ using c_in3 nj_is c_in2
+ by (metis (mono_tags, lifting) INT_E LeastI_ex mem_Collect_eq)
+ have njm_smaller_D: "enat nj_min < llength D"
+ using nj_min_is
+ by (smt LeastI_ex \<open>\<And>thesis. (\<And>nj. \<lbrakk>enat nj < llength D;
(C, active) \<in> \<Inter> (lnth D ` {k. nj \<le> k \<and> enat k < llength D})\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close>)
- have "nj_min > 0"
- using nj_is c_in2 nj_pos nj_min_is
- by (metis (mono_tags, lifting) Collect_empty_eq \<open>(C, active) \<in> Liminf_llist D\<close>
+ have "nj_min > 0"
+ using nj_is c_in2 nj_pos nj_min_is
+ by (metis (mono_tags, lifting) Collect_empty_eq \<open>(C, active) \<in> Liminf_llist D\<close>
\<open>Liminf_llist D = active_subset (Liminf_llist D)\<close>
\<open>\<forall>k\<ge>nj_min. enat k < llength D \<longrightarrow> (C, active) \<in> lnth D k\<close> active_subset_def init_state
- linorder_not_less mem_Collect_eq non_empty zero_enat_def)
- then obtain njm_prec where nj_prec_is: "Suc njm_prec = nj_min" using gr0_conv_Suc by auto
- then have njm_prec_njm: "njm_prec < nj_min" by blast
- then have njm_prec_njm_enat: "enat njm_prec < enat nj_min" by simp
- have njm_prec_smaller_d: "njm_prec < llength D"
- using HOL.no_atp(15)[OF njm_smaller_D njm_prec_njm_enat] .
- have njm_prec_all_suc: "\<forall>k>njm_prec. enat k < llength D \<longrightarrow> (C, active) \<in> lnth D k"
- using nj_prec_is in_allk by simp
- have notin_njm_prec: "(C, active) \<notin> lnth D njm_prec"
- proof (rule ccontr)
- assume "\<not> (C, active) \<notin> lnth D njm_prec"
- then have absurd_hyp: "(C, active) \<in> lnth D njm_prec" by simp
- have prec_smaller: "enat njm_prec < llength D" using nj_min_is nj_prec_is
- by (smt LeastI_ex Suc_leD \<open>\<And>thesis. (\<And>nj. \<lbrakk>enat nj < llength D;
+ linorder_not_less mem_Collect_eq zero_enat_def chain_length_pos[OF deriv])
+ then obtain njm_prec where nj_prec_is: "Suc njm_prec = nj_min" using gr0_conv_Suc by auto
+ then have njm_prec_njm: "njm_prec < nj_min" by blast
+ then have njm_prec_njm_enat: "enat njm_prec < enat nj_min" by simp
+ have njm_prec_smaller_d: "njm_prec < llength D"
+ using HOL.no_atp(15)[OF njm_smaller_D njm_prec_njm_enat] .
+ have njm_prec_all_suc: "\<forall>k>njm_prec. enat k < llength D \<longrightarrow> (C, active) \<in> lnth D k"
+ using nj_prec_is in_allk by simp
+ have notin_njm_prec: "(C, active) \<notin> lnth D njm_prec"
+ proof (rule ccontr)
+ assume "\<not> (C, active) \<notin> lnth D njm_prec"
+ then have absurd_hyp: "(C, active) \<in> lnth D njm_prec" by simp
+ have prec_smaller: "enat njm_prec < llength D" using nj_min_is nj_prec_is
+ by (smt LeastI_ex Suc_leD \<open>\<And>thesis. (\<And>nj. \<lbrakk>enat nj < llength D;
(C, active) \<in> \<Inter> (lnth D ` {k. nj \<le> k \<and> enat k < llength D})\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close>
enat_ord_simps(1) le_eq_less_or_eq le_less_trans)
- have "(C,active) \<in> \<Inter> (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D})"
- proof -
- {
- fix k
- assume k_in: "njm_prec \<le> k \<and> enat k < llength D"
- have "k = njm_prec \<Longrightarrow> (C,active) \<in> lnth D k" using absurd_hyp by simp
- moreover have "njm_prec < k \<Longrightarrow> (C,active) \<in> lnth D k"
- using nj_prec_is in_allk k_in by simp
- ultimately have "(C,active) \<in> lnth D k" using k_in by fastforce
- }
- then show "(C,active) \<in> \<Inter> (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D})" by blast
- qed
- then have "enat njm_prec < llength D \<and>
- (C,active) \<in> \<Inter> (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D})"
- using prec_smaller by blast
- then show False
- using nj_min_is nj_prec_is Orderings.wellorder_class.not_less_Least njm_prec_njm by blast
+ have "(C, active) \<in> \<Inter> (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D})"
+ proof -
+ {
+ fix k
+ assume k_in: "njm_prec \<le> k \<and> enat k < llength D"
+ have "k = njm_prec \<Longrightarrow> (C, active) \<in> lnth D k" using absurd_hyp by simp
+ moreover have "njm_prec < k \<Longrightarrow> (C, active) \<in> lnth D k"
+ using nj_prec_is in_allk k_in by simp
+ ultimately have "(C, active) \<in> lnth D k" using k_in by fastforce
+ }
+ then show "(C, active) \<in> \<Inter> (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D})" by blast
qed
- then have notin_active_subs_njm_prec: "(C, active) \<notin> active_subset (lnth D njm_prec)"
- unfolding active_subset_def by blast
- then show "\<exists>nj. enat (Suc nj) < llength D \<and> (prems_of \<iota>)!j \<notin> active_subset (lnth D nj) \<and>
- (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (lnth D k))"
- using c_is njm_prec_all_suc njm_prec_smaller_d by (metis (mono_tags, lifting)
- active_subset_def mem_Collect_eq nj_prec_is njm_smaller_D snd_conv)
+ then have "enat njm_prec < llength D \<and>
+ (C, active) \<in> \<Inter> (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D})"
+ using prec_smaller by blast
+ then show False
+ using nj_min_is nj_prec_is Orderings.wellorder_class.not_less_Least njm_prec_njm by blast
qed
- have uniq_nj: "j \<in> {0..<m} \<Longrightarrow>
- (enat (Suc nj1) < llength D \<and>
- (prems_of \<iota>)!j \<notin> active_subset (lnth D nj1) \<and>
- (\<forall>k. k > nj1 \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (lnth D k))) \<Longrightarrow>
- (enat (Suc nj2) < llength D \<and>
- (prems_of \<iota>)!j \<notin> active_subset (lnth D nj2) \<and>
- (\<forall>k. k > nj2 \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (lnth D k))) \<Longrightarrow> nj1=nj2"
- proof (clarify, rule ccontr)
- fix j nj1 nj2
- assume "j \<in> {0..<m}" and
- nj1_d: "enat (Suc nj1) < llength D" and
- nj2_d: "enat (Suc nj2) < llength D" and
- nj1_notin: "prems_of \<iota> ! j \<notin> active_subset (lnth D nj1)" and
- k_nj1: "\<forall>k>nj1. enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (lnth D k)" and
- nj2_notin: "prems_of \<iota> ! j \<notin> active_subset (lnth D nj2)" and
- k_nj2: "\<forall>k>nj2. enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (lnth D k)" and
- diff_12: "nj1 \<noteq> nj2"
- have "nj1 < nj2 \<Longrightarrow> False"
- proof -
- assume prec_12: "nj1 < nj2"
- have "enat nj2 < llength D" using nj2_d using Suc_ile_eq less_trans by blast
- then have "prems_of \<iota> ! j \<in> active_subset (lnth D nj2)"
- using k_nj1 prec_12 by simp
- then show False using nj2_notin by simp
- qed
- moreover have "nj1 > nj2 \<Longrightarrow> False"
- proof -
- assume prec_21: "nj2 < nj1"
- have "enat nj1 < llength D" using nj1_d using Suc_ile_eq less_trans by blast
- then have "prems_of \<iota> ! j \<in> active_subset (lnth D nj1)"
- using k_nj2 prec_21
- by simp
- then show False using nj1_notin by simp
- qed
- ultimately show False using diff_12 by linarith
- qed
- define nj_set where "nj_set = {nj. (\<exists>j\<in>{0..<m}. enat (Suc nj) < llength D \<and>
- (prems_of \<iota>)!j \<notin> active_subset (lnth D nj) \<and>
- (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (lnth D k)))}"
- then have nj_not_empty: "nj_set \<noteq> {}"
- proof -
- have zero_in: "0 \<in> {0..<m}" using m_pos by simp
- then obtain n0 where "enat (Suc n0) < llength D" and
- "prems_of \<iota> ! 0 \<notin> active_subset (lnth D n0)" and
- "\<forall>k>n0. enat k < llength D \<longrightarrow> prems_of \<iota> ! 0 \<in> active_subset (lnth D k)"
- using exist_nj by fast
- then have "n0 \<in> nj_set" unfolding nj_set_def using zero_in by blast
- then show "nj_set \<noteq> {}" by auto
- qed
- have nj_finite: "finite nj_set"
- using uniq_nj all_ex_finite_set[OF exist_nj]
- by (metis (no_types, lifting) Suc_ile_eq dual_order.strict_implies_order
+ then have notin_active_subs_njm_prec: "(C, active) \<notin> active_subset (lnth D njm_prec)"
+ unfolding active_subset_def by blast
+ then show "\<exists>nj. enat (Suc nj) < llength D \<and> prems_of \<iota> ! j \<notin> active_subset (lnth D nj) \<and>
+ (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (lnth D k))"
+ using c_is njm_prec_all_suc njm_prec_smaller_d by (metis (mono_tags, lifting)
+ active_subset_def mem_Collect_eq nj_prec_is njm_smaller_D snd_conv)
+ qed
+ define nj_set where "nj_set = {nj. (\<exists>j\<in>{0..<m}. enat (Suc nj) < llength D \<and>
+ prems_of \<iota> ! j \<notin> active_subset (lnth D nj) \<and>
+ (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (lnth D k)))}"
+ then have nj_not_empty: "nj_set \<noteq> {}"
+ proof -
+ have zero_in: "0 \<in> {0..<m}" using m_pos by simp
+ then obtain n0 where "enat (Suc n0) < llength D" and
+ "prems_of \<iota> ! 0 \<notin> active_subset (lnth D n0)" and
+ "\<forall>k>n0. enat k < llength D \<longrightarrow> prems_of \<iota> ! 0 \<in> active_subset (lnth D k)"
+ using exist_nj by fast
+ then have "n0 \<in> nj_set" unfolding nj_set_def using zero_in by blast
+ then show "nj_set \<noteq> {}" by auto
+ qed
+ have nj_finite: "finite nj_set"
+ using all_ex_finite_set[OF exist_nj]
+ by (metis (no_types, lifting) Suc_ile_eq dual_order.strict_implies_order
linorder_neqE_nat nj_set_def)
- (* the n below in the n-1 from the pen-and-paper proof *)
- have "\<exists>n \<in> nj_set. \<forall>nj \<in> nj_set. nj \<le> n"
- using nj_not_empty nj_finite using Max_ge Max_in by blast
- then obtain n where n_in: "n \<in> nj_set" and n_bigger: "\<forall>nj \<in> nj_set. nj \<le> n" by blast
- then obtain j0 where j0_in: "j0 \<in> {0..<m}" and suc_n_length: "enat (Suc n) < llength D" and
- j0_notin: "(prems_of \<iota>)!j0 \<notin> active_subset (lnth D n)" and
- j0_allin: "(\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j0 \<in> active_subset (lnth D k))"
- unfolding nj_set_def by blast
- obtain C0 where C0_is: "(prems_of \<iota>)!j0 = (C0,active)" using j0_in
- using i_in2 unfolding m_def with_labels.Inf_from_def active_subset_def
- by (smt Collect_mem_eq Collect_mono_iff atLeastLessThan_iff nth_mem old.prod.exhaust snd_conv)
- then have C0_prems_i: "(C0,active) \<in> set (prems_of \<iota>)" using in_set_conv_nth j0_in m_def by force
- have C0_in: "(C0,active) \<in> (lnth D (Suc n))"
- using C0_is j0_allin suc_n_length by (simp add: active_subset_def)
- have C0_notin: "(C0,active) \<notin> (lnth D n)" using C0_is j0_notin unfolding active_subset_def by simp
- have step_n: "lnth D n \<Longrightarrow>GC lnth D (Suc n)"
- using deriv chain_lnth_rel n_in unfolding nj_set_def by blast
- have "\<exists>N C L M. (lnth D n = N \<union> {(C,L)} \<and> {(C,L)} \<inter> N = {} \<and>
- lnth D (Suc n) = N \<union> {(C,active)} \<union> M \<and> L \<noteq> active \<and>
- active_subset M = {} \<and>
- no_labels.Non_ground.Inf_from2 (fst ` (active_subset N)) {C} \<subseteq>
- no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N \<union> {(C,active)} \<union> M)))"
- proof -
- have proc_or_infer: "(\<exists>N1 N M N2 M'. lnth D n = N1 \<and> lnth D (Suc n) = N2 \<and> N1 = N \<union> M \<and>
- N2 = N \<union> M' \<and> N \<inter> M = {} \<and>
- M \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q (N \<union> M') \<and>
- active_subset M' = {}) \<or>
+ (* the n below is the n-1 from the pen-and-paper proof *)
+ have "\<exists>n \<in> nj_set. \<forall>nj \<in> nj_set. nj \<le> n"
+ using nj_not_empty nj_finite using Max_ge Max_in by blast
+ then obtain n where n_in: "n \<in> nj_set" and n_bigger: "\<forall>nj \<in> nj_set. nj \<le> n" by blast
+ then obtain j0 where j0_in: "j0 \<in> {0..<m}" and suc_n_length: "enat (Suc n) < llength D" and
+ j0_notin: "prems_of \<iota> ! j0 \<notin> active_subset (lnth D n)" and
+ j0_allin: "(\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow> prems_of \<iota> ! j0 \<in> active_subset (lnth D k))"
+ unfolding nj_set_def by blast
+ obtain C0 where C0_is: "prems_of \<iota> ! j0 = (C0, active)" using j0_in
+ using i_in2 unfolding m_def Inf_from_def active_subset_def
+ by (smt Collect_mem_eq Collect_mono_iff atLeastLessThan_iff nth_mem old.prod.exhaust snd_conv)
+ then have C0_prems_i: "(C0, active) \<in> set (prems_of \<iota>)" using in_set_conv_nth j0_in m_def by force
+ have C0_in: "(C0, active) \<in> (lnth D (Suc n))"
+ using C0_is j0_allin suc_n_length by (simp add: active_subset_def)
+ have C0_notin: "(C0, active) \<notin> (lnth D n)" using C0_is j0_notin unfolding active_subset_def by simp
+ have step_n: "lnth D n \<Longrightarrow>GC lnth D (Suc n)"
+ using deriv chain_lnth_rel n_in unfolding nj_set_def by blast
+ have "\<exists>N C L M. (lnth D n = N \<union> {(C, L)} \<and>
+ lnth D (Suc n) = N \<union> {(C, active)} \<union> M \<and> L \<noteq> active \<and> active_subset M = {} \<and>
+ no_labels.Inf_from2 (fst ` (active_subset N)) {C}
+ \<subseteq> no_labels.Red_Inf_Q (fst ` (N \<union> {(C, active)} \<union> M)))"
+ proof -
+ have proc_or_infer: "(\<exists>N1 N M N2 M'. lnth D n = N1 \<and> lnth D (Suc n) = N2 \<and> N1 = N \<union> M \<and>
+ N2 = N \<union> M' \<and> M \<subseteq> Red_F_Q (N \<union> M') \<and> active_subset M' = {}) \<or>
(\<exists>N1 N C L N2 M. lnth D n = N1 \<and> lnth D (Suc n) = N2 \<and> N1 = N \<union> {(C, L)} \<and>
- {(C, L)} \<inter> N = {} \<and> N2 = N \<union> {(C, active)} \<union> M \<and>
- L \<noteq> active \<and> active_subset M = {} \<and>
- no_labels.Non_ground.Inf_from2 (fst ` (active_subset N)) {C} \<subseteq>
- no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N \<union> {(C,active)} \<union> M)))"
- using Given_Clause_step.simps[of "lnth D n" "lnth D (Suc n)"] step_n by blast
- show ?thesis
- using C0_in C0_notin proc_or_infer j0_in C0_is
- by (smt Un_iff active_subset_def mem_Collect_eq snd_conv sup_bot.right_neutral)
+ N2 = N \<union> {(C, active)} \<union> M \<and> L \<noteq> active \<and> active_subset M = {} \<and>
+ no_labels.Inf_from2 (fst ` (active_subset N)) {C} \<subseteq>
+ no_labels.Red_Inf_Q (fst ` (N \<union> {(C, active)} \<union> M)))"
+ using step.simps[of "lnth D n" "lnth D (Suc n)"] step_n by blast
+ show ?thesis
+ using C0_in C0_notin proc_or_infer j0_in C0_is
+ by (smt Un_iff active_subset_def mem_Collect_eq snd_conv sup_bot.right_neutral)
+ qed
+ then obtain N M L where inf_from_subs:
+ "no_labels.Inf_from2 (fst ` (active_subset N)) {C0}
+ \<subseteq> no_labels.Red_Inf_Q (fst ` (N \<union> {(C0, active)} \<union> M))" and
+ nth_d_is: "lnth D n = N \<union> {(C0, L)}" and
+ suc_nth_d_is: "lnth D (Suc n) = N \<union> {(C0, active)} \<union> M" and
+ l_not_active: "L \<noteq> active"
+ using C0_in C0_notin j0_in C0_is using active_subset_def by fastforce
+ have "j \<in> {0..<m} \<Longrightarrow> prems_of \<iota> ! j \<noteq> prems_of \<iota> ! j0 \<Longrightarrow> prems_of \<iota> ! j \<in> (active_subset N)" for j
+ proof -
+ fix j
+ assume j_in: "j \<in> {0..<m}" and
+ j_not_j0: "prems_of \<iota> ! j \<noteq> prems_of \<iota> ! j0"
+ obtain nj where nj_len: "enat (Suc nj) < llength D" and
+ nj_prems: "prems_of \<iota> ! j \<notin> active_subset (lnth D nj)" and
+ nj_greater: "(\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (lnth D k))"
+ using exist_nj j_in by blast
+ then have "nj \<in> nj_set" unfolding nj_set_def using j_in by blast
+ moreover have "nj \<noteq> n"
+ proof (rule ccontr)
+ assume "\<not> nj \<noteq> n"
+ then have "prems_of \<iota> ! j = (C0, active)"
+ using C0_in C0_notin step.simps[of "lnth D n" "lnth D (Suc n)"] step_n
+ by (smt Un_iff nth_d_is suc_nth_d_is l_not_active active_subset_def insertCI insertE lessI
+ mem_Collect_eq nj_greater nj_prems snd_conv suc_n_length)
+ then show False using j_not_j0 C0_is by simp
qed
- then obtain N M L where inf_from_subs:
- "no_labels.Non_ground.Inf_from2 (fst ` (active_subset N)) {C0} \<subseteq>
- no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N \<union> {(C0,active)} \<union> M))" and
- nth_d_is: "lnth D n = N \<union> {(C0,L)}" and
- suc_nth_d_is: "lnth D (Suc n) = N \<union> {(C0,active)} \<union> M" and
- l_not_active: "L \<noteq> active"
- using C0_in C0_notin j0_in C0_is using active_subset_def by fastforce
- have "j \<in> {0..<m} \<Longrightarrow> (prems_of \<iota>)!j \<noteq> (prems_of \<iota>)!j0 \<Longrightarrow> (prems_of \<iota>)!j \<in> (active_subset N)" for j
- proof -
- fix j
- assume j_in: "j \<in> {0..<m}" and
- j_not_j0: "(prems_of \<iota>)!j \<noteq> (prems_of \<iota>)!j0"
- obtain nj where nj_len: "enat (Suc nj) < llength D" and
- nj_prems: "(prems_of \<iota>)!j \<notin> active_subset (lnth D nj)" and
- nj_greater: "(\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (lnth D k))"
- using exist_nj j_in by blast
- then have "nj \<in> nj_set" unfolding nj_set_def using j_in by blast
- moreover have "nj \<noteq> n"
- proof (rule ccontr)
- assume "\<not> nj \<noteq> n"
- then have "(prems_of \<iota>)!j = (C0,active)"
- using C0_in C0_notin Given_Clause_step.simps[of "lnth D n" "lnth D (Suc n)"] step_n
- by (smt Un_iff Un_insert_right nj_greater nj_prems active_subset_def empty_Collect_eq
- insertE lessI mem_Collect_eq prod.sel(2) suc_n_length)
- then show False using j_not_j0 C0_is by simp
- qed
- ultimately have "nj < n" using n_bigger by force
- then have "(prems_of \<iota>)!j \<in> (active_subset (lnth D n))"
- using nj_greater n_in Suc_ile_eq dual_order.strict_implies_order unfolding nj_set_def by blast
- then show "(prems_of \<iota>)!j \<in> (active_subset N)"
- using nth_d_is l_not_active unfolding active_subset_def by force
- qed
- then have "set (prems_of \<iota>) \<subseteq> active_subset N \<union> {(C0, active)}"
- using C0_prems_i C0_is m_def by (metis Un_iff atLeast0LessThan in_set_conv_nth insertCI lessThan_iff subrelI)
- moreover have "\<not> (set (prems_of \<iota>) \<subseteq> active_subset N - {(C0, active)})" using C0_prems_i by blast
- ultimately have "\<iota> \<in> with_labels.Inf_from2 (active_subset N) {(C0,active)}"
- using i_in_inf_fl unfolding with_labels.Inf_from2_def with_labels.Inf_from_def by blast
- then have "to_F \<iota> \<in> no_labels.Non_ground.Inf_from2 (fst ` (active_subset N)) {C0}"
- unfolding to_F_def with_labels.Inf_from2_def with_labels.Inf_from_def
- no_labels.Non_ground.Inf_from2_def no_labels.Non_ground.Inf_from_def using Inf_FL_to_Inf_F
- by force
- then have "to_F \<iota> \<in> no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (lnth D (Suc n)))"
- using suc_nth_d_is inf_from_subs by fastforce
- then have "\<forall>q. (\<G>_Inf_q q (to_F \<iota>) \<noteq> None \<and>
- the (\<G>_Inf_q q (to_F \<iota>)) \<subseteq> Red_Inf_q q (\<Union> (\<G>_F_q q ` (fst ` (lnth D (Suc n))))))
+ ultimately have "nj < n" using n_bigger by force
+ then have "prems_of \<iota> ! j \<in> (active_subset (lnth D n))"
+ using nj_greater n_in Suc_ile_eq dual_order.strict_implies_order unfolding nj_set_def by blast
+ then show "prems_of \<iota> ! j \<in> (active_subset N)"
+ using nth_d_is l_not_active unfolding active_subset_def by force
+ qed
+ then have "set (prems_of \<iota>) \<subseteq> active_subset N \<union> {(C0, active)}"
+ using C0_prems_i C0_is m_def
+ by (metis Un_iff atLeast0LessThan in_set_conv_nth insertCI lessThan_iff subrelI)
+ moreover have "\<not> (set (prems_of \<iota>) \<subseteq> active_subset N - {(C0, active)})" using C0_prems_i by blast
+ ultimately have "\<iota> \<in> Inf_from2 (active_subset N) {(C0, active)}"
+ using i_in_inf_fl unfolding Inf_from2_def Inf_from_def by blast
+ then have "to_F \<iota> \<in> no_labels.Inf_from2 (fst ` (active_subset N)) {C0}"
+ unfolding to_F_def Inf_from2_def Inf_from_def
+ no_labels.Inf_from2_def no_labels.Inf_from_def using Inf_FL_to_Inf_F
+ by force
+ then have "to_F \<iota> \<in> no_labels.Red_Inf_Q (fst ` (lnth D (Suc n)))"
+ using suc_nth_d_is inf_from_subs by fastforce
+ then have "\<forall>q \<in> Q. (\<G>_Inf_q q (to_F \<iota>) \<noteq> None \<and>
+ the (\<G>_Inf_q q (to_F \<iota>)) \<subseteq> Red_Inf_q q (\<Union> (\<G>_F_q q ` fst ` lnth D (Suc n))))
\<or> (\<G>_Inf_q q (to_F \<iota>) = None \<and>
- \<G>_F_q q (concl_of (to_F \<iota>)) \<subseteq> (\<Union> (\<G>_F_q q ` (fst ` (lnth D (Suc n))))) \<union>
- Red_F_q q (\<Union> (\<G>_F_q q ` (fst ` (lnth D (Suc n))))))"
- unfolding to_F_def no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q_def
- no_labels.Red_Inf_\<G>_q_def no_labels.\<G>_set_q_def
- by fastforce
- then have "\<iota> \<in> with_labels.Red_Inf_Q (lnth D (Suc n))"
- unfolding to_F_def with_labels.Red_Inf_Q_def Red_Inf_\<G>_L_q_def \<G>_Inf_L_q_def \<G>_set_L_q_def
- \<G>_F_L_q_def using i_in_inf_fl by auto
- then show "\<iota> \<in>
- labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.Sup_Red_Inf_llist D"
- unfolding
- labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.Sup_Red_Inf_llist_def
- using red_inf_equiv2 suc_n_length by auto
- qed
+ \<G>_F_q q (concl_of (to_F \<iota>)) \<subseteq> \<Union> (\<G>_F_q q ` fst ` lnth D (Suc n)) \<union>
+ Red_F_q q (\<Union> (\<G>_F_q q ` fst ` lnth D (Suc n))))"
+ unfolding to_F_def no_labels.Red_Inf_Q_def no_labels.Red_Inf_\<G>_q_def by blast
+ then have "\<iota> \<in> Red_Inf_\<G>_Q (lnth D (Suc n))"
+ using i_in_inf_fl unfolding Red_Inf_\<G>_Q_def Red_Inf_\<G>_q_def by (simp add: to_F_def)
+ then show "\<iota> \<in> Sup_llist (lmap Red_Inf_\<G>_Q D)"
+ unfolding Sup_llist_def using suc_n_length by auto
+qed
+
+theorem gc_complete_Liminf:
+ assumes
+ deriv: "chain (\<Longrightarrow>GC) D" and
+ init_state: "active_subset (lnth D 0) = {}" and
+ final_state: "passive_subset (Liminf_llist D) = {}" and
+ b_in: "B \<in> Bot_F" and
+ bot_entailed: "no_labels.entails_\<G>_Q (fst ` lnth D 0) {B}"
+ shows "\<exists>BL \<in> Bot_FL. BL \<in> Liminf_llist D"
+proof -
+ have labeled_b_in: "(B, active) \<in> Bot_FL" using b_in by simp
+ have labeled_bot_entailed: "entails_\<G>_L_Q (lnth D 0) {(B, active)}"
+ using labeled_entailment_lifting bot_entailed by fastforce
+ have fair: "fair D" using gc_fair[OF deriv init_state final_state] .
+ then show ?thesis
+ using dynamic_refutational_complete_Liminf[OF labeled_b_in gc_to_red[OF deriv] fair
+ labeled_bot_entailed]
+ by blast
qed
(* thm:gc-completeness *)
-theorem gc_complete: "chain (\<Longrightarrow>GC) D \<Longrightarrow> llength D > 0 \<Longrightarrow> active_subset (lnth D 0) = {} \<Longrightarrow>
- non_active_subset (Liminf_llist D) = {} \<Longrightarrow> B \<in> Bot_F \<Longrightarrow>
- no_labels.entails_\<G>_Q (fst ` (lnth D 0)) {B} \<Longrightarrow>
- \<exists>i. enat i < llength D \<and> (\<exists>BL\<in> Bot_FL. BL \<in> (lnth D i))"
-proof -
- fix B
- assume
+theorem gc_complete:
+ assumes
deriv: "chain (\<Longrightarrow>GC) D" and
- not_empty_d: "llength D > 0" and
init_state: "active_subset (lnth D 0) = {}" and
- final_state: "non_active_subset (Liminf_llist D) = {}" and
+ final_state: "passive_subset (Liminf_llist D) = {}" and
b_in: "B \<in> Bot_F" and
bot_entailed: "no_labels.entails_\<G>_Q (fst ` (lnth D 0)) {B}"
- have labeled_b_in: "(B,active) \<in> Bot_FL" unfolding Bot_FL_def using b_in by simp
- have not_empty_d2: "\<not> lnull D" using not_empty_d by force
- have labeled_bot_entailed: "entails_\<G>_L_Q (lnth D 0) {(B,active)}"
- using labeled_entailment_lifting bot_entailed by fastforce
- have "fair D" using gc_fair[OF deriv not_empty_d init_state final_state] .
- then have "\<exists>i \<in> {i. enat i < llength D}. \<exists>BL\<in>Bot_FL. BL \<in> lnth D i"
- using labeled_ordered_dynamic_ref_comp labeled_b_in not_empty_d2 gc_to_red[OF deriv]
- labeled_bot_entailed entail_equiv
- unfolding dynamic_refutational_complete_calculus_def
- dynamic_refutational_complete_calculus_axioms_def by blast
- then show ?thesis by blast
+ shows "\<exists>i. enat i < llength D \<and> (\<exists>BL \<in> Bot_FL. BL \<in> lnth D i)"
+proof -
+ have "\<exists>BL\<in>Bot_FL. BL \<in> Liminf_llist D"
+ using assms by (rule gc_complete_Liminf)
+ then show ?thesis
+ unfolding Liminf_llist_def by auto
qed
end
subsection \<open>Lazy Given Clause Architecture\<close>
-locale Lazy_Given_Clause = Prover_Architecture_Basis Bot_F Inf_F Bot_G Q entails_q Inf_G Red_Inf_q
- Red_F_q \<G>_F_q \<G>_Inf_q l Inf_FL Equiv_F Prec_F Prec_l
+locale lazy_given_clause = prover_architecture_basis Bot_F Inf_F Bot_G Q entails_q Inf_G_q Red_Inf_q
+ Red_F_q \<G>_F_q \<G>_Inf_q Inf_FL Equiv_F Prec_F Prec_l active
for
Bot_F :: "'f set" and
Inf_F :: "'f inference set" and
Bot_G :: "'g set" and
- Q :: "'q itself" and
- entails_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g set \<Rightarrow> bool)" and
- Inf_G :: \<open>'g inference set\<close> and
- Red_Inf_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g inference set)" and
- Red_F_q :: "'q \<Rightarrow> ('g set \<Rightarrow> 'g set)" and
+ Q :: "'q set" and
+ entails_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set \<Rightarrow> bool" and
+ Inf_G_q :: \<open>'q \<Rightarrow> 'g inference set\<close> and
+ Red_Inf_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g inference set" and
+ Red_F_q :: "'q \<Rightarrow> 'g set \<Rightarrow> 'g set" and
\<G>_F_q :: "'q \<Rightarrow> 'f \<Rightarrow> 'g set" and
\<G>_Inf_q :: "'q \<Rightarrow> 'f inference \<Rightarrow> 'g inference set option" and
- l :: "'l itself" and
Inf_FL :: \<open>('f \<times> 'l) inference set\<close> and
- Equiv_F :: "('f \<times> 'f) set" and
- Prec_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<cdot>\<succ>" 50) and
- Prec_l :: "'l \<Rightarrow> 'l \<Rightarrow> bool" (infix "\<sqsubset>l" 50)
- + fixes
- active :: "'l"
- assumes
- active_minimal: "l2 \<noteq> active \<Longrightarrow> active \<sqsubset>l l2" and
- at_least_two_labels: "\<exists>l2. active \<sqsubset>l l2" and
- inf_never_active: "\<iota> \<in> Inf_FL \<Longrightarrow> snd (concl_of \<iota>) \<noteq> active"
+ Equiv_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<doteq>" 50) and
+ Prec_F :: "'f \<Rightarrow> 'f \<Rightarrow> bool" (infix "\<prec>\<cdot>" 50) and
+ Prec_l :: "'l \<Rightarrow> 'l \<Rightarrow> bool" (infix "\<sqsubset>l" 50) and
+ active :: 'l
begin
-definition active_subset :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
- "active_subset M = {CL \<in> M. snd CL = active}"
-
-definition non_active_subset :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set" where
- "non_active_subset M = {CL \<in> M. snd CL \<noteq> active}"
+inductive step :: "'f inference set \<times> ('f \<times> 'l) set \<Rightarrow>
+ 'f inference set \<times> ('f \<times> 'l) set \<Rightarrow> bool" (infix "\<Longrightarrow>LGC" 50) where
+ process: "N1 = N \<union> M \<Longrightarrow> N2 = N \<union> M' \<Longrightarrow> M \<subseteq> Red_F_Q (N \<union> M') \<Longrightarrow>
+ active_subset M' = {} \<Longrightarrow> (T, N1) \<Longrightarrow>LGC (T, N2)" |
+ schedule_infer: "T2 = T1 \<union> T' \<Longrightarrow> N1 = N \<union> {(C, L)} \<Longrightarrow> N2 = N \<union> {(C, active)} \<Longrightarrow>
+ L \<noteq> active \<Longrightarrow> T' = no_labels.Inf_from2 (fst ` (active_subset N)) {C} \<Longrightarrow>
+ (T1, N1) \<Longrightarrow>LGC (T2, N2)" |
+ compute_infer: "T1 = T2 \<union> {\<iota>} \<Longrightarrow> N2 = N1 \<union> M \<Longrightarrow> active_subset M = {} \<Longrightarrow>
+ \<iota> \<in> no_labels.Red_Inf_Q (fst ` (N1 \<union> M)) \<Longrightarrow> (T1, N1) \<Longrightarrow>LGC (T2, N2)" |
+ delete_orphans: "T1 = T2 \<union> T' \<Longrightarrow>
+ T' \<inter> no_labels.Inf_from (fst ` (active_subset N)) = {} \<Longrightarrow> (T1, N) \<Longrightarrow>LGC (T2, N)"
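For comparison with the eager \<Longrightarrow>GC relation, the four rules of \<Longrightarrow>LGC on states (T, N), where T is the set of scheduled inferences, admit the following informal sketch. The same expository shorthand as above is used, and "orphan" inferences are those that no longer have all their premises among the active clauses; again, this is a paraphrase of the inductive definition, not additional formal content.

\[
\begin{array}{ll}
\textsc{Process:} & (T,\, N \cup M) \;\Longrightarrow_{\mathrm{LGC}}\; (T,\, N \cup M')
  \quad\text{if } M \subseteq \mathit{Red}_F(N \cup M') \text{ and all clauses in } M' \text{ are passive} \\[2pt]
\textsc{ScheduleInfer:} & (T,\, N \cup \{(C, L)\}) \;\Longrightarrow_{\mathrm{LGC}}\; (T \cup T',\, N \cup \{(C, \mathit{active})\})
  \quad\text{if } L \neq \mathit{active} \text{ and } T' \text{ consists of the inferences from } C \text{ and the active clauses of } N \\[2pt]
\textsc{ComputeInfer:} & (T \cup \{\iota\},\, N) \;\Longrightarrow_{\mathrm{LGC}}\; (T,\, N \cup M)
  \quad\text{if all clauses in } M \text{ are passive and } \iota \text{ is redundant w.r.t.\ the formulas of } N \cup M \\[2pt]
\textsc{DeleteOrphans:} & (T \cup T',\, N) \;\Longrightarrow_{\mathrm{LGC}}\; (T,\, N)
  \quad\text{if no inference in } T' \text{ has all its premises among the active clauses of } N
\end{array}
\]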
-inductive Lazy_Given_Clause_step :: "('f inference set) \<times> (('f \<times> 'l) set) \<Rightarrow>
- ('f inference set) \<times> (('f \<times> 'l) set) \<Rightarrow> bool" (infix "\<Longrightarrow>LGC" 50) where
- process: "N1 = N \<union> M \<Longrightarrow> N2 = N \<union> M' \<Longrightarrow> N \<inter> M = {} \<Longrightarrow>
- M \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q (N \<union> M') \<Longrightarrow>
- active_subset M' = {} \<Longrightarrow> (T,N1) \<Longrightarrow>LGC (T,N2)" |
- schedule_infer: "T2 = T1 \<union> T' \<Longrightarrow> N1 = N \<union> {(C,L)} \<Longrightarrow> {(C,L)} \<inter> N = {} \<Longrightarrow> N2 = N \<union> {(C,active)} \<Longrightarrow>
- L \<noteq> active \<Longrightarrow> T' = no_labels.Non_ground.Inf_from2 (fst ` (active_subset N)) {C} \<Longrightarrow>
- (T1,N1) \<Longrightarrow>LGC (T2,N2)" |
- compute_infer: "T1 = T2 \<union> {\<iota>} \<Longrightarrow> T2 \<inter> {\<iota>} = {} \<Longrightarrow> N2 = N1 \<union> M \<Longrightarrow> active_subset M = {} \<Longrightarrow>
- \<iota> \<in> no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N1 \<union> M)) \<Longrightarrow>
- (T1,N1) \<Longrightarrow>LGC (T2,N2)" |
- delete_orphans: "T1 = T2 \<union> T' \<Longrightarrow> T2 \<inter> T' = {} \<Longrightarrow>
- T' \<inter> no_labels.Non_ground.Inf_from (fst ` (active_subset N)) = {} \<Longrightarrow> (T1,N) \<Longrightarrow>LGC (T2,N)"
+lemma premise_free_inf_always_from:
+ "\<iota> \<in> Inf_F \<Longrightarrow> length (prems_of \<iota>) = 0 \<Longrightarrow> \<iota> \<in> no_labels.Inf_from N"
+ unfolding no_labels.Inf_from_def by simp
-abbreviation derive :: "('f \<times> 'l) set \<Rightarrow> ('f \<times> 'l) set \<Rightarrow> bool" (infix "\<rhd>RedL" 50) where
- "derive \<equiv> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive"
-
-lemma premise_free_inf_always_from: "\<iota> \<in> Inf_F \<Longrightarrow> length (prems_of \<iota>) = 0 \<Longrightarrow>
- \<iota> \<in> no_labels.Non_ground.Inf_from N"
- unfolding no_labels.Non_ground.Inf_from_def by simp
-
-lemma one_step_equiv: "(T1,N1) \<Longrightarrow>LGC (T2,N2) \<Longrightarrow> N1 \<rhd>RedL N2"
-proof (cases "(T1,N1)" "(T2,N2)" rule: Lazy_Given_Clause_step.cases)
- show "(T1,N1) \<Longrightarrow>LGC (T2,N2) \<Longrightarrow> (T1,N1) \<Longrightarrow>LGC (T2,N2)" by blast
+lemma one_step_equiv: "(T1, N1) \<Longrightarrow>LGC (T2, N2) \<Longrightarrow> N1 \<rhd>RedL N2"
+proof (cases "(T1, N1)" "(T2, N2)" rule: step.cases)
+ show "(T1, N1) \<Longrightarrow>LGC (T2, N2) \<Longrightarrow> (T1, N1) \<Longrightarrow>LGC (T2, N2)" by blast
next
fix N M M'
assume
n1_is: "N1 = N \<union> M" and
n2_is: "N2 = N \<union> M'" and
- empty_inter: "N \<inter> M = {}" and
- m_red: "M \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q (N \<union> M')"
- have "N1 - N2 \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N2"
- using n1_is n2_is empty_inter m_red by auto
+ m_red: "M \<subseteq> Red_F_Q (N \<union> M')"
+ have "N1 - N2 \<subseteq> Red_F_Q N2"
+ using n1_is n2_is m_red by auto
then show "N1 \<rhd>RedL N2"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive.simps by blast
+ unfolding derive.simps by blast
next
fix N C L M
assume
- n1_is: "N1 = N \<union> {(C,L)}" and
+ n1_is: "N1 = N \<union> {(C, L)}" and
not_active: "L \<noteq> active" and
n2_is: "N2 = N \<union> {(C, active)}"
have "(C, active) \<in> N2" using n2_is by auto
- moreover have "C \<cdot>\<succeq> C" using Prec_eq_F_def equiv_F_is_equiv_rel equiv_class_eq_iff by fastforce
+ moreover have "C \<preceq>\<cdot> C" by (metis equivp_def equiv_equiv_F)
moreover have "active \<sqsubset>l L" using active_minimal[OF not_active] .
- ultimately have "{(C,L)} \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N2"
+ ultimately have "{(C, L)} \<subseteq> Red_F_Q N2"
using red_labeled_clauses by blast
- then have "N1 - N2 \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N2"
- using empty_red_f_equiv[of N2] using n1_is n2_is by blast
+ then have "N1 - N2 \<subseteq> Red_F_Q N2"
+ using std_Red_F_Q_eq using n1_is n2_is by blast
then show "N1 \<rhd>RedL N2"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive.simps
- by blast
+ unfolding derive.simps by blast
next
fix M
assume
n2_is: "N2 = N1 \<union> M"
- have "N1 - N2 \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N2"
+ have "N1 - N2 \<subseteq> Red_F_Q N2"
using n2_is by blast
then show "N1 \<rhd>RedL N2"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive.simps
- by blast
+ unfolding derive.simps by blast
next
assume n2_is: "N2 = N1"
- have "N1 - N2 \<subseteq> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.Red_F_Q N2"
+ have "N1 - N2 \<subseteq> Red_F_Q N2"
using n2_is by blast
then show "N1 \<rhd>RedL N2"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.derive.simps
- by blast
+ unfolding derive.simps by blast
qed
-abbreviation fair :: "('f \<times> 'l) set llist \<Rightarrow> bool" where
- "fair \<equiv> labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.fair"
-
(* lem:lgc-derivations-are-red-derivations *)
lemma lgc_to_red: "chain (\<Longrightarrow>LGC) D \<Longrightarrow> chain (\<rhd>RedL) (lmap snd D)"
using one_step_equiv Lazy_List_Chain.chain_mono by (smt chain_lmap prod.collapse)
(* lem:fair-lgc-derivations *)
-lemma lgc_fair: "chain (\<Longrightarrow>LGC) D \<Longrightarrow> llength D > 0 \<Longrightarrow> active_subset (snd (lnth D 0)) = {} \<Longrightarrow>
- non_active_subset (Liminf_llist (lmap snd D)) = {} \<Longrightarrow> (\<forall>\<iota> \<in> Inf_F. length (prems_of \<iota>) = 0 \<longrightarrow>
- \<iota> \<in> (fst (lnth D 0))) \<Longrightarrow>
- Liminf_llist (lmap fst D) = {} \<Longrightarrow> fair (lmap snd D)"
-proof -
- assume
+lemma lgc_fair:
+ assumes
deriv: "chain (\<Longrightarrow>LGC) D" and
- non_empty: "llength D > 0" and
init_state: "active_subset (snd (lnth D 0)) = {}" and
- final_state: "non_active_subset (Liminf_llist (lmap snd D)) = {}" and
- no_prems_init_active: "\<forall>\<iota> \<in> Inf_F. length (prems_of \<iota>) = 0 \<longrightarrow> \<iota> \<in> (fst (lnth D 0))" and
+ final_state: "passive_subset (Liminf_llist (lmap snd D)) = {}" and
+ no_prems_init_active: "\<forall>\<iota> \<in> Inf_F. length (prems_of \<iota>) = 0 \<longrightarrow> \<iota> \<in> fst (lnth D 0)" and
final_schedule: "Liminf_llist (lmap fst D) = {}"
- show "fair (lmap snd D)"
- unfolding labeled_ord_red_crit_fam.lifted_calc_w_red_crit_family.inter_red_crit_calculus.fair_def
- proof
- fix \<iota>
- assume i_in: "\<iota> \<in> with_labels.Inf_from (Liminf_llist (lmap snd D))"
- have i_in_inf_fl: "\<iota> \<in> Inf_FL" using i_in unfolding with_labels.Inf_from_def by blast
- have "Liminf_llist (lmap snd D) = active_subset (Liminf_llist (lmap snd D))"
- using final_state unfolding non_active_subset_def active_subset_def by blast
- then have i_in2: "\<iota> \<in> with_labels.Inf_from (active_subset (Liminf_llist (lmap snd D)))"
- using i_in by simp
- define m where "m = length (prems_of \<iota>)"
- then have m_def_F: "m = length (prems_of (to_F \<iota>))" unfolding to_F_def by simp
- have i_in_F: "to_F \<iota> \<in> Inf_F"
- using i_in Inf_FL_to_Inf_F unfolding with_labels.Inf_from_def to_F_def by blast
- have exist_nj: "\<forall>j \<in> {0..<m}. (\<exists>nj. enat (Suc nj) < llength D \<and>
- (prems_of \<iota>)!j \<notin> active_subset (snd (lnth D nj)) \<and>
- (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k))))"
- proof clarify
- fix j
- assume j_in: "j \<in> {0..<m}"
- then obtain C where c_is: "(C,active) = (prems_of \<iota>)!j"
- using i_in2 unfolding m_def with_labels.Inf_from_def active_subset_def
- by (smt Collect_mem_eq Collect_mono_iff atLeastLessThan_iff nth_mem old.prod.exhaust snd_conv)
- then have "(C,active) \<in> Liminf_llist (lmap snd D)"
- using j_in i_in unfolding m_def with_labels.Inf_from_def by force
- then obtain nj where nj_is: "enat nj < llength D" and
- c_in2: "(C,active) \<in> \<Inter> (snd ` (lnth D ` {k. nj \<le> k \<and> enat k < llength D}))"
- unfolding Liminf_llist_def using init_state by fastforce
- then have c_in3: "\<forall>k. k \<ge> nj \<longrightarrow> enat k < llength D \<longrightarrow> (C,active) \<in> snd (lnth D k)" by blast
- have nj_pos: "nj > 0" using init_state c_in2 nj_is unfolding active_subset_def by fastforce
- obtain nj_min where nj_min_is: "nj_min = (LEAST nj. enat nj < llength D \<and>
- (C,active) \<in> \<Inter> (snd ` (lnth D ` {k. nj \<le> k \<and> enat k < llength D})))" by blast
- then have in_allk: "\<forall>k. k \<ge> nj_min \<longrightarrow> enat k < llength D \<longrightarrow> (C,active) \<in> snd (lnth D k)"
- using c_in3 nj_is c_in2 INT_E LeastI_ex
- by (smt INT_iff INT_simps(10) c_is image_eqI mem_Collect_eq)
- have njm_smaller_D: "enat nj_min < llength D"
- using nj_min_is
- by (smt LeastI_ex \<open>\<And>thesis. (\<And>nj. \<lbrakk>enat nj < llength D;
+ shows "fair (lmap snd D)"
+ unfolding fair_def
+proof
+ fix \<iota>
+ assume i_in: "\<iota> \<in> Inf_from (Liminf_llist (lmap snd D))"
+ have i_in_inf_fl: "\<iota> \<in> Inf_FL" using i_in unfolding Inf_from_def by blast
+ have "Liminf_llist (lmap snd D) = active_subset (Liminf_llist (lmap snd D))"
+ using final_state unfolding passive_subset_def active_subset_def by blast
+ then have i_in2: "\<iota> \<in> Inf_from (active_subset (Liminf_llist (lmap snd D)))"
+ using i_in by simp
+ define m where "m = length (prems_of \<iota>)"
+ then have m_def_F: "m = length (prems_of (to_F \<iota>))" unfolding to_F_def by simp
+ have i_in_F: "to_F \<iota> \<in> Inf_F"
+ using i_in Inf_FL_to_Inf_F unfolding Inf_from_def to_F_def by blast
+ have exist_nj: "\<forall>j \<in> {0..<m}. (\<exists>nj. enat (Suc nj) < llength D \<and>
+ prems_of \<iota> ! j \<notin> active_subset (snd (lnth D nj)) \<and>
+ (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (snd (lnth D k))))"
+ proof clarify
+ fix j
+ assume j_in: "j \<in> {0..<m}"
+ then obtain C where c_is: "(C, active) = prems_of \<iota> ! j"
+ using i_in2 unfolding m_def Inf_from_def active_subset_def
+ by (smt Collect_mem_eq Collect_mono_iff atLeastLessThan_iff nth_mem old.prod.exhaust snd_conv)
+ then have "(C, active) \<in> Liminf_llist (lmap snd D)"
+ using j_in i_in unfolding m_def Inf_from_def by force
+ then obtain nj where nj_is: "enat nj < llength D" and
+ c_in2: "(C, active) \<in> \<Inter> (snd ` (lnth D ` {k. nj \<le> k \<and> enat k < llength D}))"
+ unfolding Liminf_llist_def using init_state by fastforce
+ then have c_in3: "\<forall>k. k \<ge> nj \<longrightarrow> enat k < llength D \<longrightarrow> (C, active) \<in> snd (lnth D k)" by blast
+ have nj_pos: "nj > 0" using init_state c_in2 nj_is unfolding active_subset_def by fastforce
+ obtain nj_min where nj_min_is: "nj_min = (LEAST nj. enat nj < llength D \<and>
+ (C, active) \<in> \<Inter> (snd ` (lnth D ` {k. nj \<le> k \<and> enat k < llength D})))" by blast
+ then have in_allk: "\<forall>k. k \<ge> nj_min \<longrightarrow> enat k < llength D \<longrightarrow> (C, active) \<in> snd (lnth D k)"
+ using c_in3 nj_is c_in2 INT_E LeastI_ex
+ by (smt INT_iff INT_simps(10) c_is image_eqI mem_Collect_eq)
+ have njm_smaller_D: "enat nj_min < llength D"
+ using nj_min_is
+ by (smt LeastI_ex \<open>\<And>thesis. (\<And>nj. \<lbrakk>enat nj < llength D;
(C, active) \<in> \<Inter> (snd ` (lnth D ` {k. nj \<le> k \<and> enat k < llength D}))\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close>)
- have "nj_min > 0"
- using nj_is c_in2 nj_pos nj_min_is
- by (metis (mono_tags, lifting) active_subset_def emptyE in_allk init_state mem_Collect_eq
- non_empty not_less snd_conv zero_enat_def)
- then obtain njm_prec where nj_prec_is: "Suc njm_prec = nj_min" using gr0_conv_Suc by auto
- then have njm_prec_njm: "njm_prec < nj_min" by blast
- then have njm_prec_njm_enat: "enat njm_prec < enat nj_min" by simp
- have njm_prec_smaller_d: "njm_prec < llength D"
- using HOL.no_atp(15)[OF njm_smaller_D njm_prec_njm_enat] .
- have njm_prec_all_suc: "\<forall>k>njm_prec. enat k < llength D \<longrightarrow> (C, active) \<in> snd (lnth D k)"
- using nj_prec_is in_allk by simp
- have notin_njm_prec: "(C, active) \<notin> snd (lnth D njm_prec)"
- proof (rule ccontr)
- assume "\<not> (C, active) \<notin> snd (lnth D njm_prec)"
- then have absurd_hyp: "(C, active) \<in> snd (lnth D njm_prec)" by simp
- have prec_smaller: "enat njm_prec < llength D" using nj_min_is nj_prec_is
- by (smt LeastI_ex Suc_leD \<open>\<And>thesis. (\<And>nj. \<lbrakk>enat nj < llength D;
+ have "nj_min > 0"
+ using nj_is c_in2 nj_pos nj_min_is
+ by (metis (mono_tags, lifting) active_subset_def emptyE in_allk init_state mem_Collect_eq
+ not_less snd_conv zero_enat_def chain_length_pos[OF deriv])
+ then obtain njm_prec where nj_prec_is: "Suc njm_prec = nj_min" using gr0_conv_Suc by auto
+ then have njm_prec_njm: "njm_prec < nj_min" by blast
+ then have njm_prec_njm_enat: "enat njm_prec < enat nj_min" by simp
+ have njm_prec_smaller_d: "njm_prec < llength D"
+ using HOL.no_atp(15)[OF njm_smaller_D njm_prec_njm_enat] .
+ have njm_prec_all_suc: "\<forall>k>njm_prec. enat k < llength D \<longrightarrow> (C, active) \<in> snd (lnth D k)"
+ using nj_prec_is in_allk by simp
+ have notin_njm_prec: "(C, active) \<notin> snd (lnth D njm_prec)"
+ proof (rule ccontr)
+ assume "\<not> (C, active) \<notin> snd (lnth D njm_prec)"
+ then have absurd_hyp: "(C, active) \<in> snd (lnth D njm_prec)" by simp
+ have prec_smaller: "enat njm_prec < llength D" using nj_min_is nj_prec_is
+ by (smt LeastI_ex Suc_leD \<open>\<And>thesis. (\<And>nj. \<lbrakk>enat nj < llength D;
(C, active) \<in> \<Inter> (snd ` (lnth D ` {k. nj \<le> k \<and> enat k < llength D}))\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close>
enat_ord_simps(1) le_eq_less_or_eq le_less_trans)
- have "(C,active) \<in> \<Inter> (snd ` (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D}))"
- proof -
- {
- fix k
- assume k_in: "njm_prec \<le> k \<and> enat k < llength D"
- have "k = njm_prec \<Longrightarrow> (C,active) \<in> snd (lnth D k)" using absurd_hyp by simp
- moreover have "njm_prec < k \<Longrightarrow> (C,active) \<in> snd (lnth D k)"
- using nj_prec_is in_allk k_in by simp
- ultimately have "(C,active) \<in> snd (lnth D k)" using k_in by fastforce
- }
- then show "(C,active) \<in> \<Inter> (snd ` (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D}))"
- by blast
- qed
- then have "enat njm_prec < llength D \<and>
- (C,active) \<in> \<Inter> (snd ` (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D}))"
- using prec_smaller by blast
- then show False
- using nj_min_is nj_prec_is Orderings.wellorder_class.not_less_Least njm_prec_njm by blast
+ have "(C, active) \<in> \<Inter> (snd ` (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D}))"
+ proof -
+ {
+ fix k
+ assume k_in: "njm_prec \<le> k \<and> enat k < llength D"
+ have "k = njm_prec \<Longrightarrow> (C, active) \<in> snd (lnth D k)" using absurd_hyp by simp
+ moreover have "njm_prec < k \<Longrightarrow> (C, active) \<in> snd (lnth D k)"
+ using nj_prec_is in_allk k_in by simp
+ ultimately have "(C, active) \<in> snd (lnth D k)" using k_in by fastforce
+ }
+ then show "(C, active) \<in> \<Inter> (snd ` (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D}))"
+ by blast
qed
- then have notin_active_subs_njm_prec: "(C, active) \<notin> active_subset (snd (lnth D njm_prec))"
- unfolding active_subset_def by blast
- then show "\<exists>nj. enat (Suc nj) < llength D \<and> (prems_of \<iota>)!j \<notin> active_subset (snd (lnth D nj)) \<and>
- (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k)))"
- using c_is njm_prec_all_suc njm_prec_smaller_d by (metis (mono_tags, lifting)
- active_subset_def mem_Collect_eq nj_prec_is njm_smaller_D snd_conv)
+ then have "enat njm_prec < llength D \<and>
+ (C, active) \<in> \<Inter> (snd ` (lnth D ` {k. njm_prec \<le> k \<and> enat k < llength D}))"
+ using prec_smaller by blast
+ then show False
+ using nj_min_is nj_prec_is Orderings.wellorder_class.not_less_Least njm_prec_njm by blast
qed
- define nj_set where "nj_set = {nj. (\<exists>j\<in>{0..<m}. enat (Suc nj) < llength D \<and>
- (prems_of \<iota>)!j \<notin> active_subset (snd (lnth D nj)) \<and>
- (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k))))}"
- {
- assume m_null: "m = 0"
- then have "enat 0 < llength D \<and> to_F \<iota> \<in> fst (lnth D 0)"
- using no_prems_init_active i_in_F non_empty m_def_F zero_enat_def by auto
- then have "\<exists>n. enat n < llength D \<and> to_F \<iota> \<in> fst (lnth D n)"
- by blast
- }
- moreover {
- assume m_pos: "m > 0"
- have uniq_nj: "j \<in> {0..<m} \<Longrightarrow>
- (enat (Suc nj1) < llength D \<and>
- (prems_of \<iota>)!j \<notin> active_subset (snd (lnth D nj1)) \<and>
- (\<forall>k. k > nj1 \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k)))) \<Longrightarrow>
- (enat (Suc nj2) < llength D \<and>
- (prems_of \<iota>)!j \<notin> active_subset (snd (lnth D nj2)) \<and>
- (\<forall>k. k > nj2 \<longrightarrow> enat k < llength D \<longrightarrow> (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k)))) \<Longrightarrow>
- nj1=nj2"
- proof (clarify, rule ccontr)
- fix j nj1 nj2
- assume "j \<in> {0..<m}" and
- nj1_d: "enat (Suc nj1) < llength D" and
- nj2_d: "enat (Suc nj2) < llength D" and
- nj1_notin: "prems_of \<iota> ! j \<notin> active_subset (snd (lnth D nj1))" and
- k_nj1: "\<forall>k>nj1. enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (snd (lnth D k))" and
- nj2_notin: "prems_of \<iota> ! j \<notin> active_subset (snd (lnth D nj2))" and
- k_nj2: "\<forall>k>nj2. enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (snd (lnth D k))" and
- diff_12: "nj1 \<noteq> nj2"
- have "nj1 < nj2 \<Longrightarrow> False"
- proof -
- assume prec_12: "nj1 < nj2"
- have "enat nj2 < llength D" using nj2_d using Suc_ile_eq less_trans by blast
- then have "prems_of \<iota> ! j \<in> active_subset (snd (lnth D nj2))"
- using k_nj1 prec_12 by simp
- then show False using nj2_notin by simp
- qed
- moreover have "nj1 > nj2 \<Longrightarrow> False"
- proof -
- assume prec_21: "nj2 < nj1"
- have "enat nj1 < llength D" using nj1_d using Suc_ile_eq less_trans by blast
- then have "prems_of \<iota> ! j \<in> active_subset (snd (lnth D nj1))"
- using k_nj2 prec_21
- by simp
- then show False using nj1_notin by simp
- qed
- ultimately show False using diff_12 by linarith
- qed
- have nj_not_empty: "nj_set \<noteq> {}"
- proof -
- have zero_in: "0 \<in> {0..<m}" using m_pos by simp
- then obtain n0 where "enat (Suc n0) < llength D" and
- "prems_of \<iota> ! 0 \<notin> active_subset (snd (lnth D n0))" and
- "\<forall>k>n0. enat k < llength D \<longrightarrow> prems_of \<iota> ! 0 \<in> active_subset (snd (lnth D k))"
- using exist_nj by fast
- then have "n0 \<in> nj_set" unfolding nj_set_def using zero_in by blast
- then show "nj_set \<noteq> {}" by auto
- qed
- have nj_finite: "finite nj_set"
- using uniq_nj all_ex_finite_set[OF exist_nj] by (metis (no_types, lifting) Suc_ile_eq
+ then have notin_active_subs_njm_prec: "(C, active) \<notin> active_subset (snd (lnth D njm_prec))"
+ unfolding active_subset_def by blast
+ then show "\<exists>nj. enat (Suc nj) < llength D \<and> prems_of \<iota> ! j \<notin> active_subset (snd (lnth D nj)) \<and>
+ (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (snd (lnth D k)))"
+ using c_is njm_prec_all_suc njm_prec_smaller_d by (metis (mono_tags, lifting)
+ active_subset_def mem_Collect_eq nj_prec_is njm_smaller_D snd_conv)
+ qed
+ define nj_set where "nj_set = {nj. (\<exists>j\<in>{0..<m}. enat (Suc nj) < llength D \<and>
+ prems_of \<iota> ! j \<notin> active_subset (snd (lnth D nj)) \<and>
+ (\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow> prems_of \<iota> ! j \<in> active_subset (snd (lnth D k))))}"
+ {
+ assume m_null: "m = 0"
+ then have "enat 0 < llength D \<and> to_F \<iota> \<in> fst (lnth D 0)"
+ using no_prems_init_active i_in_F m_def_F zero_enat_def chain_length_pos[OF deriv] by auto
+ then have "\<exists>n. enat n < llength D \<and> to_F \<iota> \<in> fst (lnth D n)"
+ by blast
+ }
+ moreover {
+ assume m_pos: "m > 0"
+ have nj_not_empty: "nj_set \<noteq> {}"
+ proof -
+ have zero_in: "0 \<in> {0..<m}" using m_pos by simp
+ then obtain n0 where "enat (Suc n0) < llength D" and
+ "prems_of \<iota> ! 0 \<notin> active_subset (snd (lnth D n0))" and
+ "\<forall>k>n0. enat k < llength D \<longrightarrow> prems_of \<iota> ! 0 \<in> active_subset (snd (lnth D k))"
+ using exist_nj by fast
+ then have "n0 \<in> nj_set" unfolding nj_set_def using zero_in by blast
+ then show "nj_set \<noteq> {}" by auto
+ qed
+ have nj_finite: "finite nj_set"
+ using all_ex_finite_set[OF exist_nj] by (metis (no_types, lifting) Suc_ile_eq
dual_order.strict_implies_order linorder_neqE_nat nj_set_def)
- have "\<exists>n \<in> nj_set. \<forall>nj \<in> nj_set. nj \<le> n"
- using nj_not_empty nj_finite using Max_ge Max_in by blast
- then obtain n where n_in: "n \<in> nj_set" and n_bigger: "\<forall>nj \<in> nj_set. nj \<le> n" by blast
- then obtain j0 where j0_in: "j0 \<in> {0..<m}" and suc_n_length: "enat (Suc n) < llength D" and
- j0_notin: "(prems_of \<iota>)!j0 \<notin> active_subset (snd (lnth D n))" and
- j0_allin: "(\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
- (prems_of \<iota>)!j0 \<in> active_subset (snd (lnth D k)))"
+ have "\<exists>n \<in> nj_set. \<forall>nj \<in> nj_set. nj \<le> n"
+ using nj_not_empty nj_finite using Max_ge Max_in by blast
+ then obtain n where n_in: "n \<in> nj_set" and n_bigger: "\<forall>nj \<in> nj_set. nj \<le> n" by blast
+ then obtain j0 where j0_in: "j0 \<in> {0..<m}" and suc_n_length: "enat (Suc n) < llength D" and
+ j0_notin: "prems_of \<iota> ! j0 \<notin> active_subset (snd (lnth D n))" and
+ j0_allin: "(\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
+ prems_of \<iota> ! j0 \<in> active_subset (snd (lnth D k)))"
+ unfolding nj_set_def by blast
+ obtain C0 where C0_is: "prems_of \<iota> ! j0 = (C0, active)"
+ using j0_in i_in2 unfolding m_def Inf_from_def active_subset_def
+ by (smt Collect_mem_eq Collect_mono_iff atLeastLessThan_iff nth_mem old.prod.exhaust snd_conv)
+ then have C0_prems_i: "(C0, active) \<in> set (prems_of \<iota>)" using in_set_conv_nth j0_in m_def by force
+ have C0_in: "(C0, active) \<in> (snd (lnth D (Suc n)))"
+ using C0_is j0_allin suc_n_length by (simp add: active_subset_def)
+ have C0_notin: "(C0, active) \<notin> (snd (lnth D n))"
+ using C0_is j0_notin unfolding active_subset_def by simp
+ have step_n: "lnth D n \<Longrightarrow>LGC lnth D (Suc n)"
+ using deriv chain_lnth_rel n_in unfolding nj_set_def by blast
+ have is_scheduled: "\<exists>T2 T1 T' N1 N C L N2. lnth D n = (T1, N1) \<and> lnth D (Suc n) = (T2, N2) \<and>
+ T2 = T1 \<union> T' \<and> N1 = N \<union> {(C, L)} \<and> N2 = N \<union> {(C, active)} \<and> L \<noteq> active \<and>
+ T' = no_labels.Inf_from2 (fst ` active_subset N) {C}"
+ using step.simps[of "lnth D n" "lnth D (Suc n)"] step_n C0_in C0_notin
+ unfolding active_subset_def by fastforce
+ then obtain T2 T1 T' N1 N L N2 where nth_d_is: "lnth D n = (T1, N1)" and
+ suc_nth_d_is: "lnth D (Suc n) = (T2, N2)" and t2_is: "T2 = T1 \<union> T'" and
+ n1_is: "N1 = N \<union> {(C0, L)}" "N2 = N \<union> {(C0, active)}" and
+ l_not_active: "L \<noteq> active" and
+ tp_is: "T' = no_labels.Inf_from2 (fst ` active_subset N) {C0}"
+ using C0_in C0_notin j0_in C0_is using active_subset_def by fastforce
+ have "j \<in> {0..<m} \<Longrightarrow> prems_of \<iota> ! j \<noteq> prems_of \<iota> ! j0 \<Longrightarrow> prems_of \<iota> ! j \<in> (active_subset N)"
+ for j
+ proof -
+ fix j
+ assume j_in: "j \<in> {0..<m}" and
+ j_not_j0: "prems_of \<iota> ! j \<noteq> prems_of \<iota> ! j0"
+ obtain nj where nj_len: "enat (Suc nj) < llength D" and
+ nj_prems: "prems_of \<iota> ! j \<notin> active_subset (snd (lnth D nj))" and
+ nj_greater: "(\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow>
+ prems_of \<iota> ! j \<in> active_subset (snd (lnth D k)))"
+ using exist_nj j_in by blast
+ then have "nj \<in> nj_set" unfolding nj_set_def using j_in by blast
+ moreover have "nj \<noteq> n"
+ proof (rule ccontr)
+ assume "\<not> nj \<noteq> n"
+ then have "prems_of \<iota> ! j = (C0, active)"
+ using C0_in C0_notin step.simps[of "lnth D n" "lnth D (Suc n)"] step_n
+ active_subset_def is_scheduled nj_greater nj_prems suc_n_length by auto
+ then show False using j_not_j0 C0_is by simp
+ qed
+ ultimately have "nj < n" using n_bigger by force
+ then have "prems_of \<iota> ! j \<in> (active_subset (snd (lnth D n)))"
+ using nj_greater n_in Suc_ile_eq dual_order.strict_implies_order
unfolding nj_set_def by blast
- obtain C0 where C0_is: "(prems_of \<iota>)!j0 = (C0,active)"
- using j0_in i_in2 unfolding m_def with_labels.Inf_from_def active_subset_def
- by (smt Collect_mem_eq Collect_mono_iff atLeastLessThan_iff nth_mem old.prod.exhaust snd_conv)
- then have C0_prems_i: "(C0,active) \<in> set (prems_of \<iota>)" using in_set_conv_nth j0_in m_def by force
- have C0_in: "(C0,active) \<in> (snd (lnth D (Suc n)))"
- using C0_is j0_allin suc_n_length by (simp add: active_subset_def)
- have C0_notin: "(C0,active) \<notin> (snd (lnth D n))"
- using C0_is j0_notin unfolding active_subset_def by simp
- have step_n: "lnth D n \<Longrightarrow>LGC lnth D (Suc n)"
- using deriv chain_lnth_rel n_in unfolding nj_set_def by blast
- have is_scheduled: "\<exists>T2 T1 T' N1 N C L N2. lnth D n = (T1, N1) \<and> lnth D (Suc n) = (T2, N2) \<and>
- T2 = T1 \<union> T' \<and> N1 = N \<union> {(C, L)} \<and> {(C, L)} \<inter> N = {} \<and> N2 = N \<union> {(C, active)} \<and> L \<noteq> active \<and>
- T' = no_labels.Non_ground.Inf_from2 (fst ` active_subset N) {C}"
- using Lazy_Given_Clause_step.simps[of "lnth D n" "lnth D (Suc n)"] step_n C0_in C0_notin
- unfolding active_subset_def by fastforce
- then obtain T2 T1 T' N1 N L N2 where nth_d_is: "lnth D n = (T1, N1)" and
- suc_nth_d_is: "lnth D (Suc n) = (T2, N2)" and t2_is: "T2 = T1 \<union> T'" and
- n1_is: "N1 = N \<union> {(C0, L)}" "{(C0, L)} \<inter> N = {}" "N2 = N \<union> {(C0, active)}" and
- l_not_active: "L \<noteq> active" and
- tp_is: "T' = no_labels.Non_ground.Inf_from2 (fst ` active_subset N) {C0}"
- using C0_in C0_notin j0_in C0_is using active_subset_def by fastforce
- have "j \<in> {0..<m} \<Longrightarrow> (prems_of \<iota>)!j \<noteq> (prems_of \<iota>)!j0 \<Longrightarrow> (prems_of \<iota>)!j \<in> (active_subset N)"
- for j
- proof -
- fix j
- assume j_in: "j \<in> {0..<m}" and
- j_not_j0: "(prems_of \<iota>)!j \<noteq> (prems_of \<iota>)!j0"
- obtain nj where nj_len: "enat (Suc nj) < llength D" and
- nj_prems: "(prems_of \<iota>)!j \<notin> active_subset (snd (lnth D nj))" and
- nj_greater: "(\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow>
- (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k)))"
- using exist_nj j_in by blast
- then have "nj \<in> nj_set" unfolding nj_set_def using j_in by blast
- moreover have "nj \<noteq> n"
- proof (rule ccontr)
- assume "\<not> nj \<noteq> n"
- then have "(prems_of \<iota>)!j = (C0,active)"
- using C0_in C0_notin Lazy_Given_Clause_step.simps[of "lnth D n" "lnth D (Suc n)"] step_n
- active_subset_def is_scheduled nj_greater nj_prems suc_n_length by auto
- then show False using j_not_j0 C0_is by simp
- qed
- ultimately have "nj < n" using n_bigger by force
- then have "(prems_of \<iota>)!j \<in> (active_subset (snd (lnth D n)))"
- using nj_greater n_in Suc_ile_eq dual_order.strict_implies_order
- unfolding nj_set_def by blast
- then show "(prems_of \<iota>)!j \<in> (active_subset N)"
- using nth_d_is l_not_active n1_is unfolding active_subset_def by force
- qed
- then have prems_i_active: "set (prems_of \<iota>) \<subseteq> active_subset N \<union> {(C0, active)}"
- using C0_prems_i C0_is m_def
- by (metis Un_iff atLeast0LessThan in_set_conv_nth insertCI lessThan_iff subrelI)
- moreover have "\<not> (set (prems_of \<iota>) \<subseteq> active_subset N - {(C0, active)})" using C0_prems_i by blast
- ultimately have "\<iota> \<in> with_labels.Inf_from2 (active_subset N) {(C0,active)}"
- using i_in_inf_fl prems_i_active unfolding with_labels.Inf_from2_def with_labels.Inf_from_def
- by blast
- then have "to_F \<iota> \<in> no_labels.Non_ground.Inf_from2 (fst ` (active_subset N)) {C0}"
- unfolding to_F_def with_labels.Inf_from2_def with_labels.Inf_from_def
- no_labels.Non_ground.Inf_from2_def no_labels.Non_ground.Inf_from_def
- using Inf_FL_to_Inf_F by force
- then have i_in_t2: "to_F \<iota> \<in> T2" using tp_is t2_is by simp
- have "j \<in> {0..<m} \<Longrightarrow> (\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
- (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k)))" for j
- proof (cases "j = j0")
- case True
- assume "j = j0"
- then show "(\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
- (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k)))" using j0_allin by simp
- next
- case False
- assume j_in: "j \<in> {0..<m}" and
- "j \<noteq> j0"
- obtain nj where nj_len: "enat (Suc nj) < llength D" and
- nj_prems: "(prems_of \<iota>)!j \<notin> active_subset (snd (lnth D nj))" and
- nj_greater: "(\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow>
- (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k)))"
- using exist_nj j_in by blast
- then have "nj \<in> nj_set" unfolding nj_set_def using j_in by blast
- then show "(\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
- (prems_of \<iota>)!j \<in> active_subset (snd (lnth D k)))"
- using nj_greater n_bigger by auto
- qed
- then have allj_allk: "(\<forall>c\<in> set (prems_of \<iota>). (\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
+ then show "prems_of \<iota> ! j \<in> (active_subset N)"
+ using nth_d_is l_not_active n1_is unfolding active_subset_def by force
+ qed
+ then have prems_i_active: "set (prems_of \<iota>) \<subseteq> active_subset N \<union> {(C0, active)}"
+ using C0_prems_i C0_is m_def
+ by (metis Un_iff atLeast0LessThan in_set_conv_nth insertCI lessThan_iff subrelI)
+ moreover have "\<not> (set (prems_of \<iota>) \<subseteq> active_subset N - {(C0, active)})" using C0_prems_i by blast
+ ultimately have "\<iota> \<in> Inf_from2 (active_subset N) {(C0, active)}"
+ using i_in_inf_fl prems_i_active unfolding Inf_from2_def Inf_from_def by blast
+ then have "to_F \<iota> \<in> no_labels.Inf_from2 (fst ` (active_subset N)) {C0}"
+ unfolding to_F_def Inf_from2_def Inf_from_def
+ no_labels.Inf_from2_def no_labels.Inf_from_def
+ using Inf_FL_to_Inf_F by force
+ then have i_in_t2: "to_F \<iota> \<in> T2" using tp_is t2_is by simp
+ have "j \<in> {0..<m} \<Longrightarrow> (\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
+ prems_of \<iota> ! j \<in> active_subset (snd (lnth D k)))" for j
+ proof (cases "j = j0")
+ case True
+ assume "j = j0"
+ then show "(\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
+ prems_of \<iota> ! j \<in> active_subset (snd (lnth D k)))" using j0_allin by simp
+ next
+ case False
+ assume j_in: "j \<in> {0..<m}" and
+ "j \<noteq> j0"
+ obtain nj where nj_len: "enat (Suc nj) < llength D" and
+ nj_prems: "prems_of \<iota> ! j \<notin> active_subset (snd (lnth D nj))" and
+ nj_greater: "(\<forall>k. k > nj \<longrightarrow> enat k < llength D \<longrightarrow>
+ prems_of \<iota> ! j \<in> active_subset (snd (lnth D k)))"
+ using exist_nj j_in by blast
+ then have "nj \<in> nj_set" unfolding nj_set_def using j_in by blast
+ then show "(\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
+ prems_of \<iota> ! j \<in> active_subset (snd (lnth D k)))"
+ using nj_greater n_bigger by auto
+ qed
+ then have allj_allk: "(\<forall>c\<in> set (prems_of \<iota>). (\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
c \<in> active_subset (snd (lnth D k))))"
- using m_def by (metis atLeast0LessThan in_set_conv_nth lessThan_iff)
- have "\<forall>c\<in> set (prems_of \<iota>). snd c = active"
- using prems_i_active unfolding active_subset_def by auto
- then have ex_n_i_in: "\<exists>n. enat (Suc n) < llength D \<and> to_F \<iota> \<in> fst (lnth D (Suc n)) \<and>
+ using m_def by (metis atLeast0LessThan in_set_conv_nth lessThan_iff)
+ have "\<forall>c\<in> set (prems_of \<iota>). snd c = active"
+ using prems_i_active unfolding active_subset_def by auto
+ then have ex_n_i_in: "\<exists>n. enat (Suc n) < llength D \<and> to_F \<iota> \<in> fst (lnth D (Suc n)) \<and>
(\<forall>c\<in> set (prems_of \<iota>). snd c = active) \<and>
(\<forall>c\<in> set (prems_of \<iota>). (\<forall>k. k > n \<longrightarrow> enat k < llength D \<longrightarrow>
c \<in> active_subset (snd (lnth D k))))"
- using allj_allk i_in_t2 suc_nth_d_is fstI n_in nj_set_def
- by auto
- then have "\<exists>n. enat n < llength D \<and> to_F \<iota> \<in> fst (lnth D n) \<and>
+ using allj_allk i_in_t2 suc_nth_d_is fstI n_in nj_set_def
+ by auto
+ then have "\<exists>n. enat n < llength D \<and> to_F \<iota> \<in> fst (lnth D n) \<and>
(\<forall>c\<in> set (prems_of \<iota>). snd c = active) \<and> (\<forall>c\<in> set (prems_of \<iota>). (\<forall>k. k \<ge> n \<longrightarrow>
enat k < llength D \<longrightarrow> c \<in> active_subset (snd (lnth D k))))"
- by auto
- }
- ultimately obtain n T2 N2 where i_in_suc_n: "to_F \<iota> \<in> fst (lnth D n)" and
- all_prems_active_after: "m > 0 \<Longrightarrow> (\<forall>c\<in> set (prems_of \<iota>). (\<forall>k. k \<ge> n \<longrightarrow> enat k < llength D \<longrightarrow>
+ by auto
+ }
+ ultimately obtain n T2 N2 where i_in_suc_n: "to_F \<iota> \<in> fst (lnth D n)" and
+ all_prems_active_after: "m > 0 \<Longrightarrow> (\<forall>c\<in> set (prems_of \<iota>). (\<forall>k. k \<ge> n \<longrightarrow> enat k < llength D \<longrightarrow>
c \<in> active_subset (snd (lnth D k))))" and
- suc_n_length: "enat n < llength D" and suc_nth_d_is: "lnth D n = (T2, N2)"
- by (metis less_antisym old.prod.exhaust zero_less_Suc)
- then have i_in_t2: "to_F \<iota> \<in> T2" by simp
- have "\<exists>p\<ge>n. enat (Suc p) < llength D \<and> to_F \<iota> \<in> (fst (lnth D p)) \<and> to_F \<iota> \<notin> (fst (lnth D (Suc p)))"
- proof (rule ccontr)
- assume
- contra: "\<not> (\<exists>p\<ge>n. enat (Suc p) < llength D \<and> to_F \<iota> \<in> (fst (lnth D p)) \<and>
+ suc_n_length: "enat n < llength D" and suc_nth_d_is: "lnth D n = (T2, N2)"
+ by (metis less_antisym old.prod.exhaust zero_less_Suc)
+ then have i_in_t2: "to_F \<iota> \<in> T2" by simp
+ have "\<exists>p\<ge>n. enat (Suc p) < llength D \<and> to_F \<iota> \<in> (fst (lnth D p)) \<and> to_F \<iota> \<notin> (fst (lnth D (Suc p)))"
+ proof (rule ccontr)
+ assume
+ contra: "\<not> (\<exists>p\<ge>n. enat (Suc p) < llength D \<and> to_F \<iota> \<in> (fst (lnth D p)) \<and>
to_F \<iota> \<notin> (fst (lnth D (Suc p))))"
- then have i_in_suc: "p0 \<ge> n \<Longrightarrow> enat (Suc p0) < llength D \<Longrightarrow> to_F \<iota> \<in> (fst (lnth D p0)) \<Longrightarrow>
+ then have i_in_suc: "p0 \<ge> n \<Longrightarrow> enat (Suc p0) < llength D \<Longrightarrow> to_F \<iota> \<in> (fst (lnth D p0)) \<Longrightarrow>
to_F \<iota> \<in> (fst (lnth D (Suc p0)))" for p0
- by blast
- have "p0 \<ge> n \<Longrightarrow> enat p0 < llength D \<Longrightarrow> to_F \<iota> \<in> (fst (lnth D p0))" for p0
- proof (induction rule: nat_induct_at_least)
- case base
- then show ?case using i_in_t2 suc_nth_d_is
+ by blast
+ have "p0 \<ge> n \<Longrightarrow> enat p0 < llength D \<Longrightarrow> to_F \<iota> \<in> (fst (lnth D p0))" for p0
+ proof (induction rule: nat_induct_at_least)
+ case base
+ then show ?case using i_in_t2 suc_nth_d_is
by simp
- next
- case (Suc p0)
- assume p_bigger_n: "n \<le> p0" and
- induct_hyp: "enat p0 < llength D \<Longrightarrow> to_F \<iota> \<in> fst (lnth D p0)" and
- sucsuc_smaller_d: "enat (Suc p0) < llength D"
- have suc_p_bigger_n: "n \<le> p0" using p_bigger_n by simp
- have suc_smaller_d: "enat p0 < llength D"
- using sucsuc_smaller_d Suc_ile_eq dual_order.strict_implies_order by blast
- then have "to_F \<iota> \<in> fst (lnth D p0)" using induct_hyp by blast
- then show ?case using i_in_suc[OF suc_p_bigger_n sucsuc_smaller_d] by blast
- qed
- then have i_in_all_bigger_n: "\<forall>j. j \<ge> n \<and> enat j < llength D \<longrightarrow> to_F \<iota> \<in> (fst (lnth D j))"
- by presburger
- have "llength (lmap fst D) = llength D" by force
- then have "to_F \<iota> \<in> \<Inter> (lnth (lmap fst D) ` {j. n \<le> j \<and> enat j < llength (lmap fst D)})"
- using i_in_all_bigger_n using Suc_le_D by auto
- then have "to_F \<iota> \<in> Liminf_llist (lmap fst D)"
- unfolding Liminf_llist_def using suc_n_length by auto
- then show False using final_schedule by fast
+ next
+ case (Suc p0)
+ assume p_bigger_n: "n \<le> p0" and
+ induct_hyp: "enat p0 < llength D \<Longrightarrow> to_F \<iota> \<in> fst (lnth D p0)" and
+ sucsuc_smaller_d: "enat (Suc p0) < llength D"
+ have suc_p_bigger_n: "n \<le> p0" using p_bigger_n by simp
+ have suc_smaller_d: "enat p0 < llength D"
+ using sucsuc_smaller_d Suc_ile_eq dual_order.strict_implies_order by blast
+ then have "to_F \<iota> \<in> fst (lnth D p0)" using induct_hyp by blast
+ then show ?case using i_in_suc[OF suc_p_bigger_n sucsuc_smaller_d] by blast
qed
- then obtain p where p_greater_n: "p \<ge> n" and p_smaller_d: "enat (Suc p) < llength D" and
- i_in_p: "to_F \<iota> \<in> (fst (lnth D p))" and i_notin_suc_p: "to_F \<iota> \<notin> (fst (lnth D (Suc p)))"
- by blast
- have p_neq_n: "Suc p \<noteq> n" using i_notin_suc_p i_in_suc_n by blast
- have step_p: "lnth D p \<Longrightarrow>LGC lnth D (Suc p)" using deriv p_smaller_d chain_lnth_rel by blast
- then have "\<exists>T1 T2 \<iota> N2 N1 M. lnth D p = (T1, N1) \<and> lnth D (Suc p) = (T2, N2) \<and>
- T1 = T2 \<union> {\<iota>} \<and> T2 \<inter> {\<iota>} = {} \<and> N2 = N1 \<union> M \<and> active_subset M = {} \<and>
- \<iota> \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N1 \<union> M))"
- proof -
- have ci_or_do: "(\<exists>T1 T2 \<iota> N2 N1 M. lnth D p = (T1, N1) \<and> lnth D (Suc p) = (T2, N2) \<and>
- T1 = T2 \<union> {\<iota>} \<and> T2 \<inter> {\<iota>} = {} \<and> N2 = N1 \<union> M \<and> active_subset M = {} \<and>
- \<iota> \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N1 \<union> M))) \<or>
+ then have i_in_all_bigger_n: "\<forall>j. j \<ge> n \<and> enat j < llength D \<longrightarrow> to_F \<iota> \<in> (fst (lnth D j))"
+ by presburger
+ have "llength (lmap fst D) = llength D" by force
+ then have "to_F \<iota> \<in> \<Inter> (lnth (lmap fst D) ` {j. n \<le> j \<and> enat j < llength (lmap fst D)})"
+ using i_in_all_bigger_n using Suc_le_D by auto
+ then have "to_F \<iota> \<in> Liminf_llist (lmap fst D)"
+ unfolding Liminf_llist_def using suc_n_length by auto
+ then show False using final_schedule by fast
+ qed
+ then obtain p where p_greater_n: "p \<ge> n" and p_smaller_d: "enat (Suc p) < llength D" and
+ i_in_p: "to_F \<iota> \<in> (fst (lnth D p))" and i_notin_suc_p: "to_F \<iota> \<notin> (fst (lnth D (Suc p)))"
+ by blast
+ have p_neq_n: "Suc p \<noteq> n" using i_notin_suc_p i_in_suc_n by blast
+ have step_p: "lnth D p \<Longrightarrow>LGC lnth D (Suc p)" using deriv p_smaller_d chain_lnth_rel by blast
+ then have "\<exists>T1 T2 \<iota> N2 N1 M. lnth D p = (T1, N1) \<and> lnth D (Suc p) = (T2, N2) \<and>
+ T1 = T2 \<union> {\<iota>} \<and> N2 = N1 \<union> M \<and> active_subset M = {} \<and>
+ \<iota> \<in> no_labels.Red_Inf_\<G>_Q (fst ` (N1 \<union> M))"
+ proof -
+ have ci_or_do: "(\<exists>T1 T2 \<iota> N2 N1 M. lnth D p = (T1, N1) \<and> lnth D (Suc p) = (T2, N2) \<and>
+ T1 = T2 \<union> {\<iota>} \<and> N2 = N1 \<union> M \<and> active_subset M = {} \<and>
+ \<iota> \<in> no_labels.Red_Inf_\<G>_Q (fst ` (N1 \<union> M))) \<or>
(\<exists>T1 T2 T' N. lnth D p = (T1, N) \<and> lnth D (Suc p) = (T2, N) \<and>
- T1 = T2 \<union> T' \<and> T2 \<inter> T' = {} \<and>
- T' \<inter> no_labels.Non_ground.Inf_from (fst ` active_subset N) = {})"
- using Lazy_Given_Clause_step.simps[of "lnth D p" "lnth D (Suc p)"] step_p i_in_p i_notin_suc_p
- by fastforce
- then have p_greater_n_strict: "n < Suc p"
- using suc_nth_d_is p_greater_n i_in_t2 i_notin_suc_p le_eq_less_or_eq by force
- have "m > 0 \<Longrightarrow> j \<in> {0..<m} \<Longrightarrow> (prems_of (to_F \<iota>))!j \<in> (fst ` (active_subset (snd (lnth D p))))"
- for j
- proof -
- fix j
- assume
- m_pos: "m > 0" and
- j_in: "j \<in> {0..<m}"
- then have "(prems_of \<iota>)!j \<in> (active_subset (snd (lnth D p)))"
- using all_prems_active_after[OF m_pos] p_smaller_d m_def p_greater_n p_neq_n
- by (meson Suc_ile_eq atLeastLessThan_iff dual_order.strict_implies_order nth_mem
+ T1 = T2 \<union> T' \<and> T' \<inter> no_labels.Inf_from (fst ` active_subset N) = {})"
+ using step.simps[of "lnth D p" "lnth D (Suc p)"] step_p i_in_p i_notin_suc_p by fastforce
+ then have p_greater_n_strict: "n < Suc p"
+ using suc_nth_d_is p_greater_n i_in_t2 i_notin_suc_p le_eq_less_or_eq by force
+ have "m > 0 \<Longrightarrow> j \<in> {0..<m} \<Longrightarrow> prems_of (to_F \<iota>) ! j \<in> fst ` active_subset (snd (lnth D p))"
+ for j
+ proof -
+ fix j
+ assume
+ m_pos: "m > 0" and
+ j_in: "j \<in> {0..<m}"
+ then have "prems_of \<iota> ! j \<in> (active_subset (snd (lnth D p)))"
+ using all_prems_active_after[OF m_pos] p_smaller_d m_def p_greater_n p_neq_n
+ by (meson Suc_ile_eq atLeastLessThan_iff dual_order.strict_implies_order nth_mem
p_greater_n_strict)
- then have "fst ((prems_of \<iota>)!j) \<in> (fst ` (active_subset (snd (lnth D p))))"
- by blast
- then show "(prems_of (to_F \<iota>))!j \<in> (fst ` (active_subset (snd (lnth D p))))"
+ then have "fst (prems_of \<iota> ! j) \<in> fst ` active_subset (snd (lnth D p))"
+ by blast
+ then show "prems_of (to_F \<iota>) ! j \<in> fst ` active_subset (snd (lnth D p))"
unfolding to_F_def using j_in m_def by simp
- qed
- then have prems_i_active_p: "m > 0 \<Longrightarrow>
- to_F \<iota> \<in> no_labels.Non_ground.Inf_from (fst ` active_subset (snd (lnth D p)))"
- using i_in_F unfolding no_labels.Non_ground.Inf_from_def
- by (smt atLeast0LessThan in_set_conv_nth lessThan_iff m_def_F mem_Collect_eq subsetI)
- have "m = 0 \<Longrightarrow> (\<exists>T1 T2 \<iota> N2 N1 M. lnth D p = (T1, N1) \<and> lnth D (Suc p) = (T2, N2) \<and>
- T1 = T2 \<union> {\<iota>} \<and> T2 \<inter> {\<iota>} = {} \<and> N2 = N1 \<union> M \<and> active_subset M = {} \<and>
- \<iota> \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N1 \<union> M)))"
- using ci_or_do premise_free_inf_always_from[of "to_F \<iota>" "fst ` active_subset _", OF i_in_F]
- m_def i_in_p i_notin_suc_p m_def_F by auto
- then show "(\<exists>T1 T2 \<iota> N2 N1 M. lnth D p = (T1, N1) \<and> lnth D (Suc p) = (T2, N2) \<and>
- T1 = T2 \<union> {\<iota>} \<and> T2 \<inter> {\<iota>} = {} \<and> N2 = N1 \<union> M \<and> active_subset M = {} \<and>
- \<iota> \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (N1 \<union> M)))"
- using ci_or_do i_in_p i_notin_suc_p prems_i_active_p unfolding active_subset_def
- by force
qed
- then obtain T1p T2p N1p N2p Mp where "lnth D p = (T1p, N1p)" and
- suc_p_is: "lnth D (Suc p) = (T2p, N2p)" and "T1p = T2p \<union> {to_F \<iota>}" and "T2p \<inter> {to_F \<iota>} = {}" and
- n2p_is: "N2p = N1p \<union> Mp"and "active_subset Mp = {}" and
- i_in_red_inf: "to_F \<iota> \<in> no_labels.empty_ord_lifted_calc_w_red_crit_family.Red_Inf_Q
+ then have prems_i_active_p: "m > 0 \<Longrightarrow>
+ to_F \<iota> \<in> no_labels.Inf_from (fst ` active_subset (snd (lnth D p)))"
+ using i_in_F unfolding no_labels.Inf_from_def
+ by (smt atLeast0LessThan in_set_conv_nth lessThan_iff m_def_F mem_Collect_eq subsetI)
+ have "m = 0 \<Longrightarrow> (\<exists>T1 T2 \<iota> N2 N1 M. lnth D p = (T1, N1) \<and> lnth D (Suc p) = (T2, N2) \<and>
+ T1 = T2 \<union> {\<iota>} \<and> N2 = N1 \<union> M \<and> active_subset M = {} \<and>
+ \<iota> \<in> no_labels.Red_Inf_\<G>_Q (fst ` (N1 \<union> M)))"
+ using ci_or_do premise_free_inf_always_from[of "to_F \<iota>" "fst ` active_subset _", OF i_in_F]
+ m_def i_in_p i_notin_suc_p m_def_F by auto
+ then show "(\<exists>T1 T2 \<iota> N2 N1 M. lnth D p = (T1, N1) \<and> lnth D (Suc p) = (T2, N2) \<and>
+ T1 = T2 \<union> {\<iota>} \<and> N2 = N1 \<union> M \<and> active_subset M = {} \<and>
+ \<iota> \<in> no_labels.Red_Inf_\<G>_Q (fst ` (N1 \<union> M)))"
+ using ci_or_do i_in_p i_notin_suc_p prems_i_active_p unfolding active_subset_def by force
+ qed
+ then obtain T1p T2p N1p N2p Mp where "lnth D p = (T1p, N1p)" and
+ suc_p_is: "lnth D (Suc p) = (T2p, N2p)" and "T1p = T2p \<union> {to_F \<iota>}" and "T2p \<inter> {to_F \<iota>} = {}" and
+ n2p_is: "N2p = N1p \<union> Mp" and "active_subset Mp = {}" and
+ i_in_red_inf: "to_F \<iota> \<in> no_labels.Red_Inf_\<G>_Q
(fst ` (N1p \<union> Mp))"
- using i_in_p i_notin_suc_p by fastforce
- have "to_F \<iota> \<in> no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q (fst ` (snd (lnth D (Suc p))))"
- using i_in_red_inf suc_p_is n2p_is by fastforce
- then have "\<forall>q. (\<G>_Inf_q q (to_F \<iota>) \<noteq> None \<and>
- the (\<G>_Inf_q q (to_F \<iota>)) \<subseteq> Red_Inf_q q (\<Union> (\<G>_F_q q ` (fst ` (snd (lnth D (Suc p)))))))
+ using i_in_p i_notin_suc_p by fastforce
+ have "to_F \<iota> \<in> no_labels.Red_Inf_Q (fst ` (snd (lnth D (Suc p))))"
+ using i_in_red_inf suc_p_is n2p_is by fastforce
+ then have "\<forall>q \<in> Q. (\<G>_Inf_q q (to_F \<iota>) \<noteq> None \<and>
+ the (\<G>_Inf_q q (to_F \<iota>)) \<subseteq> Red_Inf_q q (\<Union> (\<G>_F_q q ` fst ` snd (lnth D (Suc p)))))
\<or> (\<G>_Inf_q q (to_F \<iota>) = None \<and>
- \<G>_F_q q (concl_of (to_F \<iota>)) \<subseteq> (\<Union> (\<G>_F_q q ` (fst ` (snd (lnth D (Suc p)))))) \<union>
- Red_F_q q (\<Union> (\<G>_F_q q ` (fst ` (snd (lnth D (Suc p)))))))"
- unfolding to_F_def no_labels.lifted_calc_w_red_crit_family.Red_Inf_Q_def
- no_labels.Red_Inf_\<G>_q_def no_labels.\<G>_set_q_def
- by fastforce
- then have "\<iota> \<in> with_labels.Red_Inf_Q (snd (lnth D (Suc p)))"
- unfolding to_F_def with_labels.Red_Inf_Q_def Red_Inf_\<G>_L_q_def \<G>_Inf_L_q_def \<G>_set_L_q_def
- \<G>_F_L_q_def using i_in_inf_fl by auto
- then show "\<iota> \<in> labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.Sup_Red_Inf_llist (lmap snd D)"
- unfolding
- labeled_ord_red_crit_fam.empty_ord_lifted_calc_w_red_crit_family.inter_red_crit_calculus.Sup_Red_Inf_llist_def
- using red_inf_equiv2 suc_n_length p_smaller_d by auto
- qed
+ \<G>_F_q q (concl_of (to_F \<iota>)) \<subseteq> \<Union> (\<G>_F_q q ` fst ` snd (lnth D (Suc p))) \<union>
+ Red_F_q q (\<Union> (\<G>_F_q q ` fst ` snd (lnth D (Suc p)))))"
+ unfolding to_F_def no_labels.Red_Inf_Q_def no_labels.Red_Inf_\<G>_q_def by blast
+ then have "\<iota> \<in> Red_Inf_\<G>_Q (snd (lnth D (Suc p)))"
+ using i_in_inf_fl unfolding Red_Inf_\<G>_Q_def Red_Inf_\<G>_q_def by (simp add: to_F_def)
+ then show "\<iota> \<in> Sup_llist (lmap Red_Inf_\<G>_Q (lmap snd D))"
+ unfolding Sup_llist_def using suc_n_length p_smaller_d by auto
+qed
+
+theorem lgc_complete_Liminf:
+ assumes
+ deriv: "chain (\<Longrightarrow>LGC) D" and
+ init_state: "active_subset (snd (lnth D 0)) = {}" and
+ final_state: "passive_subset (Liminf_llist (lmap snd D)) = {}" and
+ no_prems_init_active: "\<forall>\<iota> \<in> Inf_F. length (prems_of \<iota>) = 0 \<longrightarrow> \<iota> \<in> fst (lnth D 0)" and
+ final_schedule: "Liminf_llist (lmap fst D) = {}" and
+ b_in: "B \<in> Bot_F" and
+ bot_entailed: "no_labels.entails_\<G>_Q (fst ` (snd (lnth D 0))) {B}"
+ shows "\<exists>BL \<in> Bot_FL. BL \<in> Liminf_llist (lmap snd D)"
+proof -
+ have labeled_b_in: "(B, active) \<in> Bot_FL" using b_in by simp
+ have simp_snd_lmap: "lnth (lmap snd D) 0 = snd (lnth D 0)"
+ using lnth_lmap[of 0 D snd] chain_length_pos[OF deriv] by (simp add: zero_enat_def)
+ have labeled_bot_entailed: "entails_\<G>_L_Q (snd (lnth D 0)) {(B, active)}"
+ using labeled_entailment_lifting bot_entailed by fastforce
+ have "fair (lmap snd D)"
+ using lgc_fair[OF deriv init_state final_state no_prems_init_active final_schedule] .
+ then show ?thesis
+ using dynamic_refutational_complete_Liminf labeled_b_in lgc_to_red[OF deriv]
+ labeled_bot_entailed simp_snd_lmap std_Red_Inf_Q_eq
+ by presburger
qed
(* thm:lgc-completeness *)
-theorem lgc_complete: "chain (\<Longrightarrow>LGC) D \<Longrightarrow> llength D > 0 \<Longrightarrow> active_subset (snd (lnth D 0)) = {} \<Longrightarrow>
- non_active_subset (Liminf_llist (lmap snd D)) = {} \<Longrightarrow>
- (\<forall>\<iota> \<in> Inf_F. length (prems_of \<iota>) = 0 \<longrightarrow> \<iota> \<in> (fst (lnth D 0))) \<Longrightarrow>
- Liminf_llist (lmap fst D) = {} \<Longrightarrow> B \<in> Bot_F \<Longrightarrow> no_labels.entails_\<G>_Q (fst ` (snd (lnth D 0))) {B} \<Longrightarrow>
- \<exists>i. enat i < llength D \<and> (\<exists>BL\<in> Bot_FL. BL \<in> (snd (lnth D i)))"
-proof -
- fix B
- assume
+theorem lgc_complete:
+ assumes
deriv: "chain (\<Longrightarrow>LGC) D" and
- not_empty_d: "llength D > 0" and
init_state: "active_subset (snd (lnth D 0)) = {}" and
- final_state: "non_active_subset (Liminf_llist (lmap snd D)) = {}" and
- no_prems_init_active: "\<forall>\<iota> \<in> Inf_F. length (prems_of \<iota>) = 0 \<longrightarrow> \<iota> \<in> (fst (lnth D 0))" and
+ final_state: "passive_subset (Liminf_llist (lmap snd D)) = {}" and
+ no_prems_init_active: "\<forall>\<iota> \<in> Inf_F. length (prems_of \<iota>) = 0 \<longrightarrow> \<iota> \<in> fst (lnth D 0)" and
final_schedule: "Liminf_llist (lmap fst D) = {}" and
b_in: "B \<in> Bot_F" and
- bot_entailed: "no_labels.entails_\<G>_Q (fst ` (snd (lnth D 0))) {B}"
- have labeled_b_in: "(B,active) \<in> Bot_FL" unfolding Bot_FL_def using b_in by simp
- have not_empty_d2: "\<not> lnull (lmap snd D)" using not_empty_d by force
- have simp_snd_lmap: "lnth (lmap snd D) 0 = snd (lnth D 0)"
- using lnth_lmap[of 0 D snd] not_empty_d by (simp add: zero_enat_def)
- have labeled_bot_entailed: "entails_\<G>_L_Q (snd (lnth D 0)) {(B,active)}"
- using labeled_entailment_lifting bot_entailed by fastforce
- have "fair (lmap snd D)"
- using lgc_fair[OF deriv not_empty_d init_state final_state no_prems_init_active final_schedule] .
- then have "\<exists>i \<in> {i. enat i < llength D}. \<exists>BL\<in>Bot_FL. BL \<in> (snd (lnth D i))"
- using labeled_ordered_dynamic_ref_comp labeled_b_in not_empty_d2 lgc_to_red[OF deriv]
- labeled_bot_entailed entail_equiv simp_snd_lmap
- unfolding dynamic_refutational_complete_calculus_def
- dynamic_refutational_complete_calculus_axioms_def
- by (metis (mono_tags, lifting) llength_lmap lnth_lmap mem_Collect_eq)
- then show ?thesis by blast
+ bot_entailed: "no_labels.entails_\<G>_Q (fst ` snd (lnth D 0)) {B}"
+ shows "\<exists>i. enat i < llength D \<and> (\<exists>BL \<in> Bot_FL. BL \<in> snd (lnth D i))"
+proof -
+ have "\<exists>BL\<in>Bot_FL. BL \<in> Liminf_llist (lmap snd D)"
+ using assms by (rule lgc_complete_Liminf)
+ then show ?thesis
+ unfolding Liminf_llist_def by auto
qed
end
end
diff --git a/thys/Saturation_Framework/ROOT b/thys/Saturation_Framework/ROOT
--- a/thys/Saturation_Framework/ROOT
+++ b/thys/Saturation_Framework/ROOT
@@ -1,15 +1,16 @@
chapter AFP
session "Saturation_Framework" (AFP) = Ordered_Resolution_Prover +
options [timeout=300]
sessions
+ Lambda_Free_RPOs
Well_Quasi_Orders
theories
Consequence_Relations_and_Inference_Systems
Calculi
Lifting_to_Non_Ground_Calculi
Labeled_Lifting_to_Non_Ground_Calculi
Prover_Architectures
document_files
"root.tex"
"root.tex"
diff --git a/thys/Slicing/While/Semantics.thy b/thys/Slicing/While/Semantics.thy
--- a/thys/Slicing/While/Semantics.thy
+++ b/thys/Slicing/While/Semantics.thy
@@ -1,361 +1,363 @@
section \<open>Semantics\<close>
theory Semantics imports Labels Com begin
subsection \<open>Small Step Semantics\<close>
inductive red :: "cmd * state \<Rightarrow> cmd * state \<Rightarrow> bool"
and red' :: "cmd \<Rightarrow> state \<Rightarrow> cmd \<Rightarrow> state \<Rightarrow> bool"
("((1\<langle>_,/_\<rangle>) \<rightarrow>/ (1\<langle>_,/_\<rangle>))" [0,0,0,0] 81)
where
"\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle> == red (c,s) (c',s')"
| RedLAss:
"\<langle>V:=e,s\<rangle> \<rightarrow> \<langle>Skip,s(V:=(interpret e s))\<rangle>"
| SeqRed:
"\<langle>c\<^sub>1,s\<rangle> \<rightarrow> \<langle>c\<^sub>1',s'\<rangle> \<Longrightarrow> \<langle>c\<^sub>1;;c\<^sub>2,s\<rangle> \<rightarrow> \<langle>c\<^sub>1';;c\<^sub>2,s'\<rangle>"
| RedSeq:
"\<langle>Skip;;c\<^sub>2,s\<rangle> \<rightarrow> \<langle>c\<^sub>2,s\<rangle>"
| RedCondTrue:
"interpret b s = Some true \<Longrightarrow> \<langle>if (b) c\<^sub>1 else c\<^sub>2,s\<rangle> \<rightarrow> \<langle>c\<^sub>1,s\<rangle>"
| RedCondFalse:
"interpret b s = Some false \<Longrightarrow> \<langle>if (b) c\<^sub>1 else c\<^sub>2,s\<rangle> \<rightarrow> \<langle>c\<^sub>2,s\<rangle>"
| RedWhileTrue:
"interpret b s = Some true \<Longrightarrow> \<langle>while (b) c,s\<rangle> \<rightarrow> \<langle>c;;while (b) c,s\<rangle>"
| RedWhileFalse:
"interpret b s = Some false \<Longrightarrow> \<langle>while (b) c,s\<rangle> \<rightarrow> \<langle>Skip,s\<rangle>"
lemmas red_induct = red.induct[split_format (complete)]
abbreviation reds ::"cmd \<Rightarrow> state \<Rightarrow> cmd \<Rightarrow> state \<Rightarrow> bool"
("((1\<langle>_,/_\<rangle>) \<rightarrow>*/ (1\<langle>_,/_\<rangle>))" [0,0,0,0] 81) where
"\<langle>c,s\<rangle> \<rightarrow>* \<langle>c',s'\<rangle> == red\<^sup>*\<^sup>* (c,s) (c',s')"
subsection\<open>Label Semantics\<close>
inductive step :: "cmd \<Rightarrow> cmd \<Rightarrow> state \<Rightarrow> nat \<Rightarrow> cmd \<Rightarrow> state \<Rightarrow> nat \<Rightarrow> bool"
("(_ \<turnstile> (1\<langle>_,/_,/_\<rangle>) \<leadsto>/ (1\<langle>_,/_,/_\<rangle>))" [51,0,0,0,0,0,0] 81)
where
StepLAss:
"V:=e \<turnstile> \<langle>V:=e,s,0\<rangle> \<leadsto> \<langle>Skip,s(V:=(interpret e s)),1\<rangle>"
| StepSeq:
"\<lbrakk>labels (c\<^sub>1;;c\<^sub>2) l (Skip;;c\<^sub>2); labels (c\<^sub>1;;c\<^sub>2) #:c\<^sub>1 c\<^sub>2; l < #:c\<^sub>1\<rbrakk>
\<Longrightarrow> c\<^sub>1;;c\<^sub>2 \<turnstile> \<langle>Skip;;c\<^sub>2,s,l\<rangle> \<leadsto> \<langle>c\<^sub>2,s,#:c\<^sub>1\<rangle>"
| StepSeqWhile:
"labels (while (b) c') l (Skip;;while (b) c')
\<Longrightarrow> while (b) c' \<turnstile> \<langle>Skip;;while (b) c',s,l\<rangle> \<leadsto> \<langle>while (b) c',s,0\<rangle>"
| StepCondTrue:
"interpret b s = Some true
\<Longrightarrow> if (b) c\<^sub>1 else c\<^sub>2 \<turnstile> \<langle>if (b) c\<^sub>1 else c\<^sub>2,s,0\<rangle> \<leadsto> \<langle>c\<^sub>1,s,1\<rangle>"
| StepCondFalse:
"interpret b s = Some false
\<Longrightarrow> if (b) c\<^sub>1 else c\<^sub>2 \<turnstile> \<langle>if (b) c\<^sub>1 else c\<^sub>2,s,0\<rangle> \<leadsto> \<langle>c\<^sub>2,s,#:c\<^sub>1 + 1\<rangle>"
| StepWhileTrue:
"interpret b s = Some true
\<Longrightarrow> while (b) c \<turnstile> \<langle>while (b) c,s,0\<rangle> \<leadsto> \<langle>c;;while (b) c,s,2\<rangle>"
| StepWhileFalse:
"interpret b s = Some false \<Longrightarrow> while (b) c \<turnstile> \<langle>while (b) c,s,0\<rangle> \<leadsto> \<langle>Skip,s,1\<rangle>"
| StepRecSeq1:
"prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>
\<Longrightarrow> prog;;c\<^sub>2 \<turnstile> \<langle>c;;c\<^sub>2,s,l\<rangle> \<leadsto> \<langle>c';;c\<^sub>2,s',l'\<rangle>"
| StepRecSeq2:
"prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>
\<Longrightarrow> c\<^sub>1;;prog \<turnstile> \<langle>c,s,l + #:c\<^sub>1\<rangle> \<leadsto> \<langle>c',s',l' + #:c\<^sub>1\<rangle>"
| StepRecCond1:
"prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>
\<Longrightarrow> if (b) prog else c\<^sub>2 \<turnstile> \<langle>c,s,l + 1\<rangle> \<leadsto> \<langle>c',s',l' + 1\<rangle>"
| StepRecCond2:
"prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>
\<Longrightarrow> if (b) c\<^sub>1 else prog \<turnstile> \<langle>c,s,l + #:c\<^sub>1 + 1\<rangle> \<leadsto> \<langle>c',s',l' + #:c\<^sub>1 + 1\<rangle>"
| StepRecWhile:
"cx \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>
\<Longrightarrow> while (b) cx \<turnstile> \<langle>c;;while (b) cx,s,l + 2\<rangle> \<leadsto> \<langle>c';;while (b) cx,s',l' + 2\<rangle>"
lemma step_label_less:
"prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle> \<Longrightarrow> l < #:prog \<and> l' < #:prog"
proof(induct rule:step.induct)
case (StepSeq c\<^sub>1 c\<^sub>2 l s)
from \<open>labels (c\<^sub>1;;c\<^sub>2) l (Skip;;c\<^sub>2)\<close>
have "l < #:(c\<^sub>1;; c\<^sub>2)" by(rule label_less_num_inner_nodes)
thus ?case by(simp add:num_inner_nodes_gr_0)
next
case (StepSeqWhile b cx l s)
from \<open>labels (while (b) cx) l (Skip;;while (b) cx)\<close>
have "l < #:(while (b) cx)" by(rule label_less_num_inner_nodes)
thus ?case by simp
qed (auto intro:num_inner_nodes_gr_0)
abbreviation steps :: "cmd \<Rightarrow> cmd \<Rightarrow> state \<Rightarrow> nat \<Rightarrow> cmd \<Rightarrow> state \<Rightarrow> nat \<Rightarrow> bool"
("(_ \<turnstile> (1\<langle>_,/_,/_\<rangle>) \<leadsto>*/ (1\<langle>_,/_,/_\<rangle>))" [51,0,0,0,0,0,0] 81) where
"prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c',s',l'\<rangle> ==
(\<lambda>(c,s,l) (c',s',l'). prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>)\<^sup>*\<^sup>* (c,s,l) (c',s',l')"
subsection\<open>Proof of bisimulation of @{term "\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>"}\\
and @{term "prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c',s',l'\<rangle>"} via @{term "labels"}\<close>
(*<*)
lemmas converse_rtranclp_induct3 =
converse_rtranclp_induct[of _ "(ax,ay,az)" "(bx,by,bz)", split_rule,
consumes 1, case_names refl step]
(*>*)
subsubsection \<open>From @{term "prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c',s',l'\<rangle>"} to
@{term "\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>"}\<close>
lemma step_red:
"prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle> \<Longrightarrow> \<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>"
by(induct rule:step.induct,rule RedLAss,auto intro:red.intros)
lemma steps_reds:
"prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c',s',l'\<rangle> \<Longrightarrow> \<langle>c,s\<rangle> \<rightarrow>* \<langle>c',s'\<rangle>"
proof(induct rule:converse_rtranclp_induct3)
case refl thus ?case by simp
next
case (step c s l c'' s'' l'')
then have "prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c'',s'',l''\<rangle>"
and "\<langle>c'',s''\<rangle> \<rightarrow>* \<langle>c',s'\<rangle>" by simp_all
from \<open>prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c'',s'',l''\<rangle>\<close> have "\<langle>c,s\<rangle> \<rightarrow> \<langle>c'',s''\<rangle>"
by(fastforce intro:step_red)
with \<open>\<langle>c'',s''\<rangle> \<rightarrow>* \<langle>c',s'\<rangle>\<close> show ?case
by(fastforce intro:converse_rtranclp_into_rtranclp)
qed
(*<*)declare fun_upd_apply [simp del] One_nat_def [simp del](*>*)
subsubsection \<open>From @{term "\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>"} and @{term labels} to
@{term "prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c',s',l'\<rangle>"}\<close>
lemma red_step:
"\<lbrakk>labels prog l c; \<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<rbrakk>
\<Longrightarrow> \<exists>l'. prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle> \<and> labels prog l' c'"
proof(induct arbitrary:c' rule:labels.induct)
case (Labels_Base c)
from \<open>\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close> show ?case
proof(induct rule:red_induct)
case (RedLAss V e s)
have "V:=e \<turnstile> \<langle>V:=e,s,0\<rangle> \<leadsto> \<langle>Skip,s(V:=(interpret e s)),1\<rangle>" by(rule StepLAss)
have "labels (V:=e) 1 Skip" by(fastforce intro:Labels_LAss)
with \<open>V:=e \<turnstile> \<langle>V:=e,s,0\<rangle> \<leadsto> \<langle>Skip,s(V:=(interpret e s)),1\<rangle>\<close> show ?case by blast
next
case (SeqRed c\<^sub>1 s c\<^sub>1' s' c\<^sub>2)
from \<open>\<exists>l'. c\<^sub>1 \<turnstile> \<langle>c\<^sub>1,s,0\<rangle> \<leadsto> \<langle>c\<^sub>1',s',l'\<rangle> \<and> labels c\<^sub>1 l' c\<^sub>1'\<close>
obtain l' where "c\<^sub>1 \<turnstile> \<langle>c\<^sub>1,s,0\<rangle> \<leadsto> \<langle>c\<^sub>1',s',l'\<rangle>" and "labels c\<^sub>1 l' c\<^sub>1'" by blast
from \<open>c\<^sub>1 \<turnstile> \<langle>c\<^sub>1,s,0\<rangle> \<leadsto> \<langle>c\<^sub>1',s',l'\<rangle>\<close> have "c\<^sub>1;;c\<^sub>2 \<turnstile> \<langle>c\<^sub>1;;c\<^sub>2,s,0\<rangle> \<leadsto> \<langle>c\<^sub>1';;c\<^sub>2,s',l'\<rangle>"
by(rule StepRecSeq1)
moreover
from \<open>labels c\<^sub>1 l' c\<^sub>1'\<close> have "labels (c\<^sub>1;;c\<^sub>2) l' (c\<^sub>1';;c\<^sub>2)" by(rule Labels_Seq1)
ultimately show ?case by blast
next
case (RedSeq c\<^sub>2 s)
have "labels c\<^sub>2 0 c\<^sub>2" by(rule Labels.Labels_Base)
hence "labels (Skip;;c\<^sub>2) (0 + #:Skip) c\<^sub>2" by(rule Labels_Seq2)
have "labels (Skip;;c\<^sub>2) 0 (Skip;;c\<^sub>2)" by(rule Labels.Labels_Base)
with \<open>labels (Skip;;c\<^sub>2) (0 + #:Skip) c\<^sub>2\<close>
have "Skip;;c\<^sub>2 \<turnstile> \<langle>Skip;;c\<^sub>2,s,0\<rangle> \<leadsto> \<langle>c\<^sub>2,s,#:Skip\<rangle>"
by(fastforce intro:StepSeq)
with \<open>labels (Skip;;c\<^sub>2) (0 + #:Skip) c\<^sub>2\<close> show ?case by auto
next
case (RedCondTrue b s c\<^sub>1 c\<^sub>2)
from \<open>interpret b s = Some true\<close>
have "if (b) c\<^sub>1 else c\<^sub>2 \<turnstile> \<langle>if (b) c\<^sub>1 else c\<^sub>2,s,0\<rangle> \<leadsto> \<langle>c\<^sub>1,s,1\<rangle>"
by(rule StepCondTrue)
have "labels (if (b) c\<^sub>1 else c\<^sub>2) (0 + 1) c\<^sub>1"
by(rule Labels_CondTrue,rule Labels.Labels_Base)
with \<open>if (b) c\<^sub>1 else c\<^sub>2 \<turnstile> \<langle>if (b) c\<^sub>1 else c\<^sub>2,s,0\<rangle> \<leadsto> \<langle>c\<^sub>1,s,1\<rangle>\<close> show ?case by auto
next
case (RedCondFalse b s c\<^sub>1 c\<^sub>2)
from \<open>interpret b s = Some false\<close>
have "if (b) c\<^sub>1 else c\<^sub>2 \<turnstile> \<langle>if (b) c\<^sub>1 else c\<^sub>2,s,0\<rangle> \<leadsto> \<langle>c\<^sub>2,s,#:c\<^sub>1 + 1\<rangle>"
by(rule StepCondFalse)
have "labels (if (b) c\<^sub>1 else c\<^sub>2) (0 + #:c\<^sub>1 + 1) c\<^sub>2"
by(rule Labels_CondFalse,rule Labels.Labels_Base)
with \<open>if (b) c\<^sub>1 else c\<^sub>2 \<turnstile> \<langle>if (b) c\<^sub>1 else c\<^sub>2,s,0\<rangle> \<leadsto> \<langle>c\<^sub>2,s,#:c\<^sub>1 + 1\<rangle>\<close>
show ?case by auto
next
case (RedWhileTrue b s c)
from \<open>interpret b s = Some true\<close>
have "while (b) c \<turnstile> \<langle>while (b) c,s,0\<rangle> \<leadsto> \<langle>c;; while (b) c,s,2\<rangle>"
by(rule StepWhileTrue)
have "labels (while (b) c) (0 + 2) (c;; while (b) c)"
by(rule Labels_WhileBody,rule Labels.Labels_Base)
with \<open>while (b) c \<turnstile> \<langle>while (b) c,s,0\<rangle> \<leadsto> \<langle>c;; while (b) c,s,2\<rangle>\<close>
show ?case by(auto simp del:add_2_eq_Suc')
next
case (RedWhileFalse b s c)
from \<open>interpret b s = Some false\<close>
have "while (b) c \<turnstile> \<langle>while (b) c,s,0\<rangle> \<leadsto> \<langle>Skip,s,1\<rangle>"
by(rule StepWhileFalse)
have "labels (while (b) c) 1 Skip" by(rule Labels_WhileExit)
with \<open>while (b) c \<turnstile> \<langle>while (b) c,s,0\<rangle> \<leadsto> \<langle>Skip,s,1\<rangle>\<close> show ?case by auto
qed
next
case (Labels_LAss V e)
from \<open>\<langle>Skip,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close> have False by(auto elim:red.cases)
thus ?case by simp
next
case (Labels_Seq1 c\<^sub>1 l c c\<^sub>2)
note IH = \<open>\<And>c'. \<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle> \<Longrightarrow>
\<exists>l'. c\<^sub>1 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle> \<and> labels c\<^sub>1 l' c'\<close>
from \<open>\<langle>c;;c\<^sub>2,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close>
have "(c = Skip \<and> c' = c\<^sub>2 \<and> s = s') \<or> (\<exists>c''. c' = c'';;c\<^sub>2)"
by -(erule red.cases,auto)
thus ?case
proof
assume [simp]:"c = Skip \<and> c' = c\<^sub>2 \<and> s = s'"
from \<open>labels c\<^sub>1 l c\<close> have "l < #:c\<^sub>1"
by(rule label_less_num_inner_nodes[simplified])
have "labels (c\<^sub>1;;c\<^sub>2) (0 + #:c\<^sub>1) c\<^sub>2"
by(rule Labels_Seq2,rule Labels_Base)
from \<open>labels c\<^sub>1 l c\<close> have "labels (c\<^sub>1;; c\<^sub>2) l (Skip;;c\<^sub>2)"
by(fastforce intro:Labels.Labels_Seq1)
with \<open>labels (c\<^sub>1;;c\<^sub>2) (0 + #:c\<^sub>1) c\<^sub>2\<close> \<open>l < #:c\<^sub>1\<close>
have "c\<^sub>1;; c\<^sub>2 \<turnstile> \<langle>Skip;;c\<^sub>2,s,l\<rangle> \<leadsto> \<langle>c\<^sub>2,s,#:c\<^sub>1\<rangle>"
by(fastforce intro:StepSeq)
with \<open>labels (c\<^sub>1;;c\<^sub>2) (0 + #:c\<^sub>1) c\<^sub>2\<close> show ?case by auto
next
assume "\<exists>c''. c' = c'';;c\<^sub>2"
then obtain c'' where [simp]:"c' = c'';;c\<^sub>2" by blast
+ have "c\<^sub>2 \<noteq> c'';; c\<^sub>2"
+ by (induction c\<^sub>2) auto
with \<open>\<langle>c;;c\<^sub>2,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close> have "\<langle>c,s\<rangle> \<rightarrow> \<langle>c'',s'\<rangle>"
- by(auto elim!:red.cases,induct c\<^sub>2,auto)
+ by (auto elim!:red.cases)
from IH[OF this] obtain l' where "c\<^sub>1 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c'',s',l'\<rangle>"
and "labels c\<^sub>1 l' c''" by blast
from \<open>c\<^sub>1 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c'',s',l'\<rangle>\<close> have "c\<^sub>1;;c\<^sub>2 \<turnstile> \<langle>c;;c\<^sub>2,s,l\<rangle> \<leadsto> \<langle>c'';;c\<^sub>2,s',l'\<rangle>"
by(rule StepRecSeq1)
from \<open>labels c\<^sub>1 l' c''\<close> have "labels (c\<^sub>1;;c\<^sub>2) l' (c'';;c\<^sub>2)"
by(rule Labels.Labels_Seq1)
with \<open>c\<^sub>1;;c\<^sub>2 \<turnstile> \<langle>c;;c\<^sub>2,s,l\<rangle> \<leadsto> \<langle>c'';;c\<^sub>2,s',l'\<rangle>\<close> show ?case by auto
qed
next
case (Labels_Seq2 c\<^sub>2 l c c\<^sub>1 c')
note IH = \<open>\<And>c'. \<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle> \<Longrightarrow>
\<exists>l'. c\<^sub>2 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle> \<and> labels c\<^sub>2 l' c'\<close>
from IH[OF \<open>\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close>] obtain l' where "c\<^sub>2 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>"
and "labels c\<^sub>2 l' c'" by blast
from \<open>c\<^sub>2 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>\<close> have "c\<^sub>1;; c\<^sub>2 \<turnstile> \<langle>c,s,l + #:c\<^sub>1\<rangle> \<leadsto> \<langle>c',s',l' + #:c\<^sub>1\<rangle>"
by(rule StepRecSeq2)
moreover
from \<open>labels c\<^sub>2 l' c'\<close> have "labels (c\<^sub>1;;c\<^sub>2) (l' + #:c\<^sub>1) c'"
by(rule Labels.Labels_Seq2)
ultimately show ?case by blast
next
case (Labels_CondTrue c\<^sub>1 l c b c\<^sub>2 c')
note label = \<open>labels c\<^sub>1 l c\<close> and red = \<open>\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close>
and IH = \<open>\<And>c'. \<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle> \<Longrightarrow>
\<exists>l'. c\<^sub>1 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle> \<and> labels c\<^sub>1 l' c'\<close>
from IH[OF \<open>\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close>] obtain l' where "c\<^sub>1 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>"
and "labels c\<^sub>1 l' c'" by blast
from \<open>c\<^sub>1 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>\<close>
have "if (b) c\<^sub>1 else c\<^sub>2 \<turnstile> \<langle>c,s,l + 1\<rangle> \<leadsto> \<langle>c',s',l' + 1\<rangle>"
by(rule StepRecCond1)
moreover
from \<open>labels c\<^sub>1 l' c'\<close> have "labels (if (b) c\<^sub>1 else c\<^sub>2) (l' + 1) c'"
by(rule Labels.Labels_CondTrue)
ultimately show ?case by blast
next
case (Labels_CondFalse c\<^sub>2 l c b c\<^sub>1 c')
note IH = \<open>\<And>c'. \<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle> \<Longrightarrow>
\<exists>l'. c\<^sub>2 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle> \<and> labels c\<^sub>2 l' c'\<close>
from IH[OF \<open>\<langle>c,s\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close>] obtain l' where "c\<^sub>2 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>"
and "labels c\<^sub>2 l' c'" by blast
from \<open>c\<^sub>2 \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>\<close>
have "if (b) c\<^sub>1 else c\<^sub>2 \<turnstile> \<langle>c,s,l + #:c\<^sub>1 + 1\<rangle> \<leadsto> \<langle>c',s',l' + #:c\<^sub>1 + 1\<rangle>"
by(rule StepRecCond2)
moreover
from \<open>labels c\<^sub>2 l' c'\<close> have "labels (if (b) c\<^sub>1 else c\<^sub>2) (l' + #:c\<^sub>1 + 1) c'"
by(rule Labels.Labels_CondFalse)
ultimately show ?case by blast
next
case (Labels_WhileBody c' l c b cx)
note IH = \<open>\<And>c''. \<langle>c,s\<rangle> \<rightarrow> \<langle>c'',s'\<rangle> \<Longrightarrow>
\<exists>l'. c' \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c'',s',l'\<rangle> \<and> labels c' l' c''\<close>
from \<open>\<langle>c;;while (b) c',s\<rangle> \<rightarrow> \<langle>cx,s'\<rangle>\<close>
have "(c = Skip \<and> cx = while (b) c' \<and> s = s') \<or> (\<exists>c''. cx = c'';;while (b) c')"
by -(erule red.cases,auto)
thus ?case
proof
assume [simp]:"c = Skip \<and> cx = while (b) c' \<and> s = s'"
have "labels (while (b) c') 0 (while (b) c')"
by(fastforce intro:Labels_Base)
from \<open>labels c' l c\<close> have "labels (while (b) c') (l + 2) (Skip;;while (b) c')"
by(fastforce intro:Labels.Labels_WhileBody simp del:add_2_eq_Suc')
hence "while (b) c' \<turnstile> \<langle>Skip;;while (b) c',s,l + 2\<rangle> \<leadsto> \<langle>while (b) c',s,0\<rangle>"
by(rule StepSeqWhile)
with \<open>labels (while (b) c') 0 (while (b) c')\<close> show ?case by simp blast
next
assume "\<exists>c''. cx = c'';;while (b) c'"
then obtain c'' where [simp]:"cx = c'';;while (b) c'" by blast
with \<open>\<langle>c;;while (b) c',s\<rangle> \<rightarrow> \<langle>cx,s'\<rangle>\<close> have "\<langle>c,s\<rangle> \<rightarrow> \<langle>c'',s'\<rangle>"
by(auto elim:red.cases)
from IH[OF this] obtain l' where "c' \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c'',s',l'\<rangle>"
and "labels c' l' c''" by blast
from \<open>c' \<turnstile> \<langle>c,s,l\<rangle> \<leadsto> \<langle>c'',s',l'\<rangle>\<close>
have "while (b) c' \<turnstile> \<langle>c;;while (b) c',s,l + 2\<rangle> \<leadsto> \<langle>c'';;while (b) c',s',l' + 2\<rangle>"
by(rule StepRecWhile)
moreover
from \<open>labels c' l' c''\<close> have "labels (while (b) c') (l' + 2) (c'';;while (b) c')"
by(rule Labels.Labels_WhileBody)
ultimately show ?case by simp blast
qed
next
case (Labels_WhileExit b c' c'')
from \<open>\<langle>Skip,s\<rangle> \<rightarrow> \<langle>c'',s'\<rangle>\<close> have False by(auto elim:red.cases)
thus ?case by simp
qed
lemma reds_steps:
"\<lbrakk>\<langle>c,s\<rangle> \<rightarrow>* \<langle>c',s'\<rangle>; labels prog l c\<rbrakk>
\<Longrightarrow> \<exists>l'. prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c',s',l'\<rangle> \<and> labels prog l' c'"
proof(induct rule:rtranclp_induct2)
case refl
from \<open>labels prog l c\<close> show ?case by blast
next
case (step c'' s'' c' s')
note IH = \<open>labels prog l c \<Longrightarrow>
\<exists>l'. prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c'',s'',l'\<rangle> \<and> labels prog l' c''\<close>
from IH[OF \<open>labels prog l c\<close>] obtain l'' where "prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c'',s'',l''\<rangle>"
and "labels prog l'' c''" by blast
from \<open>labels prog l'' c''\<close> \<open>\<langle>c'',s''\<rangle> \<rightarrow> \<langle>c',s'\<rangle>\<close> obtain l'
where "prog \<turnstile> \<langle>c'',s'',l''\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>"
and "labels prog l' c'" by(auto dest:red_step)
from \<open>prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c'',s'',l''\<rangle>\<close> \<open>prog \<turnstile> \<langle>c'',s'',l''\<rangle> \<leadsto> \<langle>c',s',l'\<rangle>\<close>
have "prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c',s',l'\<rangle>"
by(fastforce elim:rtranclp_trans)
with \<open>labels prog l' c'\<close> show ?case by blast
qed
subsubsection \<open>The bisimulation theorem\<close>
theorem reds_steps_bisimulation:
"labels prog l c \<Longrightarrow> (\<langle>c,s\<rangle> \<rightarrow>* \<langle>c',s'\<rangle>) =
(\<exists>l'. prog \<turnstile> \<langle>c,s,l\<rangle> \<leadsto>* \<langle>c',s',l'\<rangle> \<and> labels prog l' c')"
by(fastforce intro:reds_steps elim:steps_reds)
end
diff --git a/thys/Stable_Matching/Basis.thy b/thys/Stable_Matching/Basis.thy
--- a/thys/Stable_Matching/Basis.thy
+++ b/thys/Stable_Matching/Basis.thy
@@ -1,572 +1,597 @@
(*<*)
theory Basis
imports
Main
"HOL-Library.While_Combinator"
begin
(*>*)
section\<open> Preliminaries \<close>
(*<*)
subsection\<open> HOL Detritus \<close>
lemma Above_union:
shows "x \<in> Above r (X \<union> Y) \<longleftrightarrow> x \<in> Above r X \<and> x \<in> Above r Y"
unfolding Above_def by blast
lemma Above_Field:
assumes "x \<in> Above r X"
shows "x \<in> Field r"
using assms unfolding Above_def by blast
lemma AboveS_Field:
assumes "x \<in> AboveS r X"
shows "x \<in> Field r"
using assms unfolding AboveS_def by blast
lemma Above_Linear_singleton:
assumes "x \<in> Field r"
assumes "Linear_order r"
shows "x \<in> Above r {x}"
using assms unfolding Above_def order_on_defs by (force dest: refl_onD)
lemma subseqs_set:
assumes "y \<in> set (subseqs xs)"
shows "set y \<subseteq> set xs"
using assms by (metis Pow_iff image_eqI subseqs_powset)
primrec map_of_default :: "'v \<Rightarrow> ('k \<times> 'v) list \<Rightarrow> 'k \<Rightarrow> 'v" where
"map_of_default v0 [] k = v0"
| "map_of_default v0 (kv # kvs) k = (if k = fst kv then snd kv else map_of_default v0 kvs k)"
lemmas set_elem_equalityI = Set.equalityI[OF Set.subsetI Set.subsetI]
lemmas total_onI = iffD2[OF total_on_def, rule_format]
lemma partial_order_on_acyclic:
assumes "partial_order_on A r"
shows "acyclic (r - Id)"
by (metis acyclic_irrefl assms irrefl_diff_Id partial_order_on_def preorder_on_def trancl_id trans_diff_Id)
lemma finite_Linear_order_induct[consumes 3, case_names step]:
assumes "Linear_order r"
assumes "x \<in> Field r"
assumes "finite r"
assumes step: "\<And>x. \<lbrakk>x \<in> Field r; \<And>y. y \<in> aboveS r x \<Longrightarrow> P y\<rbrakk> \<Longrightarrow> P x"
shows "P x"
using assms(2)
proof(induct rule: wf_induct[of "r\<inverse> - Id"])
from assms(1,3) show "wf (r\<inverse> - Id)"
using linear_order_on_well_order_on linear_order_on_converse
unfolding well_order_on_def by blast
next
case (2 x) then show ?case
by - (rule step; auto simp: aboveS_def intro: FieldI2)
qed
text\<open>
We sometimes want a notion of monotonicity over some set.
\<close>
definition mono_on :: "'a::order set \<Rightarrow> ('a \<Rightarrow> 'b::order) \<Rightarrow> bool" where
"mono_on A f = (\<forall>x\<in>A. \<forall>y\<in>A. x \<le> y \<longrightarrow> f x \<le> f y)"
lemmas mono_onI = iffD2[OF mono_on_def, rule_format]
lemmas mono_onD = iffD1[OF mono_on_def, rule_format]
lemma mono_onE:
"\<lbrakk>mono_on A f; x \<in> A; y \<in> A; x \<le> y; f x \<le> f y \<Longrightarrow> thesis\<rbrakk> \<Longrightarrow> thesis"
using mono_onD by blast
lemma mono_on_mono:
"mono_on UNIV = mono"
by (clarsimp simp: mono_on_def mono_def fun_eq_iff)
(*>*)
subsection\<open> MaxR: maximum elements of linear orders \<close>
text\<open>
We generalize the existing @{const "max"} and @{const "Max"} functions
to work on orders defined over sets. See \S\ref{sec:cf-linear} for
choice-function related lemmas.
\<close>
locale MaxR =
fixes r :: "'a::finite rel"
assumes r_Linear_order: "Linear_order r"
begin
text\<open>
The basic function chooses the larger of two elements:
\<close>
definition maxR :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" where
"maxR x y = (if (x, y) \<in> r then y else x)"
(*<*)
lemma maxR_domain:
shows "{x, y} \<subseteq> A \<Longrightarrow> maxR x y \<in> A"
unfolding maxR_def by simp
lemma maxR_range:
shows "maxR x y \<in> {x, y}"
unfolding maxR_def by simp
lemma maxR_rangeD:
"maxR x y \<noteq> x \<Longrightarrow> maxR x y = y"
"maxR x y \<noteq> y \<Longrightarrow> maxR x y = x"
unfolding maxR_def by auto
lemma maxR_idem:
shows "maxR x x = x"
unfolding maxR_def by simp
lemma maxR_absorb2:
shows "(x, y) \<in> r \<Longrightarrow> maxR x y = y"
unfolding maxR_def by simp
lemma maxR_absorb1:
shows "(y, x) \<in> r \<Longrightarrow> maxR x y = x"
using r_Linear_order unfolding maxR_def by (simp add: order_on_defs antisym_def)
lemma maxR_assoc:
shows "{x,y,z} \<subseteq> Field r \<Longrightarrow> maxR (maxR x y) z = maxR x (maxR y z)"
using r_Linear_order unfolding maxR_def by simp (metis order_on_defs(1-3) total_on_def trans_def)
lemma maxR_commute:
shows "{x,y} \<subseteq> Field r \<Longrightarrow> maxR x y = maxR y x"
using r_Linear_order unfolding maxR_def by (fastforce simp: order_on_defs antisym_def total_on_def)
lemmas maxR_simps =
maxR_idem
maxR_absorb1
maxR_absorb2
(*>*)
text\<open>
We hoist this to finite sets using the @{const "Finite_Set.fold"}
combinator. For code generation purposes it seems inevitable that we
need to fuse the fold and filter into a single total recursive
definition.
\<close>
definition MaxR_f :: "'a \<Rightarrow> 'a option \<Rightarrow> 'a option" where
"MaxR_f x acc = (if x \<in> Field r then Some (case acc of None \<Rightarrow> x | Some y \<Rightarrow> maxR x y) else acc)"
interpretation MaxR_f: comp_fun_idem MaxR_f
using %invisible r_Linear_order
by unfold_locales (fastforce simp: fun_eq_iff maxR_def MaxR_f_def order_on_defs total_on_def antisymD elim: transE split: option.splits)+
definition MaxR_opt :: "'a set \<Rightarrow> 'a option" where
MaxR_opt_eq_fold': "MaxR_opt A = Finite_Set.fold MaxR_f None A"
(*<*)
lemma empty [simp]:
shows "MaxR_opt {} = None"
by (simp add: MaxR_opt_eq_fold')
lemma
shows insert: "MaxR_opt (insert x A) = (if x \<in> Field r then Some (case MaxR_opt A of None \<Rightarrow> x | Some y \<Rightarrow> maxR x y) else MaxR_opt A)"
and range_Some[rule_format]: "MaxR_opt A = Some a \<longrightarrow> a \<in> A \<inter> Field r"
using finite[of A] by induct (auto simp: MaxR_opt_eq_fold' maxR_def MaxR_f_def split: option.splits)
lemma range_None:
assumes "MaxR_opt A = None"
shows "A \<inter> Field r = {}"
using assms by (metis Int_iff insert all_not_in_conv insert_absorb option.simps(3))
lemma domain_empty:
assumes "A \<inter> Field r = {}"
shows "MaxR_opt A = None"
using assms by (metis empty_iff option.exhaust range_Some)
lemma domain:
shows "MaxR_opt (A \<inter> Field r) = MaxR_opt A"
using finite[of A] by induct (simp_all add: insert)
lemmas MaxR_opt_code = MaxR_opt_eq_fold'[where A="set A", unfolded MaxR_f.fold_set_fold] for A
lemma range:
shows "MaxR_opt A \<in> Some ` (A \<inter> Field r) \<union> {None}"
using range_Some notin_range_Some by fastforce
lemma union:
shows "MaxR_opt (A \<union> B) = (case MaxR_opt A of None \<Rightarrow> MaxR_opt B | Some mA \<Rightarrow> Some (case MaxR_opt B of None \<Rightarrow> mA | Some mB \<Rightarrow> maxR mA mB))"
using finite[of A] by induct (auto simp: maxR_assoc insert dest!: range_Some split: option.splits)
lemma mono:
assumes "MaxR_opt A = Some x"
shows "\<exists>y. MaxR_opt (A \<union> B) = Some y \<and> (x, y) \<in> r"
using finite[of B]
proof induct
case empty with assms show ?case
using range_Some underS_incl_iff[OF r_Linear_order] by fastforce
next
note ins = insert
case (insert b B) with assms r_Linear_order show ?case
unfolding order_on_defs total_on_def by (fastforce simp: ins maxR_def elim: transE intro: FieldI1)
qed
+
+declare [[simproc del: eliminate_false_implies]]
+
lemma MaxR_opt_is_greatest:
assumes "MaxR_opt A = Some x"
assumes "y \<in> A \<inter> Field r"
shows "(y, x) \<in> r"
using finite[of A] assms
proof(induct arbitrary: x)
note ins = insert
- case (insert a A) then show ?case
- using r_Linear_order unfolding order_on_defs refl_on_def total_on_def
- by (auto 10 0 simp: maxR_def ins dest!: range_None range_Some split: if_splits option.splits elim: transE)
+ case (insert a A)
+ show ?case
+ proof (cases "y = x")
+ case True
+ thus "(y, x) \<in> r"
+ using r_Linear_order insert by (auto simp: order_on_defs refl_on_def)
+ next
+ case False
+ show "(y, x) \<in> r"
+ proof (rule ccontr)
+ assume "(y, x) \<notin> r"
+ from insert have "x \<in> Field r" "y \<in> Field r"
+ by (auto simp: maxR_def ins dest!: range_None range_Some split: if_splits option.splits)
+ from \<open>(y, x) \<notin> r\<close> and \<open>y \<noteq> x\<close> and insert obtain z where z: "(x, z) \<notin> r" "(y, z) \<in> r" "z \<in> Field r"
+ by (auto simp: maxR_def ins dest!: range_None range_Some split: if_splits option.splits)
+ have "(x, y) \<in> r"
+ using r_Linear_order \<open>(y, x) \<notin> r\<close> \<open>x \<in> Field r\<close> \<open>y \<in> Field r\<close> \<open>y \<noteq> x\<close>
+ by (auto simp: order_on_defs total_on_def)
+ have "trans r"
+ using r_Linear_order by (auto simp: order_on_defs)
+ from this and \<open>(x, y) \<in> r\<close> and \<open>(y, z) \<in> r\<close> have "(x, z) \<in> r"
+ by (rule transD)
+ with \<open>(x, z) \<notin> r\<close> show False by contradiction
+ qed
+ qed
qed simp
lemma greatest_is_MaxR_opt:
assumes "x \<in> A \<inter> Field r"
assumes "\<forall>y \<in> A \<inter> Field r. (y, x) \<in> r"
shows "MaxR_opt A = Some x"
using finite[of A] assms
proof(induct arbitrary: x)
note ins = insert
case (insert a A) then show ?case
using maxR_absorb1 maxR_absorb2
by (fastforce simp: maxR_def ins dest: range_None range_Some split: option.splits)
qed simp
lemma subset:
assumes "set_option (MaxR_opt B) \<subseteq> A"
assumes "A \<subseteq> B"
shows "MaxR_opt B = MaxR_opt A"
using union[where A=A and B="B-A"] range[of "B - A"] assms
by (auto simp: Un_absorb1 finite_subset maxR_def split: option.splits)
(*>*)
end
interpretation MaxR_empty: MaxR "{}"
by unfold_locales simp
interpretation MaxR_singleton: MaxR "{(x,x)}" for x
by unfold_locales simp
lemma MaxR_r_domain [iff]:
assumes "MaxR r"
shows "MaxR (Restr r A)"
using assms Linear_order_Restr unfolding MaxR_def by blast
subsection\<open> Linear orders from lists \<close>
text\<open>
Often the easiest way to specify a concrete linear order is with a
list. Here these run from greatest to least.
\<close>
primrec linord_of_listP :: "'a \<Rightarrow> 'a \<Rightarrow> 'a list \<Rightarrow> bool" where
"linord_of_listP x y [] \<longleftrightarrow> False"
| "linord_of_listP x y (z # zs) \<longleftrightarrow> (z = y \<and> x \<in> set (z # zs)) \<or> linord_of_listP x y zs"
definition linord_of_list :: "'a list \<Rightarrow> 'a rel" where
"linord_of_list xs \<equiv> {(x, y). linord_of_listP x y xs}"
(*<*)
lemma linord_of_list_linord_of_listP:
shows "xy \<in> linord_of_list xs \<longleftrightarrow> linord_of_listP (fst xy) (snd xy) xs"
unfolding linord_of_list_def split_def by simp
lemma linord_of_listP_linord_of_list:
shows "linord_of_listP x y xs \<longleftrightarrow> (x, y) \<in> linord_of_list xs"
unfolding linord_of_list_def by simp
lemma linord_of_listP_empty:
shows "(\<forall>x y. \<not>linord_of_listP x y xs) \<longleftrightarrow> xs = []"
by (metis linord_of_listP.simps list.exhaust list.set_intros(1))
lemma linord_of_listP_domain:
assumes "linord_of_listP x y xs"
shows "x \<in> set xs \<and> y \<in> set xs"
using assms by (induct xs) auto
lemma linord_of_list_empty[iff]:
"linord_of_list [] = {}"
"linord_of_list xs = {} \<longleftrightarrow> xs = []"
unfolding linord_of_list_def by (simp_all add: linord_of_listP_empty)
lemma linord_of_list_singleton:
"(x, y) \<in> linord_of_list [z] \<longleftrightarrow> x = z \<and> y = z"
by (force simp: linord_of_list_linord_of_listP)
lemma linord_of_list_range:
"linord_of_list xs \<subseteq> set xs \<times> set xs"
unfolding linord_of_list_def by (induct xs) auto
lemma linord_of_list_Field [simp]:
"Field (linord_of_list xs) = set xs"
unfolding linord_of_list_def by (induct xs) (auto simp: Field_def)
lemma linord_of_listP_append:
"linord_of_listP x y (xs @ ys) \<longleftrightarrow> linord_of_listP x y xs \<or> linord_of_listP x y ys \<or> (y \<in> set xs \<and> x \<in> set ys)"
by (induct xs) auto
lemma linord_of_list_append:
"(x, y) \<in> linord_of_list (xs @ ys) \<longleftrightarrow> (x, y) \<in> linord_of_list xs \<or> (x, y) \<in> linord_of_list ys \<or> (y \<in> set xs \<and> x \<in> set ys)"
unfolding linord_of_list_def by (simp add: linord_of_listP_append)
lemma linord_of_list_refl_on:
shows "refl_on (set xs) (linord_of_list xs)"
unfolding linord_of_list_def
by (induct xs) (auto intro!: refl_onI simp: refl_onD1 refl_onD2 dest: refl_onD subsetD[OF linord_of_list_range])
lemma linord_of_list_trans:
assumes "distinct xs"
shows "trans (linord_of_list xs)"
using assms unfolding linord_of_list_def
by (induct xs) (auto intro!: transI dest: linord_of_listP_domain elim: transE)
lemma linord_of_list_antisym:
assumes "distinct xs"
shows "antisym (linord_of_list xs)"
using assms unfolding linord_of_list_def
by (induct xs) (auto intro!: antisymI dest: linord_of_listP_domain simp: antisymD)
lemma linord_of_list_total_on:
shows "total_on (set xs) (linord_of_list xs)"
unfolding total_on_def linord_of_list_def by (induct xs) auto
lemma linord_of_list_Restr:
assumes "x \<notin> C"
notes in_set_remove1[simp del] (* suppress warning *)
shows "Restr (linord_of_list (remove1 x xs)) C = Restr (linord_of_list xs) C"
using assms unfolding linord_of_list_def by (induct xs) (auto iff: in_set_remove1)
lemma linord_of_list_nth:
assumes "(xs ! i, xs ! j) \<in> linord_of_list xs"
assumes "i < length xs" "j < length xs"
assumes "distinct xs"
shows "j \<le> i"
using %invisible assms
proof(induct xs arbitrary: i j)
case (Cons x xs i j) show ?case
proof(cases "i < length xs")
case True with Cons show ?thesis
by (auto simp: linord_of_list_linord_of_listP nth_equal_first_eq less_Suc_eq_0_disj linord_of_listP_domain)
next
case False with Cons show ?thesis by fastforce
qed
qed simp
(*>*)
text\<open>\<close>
lemma linord_of_list_Linear_order:
assumes "distinct xs"
assumes "ys = set xs"
shows "linear_order_on ys (linord_of_list xs)"
using %invisible assms linord_of_list_range linord_of_list_refl_on linord_of_list_trans linord_of_list_antisym linord_of_list_total_on
unfolding order_on_defs by force
text\<open>
Every finite linear order is generated by a list.
\<close>
(*<*)
inductive sorted_on :: "'a rel \<Rightarrow> 'a list \<Rightarrow> bool" where
Nil [iff]: "sorted_on r []"
| Cons [intro!]: "\<lbrakk>x \<in> Field r; \<forall>y\<in>set xs. (x, y) \<in> r; sorted_on r xs\<rbrakk> \<Longrightarrow> sorted_on r (x # xs)"
inductive_cases sorted_on_inv[elim!]:
"sorted_on r []"
"sorted_on r (x # xs)"
primrec insort_key_on :: "'a rel \<Rightarrow> ('b \<Rightarrow> 'a) \<Rightarrow> 'b \<Rightarrow> 'b list \<Rightarrow> 'b list" where
"insort_key_on r f x [] = [x]"
| "insort_key_on r f x (y # ys) =
(if (f x, f y) \<in> r then (x # y # ys) else y # insort_key_on r f x ys)"
definition sort_key_on :: "'a rel \<Rightarrow> ('b \<Rightarrow> 'a) \<Rightarrow> 'b list \<Rightarrow> 'b list" where
"sort_key_on r f xs = foldr (insort_key_on r f) xs []"
definition insort_insert_key_on :: "'a rel \<Rightarrow> ('b \<Rightarrow> 'a) \<Rightarrow> 'b \<Rightarrow> 'b list \<Rightarrow> 'b list" where
"insort_insert_key_on r f x xs =
(if f x \<in> f ` set xs then xs else insort_key_on r f x xs)"
abbreviation "sort_on r \<equiv> sort_key_on r (\<lambda>x. x)"
abbreviation "insort_on r \<equiv> insort_key_on r (\<lambda>x. x)"
abbreviation "insort_insert_on r \<equiv> insort_insert_key_on r (\<lambda>x. x)"
context
fixes r :: "'a rel"
assumes "Linear_order r"
begin
lemma sorted_on_single [iff]:
shows "sorted_on r [x] \<longleftrightarrow> x \<in> Field r"
by (metis empty_iff list.distinct(1) list.set(1) nth_Cons_0 sorted_on.simps)
lemma sorted_on_many:
assumes "(x, y) \<in> r"
assumes "sorted_on r (y # zs)"
shows "sorted_on r (x # y # zs)"
using assms \<open>Linear_order r\<close> unfolding order_on_defs by (auto elim: transE intro: FieldI1)
lemma sorted_on_Cons:
shows "sorted_on r (x # xs) \<longleftrightarrow> (x \<in> Field r \<and> sorted_on r xs \<and> (\<forall>y\<in>set xs. (x, y) \<in> r))"
using \<open>Linear_order r\<close> unfolding order_on_defs by (induct xs arbitrary: x) (auto elim: transE)
lemma sorted_on_distinct_set_unique:
assumes "sorted_on r xs" "distinct xs" "sorted_on r ys" "distinct ys" "set xs = set ys"
shows "xs = ys"
proof -
from assms have 1: "length xs = length ys" by (auto dest!: distinct_card)
from assms show ?thesis
proof(induct rule: list_induct2[OF 1])
case (2 x xs y ys) with \<open>Linear_order r\<close> show ?case
unfolding order_on_defs
by (simp add: sorted_on_Cons) (metis antisymD insertI1 insert_eq_iff)
qed simp
qed
lemma set_insort_on:
shows "set (insort_key_on r f x xs) = insert x (set xs)"
by (induct xs) auto
lemma sort_key_on_simps [simp]:
shows "sort_key_on r f [] = []"
"sort_key_on r f (x#xs) = insort_key_on r f x (sort_key_on r f xs)"
by (simp_all add: sort_key_on_def)
lemma set_sort_on [simp]:
shows "set (sort_key_on r f xs) = set xs"
by (induct xs) (simp_all add: set_insort_on)
lemma distinct_insort_on:
shows "distinct (insort_key_on r f x xs) = (x \<notin> set xs \<and> distinct xs)"
by(induct xs) (auto simp: set_insort_on)
lemma distinct_sort_on [simp]:
shows "distinct (sort_key_on r f xs) = distinct xs"
by (induct xs) (simp_all add: distinct_insort_on)
lemma sorted_on_insort_key_on:
assumes "f ` set (x # xs) \<subseteq> Field r"
shows "sorted_on r (map f (insort_key_on r f x xs)) = sorted_on r (map f xs)"
using assms
proof(induct xs)
case (Cons x xs) with \<open>Linear_order r\<close> show ?case
unfolding order_on_defs
by (auto 4 4 simp: sorted_on_Cons sorted_on_many set_insort_on refl_on_def total_on_def elim: transE)
qed simp
lemma sorted_on_insort_on:
assumes "set (x # xs) \<subseteq> Field r"
shows "sorted_on r (insort_on r x xs) = sorted_on r xs"
using sorted_on_insort_key_on[where f="\<lambda>x. x"] assms by simp
theorem sorted_on_sort_key_on [simp]:
assumes "f ` set xs \<subseteq> Field r"
shows "sorted_on r (map f (sort_key_on r f xs))"
using assms by (induct xs) (simp_all add: sorted_on_insort_key_on)
theorem sorted_on_sort_on [simp]:
assumes "set xs \<subseteq> Field r"
shows "sorted_on r (sort_on r xs)"
using sorted_on_sort_key_on[where f="\<lambda>x. x"] assms by simp
lemma finite_sorted_on_distinct_unique:
assumes "A \<subseteq> Field r"
assumes "finite A"
shows "\<exists>!xs. set xs = A \<and> sorted_on r xs \<and> distinct xs"
proof -
from \<open>finite A\<close> obtain xs where "set xs = A \<and> distinct xs"
using finite_distinct_list by blast
with \<open>A \<subseteq> Field r\<close> show ?thesis
by (fastforce intro!: ex1I[where a="sort_on r xs"] simp: sorted_on_distinct_set_unique)
qed
end
lemma sorted_on_linord_of_list_subseteq_r:
assumes "Linear_order r"
assumes "sorted_on r xs"
assumes "distinct xs"
shows "linord_of_list (rev xs) \<subseteq> r"
using assms
proof(induct xs)
case (Cons x xs)
then have "linord_of_list (rev xs) \<subseteq> r" by (simp add: sorted_on_Cons)
with Cons.prems show ?case
by (clarsimp simp: linord_of_list_append linord_of_list_singleton sorted_on_Cons)
(meson contra_subsetD subsetI underS_incl_iff)
qed simp
lemma sorted_on_linord_of_list:
assumes "Linear_order r"
assumes "set xs = Field r"
assumes "sorted_on r xs"
assumes "distinct xs"
shows "linord_of_list (rev xs) = r"
proof(rule equalityI)
from assms show "linord_of_list (rev xs) \<subseteq> r"
using sorted_on_linord_of_list_subseteq_r by blast
next
{ fix x y assume xy: "(x, y) \<in> r"
with \<open>Linear_order r\<close> have "(y, x) \<notin> r - Id"
using Linear_order_in_diff_Id by (fastforce intro: FieldI1)
with linord_of_list_Linear_order[of "rev xs" "Field r"] assms xy
have "(x, y) \<in> linord_of_list (rev xs)"
by simp (metis Diff_subset FieldI1 FieldI2 Linear_order_in_diff_Id linord_of_list_Field set_rev sorted_on_linord_of_list_subseteq_r subset_eq) }
then show "r \<subseteq> linord_of_list (rev xs)" by clarsimp
qed
lemma linord_of_listP_rev:
assumes "z # zs \<in> set (subseqs xs)"
assumes "y \<in> set zs"
shows "linord_of_listP z y (rev xs)"
using assms by (induct xs) (auto simp: Let_def linord_of_listP_append dest: subseqs_set)
lemma linord_of_list_sorted_on_subseqs:
assumes "ys \<in> set (subseqs xs)"
assumes "distinct xs"
shows "sorted_on (linord_of_list (rev xs)) ys"
using assms
proof(induct ys)
case (Cons y ys) then show ?case
using linord_of_list_Linear_order[where xs="rev xs" and ys="Field (linord_of_list (rev xs))"]
by (force simp: Cons_in_subseqsD sorted_on_Cons linord_of_list_linord_of_listP linord_of_listP_rev dest: subseqs_set)
qed simp
lemma linord_of_list_sorted_on:
assumes "distinct xs"
shows "sorted_on (linord_of_list (rev xs)) xs"
by (rule linord_of_list_sorted_on_subseqs[OF subseqs_refl \<open>distinct xs\<close>])
(*>*)
lemma linear_order_on_list:
assumes "linear_order_on ys r"
assumes "ys = Field r"
assumes "finite ys"
shows "\<exists>!xs. r = linord_of_list xs \<and> distinct xs \<and> set xs = ys"
using %invisible finite_sorted_on_distinct_unique[of r ys] sorted_on_linord_of_list[of r] assms
by simp (metis distinct_rev linord_of_list_sorted_on rev_rev_ident set_rev)
(*<*)
end
(*>*)
diff --git a/thys/Stable_Matching/Contracts.thy b/thys/Stable_Matching/Contracts.thy
--- a/thys/Stable_Matching/Contracts.thy
+++ b/thys/Stable_Matching/Contracts.thy
@@ -1,2677 +1,2680 @@
(*<*)
theory Contracts
imports
Choice_Functions
"HOL-Library.Dual_Ordered_Lattice"
"HOL-Library.Bourbaki_Witt_Fixpoint"
"HOL-Library.While_Combinator"
"HOL-Library.Product_Order"
begin
(*>*)
section\<open> \citet{HatfieldMilgrom:2005}: Matching with contracts \label{sec:contracts} \<close>
text\<open>
We take the original paper on matching with contracts by
\citet{HatfieldMilgrom:2005} as our roadmap, which follows a similar
path to \citet[\S2.5]{RothSotomayor:1990}. We defer further motivation
to these texts. Our first move is to capture the scenarios described
in their {\S}I(A) (p916) in a locale.
\<close>
locale Contracts =
fixes Xd :: "'x::finite \<Rightarrow> 'd::finite"
fixes Xh :: "'x \<Rightarrow> 'h::finite"
fixes Pd :: "'d \<Rightarrow> 'x rel"
fixes Ch :: "'h \<Rightarrow> 'x cfun"
assumes Pd_linear: "\<forall>d. Linear_order (Pd d)"
assumes Pd_range: "\<forall>d. Field (Pd d) \<subseteq> {x. Xd x = d}"
assumes Ch_range: "\<forall>h. \<forall>X. Ch h X \<subseteq> {x\<in>X. Xh x = h}"
assumes Ch_singular: "\<forall>h. \<forall>X. inj_on Xd (Ch h X)"
begin
text \<open>
The set of contracts is modelled by the type @{typ "'x"}, a free type
variable that will later be interpreted by some non-empty
set. Similarly @{typ "'d"} and @{typ "'h"} track the names of doctors
and hospitals respectively. All of these are finite by virtue of
belonging to the \<open>finite\<close> type class.
We fix four constants:
\begin{itemize}
\item \<open>Xd\<close> (\<open>Xh\<close>) projects the name of the
relevant doctor (hospital) from a contract;
\item \<open>Pd\<close> maps doctors to their linear preferences over
some subset of contracts that name them (assumptions @{thm [source]
Pd_linear} and @{thm [source] Pd_range}); and
\item \<open>Ch\<close> maps hospitals to their choice functions
(\S\ref{sec:cf}), which are similarly constrained to contracts that
name them (assumption @{thm [source] Ch_range}). Moreover their
choices must name each doctor at most once, i.e., \<open>Xd\<close>
must be injective on these (assumption @{thm [source]
"Ch_singular"}).
\end{itemize}
The reader familiar with the literature will note that we do not have
a null contract (also said to represent the @{emph \<open>outside option\<close>} of
unemployment), and instead use partiality of the doctors'
preferences. This provides two benefits: firstly, \<open>Xh\<close> is
a total function here, and secondly we achieve some economy of
description when instantiating this locale as \<open>Pd\<close> only
has to rank the relevant contracts.
We note in passing that neither the doctors' nor hospitals' choice
functions are required to be decisive, unlike the standard literature
on choice functions (\S\ref{sec:cf}).
In addition to the above, the following constitute the definitions
that must be trusted for the results we prove to be meaningful.
\<close>
definition Cd :: "'d \<Rightarrow> 'x cfun" where
"Cd d \<equiv> set_option \<circ> MaxR.MaxR_opt (Pd d)"
definition CD_on :: "'d set \<Rightarrow> 'x cfun" where
"CD_on ds X = (\<Union>d\<in>ds. Cd d X)"
abbreviation CD :: "'x set \<Rightarrow> 'x set" where
"CD \<equiv> CD_on UNIV"
definition CH :: "'x cfun" where
"CH X = (\<Union>h. Ch h X)"
text\<open>
The function @{const "Cd"} constructs a choice function from the
doctor's linear preferences (see \S\ref{sec:cf-linear}). Both @{const
"CD"} and @{const "CH"} simply aggregate opinions in the obvious
way. The function @{const "CD_on"} is parameterized with a set of
doctors to support the proofs of \S\ref{sec:contracts-vacancy-chain}.
We also define \<open>RD\<close> (\<open>Rh\<close>,
\<open>RH\<close>) to be the subsets of a given set of contracts that
are rejected by the doctors (hospitals). (The abbreviation @{const
"Rf"} is defined in \S\ref{sec:cf-rf}.)
\<close>
abbreviation (input) RD_on :: "'d set \<Rightarrow> 'x cfun" where
"RD_on ds \<equiv> Rf (CD_on ds)"
abbreviation (input) RD :: "'x cfun" where
"RD \<equiv> RD_on UNIV"
abbreviation (input) Rh :: "'h \<Rightarrow> 'x cfun" where
"Rh h \<equiv> Rf (Ch h)"
abbreviation (input) RH :: "'x cfun" where
"RH \<equiv> Rf CH"
text \<open>
A @{emph \<open>mechanism\<close>} maps doctor and hospital preferences into a match
(here a set of contracts).
\<close>
type_synonym (in -) ('d, 'h, 'x) mechanism = "('d \<Rightarrow> 'x rel) \<Rightarrow> ('h \<Rightarrow> 'x cfun) \<Rightarrow> 'd set \<Rightarrow> 'x set"
(*<*)
(* Pd *)
lemmas Pd_linear' = Pd_linear[rule_format]
lemmas Pd_range' = subsetD[OF Pd_range[rule_format], simplified, of x d] for x d
lemma Pd_refl:
assumes "x \<in> Field (Pd d)"
shows "(x, x) \<in> Pd d"
using assms Pd_linear' by (meson subset_refl underS_incl_iff)
lemma Pd_Xd:
assumes "(x, y) \<in> Pd d"
shows "Xd x = d \<and> Xd y = d"
using assms Pd_range contra_subsetD unfolding Field_def by blast
lemma Above_Pd_Xd:
assumes "x \<in> Above (Pd d) X"
shows "Xd x = d"
using assms by (blast dest: Above_Field Pd_range')
lemma AboveS_Pd_Xd:
assumes "x \<in> AboveS (Pd d) X"
shows "Xd x = d"
using assms by (blast dest: AboveS_Field Pd_range')
(* Cd *)
interpretation Cd: linear_cf "Pd d" "Cd d" for d
using Cd_def Pd_linear by unfold_locales simp_all
lemmas Cd_domain = Cd.domain
lemmas Cd_f_range = Cd.f_range
lemmas Cd_range = Cd.range
lemmas Cd_range' = Cd.range'
lemmas Rf_Cd_mono = Cd.Rf_mono_on[of UNIV, unfolded mono_on_mono]
lemmas Cd_Chernoff = Cd.Chernoff
lemmas Cd_path_independent = Cd.path_independent
lemmas Cd_iia = Cd.iia
lemmas Cd_irc = Cd.irc
lemmas Cd_lad = Cd.lad
lemmas Cd_mono = Cd.mono
lemmas Cd_greatest = Cd.greatest
lemmas Cd_preferred = Cd.preferred
lemmas Cd_singleton = Cd.singleton
lemmas Cd_union = Cd.union
lemmas Cd_idem = iia_f_idem[OF Cd.f_range[of UNIV d, folded Cd_def] Cd_iia[of UNIV], simplified] for d
lemma Cd_Xd:
shows "x \<in> Cd d X \<Longrightarrow> Xd x = d"
using Pd_range Cd_range by fastforce
lemma Cd_inj_on_Xd:
shows "inj_on Xd (Cd d X)"
by (rule inj_onI) (clarsimp simp: Cd_Xd Cd_singleton)
lemma Cd_range_disjoint:
assumes "d \<noteq> d'"
shows "Cd d A \<inter> Cd d' A = {}"
using assms Cd_range Pd_range by blast
lemma Cd_single:
assumes "x \<in> X"
assumes "inj_on Xd X"
assumes "x \<in> Field (Pd d)"
shows "x \<in> Cd d X"
using assms Pd_linear unfolding Cd_greatest greatest_def
by clarsimp (metis Pd_Xd inj_on_eq_iff subset_refl underS_incl_iff)
lemma Cd_Above:
shows "Cd d X = Above (Pd d) (X \<inter> Field (Pd d)) \<inter> X"
unfolding Cd_greatest greatest_Above Above_def by blast
(* Code generator setup. Repeats a lot of stuff. *)
definition maxR :: "'d \<Rightarrow> 'x \<Rightarrow> 'x \<Rightarrow> 'x" where
"maxR d x y = (if (x, y) \<in> Pd d then y else x)"
definition MaxR_f :: "'d \<Rightarrow> 'x \<Rightarrow> 'x option \<Rightarrow> 'x option" where
"MaxR_f d = (\<lambda>x acc. if x \<in> Field (Pd d) then Some (case acc of None \<Rightarrow> x | Some y \<Rightarrow> maxR d x y) else acc)"
lemma MaxR_maxR:
shows "MaxR.maxR (Pd d) = maxR d"
by (simp add: fun_eq_iff maxR_def Cd.maxR_code)
lemma MaxR_MaxR_f:
shows "MaxR.MaxR_f (Pd d) = MaxR_f d"
by (simp add: fun_eq_iff Cd.MaxR_f_code MaxR_f_def MaxR_maxR cong: option.case_cong)
lemmas Cd_code[code] = Cd.code[unfolded MaxR_MaxR_f]
lemma Cd_simps[simp, nitpick_simp]:
shows "Cd d {} = {}"
"Cd d (insert x A) = (if x \<in> Field (Pd d) then if Cd d A = {} then {x} else {maxR d x y |y. y \<in> Cd d A} else Cd d A)"
unfolding Cd.simps MaxR_maxR by simp_all
(* CD *)
lemma CD_on_def2:
shows "CD_on ds A = (\<Union>d\<in>ds. Cd d (A \<inter> Field (Pd d)))"
using Cd_domain unfolding CD_on_def by blast
lemma CD_on_Xd:
assumes "x \<in> CD_on ds A"
shows "Xd x \<in> ds"
using assms Cd_Xd unfolding CD_on_def by blast
lemma mem_CD_on_Cd:
shows "x \<in> CD_on ds X \<longleftrightarrow> (x \<in> Cd (Xd x) X \<and> Xd x \<in> ds)"
unfolding CD_on_def using Cd_range Cd_Xd by blast
lemma CD_on_domain:
assumes "d \<in> ds"
shows "CD_on ds A \<inter> Field (Pd d) = Cd d (A \<inter> Field (Pd d))"
unfolding CD_on_def2 using assms Cd_range by (force dest: Pd_range')
lemma CD_on_range:
shows "CD_on ds A \<subseteq> A \<inter> (\<Union>d\<in>ds. Field (Pd d))"
using Cd_range unfolding CD_on_def by blast
lemmas CD_on_range' = subsetD[OF CD_on_range]
lemma CD_on_f_range_on:
shows "f_range_on A (CD_on ds)"
by (rule f_range_onI) (meson CD_on_range Int_subset_iff)
lemma RD_on_mono:
shows "mono (RD_on ds)"
unfolding CD_on_def by (rule monoI) (auto dest: monoD[OF Rf_Cd_mono])
lemma CD_on_Chernoff:
shows "Chernoff (CD_on ds)"
using mono_on_mono RD_on_mono[of ds] Rf_mono_on_iia_on[of UNIV] Chernoff_on_iia_on by (simp add: fun_eq_iff) blast
lemma CD_on_irc:
shows "irc (CD_on ds)"
by (rule ircI) (fastforce simp: CD_on_def ircD[OF Cd_irc] simp del: Cd_simps cong: SUP_cong)
lemmas CD_on_consistency = irc_on_consistency_on[OF CD_on_irc, simplified]
lemma CD_on_path_independent:
shows "path_independent (\<lambda>X. CD_on ds X)"
using CD_on_f_range_on CD_on_Chernoff CD_on_consistency by (blast intro: path_independent_onI2)
lemma CD_on_simps:
shows "CD_on ds {} = {}"
using CD_on_range by blast
lemmas CD_on_iia = RD_on_mono[unfolded Rf_mono_iia]
lemmas CD_on_idem = iia_f_idem[OF CD_on_f_range_on CD_on_iia, simplified]
lemma CD_on_inj_on_Xd:
shows "inj_on Xd (CD_on ds X)"
unfolding CD_on_def by (rule inj_onI) (clarsimp simp: Cd_Xd Cd_singleton)
lemma CD_on_card:
shows "card (CD_on ds X) = (\<Sum>d\<in>ds. card (Cd d X))"
unfolding CD_on_def by (simp add: card_UN_disjoint Cd_range_disjoint)
lemma CD_on_closed:
assumes "inj_on Xd X"
assumes "X \<subseteq> (\<Union>d\<in>ds. Field (Pd d))"
shows "CD_on ds X = X"
using assms Cd_domain Cd_single[OF _ assms(1)] unfolding CD_on_def2 by (force dest: Cd_range')
(* Ch *)
lemmas Ch_singular' = Ch_singular[rule_format]
lemmas Ch_range' = subsetD[OF Ch_range[rule_format], simplified, of x h X] for x h X
lemma Ch_simps:
shows "Ch h {} = {}"
using Ch_range by blast
lemma Ch_range_disjoint:
assumes "h \<noteq> h'"
shows "Ch h A \<inter> Ch h' A = {}"
using assms Ch_range by blast
lemma Ch_f_range:
shows "f_range (Ch h)"
using Ch_range unfolding f_range_on_def by blast
(* CH *)
lemma CH_card:
shows "card (CH X) = (\<Sum>h\<in>UNIV. card (Ch h X))"
unfolding CH_def by (simp add: card_UN_disjoint Ch_range_disjoint)
lemma CH_simps:
shows "CH {} = {}"
unfolding CH_def by (simp add: Ch_simps)
lemma CH_range:
shows "CH A \<subseteq> A"
unfolding CH_def using Ch_range by blast
lemmas CH_range' = subsetD[OF CH_range]
lemmas CH_f_range_on = f_range_onI[OF CH_range]
lemma mem_CH_Ch:
shows "x \<in> CH X \<longleftrightarrow> x \<in> Ch (Xh x) X"
unfolding CH_def using Ch_range by blast
lemma mem_Ch_CH:
assumes "x \<in> Ch h X"
shows "x \<in> CH X"
unfolding CH_def using assms Ch_range by blast
(*>*)
text\<open>
An @{emph \<open>allocation\<close>} is a set of contracts where each names a distinct
doctor. (Hospitals can contract multiple doctors.)
\<close>
abbreviation (input) allocation :: "'x set \<Rightarrow> bool" where
"allocation \<equiv> inj_on Xd"
text\<open>
We often wish to extract a doctor's or a hospital's contract from an
@{const "allocation"}.
\<close>
definition dX :: "'x set \<Rightarrow> 'd \<Rightarrow> 'x set" where
"dX X d = {x \<in> X. Xd x = d}"
definition hX :: "'x set \<Rightarrow> 'h \<Rightarrow> 'x set" where
"hX X h = {x \<in> X. Xh x = h}"
(*<*)
lemma dX_union:
shows "dX (X \<union> Y) d = dX X d \<union> dX Y d"
unfolding dX_def by auto
lemma dX_range:
shows "\<forall>d. dX X d \<subseteq> {x. Xd x = d}"
unfolding dX_def by clarsimp
lemma dX_range':
assumes "x \<in> dX X d"
shows "x \<in> X \<and> Xd x = d"
using assms unfolding dX_def by simp
lemma dX_empty_or_singleton:
assumes "allocation X"
shows "\<forall>d. dX X d = {} \<or> (\<exists>x. dX X d = {x})"
unfolding dX_def using \<open>allocation X\<close> by (fastforce dest: inj_onD)
lemma dX_linear:
assumes "allocation X"
shows "Linear_order (dX X d \<times> dX X d)"
using spec[OF dX_empty_or_singleton[OF \<open>allocation X\<close>], where x=d] by fastforce
lemma dX_singular:
assumes "allocation X"
assumes "x \<in> X"
assumes "d = Xd x"
shows "dX X d = {x}"
using assms unfolding dX_def by (fastforce dest: inj_onD)
lemma dX_Int_Field_Pd:
assumes "dX X d \<subseteq> Field (Pd d)"
shows "X \<inter> Field (Pd d) = dX X d"
using assms unfolding dX_def by (fastforce dest: Pd_range')
lemma Cd_Above_dX:
assumes "dX X d \<subseteq> Field (Pd d)"
shows "Cd d X = Above (Pd d) (dX X d) \<inter> X"
using assms unfolding Cd_greatest greatest_Above Above_def dX_def by (auto dest: Pd_range')
(*>*)
text\<open>
@{emph \<open>Stability\<close>} is the key property we look for in a match (here a
set of contracts), and consists of two parts.
Firstly, we ask that it be @{emph \<open>individually rational\<close>}, i.e., the
contracts in the match are actually acceptable to all
participants. Note that this implies the match is an @{const
"allocation"}.
\<close>
definition individually_rational_on :: "'d set \<Rightarrow> 'x set \<Rightarrow> bool" where
"individually_rational_on ds X \<longleftrightarrow> CD_on ds X = X \<and> CH X = X"
abbreviation individually_rational :: "'x set \<Rightarrow> bool" where
"individually_rational \<equiv> individually_rational_on UNIV"
text\<open>
The second condition requires that there be no coalition of a hospital
and one or more doctors who prefer another set of contracts involving
them; the hospital strictly, the doctors weakly. Contrast this
definition with the classical one for stable marriages given in
\S\ref{sec:sotomayor}.
\<close>
definition blocking_on :: "'d set \<Rightarrow> 'x set \<Rightarrow> 'h \<Rightarrow> 'x set \<Rightarrow> bool" where
"blocking_on ds X h X' \<longleftrightarrow> X' \<noteq> Ch h X \<and> X' = Ch h (X \<union> X') \<and> X' \<subseteq> CD_on ds (X \<union> X')"
definition stable_no_blocking_on :: "'d set \<Rightarrow> 'x set \<Rightarrow> bool" where
"stable_no_blocking_on ds X \<longleftrightarrow> (\<forall>h X'. \<not>blocking_on ds X h X')"
abbreviation stable_no_blocking :: "'x set \<Rightarrow> bool" where
"stable_no_blocking \<equiv> stable_no_blocking_on UNIV"
definition stable_on :: "'d set \<Rightarrow> 'x set \<Rightarrow> bool" where
"stable_on ds X \<longleftrightarrow> individually_rational_on ds X \<and> stable_no_blocking_on ds X"
abbreviation stable :: "'x set \<Rightarrow> bool" where
"stable \<equiv> stable_on UNIV"
(*<*)
lemma stable_onI:
assumes "individually_rational_on ds X"
assumes "stable_no_blocking_on ds X"
shows "stable_on ds X"
unfolding stable_on_def using assms by blast
lemma individually_rational_onI:
assumes "CD_on ds X = X"
assumes "CH X = X"
shows "individually_rational_on ds X"
unfolding individually_rational_on_def using assms by blast
lemma individually_rational_on_CD_on:
assumes "individually_rational_on ds X"
shows "CD_on ds X = X"
using assms unfolding individually_rational_on_def by blast
lemma individually_rational_on_Cd:
assumes "individually_rational_on ds X"
shows "Cd d X = dX X d"
using individually_rational_on_CD_on[OF assms]
by (auto simp: dX_def mem_CD_on_Cd dest: Cd_range' Cd_Xd)
lemma individually_rational_on_empty:
shows "individually_rational_on ds {}"
by (simp add: CD_on_simps CH_simps individually_rational_onI)
lemma blocking_onI:
assumes "X'' \<noteq> Ch h X"
assumes "X'' = Ch h (X \<union> X'')"
assumes "\<And>x. x \<in> X'' \<Longrightarrow> x \<in> CD_on ds (X \<union> X'')"
shows "blocking_on ds X h X''"
unfolding blocking_on_def using assms by blast
lemma blocking_on_imp_not_stable:
assumes "blocking_on ds X h X''"
shows "\<not>stable_on ds X"
unfolding stable_on_def stable_no_blocking_on_def using assms by blast
lemma blocking_on_allocation:
assumes "blocking_on ds X h X''"
shows "allocation X''"
using assms unfolding blocking_on_def by (metis Ch_singular')
lemma blocking_on_Field:
assumes "blocking_on ds X h X''"
shows "dX X'' d \<subseteq> Field (Pd d)"
using assms blocking_on_allocation[OF assms] unfolding blocking_on_def dX_def
by (force simp: Pd_range' dest: CD_on_range')
lemma blocking_on_CD_on:
assumes "blocking_on ds X h X''"
shows "X'' \<subseteq> CD_on ds (X \<union> X'')"
using assms unfolding blocking_on_def by blast
lemma blocking_on_CD_on':
assumes "blocking_on ds X h X''"
assumes "x \<in> X''"
shows "x \<in> CD_on ds (X \<union> X'')"
using assms unfolding blocking_on_def by blast
lemma blocking_on_Cd:
assumes "blocking_on ds X h X''"
shows "dX X'' d \<subseteq> Cd d (X \<union> X'')"
using assms unfolding blocking_on_def by (force dest: dX_range' simp: mem_CD_on_Cd)
lemma stable_no_blocking_onI:
assumes "\<And>h X''. \<lbrakk>X'' = Ch h (X \<union> X''); X'' \<noteq> Ch h X; X'' \<subseteq> CD_on ds (X \<union> X'')\<rbrakk> \<Longrightarrow> False"
shows "stable_no_blocking_on ds X"
unfolding stable_no_blocking_on_def blocking_on_def using assms by blast
lemma stable_no_blocking_onI2:
assumes "\<And>h X''. blocking_on ds X h X'' \<Longrightarrow> False"
shows "stable_no_blocking_on ds X"
unfolding stable_no_blocking_on_def using assms by blast
lemma "stable_no_blocking_on ds UNIV"
using stable_no_blocking_onI by fastforce
lemma
assumes "stable_on ds X"
shows stable_on_CD_on: "CD_on ds X = X"
and stable_on_Xd: "x \<in> X \<Longrightarrow> Xd x \<in> ds"
and stable_on_range': "x \<in> X \<Longrightarrow> x \<in> Field (Pd (Xd x))"
and stable_on_CH: "CH X = X"
and stable_on_no_blocking_on: "stable_no_blocking_on ds X"
using assms mem_CD_on_Cd Cd_range' Pd_range'
unfolding stable_on_def individually_rational_on_def by blast+
lemma stable_on_allocation:
assumes "stable_on ds X"
shows "allocation X"
using assms unfolding stable_on_def individually_rational_on_def by (metis CD_on_inj_on_Xd)
lemma stable_on_blocking_onD:
assumes "stable_on ds X"
shows "\<lbrakk>X'' = Ch h (X \<union> X''); X'' \<subseteq> CD_on ds (X \<union> X'')\<rbrakk> \<Longrightarrow> X'' = Ch h X"
using \<open>stable_on ds X\<close> unfolding stable_on_def individually_rational_on_def stable_no_blocking_on_def blocking_on_def by blast
lemma not_stable_on_cases[consumes 1, case_names not_individually_rational not_no_blocking]:
assumes "\<not> stable_on ds X"
assumes "\<not> individually_rational_on ds X \<Longrightarrow> P"
assumes "\<not> stable_no_blocking_on ds X \<Longrightarrow> P"
shows "P"
using assms unfolding stable_on_def by blast
(*>*)
text\<open>\<close>
end
subsection\<open> Theorem~1: Existence of stable pairs \<close>
text\<open>
We proceed to define a function whose fixed points capture all stable
matches. \citet[I(B), p917]{HatfieldMilgrom:2005} provide the
following intuition:
\begin{quote}
The first theorem states that a set of contracts is stable if any
alternative contract would be rejected by some doctor or some hospital
from its suitably defined opportunity set. In the formulas below,
think of the doctors' opportunity set as @{term "XD"} and the
hospitals' opportunity set as @{term "XH"}. If @{term "X'"} is the
corresponding stable set, then @{term "XD"} must include, in addition
to @{term "X'"}, all contracts that would not be rejected by the
hospitals, and @{term "XH"} must similarly include @{term "X'"} and
all contracts that would not be rejected by the doctors. If @{term
"X'"} is stable, then every alternative contract is rejected by
somebody, so @{term "X = XH \<union> XD"} [where @{term "X"} is the
set of all contracts]. This logic is summarized in the first theorem.
\end{quote}
See also \citet[p6,\S4]{Fleiner:2003} and \citet[\S2]{Fleiner:2002},
from whom we adopt the term @{emph \<open>stable pair\<close>}.
\<close>
context Contracts
begin
definition stable_pair_on :: "'d set \<Rightarrow> 'x set \<times> 'x set \<Rightarrow> bool" where
"stable_pair_on ds = (\<lambda>(XD, XH). XD = - RH XH \<and> XH = - RD_on ds XD)"
abbreviation stable_pair :: "'x set \<times> 'x set \<Rightarrow> bool" where
"stable_pair \<equiv> stable_pair_on UNIV"
abbreviation match :: "'x set \<times> 'x set \<Rightarrow> 'x set" where
"match X \<equiv> fst X \<inter> snd X"
text \<open>
\citet[Theorem~1]{HatfieldMilgrom:2005} state that every solution
@{term "(XD, XH)"} of @{const "stable_pair"} yields a stable match
@{term "XD \<inter> XH"}, and conversely, i.e., every stable match is
the intersection of some stable pair. \citet{AygunSonmez:2012-WP2}
show that neither is the case without further restrictions on the
hospitals' choice functions @{term "Ch"}; we exhibit their
counterexample below.
Even so we can establish some properties in the present setting:
\<close>
lemma stable_pair_on_CD_on:
assumes "stable_pair_on ds XD_XH"
shows "match XD_XH = CD_on ds (fst XD_XH)"
using %invisible assms CD_on_range unfolding stable_pair_on_def split_def fst_conv snd_conv
by blast
lemma stable_pair_on_CH:
assumes "stable_pair_on ds XD_XH"
shows "match XD_XH = CH (snd XD_XH)"
using %invisible assms CH_range unfolding stable_pair_on_def split_def fst_conv snd_conv
by blast
lemma stable_pair_on_CD_on_CH:
assumes "stable_pair_on ds XD_XH"
shows "CD_on ds (fst XD_XH) = CH (snd XD_XH)"
using %invisible assms stable_pair_on_CD_on stable_pair_on_CH by blast
lemma stable_pair_on_allocation:
assumes "stable_pair_on ds XD_XH"
shows "allocation (match XD_XH)"
unfolding %invisible stable_pair_on_CD_on[OF assms] by (rule CD_on_inj_on_Xd)
(*<*)
lemma stable_pair_onI:
assumes "fst XD_XH = - RH (snd XD_XH)"
assumes "snd XD_XH = - RD_on ds (fst XD_XH)"
shows "stable_pair_on ds XD_XH"
using assms unfolding stable_pair_on_def split_def by blast
lemma stable_pair_onE:
shows "\<lbrakk>stable_pair_on ds XD_XH; \<lbrakk>- RH (snd XD_XH) = fst XD_XH; - RD_on ds (fst XD_XH) = snd XD_XH\<rbrakk> \<Longrightarrow> P\<rbrakk> \<Longrightarrow> P"
unfolding stable_pair_on_def split_def by blast
lemma stable_pair_on_Cd:
assumes "stable_pair_on ds XD_XH"
assumes "d \<in> ds"
shows "Cd d (fst XD_XH) = match XD_XH \<inter> Field (Pd d)"
using stable_pair_on_CD_on[OF \<open>stable_pair_on ds XD_XH\<close>] CD_on_domain Cd_domain \<open>d \<in> ds\<close> by simp
lemma stable_pair_on_Cd_match:
assumes "stable_pair_on ds XD_XH"
assumes "d \<in> ds"
shows "Cd d (match XD_XH) = Cd d (fst XD_XH)"
using assms by (metis Cd_domain Cd_idem stable_pair_on_Cd)
lemma stable_pair_on_Xd:
assumes "stable_pair_on ds XD_XH"
assumes "x \<in> match XD_XH"
shows "Xd x \<in> ds"
using assms CD_on_Xd unfolding stable_pair_on_def split_def by blast
lemma stable_pair_on_match_Cd:
assumes "stable_pair_on ds XD_XH"
assumes "x \<in> match XD_XH"
shows "x \<in> Cd (Xd x) (match XD_XH)"
using assms by (metis (full_types) CD_on_def Cd_Xd UN_iff stable_pair_on_CD_on stable_pair_on_Cd_match)
(*>*)
text\<open>
We run out of steam on the following two lemmas, which are the
remaining requirements for stability.
\<close>
lemma
assumes "stable_pair_on ds XD_XH"
shows "individually_rational_on ds (match XD_XH)"
oops
lemma
assumes "stable_pair_on ds XD_XH"
shows "stable_no_blocking (match XD_XH)"
oops
text\<open>
\citet{HatfieldMilgrom:2005} also claim that the converse holds:
\<close>
lemma
assumes "stable_on ds X"
obtains XD_XH where "stable_pair_on ds XD_XH" and "X = match XD_XH"
oops
text\<open>
Again, the following counterexample shows that the @{const
substitutes} condition on @{term "Ch"} is too weak to guarantee
this. We show it holds under stronger assumptions in
\S\ref{sec:contracts-t1-converse}.
\<close>
end
subsubsection\<open> Theorem~1 does not hold \citep{AygunSonmez:2012-WP2} \label{sec:contracts-t1-counterexample} \<close>
text\<open>
The following counterexample, due to \citet[\S3:
Example~2]{AygunSonmez:2012-WP2}, comprehensively demonstrates that
\citet[Theorem~1]{HatfieldMilgrom:2005} does not hold.
We create three types: \<open>D2\<close> consists of two elements,
representing the doctors, and \<open>H1\<close> is the type of the single
hospital. There are four contracts in the type \<open>X4\<close>.
\<close>
datatype D2 = D1 | D2
datatype H1 = H
datatype X4 = Xd1 | Xd1' | Xd2 | Xd2'
(*<*)
lemma D2_UNIV:
shows "UNIV = set [D1, D2]"
using D2.exhaust by auto
instantiation D2 :: enum
begin
definition "enum_class.enum = [D1, D2]"
definition "enum_class.enum_all P = (P D1 \<and> P D2)"
definition "enum_class.enum_ex P = (P D1 \<or> P D2)"
instance
by standard (simp_all add: enum_D2_def enum_all_D2_def enum_ex_D2_def D2_UNIV)
end
lemma D2_ALL:
shows "(\<forall>d. P d) = (\<forall>d\<in>{D1, D2}. P d)"
using D2_UNIV by auto
lemma D2_UNION:
shows "(\<Union>d. P d) = (\<Union>d\<in>{D1, D2}. P d)"
using D2_UNIV by auto
instance H1 :: finite
by standard (metis (full_types) H1.exhaust ex_new_if_finite finite.intros(1) finite_insert insert_subset subset_insertI)
lemma X4_UNIV:
shows "UNIV = set [Xd1, Xd1', Xd2, Xd2']"
using X4.exhaust by auto
lemmas X4_pow = subset_subseqs[OF subset_trans[OF subset_UNIV Set.equalityD1[OF X4_UNIV]]]
instance X4 :: finite
by standard (simp add: X4_UNIV)
lemma X4_ALL:
shows "(\<forall>X''. P X'') \<longleftrightarrow> (\<forall>X''\<in>set ` set (subseqs [Xd1, Xd1', Xd2, Xd2']). P X'')"
using X4_pow by blast
(*>*)
primrec X4d :: "X4 \<Rightarrow> D2" where
"X4d Xd1 = D1"
| "X4d Xd1' = D1"
| "X4d Xd2 = D2"
| "X4d Xd2' = D2"
abbreviation X4h :: "X4 \<Rightarrow> H1" where
"X4h _ \<equiv> H"
primrec PX4d :: "D2 \<Rightarrow> X4 rel" where
"PX4d D1 = linord_of_list [Xd1', Xd1]"
| "PX4d D2 = linord_of_list [Xd2, Xd2']"
function CX4h :: "H1 \<Rightarrow> X4 cfun" where
"CX4h _ {Xd1} = {Xd1}"
| "CX4h _ {Xd1'} = {Xd1'}"
| "CX4h _ {Xd2} = {Xd2}"
| "CX4h _ {Xd2'} = {Xd2'}"
| "CX4h _ {Xd1, Xd1'} = {Xd1}"
| "CX4h _ {Xd1, Xd2} = {Xd1, Xd2}"
| "CX4h _ {Xd1, Xd2'} = {Xd2'}"
| "CX4h _ {Xd1', Xd2} = {Xd1'}"
| "CX4h _ {Xd1', Xd2'} = {Xd1', Xd2'}"
| "CX4h _ {Xd2, Xd2'} = {Xd2}"
| "CX4h _ {Xd1, Xd1', Xd2} = {}"
| "CX4h _ {Xd1, Xd1', Xd2'} = {}"
| "CX4h _ {Xd1, Xd2, Xd2'} = {}"
| "CX4h _ {Xd1', Xd2, Xd2'} = {}"
| "CX4h _ {Xd1, Xd1', Xd2, Xd2'} = {}"
| "CX4h _ {} = {}"
apply %invisible (case_tac x)
apply (cut_tac X=b in X4_pow)
apply auto
done
(*<*)
termination by %invisible lexicographic_order
lemma PX4d_linear:
shows "Linear_order (PX4d d)"
by (cases d) (simp_all add: linord_of_list_Linear_order)
lemma PX4d_range:
shows "Field (PX4d d) \<subseteq> {x. X4d x = d}"
by (cases d) simp_all
lemma CX4h_range:
shows "CX4h h X \<subseteq> {x \<in> X. H = h}"
by (cases "(h, X)" rule: CX4h.cases) (auto simp: spec[OF H1.nchotomy, of h])
lemma CX4h_singular:
shows "inj_on X4d (CX4h h X)"
by (cases "(h, X)" rule: CX4h.cases) auto
(*>*)
text\<open>\<close>
interpretation StableNoDecomp: Contracts X4d X4h PX4d CX4h
using %invisible PX4d_linear PX4d_range CX4h_range CX4h_singular by unfold_locales blast+
text\<open>
There are two stable matches in this model.
\<close>
(*<*)
lemma Xd1_Xd2_stable:
shows "StableNoDecomp.stable {Xd1, Xd2}"
proof(rule StableNoDecomp.stable_onI)
show "StableNoDecomp.individually_rational {Xd1, Xd2}"
by (simp add: StableNoDecomp.individually_rational_on_def StableNoDecomp.CD_on_def
StableNoDecomp.CH_def insert_commute D2_UNION cong add: SUP_cong_simp)
show "StableNoDecomp.stable_no_blocking {Xd1, Xd2}"
apply (rule StableNoDecomp.stable_no_blocking_onI)
apply (rule_tac x="(H, X'')" in CX4h.cases)
apply (simp_all add: insert_commute)
done
qed
lemma Xd1'_Xd2'_stable:
shows "StableNoDecomp.stable {Xd1', Xd2'}"
proof(rule StableNoDecomp.stable_onI)
show "StableNoDecomp.individually_rational {Xd1', Xd2'}"
by (simp add: StableNoDecomp.individually_rational_on_def StableNoDecomp.CD_on_def
StableNoDecomp.CH_def insert_commute D2_UNION cong add: SUP_cong_simp)
show "StableNoDecomp.stable_no_blocking {Xd1', Xd2'}"
apply (rule StableNoDecomp.stable_no_blocking_onI)
apply (rule_tac x="(H, X'')" in CX4h.cases)
apply (simp_all add: insert_commute)
done
qed
(*>*)
text\<open>\<close>
lemma stable:
shows "StableNoDecomp.stable X \<longleftrightarrow> X = {Xd1, Xd2} \<or> X = {Xd1', Xd2'}"
(*<*)
(is "?lhs = ?rhs")
proof(rule iffI)
assume ?lhs then show ?rhs
using X4_pow[where X=X]
unfolding StableNoDecomp.stable_on_def StableNoDecomp.individually_rational_on_def
StableNoDecomp.stable_no_blocking_on_def StableNoDecomp.blocking_on_def
StableNoDecomp.CD_on_def StableNoDecomp.CH_def
by simp (elim disjE, simp_all add: D2_UNION X4_ALL insert_commute StableNoDecomp.maxR_def cong add: SUP_cong_simp)
next
assume ?rhs then show ?lhs
using Xd1_Xd2_stable Xd1'_Xd2'_stable by blast
qed
(*>*)
text\<open>
However, neither arises from a pair \<open>XD, XH\<close> that satisfies
@{const "StableNoDecomp.stable_pair"}:
\<close>
+
+
+
lemma StableNoDecomp_XD_XH:
shows "StableNoDecomp.stable_pair (XD, XH) \<longleftrightarrow> (XD = {} \<and> XH = {Xd1, Xd1', Xd2, Xd2'})"
(*<*)
(is "?lhs = ?rhs")
proof(rule iffI)
note image_cong_simp [cong del] note INF_cong_simp [cong] note SUP_cong_simp [cong]
assume ?lhs then show ?rhs (* Expand the Cartesian product and check. *)
using X4_pow [of XD] X4_pow [of XH]
apply simp
apply (erule StableNoDecomp.stable_pair_onE)
- apply (elim disjE)
+ apply (elim disjE)
apply (simp_all add: StableNoDecomp.CD_on_def StableNoDecomp.CH_def)
unfolding X4_UNIV [simplified]
apply (auto simp: D2_ALL D2_UNION X4_ALL insert_commute StableNoDecomp.maxR_def linord_of_list_linord_of_listP)
done
next
assume ?rhs then show ?lhs
unfolding StableNoDecomp.stable_pair_on_def using X4.exhaust by (auto simp: StableNoDecomp.CH_def)
qed
(*>*)
text\<open>\<close>
proposition
assumes "StableNoDecomp.stable_pair (XD, XH)"
shows "\<not>StableNoDecomp.stable (XD \<inter> XH)"
using %invisible assms
apply (subst (asm) StableNoDecomp_XD_XH)
apply (simp add: StableNoDecomp.stable_on_def StableNoDecomp.stable_no_blocking_on_def StableNoDecomp.blocking_on_def StableNoDecomp.individually_rational_on_empty)
apply (auto simp: StableNoDecomp.mem_CD_on_Cd MaxR_def exI[where x=D1] exI[where x=H] exI[where x="{Xd1}"])
done
text\<open>
Moreover the converse of Theorem~1 does not hold either: the single
decomposition that satisfies @{const "StableNoDecomp.stable_pair"} (@{thm
[source] "StableNoDecomp_XD_XH"}) does not yield a stable match:
\<close>
proposition
assumes "StableNoDecomp.stable X"
shows "\<not>(\<exists>XD XH. StableNoDecomp.stable_pair (XD, XH) \<and> X = XD \<inter> XH)"
using %invisible assms StableNoDecomp_XD_XH stable by fastforce
text\<open>
So there is no hope for \citet[Theorem~1]{HatfieldMilgrom:2005} as it
stands. Note that the counterexample satisfies the @{const "substitutes"}
condition (see \S\ref{sec:cf-substitutes}):
\<close>
lemma
shows "substitutes (CX4h H)"
proof %invisible (rule substitutes_onI)
fix A a b assume "b \<notin> CX4h H (insert b A)"
then show "b \<notin> CX4h H (insert a (insert b A))"
apply (case_tac [!] a)
apply (case_tac [!] b)
apply ( (rule CX4h.cases[of "(H, A)"], auto simp: insert_commute)[1] )+
done
qed
text\<open>
Therefore while @{const "substitutes"} supports the monotonicity argument
that underpins their deferred-acceptance algorithm (see
\S\ref{sec:contracts-algorithmics}), it is not enough to rescue
Theorem~1. One way forward is to constrain the hospitals'
choice functions, which we discuss in the next section.
\<close>
subsubsection\<open> Theorem 1 holds with @{emph \<open>independence of rejected contracts\<close>} \label{sec:contracts-irc} \<close>
text\<open>
\citet{AygunSonmez:2012-WP2} propose to rectify this issue by
requiring hospitals' choices to satisfy @{const "irc"}
(\S\ref{sec:cf-irc}). Reassuringly their counterexample fails to
satisfy it:
\<close>
lemma
shows "\<not>irc (CX4h H)"
by %invisible (fastforce simp: insert_commute dest: irc_onD[where a="Xd2" and B="{Xd1, Xd1'}"])
text\<open>
We adopt this hypothesis by extending the @{const "Contracts"} locale:
\<close>
locale ContractsWithIRC = Contracts +
assumes Ch_irc: "\<forall>h. irc (Ch h)"
begin
text\<open>
This property requires that \<open>Ch\<close> behave, for example, as
follows:
\<close>
lemma Ch_domain:
shows "Ch h (A \<inter> {x. Xh x = h}) = Ch h A"
using %invisible irc_on_discard[OF spec[OF Ch_irc, of h], where B="A \<inter> {x. Xh x = h}" and C="A - {x. Xh x = h}"]
by (fastforce simp: Un_Diff_Int ac_simps dest: Ch_range')
lemma %invisible CH_domain:
shows "CH A \<inter> {x. Xh x = h} = Ch h (A \<inter> {x. Xh x = h})"
unfolding CH_def using Ch_range by (auto simp: Ch_domain)
lemma %invisible stable_pair_on_Ch:
assumes "stable_pair_on ds XD_XH"
shows "Ch h (snd XD_XH) = match XD_XH \<inter> {x. Xh x = h}"
using stable_pair_on_CH[OF assms] CH_domain Ch_domain by simp
lemmas %invisible Ch_consistency = irc_on_consistency_on[OF spec[OF Ch_irc], simplified, of h] for h
lemmas Ch_irc_idem = consistency_on_f_idem[OF Ch_f_range Ch_consistency, simplified]
lemma CH_irc_idem:
shows "CH (CH A) = CH A"
unfolding %invisible CH_def by (metis CH_def CH_domain Ch_domain Ch_irc_idem)
lemma Ch_CH_irc_idem:
shows "Ch h (CH A) = Ch h A"
using %invisible CH_domain CH_irc_idem Ch_domain by blast
text\<open>
This suffices to show the left-to-right direction of Theorem~1.
\<close>
lemma stable_pair_on_individually_rational:
assumes "stable_pair_on ds XD_XH"
shows "individually_rational_on ds (match XD_XH)"
by %invisible (metis CD_on_idem CH_irc_idem stable_pair_on_CD_on stable_pair_on_CD_on_CH assms individually_rational_onI)
lemma stable_pair_on_stable_no_blocking_on:
assumes "stable_pair_on ds XD_XH"
shows "stable_no_blocking_on ds (match XD_XH)"
proof(rule stable_no_blocking_onI)
fix h X''
assume C: "X'' = Ch h (match XD_XH \<union> X'')"
assume NE: "X'' \<noteq> Ch h (match XD_XH)"
assume CD: "X'' \<subseteq> CD_on ds (match XD_XH \<union> X'')"
have "X'' \<subseteq> snd XD_XH"
proof -
from CD have "X'' \<subseteq> CD_on ds (CD_on ds (fst XD_XH) \<union> X'')" by (simp only: stable_pair_on_CD_on[OF assms])
then have "X'' \<subseteq> CD_on ds (fst XD_XH \<union> X'')"
using CD_on_path_independent unfolding path_independent_def by (simp add: Un_commute)
moreover have "fst XD_XH \<inter> CD_on ds (fst XD_XH \<union> X'') \<subseteq> CD_on ds (fst XD_XH)"
using CD_on_Chernoff unfolding Chernoff_on_def by (simp add: inf_commute)
ultimately show ?thesis using assms unfolding stable_pair_on_def split_def by blast
qed
then have "Ch h (snd XD_XH) = Ch h (Ch h (snd XD_XH) \<union> X'')"
by (force intro!: consistencyD[OF Ch_consistency] dest: Ch_range')
moreover from NE have "X'' \<noteq> Ch h (snd XD_XH)"
using stable_pair_on_CH[OF assms] CH_domain[of _ h] Ch_domain[of h] by (metis Ch_irc_idem)
ultimately have "X'' \<noteq> Ch h (match XD_XH \<union> X'')"
using stable_pair_on_CH[OF assms] CH_domain[of _ h] Ch_domain[of h]
by (metis (no_types, lifting) inf.right_idem inf_sup_distrib2)
with C show False by blast
qed
theorem stable_pair_on_stable_on:
assumes "stable_pair_on ds XD_XH"
shows "stable_on ds (match XD_XH)"
using %invisible assms stable_pair_on_allocation stable_pair_on_individually_rational stable_pair_on_stable_no_blocking_on
by (blast intro: stable_onI)
end
subsubsection\<open> The converse of Theorem~1 \label{sec:contracts-t1-converse} \<close>
text (in Contracts) \<open>
The forward direction of Theorem~1 gives us a way of finding stable
matches by computing fixed points of a function closely related to
@{const "stable_pair"} (see \S\ref{sec:contracts-algorithmics}). The
converse says that every stable match can be decomposed in this way,
which implies that the stable matches form a lattice (see also
\S\ref{sec:contracts-algorithmics}).
The following proofs assume that the hospitals' choice functions
satisfy @{const "substitutes"} and @{const "irc"}.
\<close>
context ContractsWithIRC
begin
context
fixes ds :: "'b set"
fixes X :: "'a set"
begin
text\<open>
Following \citet[Proof of Theorem~1]{HatfieldMilgrom:2005}, we
partition the set of all contracts into @{term "[X, XD_smallest - X,
XH_largest - X]"} with careful definitions of the two sets @{term
"XD_smallest"} and @{term "XH_largest"}. Specifically @{term
"XH_largest"} contains all contracts ranked at least as good as those
in @{term "X"} by the doctors, considering unemployment and
unacceptable contracts. Similarly @{term "XD_smallest"} contains those
ranked at least as poorly.
\<close>
definition XH_largest :: "'a set" where
"XH_largest =
{y. Xd y \<in> ds
\<and> y \<in> Field (Pd (Xd y))
\<and> (\<forall>x \<in> dX X (Xd y). (x, y) \<in> Pd (Xd y))}"
definition XD_smallest :: "'a set" where
"XD_smallest = - (XH_largest - X)"
context
assumes "stable_on ds X"
begin
lemma Ch_XH_largest_Field:
assumes "x \<in> Ch h XH_largest"
shows "x \<in> Field (Pd (Xd x))"
using assms unfolding XH_largest_def by (blast dest: Ch_range')
lemma Ch_XH_largest_Xd:
assumes "x \<in> Ch h XH_largest"
shows "Xd x \<in> ds"
using assms unfolding XH_largest_def by (blast dest: Ch_range')
lemma X_subseteq_XH_largest:
shows "X \<subseteq> XH_largest"
proof(rule subsetI)
fix x assume "x \<in> X"
then obtain d where "d \<in> ds" "x \<in> Cd d X" using stable_on_CD_on[OF \<open>stable_on ds X\<close>] unfolding CD_on_def by blast
with \<open>stable_on ds X\<close> show "x \<in> XH_largest"
using Pd_linear' Pd_range' Cd_range subset_Image1_Image1_iff[of "Pd d"] stable_on_allocation[of ds X]
unfolding XH_largest_def linear_order_on_def partial_order_on_def stable_on_def inj_on_def dX_def
by simp blast
qed
lemma X_subseteq_XD_smallest:
shows "X \<subseteq> XD_smallest"
unfolding XD_smallest_def by blast
lemma X_XD_smallest_XH_largest:
shows "X = XD_smallest \<inter> XH_largest"
using X_subseteq_XH_largest unfolding XD_smallest_def by blast
text\<open>
The goal of the next few lemmas is to show the constituents of @{term
"stable_pair_on ds (XD_smallest, XH_largest)"}.
Intuitively, if a doctor has a contract @{term "x"} in @{term "X"},
then all of their contracts in @{const "XH_largest"} are at least as
desirable as @{term "x"}, and so the @{const
"stable_no_blocking"} hypothesis guarantees the hospitals choose
@{term "x"} from @{const "XH_largest"}, and similarly the doctors choose
@{term "x"} from @{const "XD_smallest"}.
\<close>
lemma XH_largestCdXXH_largest:
assumes "x \<in> Ch h XH_largest"
shows "x \<in> Cd (Xd x) (X \<union> Ch h XH_largest)"
proof -
from assms have "(y, x) \<in> Pd (Xd x)" if "Xd y = Xd x" and "y \<in> X" for y
using that by (fastforce simp: XH_largest_def dX_def dest: Ch_range')
with Ch_XH_largest_Field[OF assms] Pd_linear Pd_range show ?thesis
using assms Ch_XH_largest_Field[OF assms]
by (clarsimp simp: Cd_greatest greatest_def)
(metis Ch_singular Pd_range' inj_onD subset_refl underS_incl_iff)
qed
lemma CH_XH_largest:
shows "CH XH_largest = X"
proof -
have "Ch h XH_largest \<subseteq> CD_on ds (X \<union> Ch h XH_largest)" for h
using XH_largestCdXXH_largest Ch_XH_largest_Xd Ch_XH_largest_Field unfolding CD_on_def by blast
from \<open>stable_on ds X\<close> have "Ch h XH_largest = Ch h X" for h
using \<open>Ch h XH_largest \<subseteq> CD_on ds (X \<union> Ch h XH_largest)\<close> X_subseteq_XH_largest
by - (erule stable_on_blocking_onD[where h=h and X''="Ch h XH_largest"];
force intro!: consistencyD[OF Ch_consistency] dest: Ch_range')
with stable_on_CH[OF \<open>stable_on ds X\<close>] show ?thesis unfolding CH_def by simp
qed
lemma Cd_XD_smallest:
assumes "d \<in> ds"
shows "Cd d (XD_smallest \<inter> Field (Pd d)) = Cd d (X \<inter> Field (Pd d))"
proof(cases "X \<inter> Field (Pd d) = {}")
case True
with Pd_range' Cd_range'[where X=X] stable_on_CD_on[OF \<open>stable_on ds X\<close>] mem_CD_on_Cd assms
have "- XH_largest \<inter> Field (Pd d) = {}"
unfolding XH_largest_def dX_def by auto blast
then show ?thesis
unfolding XD_smallest_def by (simp add: Int_Un_distrib2)
next
case False
with Pd_linear'[of d] \<open>stable_on ds X\<close> stable_on_CD_on stable_on_allocation assms
show ?thesis
unfolding XD_smallest_def order_on_defs total_on_def
by (auto 0 0 simp: Int_Un_distrib2 Cd_greatest greatest_def XH_largest_def dX_def)
(metis (mono_tags, lifting) IntI Pd_range' UnCI inj_onD)+
qed
lemma CD_on_XD_smallest:
shows "CD_on ds XD_smallest = X"
using stable_on_CD_on[OF \<open>stable_on ds X\<close>] unfolding CD_on_def2 by (simp add: Cd_XD_smallest)
theorem stable_on_stable_pair_on:
shows "stable_pair_on ds (XD_smallest, XH_largest)"
proof(rule stable_pair_onI, simp_all only: prod.sel)
from CH_XH_largest have "- RH XH_largest = - (XH_largest - X)" by blast
also from X_XD_smallest_XH_largest have "\<dots> = XD_smallest" unfolding XD_smallest_def by blast
finally show "XD_smallest = -RH XH_largest" by blast
next
from CD_on_XD_smallest have "-RD_on ds XD_smallest = -(XD_smallest - X)" by simp
also have "\<dots> = XH_largest" unfolding XD_smallest_def using X_subseteq_XH_largest by blast
finally show "XH_largest = -RD_on ds XD_smallest" by blast
qed
end
end
text\<open>
Our ultimate statement of Theorem~1 of \cite{HatfieldMilgrom:2005} \`a la
\citet{AygunSonmez:2012-WP2} goes as follows, bearing in mind that we
are working in the @{const "ContractsWithIRC"} locale:
\<close>
theorem T1:
shows "stable_on ds X \<longleftrightarrow> (\<exists>XD_XH. stable_pair_on ds XD_XH \<and> X = match XD_XH)"
using stable_pair_on_stable_on stable_on_stable_pair_on X_XD_smallest_XH_largest by fastforce
end
subsection\<open> Theorem~3: Algorithmics \label{sec:contracts-algorithmics} \<close>
text (in Contracts) \<open>
Having revived Theorem~1, we reformulate @{const "stable_pair"} as a
monotone (aka @{emph \<open>isotone\<close>}) function and exploit the lattice
structure of its fixed points, following \citet[{\S}II,
Theorem~3]{HatfieldMilgrom:2005}. This underpins all of their results
that we formulate here. \citet[\S2]{Fleiner:2002} provides an
intuitive gloss of these definitions.
\<close>
context Contracts
begin
definition F1 :: "'x cfun" where
"F1 X' = - RH X'"
definition F2 :: "'d set \<Rightarrow> 'x cfun" where
"F2 ds X' = - RD_on ds X'"
definition F :: "'d set \<Rightarrow> 'x set \<times> 'x set dual \<Rightarrow> 'x set \<times> 'x set dual" where
"F ds = (\<lambda>(XD, XH). (F1 (undual XH), dual (F2 ds (F1 (undual XH)))))"
text\<open>
We exploit Isabelle/HOL's ordering type classes (over the type
constructors @{typ "'a set"} and @{typ "'a \<times> 'b"}) to define
@{const "F"}. As @{const "F"} is @{const "antimono"} (where @{thm
"antimono_def"} for a lattice order \<open>\<le>\<close>) on its
second argument \<open>XH\<close>, we adopt the dual lattice order
using the type constructor @{typ "'a dual"}, where @{const "dual"} and
@{const "undual"} mediate the isomorphism on values, to satisfy
Isabelle/HOL's @{const "mono"} predicate. Note we work under the
@{const "substitutes"} hypothesis here.
Relating this function to @{const "stable_pair"} is syntactically
awkward but straightforward:
\<close>
lemma fix_F_stable_pair_on:
assumes "X = F ds X"
shows "stable_pair_on ds (map_prod id undual X)"
using %invisible assms
by (cases X) (simp add: F_def F1_def F2_def stable_pair_on_def dual_eq_iff)
lemma stable_pair_on_fix_F:
assumes "stable_pair_on ds X"
shows "map_prod id dual X = F ds (map_prod id dual X)"
using %invisible assms
unfolding F_def F1_def F2_def stable_pair_on_def split_def
by (metis fst_map_prod id_apply prod.collapse snd_map_prod undual_dual)
end
text (in Contracts) \<open>
The function @{const F} is monotonic under @{const substitutes}.
\<close>
locale ContractsWithSubstitutes = Contracts +
assumes Ch_substitutes: "\<forall>h. substitutes (Ch h)"
begin
(*<*)
lemma Rh_mono:
shows "mono (Rh h)"
using %invisible substitutes_on_Rf_mono_on[OF spec[OF Ch_substitutes]] mono_on_mono by (simp add: fun_eq_iff) blast
lemmas Ch_iia = Rh_mono[unfolded Rf_mono_iia]
lemmas Ch_Chernoff = Ch_iia[unfolded Chernoff_on_iia_on[symmetric]]
lemmas Ch_subsitutes_idem = iia_f_idem[OF Ch_f_range Ch_iia, simplified]
lemma RH_mono:
shows "mono RH"
unfolding %invisible CH_def by (rule monoI) (auto dest: monoD[OF Rh_mono])
lemmas CH_iia = RH_mono[unfolded Rf_mono_iia]
lemmas CH_Chernoff = CH_iia[unfolded Chernoff_on_iia_on[symmetric]]
lemmas CH_substitutes_idem = iia_f_idem[OF CH_f_range_on CH_iia, simplified]
(*>*)
text\<open>\<close>
lemma F1_antimono:
shows "antimono F1"
by %invisible (rule antimonoI) (auto simp: F1_def dest: Diff_mono[OF _ monoD[OF RH_mono]])
lemma F2_antimono:
shows "antimono (F2 ds)"
by %invisible (rule antimonoI) (auto simp: F2_def dest: Diff_mono[OF _ monoD[OF RD_on_mono]])
lemma F_mono:
shows "mono (F ds)"
unfolding %invisible F_def using antimonoD[OF F1_antimono] antimonoD[OF F2_antimono]
by (auto intro: monoI simp: less_eq_dual_def)
text\<open>
We define the extremal fixed points using Isabelle/HOL's least and
greatest fixed point operators:
\<close>
definition gfp_F :: "'b set \<Rightarrow> 'a set \<times> 'a set" where
"gfp_F ds = map_prod id undual (gfp (F ds))"
definition lfp_F :: "'b set \<Rightarrow> 'a set \<times> 'a set" where
"lfp_F ds = map_prod id undual (lfp (F ds))"
lemmas gfp_F_stable_pair_on = fix_F_stable_pair_on[OF gfp_unfold[OF F_mono], folded gfp_F_def]
lemmas lfp_F_stable_pair_on = fix_F_stable_pair_on[OF lfp_unfold[OF F_mono], folded lfp_F_def]
text\<open>
These last two lemmas show that the least and greatest fixed points do
satisfy @{const "stable_pair"}.
Using standard fixed-point properties, we can establish:
\<close>
lemma F2_o_F1_mono:
shows "mono (F2 ds \<circ> F1)"
by %invisible (metis F2_antimono F1_antimono antimono_def comp_apply monoI)
lemmas F2_F1_mono = F2_o_F1_mono[unfolded o_def]
lemma gfp_F_lfp_F:
shows "gfp_F ds = (F1 (lfp (F2 ds \<circ> F1)), lfp (F2 ds \<circ> F1))"
proof %invisible -
let ?F' = "dual \<circ> F2 ds \<circ> F1 \<circ> undual"
have "gfp (F ds) = (F1 (undual (gfp ?F')), gfp ?F')"
by (subst gfp_prod[OF F_mono]) (simp add: F_def o_def gfp_const)
also have "gfp ?F' = dual (lfp (F2 ds \<circ> F1))"
by (simp add: lfp_dual_gfp[OF F2_o_F1_mono, simplified o_assoc])
finally show ?thesis unfolding gfp_F_def by simp
qed
end
text\<open>
We need the hospitals' choice functions to satisfy both @{const substitutes} and @{const irc}
to relate these fixed points to stable matches.
\<close>
locale ContractsWithSubstitutesAndIRC =
ContractsWithSubstitutes + ContractsWithIRC
begin
lemmas gfp_F_stable_on = stable_pair_on_stable_on[OF gfp_F_stable_pair_on]
lemmas lfp_F_stable_on = stable_pair_on_stable_on[OF lfp_F_stable_pair_on]
end
text\<open>
\label{sec:contracts-codegen-gfp_F}
We demonstrate the effectiveness of our definitions by executing an
example due to \citet[p920]{HatfieldMilgrom:2005} using Isabelle/HOL's
code generator \citep{Haftmann-Nipkow:2010:code}. Note that, while
adequate for this toy instance, the representations used here are
hopelessly na{\"\i}ve: sets are represented by lists and operations
typically traverse the entire contract space. It is feasible, with
more effort, to derive efficient algorithms; see, for instance,
\citet{Bijlsma:1991,Bulwahn-et-al:2008:imp_HOL}.
\<close>
context ContractsWithSubstitutes
begin
lemma gfp_F_code[code]:
shows "gfp_F ds = map_prod id undual (while (\<lambda>A. F ds A \<noteq> A) (F ds) top)"
using %invisible gfp_F_def gfp_while_lattice[OF F_mono] by simp
lemma lfp_F_code[code]:
shows "lfp_F ds = map_prod id undual (while (\<lambda>A. F ds A \<noteq> A) (F ds) bot)"
using %invisible lfp_F_def lfp_while_lattice[OF F_mono] by simp
end
text\<open>
There are two hospitals and two doctors.
\<close>
datatype H2 = H1 | H2
text\<open>
The contract space is simply the Cartesian product @{typ "D2 \<times>
H2"}.
\<close>
type_synonym X_D2_H2 = "D2 \<times> H2"
text\<open>
Doctor @{const "D1"} prefers \<open>H1 \<succ> H2\<close>, doctor @{const
"D2"} the same \<open>H1 \<succ> H2\<close> (but over different
contracts).
\<close>
primrec P_D2_H2_d :: "D2 \<Rightarrow> X_D2_H2 rel" where
"P_D2_H2_d D1 = linord_of_list [(D1, H1), (D1, H2)]"
| "P_D2_H2_d D2 = linord_of_list [(D2, H1), (D2, H2)]"
text\<open>
Hospital @{const "H1"} prefers \<open>{D1} \<succ> {D2} \<succ>
\<emptyset>\<close>, and hospital @{const "H2"} \<open>{D1, D2}
\<succ> {D1} \<succ> {D2} \<succ> \<emptyset>\<close>. We interpret
these constraints as follows:
\<close>
definition P_D2_H2_H1 :: "X_D2_H2 cfun" where
"P_D2_H2_H1 A = (if (D1, H1) \<in> A then {(D1, H1)} else if (D2, H1) \<in> A then {(D2, H1)} else {})"
definition P_D2_H2_H2 :: "X_D2_H2 cfun" where
"P_D2_H2_H2 A =
(if {(D1, H2), (D2, H2)} \<subseteq> A then {(D1, H2), (D2, H2)} else
if (D1, H2) \<in> A then {(D1, H2)} else if (D2, H2) \<in> A then {(D2, H2)} else {})"
primrec P_D2_H2_h :: "H2 \<Rightarrow> X_D2_H2 cfun" where
"P_D2_H2_h H1 = P_D2_H2_H1"
| "P_D2_H2_h H2 = P_D2_H2_H2"
(*<*)
lemma H2_UNIV:
shows "UNIV = set [H1, H2]"
using H2.exhaust by auto
instantiation H2 :: enum
begin
definition "enum_class.enum = [H1, H2]"
definition "enum_class.enum_all P = (P H1 \<and> P H2)"
definition "enum_class.enum_ex P = (P H1 \<or> P H2)"
instance
by standard (simp_all add: enum_H2_def enum_all_H2_def enum_ex_H2_def H2_UNIV)
end
lemma H2_ALL [simp]:
shows "(\<forall>h. P h) = (\<forall>h\<in>{H1, H2}. P h)"
using H2_UNIV by auto
lemma H2_UNION:
shows "(\<Union>h. P h) = (\<Union>h\<in>{H1, H2}. P h)"
using H2_UNIV by auto
lemma P_D2_H2_d_linear:
shows "Linear_order (P_D2_H2_d d)"
by (cases d) (simp_all add: linord_of_list_Linear_order)
lemma P_D2_H2_d_range:
shows "Field (P_D2_H2_d d) \<subseteq> {x. fst x = d}"
by (cases d) simp_all
lemma P_D2_H2_h_substitutes:
shows "substitutes (P_D2_H2_h h)"
by %invisible (cases h) (auto intro!: substitutes_onI simp: P_D2_H2_H1_def P_D2_H2_H2_def split: if_splits)
(*>*)
text\<open>
Isabelle's code generator requires us to hoist the relevant
definitions from the locale to the top-level (see the \verb!codegen!
documentation, \S7.3).
\<close>
global_interpretation P920_example:
ContractsWithSubstitutes fst snd P_D2_H2_d P_D2_H2_h
defines P920_example_gfp_F = P920_example.gfp_F
and P920_example_lfp_F = P920_example.lfp_F
and P920_example_F = P920_example.F
and P920_example_F1 = P920_example.F1
and P920_example_F2 = P920_example.F2
and P920_example_maxR = P920_example.maxR
and P920_example_MaxR_f = P920_example.MaxR_f
and P920_example_Cd = P920_example.Cd
and P920_example_CD_on = P920_example.CD_on
and P920_example_CH = P920_example.CH
using %invisible P_D2_H2_d_linear P_D2_H2_h_substitutes
by %invisible unfold_locales (simp_all, simp_all add: D2_ALL P_D2_H2_H1_def P_D2_H2_H2_def)
(*<*)
(*
Codegen hackery: avoid the CoSet constructor as some operations do not
handle it.
*)
declare UNIV_coset[code del]
declare UNIV_enum[code]
declare compl_set[code del] compl_coset[code del]
declare Compl_eq_Diff_UNIV[code]
(*
code_thms P920_example_gfp_F
export_code P920_example_gfp_F in SML module_name F file "F.sml"
value "P920_example_gfp_F UNIV"
*)
lemma P920_example_gfp_F_value:
shows "P920_example_gfp_F UNIV = ({(D1, H1), (D1, H2), (D2, H2)}, {(D1, H1), (D2, H1), (D2, H2)})"
by eval
lemma P920_example_gfp_F_match_value:
shows "P920_example.match (P920_example_gfp_F UNIV) = {(D1, H1), (D2, H2)}"
unfolding P920_example_gfp_F_value by simp
lemma P920_example_lfp_F_value:
shows "P920_example_lfp_F UNIV = ({(D1, H1), (D1, H2), (D2, H2)}, {(D1, H1), (D2, H1), (D2, H2)})"
by eval
(*>*)
text\<open>
We can now evaluate the @{const "gfp"} of @{const "P920_example.F"}
(i.e., \<open>F\<close> specialized to the above constants) using
Isabelle's \verb!value! antiquotation or \verb!eval! method. This
yields the \<open>(XD, XH)\<close> pair:
\begin{center}
@{thm (rhs) "P920_example_gfp_F_value"}
\end{center}
The stable match is therefore @{thm (rhs) "P920_example_gfp_F_match_value"}.
The @{const "lfp"} of @{const "P920_example.F"} is identical to the
@{const "gfp"}:
\begin{center}
@{thm (rhs) "P920_example_lfp_F_value"}
\end{center}
This implies that there is only one stable match in this scenario.
\<close>
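(*
  A hedged usage sketch (not part of the checked text): as in the commented
  block above, the fixed points can also be inspected interactively with

    value "P920_example_gfp_F UNIV"
    value "P920_example_lfp_F UNIV"

  both of which should evaluate to the pair stated in P920_example_gfp_F_value
  and P920_example_lfp_F_value.
*)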
subsection\<open> Theorem~4: Optimality \label{sec:contracts-optimality} \<close>
text (in ContractsWithSubstitutes) \<open>
\citet[Theorem~4]{HatfieldMilgrom:2005} assert that the greatest fixed
point @{const "gfp_F"} of @{const "F"} yields the stable match most
preferred by the doctors in the following sense:
\<close>
context Contracts
begin
definition doctor_optimal_match :: "'d set \<Rightarrow> 'x set \<Rightarrow> bool" where
"doctor_optimal_match ds Y
\<longleftrightarrow> (stable_on ds Y \<and> (\<forall>X. \<forall>x\<in>X. stable_on ds X \<longrightarrow> (\<exists>y \<in> Y. (x, y) \<in> Pd (Xd x))))"
(*<*)
lemmas doctor_optimal_matchI = iffD2[OF doctor_optimal_match_def, unfolded conj_imp_eq_imp_imp, rule_format]
lemmas doctor_optimal_match_stable_on = iffD1[OF doctor_optimal_match_def, THEN conjunct1]
lemmas doctor_optimal_match_optimal = iffD1[OF doctor_optimal_match_def, THEN conjunct2, rule_format]
lemma doctor_optimal_match_unique:
assumes "doctor_optimal_match ds X"
assumes "doctor_optimal_match ds Y"
shows "X = Y"
proof(rule iffD2[OF set_eq_iff, rule_format])
fix x
from Pd_linear'[where d="Xd x"] Pd_Xd[where d="Xd x"]
stable_on_allocation[OF doctor_optimal_match_stable_on[OF assms(1)]]
stable_on_allocation[OF doctor_optimal_match_stable_on[OF assms(2)]]
assms
show "x \<in> X \<longleftrightarrow> x \<in> Y"
unfolding doctor_optimal_match_def order_on_defs
by - (rule iffI; metis antisymD inj_on_eq_iff)
qed
(*>*)
end
text (in ContractsWithSubstitutes) \<open>
In a similar sense, @{const "lfp_F"} is the doctor-pessimal match.
We state a basic doctor-optimality result in terms of @{const
"stable_pair"} in the @{const "ContractsWithSubstitutes"} locale for
generality; we can establish @{const "doctor_optimal_match"} only
under additional constraints on hospital choice functions (see
\S\ref{sec:contracts-irc}).
\<close>
context ContractsWithSubstitutes
begin
context
fixes XD_XH :: "'a set \<times> 'a set"
fixes ds :: "'b set"
assumes "stable_pair_on ds XD_XH"
begin
lemma gfp_F_upperbound:
shows "(fst XD_XH, dual (snd XD_XH)) \<le> gfp (F ds)"
proof %invisible -
have "(fst XD_XH, dual (snd XD_XH)) = F ds (fst XD_XH, dual (snd XD_XH))"
using stable_pair_on_fix_F[OF \<open>stable_pair_on ds XD_XH\<close>] by (metis id_apply map_prod_simp prod.collapse)
then show ?thesis by (fastforce intro: gfp_upperbound)
qed
lemma XD_XH_gfp_F:
shows "fst XD_XH \<subseteq> fst (gfp_F ds)"
and "snd (gfp_F ds) \<subseteq> snd XD_XH"
using %invisible gfp_F_upperbound
unfolding gfp_F_def by (simp_all add: less_eq_dual_def less_eq_prod_def)
lemma lfp_F_upperbound:
shows "lfp (F ds) \<le> (fst XD_XH, dual (snd XD_XH))"
proof %invisible -
have "(fst XD_XH, dual (snd XD_XH)) = F ds (fst XD_XH, dual (snd XD_XH))"
using stable_pair_on_fix_F[OF \<open>stable_pair_on ds XD_XH\<close>] by (metis id_apply map_prod_simp prod.collapse)
then show ?thesis by (fastforce intro: lfp_lowerbound)
qed
lemma XD_XH_lfp_F:
shows "fst (lfp_F ds) \<subseteq> fst XD_XH"
and "snd XD_XH \<subseteq> snd (lfp_F ds)"
using %invisible lfp_F_upperbound
unfolding lfp_F_def by (simp_all add: less_eq_dual_def less_eq_prod_def)
text\<open>
We appeal to the doctors' linear preferences to show the optimality
(pessimality) of @{const "gfp_F"} (@{const "lfp_F"}) for doctors.
\<close>
theorem gfp_f_doctor_optimal:
assumes "x \<in> match XD_XH"
shows "\<exists>y \<in> match (gfp_F ds). (x, y) \<in> Pd (Xd x)"
using %invisible assms gfp_F_stable_pair_on[where ds=ds] \<open>stable_pair_on ds XD_XH\<close>
stable_pair_on_CD_on stable_pair_on_Xd Cd_Xd mem_CD_on_Cd
XD_XH_gfp_F(1) Cd_mono[where d="Xd x" and x=x and X="fst XD_XH" and Y="fst (gfp_F ds)"]
by (metis sup.absorb_iff2)
theorem lfp_f_doctor_pessimal:
assumes "x \<in> match (lfp_F ds)"
shows "\<exists>y \<in> match XD_XH. (x, y) \<in> Pd (Xd x)"
using %invisible assms lfp_F_stable_pair_on[where ds=ds] \<open>stable_pair_on ds XD_XH\<close>
stable_pair_on_CD_on stable_pair_on_Xd Cd_Xd mem_CD_on_Cd
XD_XH_lfp_F(1) Cd_mono[where d="Xd x" and x=x and X="fst (lfp_F ds)" and Y="fst XD_XH"]
by (metis sup.absorb_iff2)
end
end
theorem (in ContractsWithSubstitutesAndIRC) gfp_F_doctor_optimal_match:
shows "doctor_optimal_match ds (match (gfp_F ds))"
by %invisible (rule doctor_optimal_matchI[OF gfp_F_stable_on]) (auto simp: T1 elim: gfp_f_doctor_optimal)
text (in ContractsWithSubstitutesAndIRC) \<open>
Conversely @{const "lfp_F"} is most preferred by the hospitals in a
revealed-preference sense, and @{const "gfp_F"} least preferred. These
results depend on @{thm [source] Ch_domain} and hence the @{const
"irc"} hypothesis on hospital choice functions.
\<close>
context ContractsWithSubstitutesAndIRC
begin
theorem lfp_f_hospital_optimal:
assumes "stable_pair_on ds XD_XH"
assumes "x \<in> Ch h (match (lfp_F ds))"
shows "x \<in> Ch h (match (lfp_F ds) \<union> match XD_XH)"
proof %invisible -
from \<open>stable_pair_on ds XD_XH\<close> have "match (lfp_F ds) \<union> match XD_XH \<subseteq> snd (lfp_F ds)"
by (simp add: XD_XH_lfp_F(2) le_infI2)
with \<open>x \<in> Ch h (match (lfp_F ds))\<close> lfp_F_stable_pair_on stable_pair_on_Ch Ch_range show ?thesis
by - (rule iia_onD[OF Ch_iia[where h=h], where B="snd (lfp_F ds)", simplified]; blast)
qed
theorem gfp_f_hospital_pessimal:
assumes "stable_pair_on ds XD_XH"
assumes "x \<in> Ch h (match XD_XH)"
shows "x \<in> Ch h (match (gfp_F ds) \<union> match XD_XH)"
proof %invisible -
from \<open>stable_pair_on ds XD_XH\<close> have "match (gfp_F ds) \<union> match XD_XH \<subseteq> snd XD_XH"
by (simp add: XD_XH_gfp_F(2) le_infI2)
with assms lfp_F_stable_pair_on stable_pair_on_Ch Ch_range show ?thesis
by - (rule iia_onD[OF Ch_iia[where h=h], where B="snd XD_XH", simplified]; blast+)
qed
end
text\<open>
The general lattice-theoretic results of e.g. \citet{Fleiner:2002}
depend on the full Tarski-Knaster fixed point theorem, which is
difficult to state in the present type class-based setting. (The
theorem itself is available in the Isabelle/HOL distribution but
requires working with less convenient machinery.)
\<close>
subsection\<open> Theorem~5 does not hold \citep{HatfieldKojima:2008} \<close>
text (in Contracts) \<open>
\citet[Theorem~5]{HatfieldMilgrom:2005} claim that:
\begin{quote}
Suppose \<open>H\<close> contains at least two hospitals, which we
denote by \<open>h\<close> and \<open>h'\<close>. Further suppose that
@{term "Rh h"} is not isotone, that is, contracts are not @{const
"substitutes"} for \<open>h\<close>. Then there exist preference
orderings for the doctors in set \<open>D\<close>, a preference
ordering for a hospital \<open>h'\<close> with a single job opening
such that, regardless of the preferences of the other hospitals, no
stable set of contracts exists.
\end{quote}
\citet[Observation~1]{HatfieldKojima:2008} show this is not true:
there can be stable matches even if hospital choice functions violate
@{const "substitutes"}. This motivates looking for conditions weaker
than @{const "substitutes"} that still guarantee stable matches, a
project taken up by \citet{HatfieldKojima:2010}; see
\S\ref{sec:cop}. We omit their counterexample to this incorrect claim.
\<close>
subsection\<open> Theorem~6: ``Vacancy chain'' dynamics \label{sec:contracts-vacancy-chain} \<close>
text (in ContractsWithSubstitutesAndIRC) \<open>
\citet[II(C), p923]{HatfieldMilgrom:2005} propose a model for updating
a stable match @{term "X"} when a doctor @{term "d'"}
retires. Intuitively the contracts mentioning @{term "d'"} are
discarded and a modified algorithm is run from the @{const "XH_largest"}
and @{const "XD_smallest"} sets determined from @{term "X"}. The
result is another stable match where the remaining doctors @{term "ds
- {d'}"} are (weakly) better off and the hospitals (weakly) worse off
than they were in the initial state. The proofs are essentially the
same as for optimality (\S\ref{sec:contracts-optimality}).
\<close>
context ContractsWithSubstitutesAndIRC
begin
context
fixes X :: "'a set"
fixes d' :: "'b"
fixes ds :: "'b set"
begin
text\<open>
\citeauthor{HatfieldMilgrom:2005} do not motivate why the process uses
this functional and not @{const "F"}.
\<close>
definition F' :: "'a set \<times> 'a set dual \<Rightarrow> 'a set \<times> 'a set dual" where
"F' = (\<lambda>(XD, XH). (- RH (undual XH), dual (- RD_on (ds-{d'}) XD)))"
lemma F'_apply:
"F' (XD, XH) = (- RH (undual XH), dual (- RD_on (ds - {d'}) XD))"
by (simp add: F'_def)
lemma %invisible F1'_antimono:
shows "antimono (\<lambda>XH. - RH XH)"
by %invisible (rule antimonoI) (auto simp: F1_def dest: Diff_mono[OF _ monoD[OF RH_mono]])
lemma %invisible F2'_antimono:
shows "antimono (\<lambda>XD. - RD_on (ds-{d'}) XD)"
by %invisible (rule antimonoI) (auto simp: F2_def dest: Diff_mono[OF _ monoD[OF RD_on_mono]])
lemma F'_mono:
shows "mono F'"
unfolding %invisible F'_def using antimonoD[OF F1'_antimono] antimonoD[OF F2'_antimono]
by (auto intro: monoI simp: less_eq_dual_def)
lemma fix_F'_stable_pair_on:
"stable_pair_on (ds - {d'}) (map_prod id undual A)"
if "A = F' A"
proof %invisible -
obtain x y where "A = (x, y)"
by (cases A)
with that have "F' (x, y) = (x, y)"
by simp
then have "- Rf CH (undual y) = x" and
"dual (- Rf (CD_on (ds - {d'})) x) = y"
by (simp_all only: F'_apply prod_eq_iff fst_conv snd_conv)
with \<open>A = (x, y)\<close> show ?thesis
by (simp add: stable_pair_on_def dual_eq_iff)
qed
text\<open>
We model their update process using the @{const "while"} combinator,
as we cannot connect it to the extremal fixed points as we did in
\S\ref{sec:contracts-algorithmics} because we begin computing from the
stable match @{term "X"}.
\<close>
definition F'_iter :: "'a set \<times> 'a set dual" where
"F'_iter = (while (\<lambda>A. F' A \<noteq> A) F' (XD_smallest ds X, dual (XH_largest ds X)))"
abbreviation F'_iter_match :: "'a set" where
"F'_iter_match \<equiv> match (map_prod id undual F'_iter)"
context
assumes "stable_on ds X"
begin
lemma F_start:
shows "F ds (XD_smallest ds X, dual (XH_largest ds X)) = (XD_smallest ds X, dual (XH_largest ds X))"
using %invisible CH_XH_largest[OF \<open>stable_on ds X\<close>] CD_on_XD_smallest[OF \<open>stable_on ds X\<close>] X_subseteq_XH_largest[OF \<open>stable_on ds X\<close>]
unfolding F_def F1_def F2_def XD_smallest_def by (auto simp add: dual_eq_iff)
lemma F'_start:
shows "(XD_smallest ds X, dual (XH_largest ds X)) \<le> F' (XD_smallest ds X, dual (XH_largest ds X))"
using %invisible F_start unfolding F_def F1_def F2_def F'_def
unfolding CD_on_def XD_smallest_def by (auto simp add: dual_eq_iff dual_less_eq_iff)
lemma
shows F'_iter_stable_pair_on: "stable_pair_on (ds-{d'}) (map_prod id undual F'_iter)" (is "?thesis1")
and F'_start_le_F'_iter: "(XD_smallest ds X, dual (XH_largest ds X)) \<le> F'_iter" (is "?thesis2")
proof %invisible -
obtain P where XXX: "while_option (\<lambda>A. F' A \<noteq> A) F' ((XD_smallest ds X), dual (XH_largest ds X)) = Some P"
using while_option_finite_increasing_Some[OF F'_mono _ F'_start, simplified] by blast
with while_option_stop2[OF XXX] fix_F'_stable_pair_on[where A=P]
show ?thesis1 and ?thesis2
using funpow_mono2[OF F'_mono _ order.refl F'_start, where i=0]
unfolding F'_iter_def while_def by auto
qed
lemma F'_iter_match_stable_on:
shows "stable_on (ds-{d'}) F'_iter_match"
by %invisible (rule stable_pair_on_stable_on) (metis F'_iter_stable_pair_on)
theorem F'_iter_match_doctors_weakly_better_off:
assumes "x \<in> Cd d X"
assumes "d \<noteq> d'"
shows "\<exists>y \<in> Cd d F'_iter_match. (x, y) \<in> Pd d"
proof %invisible -
from \<open>stable_on ds X\<close> assms
have "d \<in> ds" by (blast dest: Cd_Xd Cd_range' stable_on_Xd)
with assms \<open>stable_on ds X\<close> stable_on_stable_pair_on[OF \<open>stable_on ds X\<close>]
have "\<exists>y\<in>Cd d (XD_smallest ds X \<union> fst F'_iter). (x, y) \<in> Pd d"
by - (rule Cd_mono; fastforce dest: X_XD_smallest_XH_largest stable_pair_on_Cd_match)
with F'_iter_stable_pair_on F'_start_le_F'_iter
have "\<exists>y\<in>Cd d (fst F'_iter). (x, y) \<in> Pd d"
by (metis Pair_le Un_absorb1 prod.collapse)
with \<open>d \<noteq> d'\<close> \<open>d \<in> ds\<close>
show ?thesis
using stable_pair_on_Cd[OF F'_iter_stable_pair_on, symmetric, of d]
by (subst Cd_domain[symmetric]) (simp add: Cd_idem)
qed
theorem F'_iter_match_hospitals_weakly_worse_off:
assumes "x \<in> Ch h X"
shows "x \<in> Ch h (F'_iter_match \<union> X)"
proof %invisible -
from F'_iter_stable_pair_on F'_start_le_F'_iter stable_on_stable_pair_on[OF \<open>stable_on ds X\<close>] X_subseteq_XH_largest[OF \<open>stable_on ds X\<close>]
have "F'_iter_match \<union> X \<subseteq> XH_largest ds X"
by (simp add: less_eq_prod_def le_infI2 less_eq_dual_def)
with assms Ch_range \<open>stable_on ds X\<close> show ?thesis
by - (rule iia_onD[OF Ch_iia, where B="XH_largest ds X"], auto, metis Ch_CH_irc_idem CH_XH_largest)
qed
text\<open>
\citeauthor{HatfieldMilgrom:2005} observe but do not prove that
@{const "F'_iter_match"} is not necessarily doctor-optimal with respect to the new
set of doctors, even if @{term "X"} was.
These results seem incomplete. One might expect that the process of
reacting to a doctor's retirement would involve considering new
entrants to the workforce and allowing the set of possible contracts
to be refined. There are also the questions of hospitals opening and
closing.
\<close>
end
end
end
subsection\<open> Theorems~8~and~9: A ``rural hospitals'' theorem \label{sec:contracts-rh} \<close>
text\<open>
Given that some hospitals are less desirable than others, the question
arises of whether there is a mechanism that can redistribute doctors
to under-resourced hospitals while retaining the stability of the
match. Roth's @{emph \<open>rural hospitals theorem\<close>}
\citep[Theorem~5.12]{RothSotomayor:1990} resolves this in the negative
by showing that each doctor and hospital signs the same number of
contracts in every stable match. In the context of contracts the
theorem relies on the further hypothesis that hospital choices satisfy
the law of aggregate demand (\S\ref{sec:cf-lad}).
\<close>
locale ContractsWithLAD = Contracts +
assumes Ch_lad: "\<forall>h. lad (Ch h)"
locale ContractsWithSubstitutesAndLAD =
ContractsWithSubstitutes + ContractsWithLAD
text\<open>
We can use results that hold under @{const "irc"} by discharging that
hypothesis against @{const "lad"} using the @{thm [source]
"lad_on_substitutes_on_irc_on"} lemma. This is the effect of the
following \<open>sublocale\<close> command:
\<close>
sublocale ContractsWithSubstitutesAndLAD < ContractsWithSubstitutesAndIRC
using Ch_range Ch_substitutes Ch_lad by unfold_locales (blast intro: lad_on_substitutes_on_irc_on f_range_onI)
context ContractsWithSubstitutesAndLAD
begin
text\<open>
The following results lead to \citet[Theorem~8]{HatfieldMilgrom:2005},
and the proofs go as they say. Again we state these with respect to an
arbitrary solution to @{const "stable_pair"}.
\<close>
context
fixes XD_XH :: "'a set \<times> 'a set"
fixes ds :: "'b set"
assumes "stable_pair_on ds XD_XH"
begin
lemma Cd_XD_gfp_F_card:
assumes "d \<in> ds"
shows "card (Cd d (fst XD_XH)) \<le> card (Cd d (fst (gfp_F ds)))"
using %invisible assms Cd_lad XD_XH_gfp_F(1)[OF \<open>stable_pair_on ds XD_XH\<close>]
unfolding lad_on_def by blast
lemma Ch_gfp_F_XH_card:
shows "card (Ch h (snd (gfp_F ds))) \<le> card (Ch h (snd XD_XH))"
using %invisible Ch_lad XD_XH_gfp_F(2)[OF \<open>stable_pair_on ds XD_XH\<close>]
unfolding lad_on_def by blast
theorem Theorem_8:
shows "d \<in> ds \<Longrightarrow> card (Cd d (fst XD_XH)) = card (Cd d (fst (gfp_F ds)))"
and "card (Ch h (snd XD_XH)) = card (Ch h (snd (gfp_F ds)))"
proof %invisible -
let ?Sum_Cd_gfp = "\<Sum>d\<in>ds. card (Cd d (fst (gfp_F ds)))"
let ?Sum_Ch_gfp = "\<Sum>h\<in>UNIV. card (Ch h (snd (gfp_F ds)))"
let ?Sum_Cd_XD = "\<Sum>d\<in>ds. card (Cd d (fst XD_XH))"
let ?Sum_Ch_XH = "\<Sum>h\<in>UNIV. card (Ch h (snd XD_XH))"
have "?Sum_Cd_gfp = ?Sum_Ch_gfp"
using stable_pair_on_CD_on_CH[OF gfp_F_stable_pair_on] CD_on_card[symmetric] CH_card[symmetric] by simp
also have "\<dots> \<le> ?Sum_Ch_XH"
using Ch_gfp_F_XH_card by (simp add: sum_mono)
also have "\<dots> = ?Sum_Cd_XD"
using stable_pair_on_CD_on_CH[OF \<open>stable_pair_on ds XD_XH\<close>] CD_on_card[symmetric] CH_card[symmetric] by simp
finally have "?Sum_Cd_XD = ?Sum_Cd_gfp"
using Cd_XD_gfp_F_card by (simp add: eq_iff sum_mono)
with Cd_XD_gfp_F_card show "d \<in> ds \<Longrightarrow> card (Cd d (fst XD_XH)) = card (Cd d (fst (gfp_F ds)))"
by (fastforce elim: sum_mono_inv)
have "?Sum_Ch_XH = ?Sum_Cd_XD"
using stable_pair_on_CD_on_CH[OF \<open>stable_pair_on ds XD_XH\<close>] CD_on_card[symmetric] CH_card[symmetric] by simp
also have "\<dots> \<le> ?Sum_Cd_gfp"
using Cd_XD_gfp_F_card by (simp add: sum_mono)
also have "\<dots> = ?Sum_Ch_gfp"
using stable_pair_on_CD_on_CH[OF gfp_F_stable_pair_on] CD_on_card[symmetric] CH_card[symmetric] by simp
finally have "?Sum_Ch_gfp = ?Sum_Ch_XH"
using Ch_gfp_F_XH_card by (simp add: eq_iff sum_mono)
with Ch_gfp_F_XH_card show "card (Ch h (snd XD_XH)) = card (Ch h (snd (gfp_F ds)))"
by (fastforce elim: sym[OF sum_mono_inv])
qed
end
text\<open>
Their result may be more easily understood when phrased in terms of
arbitrary stable matches:
\<close>
corollary rural_hospitals_theorem:
assumes "stable_on ds X"
assumes "stable_on ds Y"
shows "d \<in> ds \<Longrightarrow> card (Cd d X) = card (Cd d Y)"
and "card (Ch h X) = card (Ch h Y)"
using %invisible assms T1[of ds X] T1[of ds Y] Theorem_8 stable_pair_on_Cd_match Ch_CH_irc_idem stable_pair_on_CH
by force+
end
text\<open>
\citet[Theorem~9]{HatfieldMilgrom:2005} show that without @{const
"lad"}, the rural hospitals theorem does not hold. Their proof does
not seem to justify the theorem as stated (for instance, the contracts
\<open>x'\<close>, \<open>y'\<close> and \<open>z'\<close> need not
exist), and so we instead simply provide a counterexample (discovered
by \verb!nitpick!) to the same effect.
\<close>
lemma (in ContractsWithSubstitutesAndIRC) Theorem_9_counterexample:
assumes "stable_on ds Y"
assumes "stable_on ds Z"
shows "card (Ch h Y) = card (Ch h Z)"
oops
datatype X3 = Xd1 | Xd1' | Xd2
(*<*)
lemma X3_UNIV:
shows "UNIV = set [Xd1, Xd1', Xd2]"
using X3.exhaust by auto
lemmas X3_pow = subset_subseqs[OF subset_trans[OF subset_UNIV Set.equalityD1[OF X3_UNIV]]]
instance X3 :: finite
by standard (simp add: X3_UNIV)
lemma X3_all_pow:
shows "(\<forall>X''. P X'') \<longleftrightarrow> (\<forall>X''\<in>set ` set (subseqs [Xd1, Xd1', Xd2]). P X'')"
using X3_pow by blast
(*>*)
primrec X3d :: "X3 \<Rightarrow> D2" where
"X3d Xd1 = D1"
| "X3d Xd1' = D1"
| "X3d Xd2 = D2"
abbreviation X3h :: "X3 \<Rightarrow> H1" where
"X3h _ \<equiv> H"
primrec PX3d :: "D2 \<Rightarrow> X3 rel" where
"PX3d D1 = linord_of_list [Xd1, Xd1']"
| "PX3d D2 = linord_of_list [Xd2]"
function CX3h :: "H1 \<Rightarrow> X3 set \<Rightarrow> X3 set" where
"CX3h _ {Xd1} = {Xd1}"
| "CX3h _ {Xd1'} = {Xd1'}"
| "CX3h _ {Xd2} = {Xd2}"
| "CX3h _ {Xd1, Xd1'} = {Xd1'}"
| "CX3h _ {Xd1, Xd2} = {Xd1, Xd2}"
| "CX3h _ {Xd1', Xd2} = {Xd1'}"
| "CX3h _ {Xd1, Xd1', Xd2} = {Xd1'}"
| "CX3h _ {} = {}"
apply %invisible (case_tac x)
apply (cut_tac X=b in X3_pow)
apply auto
done
(*<*)
termination by lexicographic_order
lemma PX3d_linear:
shows "Linear_order (PX3d d)"
by (cases d) (simp_all add: linord_of_list_Linear_order)
lemma PX3d_range:
shows "Field (PX3d d) \<subseteq> {x. X3d x = d}"
by (cases d) simp_all
lemma CX3h_range:
shows "CX3h h X \<subseteq> {x\<in>X. X3h x = h}"
by (cases "(h, X)" rule: CX3h.cases; simp; metis (mono_tags) H1.exhaust)
lemma CX3h_singular:
shows "inj_on X3d (CX3h h X)"
by (cases "(h, X)" rule: CX3h.cases) auto
lemma CX3h_substitutes:
shows "substitutes (CX3h h)"
apply (rule substitutes_onI)
apply (cases h)
apply (cut_tac X=B in X3_pow)
apply (case_tac b)
apply (case_tac [!] a)
apply (auto simp: insert_commute)
done
lemma CX3h_irc:
shows "irc (CX3h h)"
apply (rule ircI)
apply (cases h)
apply (cut_tac X=B in X3_pow)
apply (case_tac a)
apply (auto simp: insert_commute)
done
(*>*)
interpretation Theorem_9: ContractsWithSubstitutesAndIRC X3d X3h PX3d CX3h
using %invisible PX3d_linear PX3d_range CX3h_range CX3h_singular CX3h_substitutes CX3h_irc
by unfold_locales blast+
lemma Theorem_9_stable_Xd1':
shows "Theorem_9.stable_on UNIV {Xd1'}"
proof %invisible (rule Theorem_9.stable_onI)
note image_cong_simp [cong del] note INF_cong_simp [cong] note SUP_cong_simp [cong]
show "Theorem_9.individually_rational_on UNIV {Xd1'}"
by (rule Theorem_9.individually_rational_onI)
(simp_all add: D2_UNION Theorem_9.CD_on_def Theorem_9.CH_def)
show "Theorem_9.stable_no_blocking_on UNIV {Xd1'}"
apply (rule Theorem_9.stable_no_blocking_onI)
apply (case_tac h)
apply (cut_tac X=X'' in X3_pow)
apply simp
apply safe
apply (simp_all add: insert_commute)
done
qed
lemma Theorem_9_stable_Xd1_Xd2:
shows "Theorem_9.stable_on UNIV {Xd1, Xd2}"
proof %invisible (rule Theorem_9.stable_onI)
note image_cong_simp [cong del] note INF_cong_simp [cong] note SUP_cong_simp [cong]
show "Theorem_9.individually_rational_on UNIV {Xd1, Xd2}"
by (rule Theorem_9.individually_rational_onI)
(simp_all add: D2_UNION Theorem_9.CD_on_def Theorem_9.CH_def insert_commute)
show "Theorem_9.stable_no_blocking_on UNIV {Xd1, Xd2}"
apply (rule Theorem_9.stable_no_blocking_onI)
apply (case_tac h)
apply (cut_tac X=X'' in X3_pow)
apply simp
apply safe
apply (simp_all add: D2_UNION Theorem_9.CD_on_def Theorem_9.maxR_def linord_of_list_linord_of_listP insert_commute)
done
qed
text \<open>
This violates the rural hospitals theorem:
\<close>
theorem
shows "card (Theorem_9.CH {Xd1'}) \<noteq> card (Theorem_9.CH {Xd1, Xd2})"
using %invisible Theorem_9_stable_Xd1' Theorem_9_stable_Xd1_Xd2 Theorem_9.stable_on_CH by simp
text\<open>
{\ldots}which is attributed to the failure of the hospitals' choice
functions to satisfy @{const "lad"}:
\<close>
lemma CX3h_not_lad:
shows "\<not>lad (CX3h h)"
unfolding %invisible lad_on_def
apply (cases h)
apply clarsimp
apply (rule exI[where x="{Xd1, Xd1', Xd2}"])
apply (rule exI[where x="{Xd1, Xd2}"])
apply simp
done
text\<open>
\citet{CiupanHatfieldKominers:2016} discuss an alternative approach to
this result in a marriage market.
\<close>
subsection\<open> Theorems~15 and 16: Cumulative Offer Processes \label{sec:contracts-cop} \<close>
text\<open>
The goal of \citet[{\S}V]{HatfieldMilgrom:2005} is to connect this
theory of matching with contracts to earlier work on auctions by the
first author, in particular by eliminating the @{const
"substitutes"} hypothesis. They do so by defining a @{emph \<open>cumulative
offer process\<close>} (COP):
\<close>
context Contracts
begin
definition cop_F_HM :: "'d set \<Rightarrow> 'x set \<times> 'x set \<Rightarrow> 'x set \<times> 'x set" where
"cop_F_HM ds = (\<lambda>(XD, XH). (- RH XH, XH \<union> CD_on ds (- RH XH)))"
text\<open>
Intuitively all of the doctors simultaneously offer their most
preferred contracts that have yet to be rejected by the hospitals, and
the hospitals choose amongst these and all that have been offered
previously. Asking hospital choice functions to satisfy the @{const
"substitutes"} condition effectively forces hospitals to consider only
the contracts they have previously not rejected.
This definition is neither monotonic nor increasing (i.e., it is not
the case that @{term "\<forall>x. x \<le> cop_F_HM ds x"}). We rectify
this by focusing on the second part of the definition.
\<close>
definition cop_F :: "'d set \<Rightarrow> 'x set \<Rightarrow> 'x set" where
"cop_F ds XH = XH \<union> CD_on ds (- RH XH)"
lemma cop_F_HM_cop_F:
shows "cop_F_HM ds XD_XH = (- RH (snd XD_XH), cop_F ds (snd XD_XH))"
unfolding cop_F_HM_def cop_F_def split_def by simp
lemma cop_F_increasing:
shows "x \<le> cop_F ds x"
unfolding %invisible cop_F_def by simp
text\<open>
We have the following straightforward case distinction principles:
\<close>
lemma cop_F_cases:
assumes "x \<in> cop_F ds fp"
obtains (fp) "x \<in> fp" | (CD_on) "x \<in> CD_on ds (-RH fp) - fp"
using assms unfolding cop_F_def by blast
lemma CH_cop_F_cases:
assumes "x \<in> CH (cop_F ds fp)"
obtains (CH) "x \<in> CH fp" | (RH_fp) "x \<in> RH fp" | (CD_on) "x \<in> CD_on ds (-RH fp) - fp"
using assms CH_range cop_F_def by auto
text\<open>
The existence of fixed points for our earlier definitions
(\S\ref{sec:contracts-algorithmics}) was guaranteed by the
Tarski-Knaster theorem, which relies on the monotonicity of the
defining functional. As @{const "cop_F"} lacks this property, we
appeal instead to the Bourbaki-Witt theorem for increasing
functions.
\<close>
interpretation COP: bourbaki_witt_fixpoint Sup "{(x, y). x \<le> y}" "cop_F ds" for ds
by %invisible (rule bourbaki_witt_fixpoint_complete_latticeI[OF cop_F_increasing])
definition fp_cop_F :: "'d set \<Rightarrow> 'x set" where
"fp_cop_F ds = COP.fixp_above ds {}"
abbreviation "cop ds \<equiv> CH (fp_cop_F ds)"
(*<*)
lemmas fp_cop_F_unfold = COP.fixp_above_unfold[where a="{}", folded fp_cop_F_def, simplified Field_def, simplified]
lemmas fp_cop_F_code = COP.fixp_above_conv_while[where a="{}", folded fp_cop_F_def, simplified Field_def, simplified]
(*>*)
text\<open>
Given that the set of contracts is finite, we avoid continuity and
admissibility issues; we have the following straightforward induction
principle:
\<close>
lemma fp_cop_F_induct[case_names base step]:
assumes "P {}"
assumes "\<And>fp. P fp \<Longrightarrow> P (cop_F ds fp)"
shows "P (fp_cop_F ds)"
using %invisible assms
by (induct rule: COP.fixp_above_induct[where a="{}", folded fp_cop_F_def])
(fastforce intro: admissible_chfin)+
text\<open>
An alternative is to use the @{const "while"} combinator, which is
equivalent to the above by @{thm [source] COP.fixp_above_conv_while}.
In any case, invariant reasoning is essential to verifying the
properties of the COP, no matter how we phrase it. We develop a small
program logic to ease the reuse of the invariants we
prove.
\<close>
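(*
  A hedged execution sketch: by the code equation fp_cop_F_code above
  (an instance of COP.fixp_above_conv_while), the COP can be run as an
  iteration of cop_F from the empty set, roughly

    fp_cop_F ds = while (\<lambda>fp. cop_F ds fp \<noteq> fp) (cop_F ds) {}

  in the same style as the while-based code equations gfp_F_code and
  lfp_F_code for the lattice-theoretic fixed points.
*)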
definition
valid :: "'d set \<Rightarrow> ('d set \<Rightarrow> 'x set \<Rightarrow> bool) \<Rightarrow> ('d set \<Rightarrow> 'x set \<Rightarrow> bool) \<Rightarrow> bool"
where
"valid ds P Q = (Q ds {} \<and> (\<forall>fp. P ds fp \<and> Q ds fp \<longrightarrow> Q ds (cop_F ds fp)))"
abbreviation
invariant :: "'d set \<Rightarrow> ('d set \<Rightarrow> 'x set \<Rightarrow> bool) \<Rightarrow> bool"
where
"invariant ds P \<equiv> valid ds (\<lambda>_ _. True) P"
text\<open>
Intuitively @{term "valid ds P Q"} asserts that the COP satisfies
@{term "Q"} assuming that it satisfies @{term "P"}. This allows us to
decompose our invariant proofs. By setting the precondition to @{term
"True"}, @{term "invariant ds P"} captures the proof obligations of
@{term "fp_cop_F_induct"} exactly.
The following lemmas ease the syntactic manipulation of these facts.
\<close>
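(*
  A hedged usage sketch (illustration only, not used by the checked proofs):
  the rules below compose as follows. Given "invariant ds P" (established via
  validI with a trivial precondition) and a conditional fact "valid ds P Q",
  rule valid_invariant yields "invariant ds (\<lambda>ds fp. P ds fp \<and> Q ds fp)",
  and invariantD then delivers both P and Q for "fp_cop_F ds".
*)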
lemma validI[case_names base step]:
assumes "Q ds {}"
assumes "\<And>fp. \<lbrakk>P ds fp; Q ds fp\<rbrakk> \<Longrightarrow> Q ds (cop_F ds fp)"
shows "valid ds P Q"
using %invisible assms unfolding valid_def by blast
lemma invariant_cop_FD:
assumes "invariant ds P"
assumes "P ds fp"
shows "P ds (cop_F ds fp)"
using %invisible assms unfolding valid_def by blast
lemma invariantD:
assumes "invariant ds P"
shows "P ds (fp_cop_F ds)"
using %invisible assms fp_cop_F_induct unfolding valid_def by blast
lemma valid_pre:
assumes "valid ds P' Q"
assumes "\<And>fp. P ds fp \<Longrightarrow> P' ds fp"
shows "valid ds P Q"
using %invisible assms unfolding valid_def by blast
lemma valid_invariant:
assumes "valid ds P Q"
assumes "invariant ds P"
shows "invariant ds (\<lambda> ds fp. P ds fp \<and> Q ds fp)"
using %invisible assms unfolding valid_def by blast
lemma valid_conj:
assumes "valid ds (\<lambda>ds fp. R ds fp \<and> P ds fp \<and> Q ds fp) P"
assumes "valid ds (\<lambda>ds fp. R ds fp \<and> P ds fp \<and> Q ds fp) Q"
shows "valid ds R (\<lambda> ds fp. P ds fp \<and> Q ds fp)"
using %invisible assms unfolding valid_def by blast
end
text (in ContractsWithSubstitutes) \<open>
\citet[Theorem~15]{HatfieldMilgrom:2005} assert that @{const
"fp_cop_F"} is equivalent to the doctor-offering algorithm @{const
"gfp_F"}, assuming @{const "substitutes"}. (Note that the fixed points
generated by increasing functions do not necessarily form a lattice,
so there is not necessarily a hospital-optimal match, and indeed in
general such matches do not exist.) Our proof is eased by the decomposition
lemma @{thm [source] gfp_F_lfp_F} and the standard properties of fixed
points in a lattice.
\<close>
context ContractsWithSubstitutes
begin
lemma lfp_F2_o_F1_fp_cop_F:
shows "lfp (F2 ds \<circ> F1) = fp_cop_F ds"
proof(rule antisym)
have "(F2 ds \<circ> F1) (fp_cop_F ds) \<subseteq> cop_F ds (fp_cop_F ds)"
by (clarsimp simp: F2_def F1_def cop_F_def)
then show "lfp (F2 ds \<circ> F1) \<subseteq> fp_cop_F ds"
by (simp add: lfp_lowerbound fp_cop_F_unfold[symmetric])
next
show "fp_cop_F ds \<subseteq> lfp (F2 ds \<circ> F1)"
proof(induct rule: fp_cop_F_induct)
case base then show ?case by simp
next
case (step fp) note IH = \<open>fp \<subseteq> lfp (F2 ds \<circ> F1)\<close>
then have "CD_on ds (- RH fp) \<subseteq> lfp (F2 ds \<circ> F1)"
by (subst lfp_unfold[OF F2_o_F1_mono])
(metis (no_types, lifting) Compl_Diff_eq F1_antimono F2_antimono F1_def F2_def Un_subset_iff antimonoD comp_apply)
with IH show ?case
unfolding cop_F_def by blast
qed
qed
theorem Theorem_15:
shows "gfp_F ds = (- RH (fp_cop_F ds), fp_cop_F ds)"
using lfp_F2_o_F1_fp_cop_F unfolding gfp_F_lfp_F F1_def by simp
theorem Theorem_15_match:
shows "match (gfp_F ds) = CH (fp_cop_F ds)"
using Theorem_15 by (fastforce dest: subsetD[OF CH_range])
end
text\<open>
\label{sec:contracts-codegen-fp_cop_F}
With some auxiliary definitions, we can evaluate the COP on the
example from \S\ref{sec:contracts-codegen-gfp_F}.
\<close>
(*<*)
definition "P920_example_cop_F = P920_example.cop_F"
definition "P920_example_fp_cop_F = P920_example.fp_cop_F"
lemmas P920_example_cop_F_code[code] = P920_example.cop_F_def[folded P920_example_cop_F_def]
lemmas P920_example_fp_cop_F_code[code] = P920_example.fp_cop_F_code[folded P920_example_fp_cop_F_def P920_example_cop_F_def]
(*>*)
lemma P920_example_fp_cop_F_value:
shows "P920_example_CH (P920_example_fp_cop_F UNIV) = {(D1, H1), (D2, H2)}"
by eval
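(*
  A hedged usage sketch (interactive only): the COP on this example can also be
  evaluated directly, e.g.

    value "P920_example_CH (P920_example_fp_cop_F UNIV)"

  which should return {(D1, H1), (D2, H2)}, agreeing with
  P920_example_fp_cop_F_value and, via Theorem_15_match, with the match
  computed from P920_example_gfp_F above.
*)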
text\<open>
\citet[Theorem~16]{HatfieldMilgrom:2005} assert that this process
yields a stable match when we have a single hospital (now called an
auctioneer) with unrestricted preferences. As before, this holds
provided the auctioneer's preferences satisfy @{const "irc"}.
We begin by establishing two obvious invariants of the COP that
hold in general.
\<close>
context Contracts
begin
lemma %invisible CH_Ch_singular:
assumes "(UNIV::'h set) = {h}"
shows "CH A = Ch h A"
unfolding CH_def using assms by auto
definition cop_F_range_inv :: "'d set \<Rightarrow> 'x set \<Rightarrow> bool" where
"cop_F_range_inv ds fp \<longleftrightarrow> (\<forall>x\<in>fp. x \<in> Field (Pd (Xd x)) \<and> Xd x \<in> ds)"
definition cop_F_closed_inv :: "'d set \<Rightarrow> 'x set \<Rightarrow> bool" where
"cop_F_closed_inv ds fp \<longleftrightarrow> (\<forall>x\<in>fp. above (Pd (Xd x)) x \<subseteq> fp)"
text\<open>
The first, @{const "cop_F_range_inv"}, simply states that the result
of the COP respects the structural conditions for doctors. The second,
@{const "cop_F_closed_inv"}, states that the COP is upwards-closed with
respect to the doctors' preferences.
\<close>
lemma cop_F_range_inv:
shows "invariant ds cop_F_range_inv"
unfolding valid_def cop_F_range_inv_def cop_F_def by (fastforce simp: mem_CD_on_Cd dest: Cd_range')
lemma cop_F_closed_inv:
shows "invariant ds cop_F_closed_inv"
unfolding valid_def cop_F_closed_inv_def cop_F_def above_def
by (clarsimp simp: subset_iff) (metis Cd_preferred ComplI Un_upper1 mem_CD_on_Cd subsetCE)
lemmas fp_cop_F_range_inv = invariantD[OF cop_F_range_inv]
lemmas fp_cop_F_range_inv' = fp_cop_F_range_inv[unfolded cop_F_range_inv_def, rule_format]
lemmas fp_cop_F_closed_inv = invariantD[OF cop_F_closed_inv]
lemmas fp_cop_F_closed_inv' = subsetD[OF bspec[OF invariantD[OF cop_F_closed_inv, unfolded cop_F_closed_inv_def, simplified]]]
text\<open>
The only challenge in showing that the COP yields a stable match is in
establishing @{const "stable_no_blocking_on"}. Our key lemma states
that if @{const "CH"} rejects all contracts for doctor
\<open>d\<close> in @{const "fp_cop_F"}, then all contracts for
\<open>d\<close> are in @{const "fp_cop_F"}.
\<close>
lemma cop_F_RH:
assumes "d \<in> ds"
assumes "x \<in> Field (Pd d)"
assumes "aboveS (Pd d) x \<subseteq> RH fp"
shows "x \<in> cop_F ds fp"
using %invisible assms Pd_linear unfolding cop_F_def
by (clarsimp simp: mem_CD_on_Cd Cd_greatest greatest_def aboveS_def order_on_defs total_on_def subset_iff)
(metis Compl_Diff_eq Compl_iff Diff_iff IntE Pd_Xd refl_onD)
lemma fp_cop_F_all:
assumes "d \<in> ds"
assumes "d \<notin> Xd ` CH (fp_cop_F ds)"
shows "Field (Pd d) \<subseteq> fp_cop_F ds"
proof %invisible (rule subsetI)
fix x assume "x \<in> Field (Pd d)"
from spec[OF Pd_linear] this finite[of "Pd d"] show "x \<in> fp_cop_F ds"
proof(induct rule: finite_Linear_order_induct)
case (step x)
with assms Pd_range Pd_Xd cop_F_RH[of d ds _ "fp_cop_F ds", unfolded fp_cop_F_unfold[symmetric]]
show ?case unfolding aboveS_def by (fastforce iff: image_iff)
qed
qed
text\<open>
\citet{AygunSonmez:2012-WP2} observe that any blocking contract must
be weakly preferred by its doctor to anything in the outcome of
@{const "fp_cop_F"}:
\<close>
lemma fp_cop_F_preferred:
assumes "y \<in> CD_on ds (CH (fp_cop_F ds) \<union> X'')"
assumes "x \<in> CH (fp_cop_F ds)"
assumes "Xd x = Xd y"
shows "(x, y) \<in> Pd (Xd x)"
using %invisible assms fp_cop_F_range_inv'[OF CH_range'[OF assms(2)]] Pd_Xd Pd_linear
by (clarsimp simp: CD_on_def Cd_greatest greatest_def) (metis Int_iff Un_iff subset_refl underS_incl_iff)
text\<open>
The headline lemma cobbles these results together.
\<close>
lemma X''_closed:
assumes "X'' \<subseteq> CD_on ds (CH (fp_cop_F ds) \<union> X'')"
shows "X'' \<subseteq> fp_cop_F ds"
proof(rule subsetI)
fix x assume "x \<in> X''"
show "x \<in> fp_cop_F ds"
proof(cases "Xd x \<in> Xd ` CH (fp_cop_F ds)")
case True
then obtain y where "Xd y = Xd x" and "y \<in> CH (fp_cop_F ds)" by clarsimp
with assms \<open>x \<in> X''\<close> show ?thesis
using CH_range fp_cop_F_closed_inv' fp_cop_F_preferred unfolding above_def by blast
next
case False with assms \<open>x \<in> X''\<close> show ?thesis
by (meson Cd_range' IntD2 fp_cop_F_all mem_CD_on_Cd rev_subsetD)
qed
qed
text (in Contracts) \<open>
The @{const "irc"} constraint on the auctioneer's preferences is
needed for @{const "stable_no_blocking"} and the auctioneer's part of @{const
"individually_rational"}.
\<close>
end
context ContractsWithIRC
begin
lemma cop_stable_no_blocking_on:
shows "stable_no_blocking_on ds (cop ds)"
proof(rule stable_no_blocking_onI)
fix h X''
assume C: "X'' = Ch h (CH (fp_cop_F ds) \<union> X'')"
assume NE: "X'' \<noteq> Ch h (CH (fp_cop_F ds))"
assume CD: "X'' \<subseteq> CD_on ds (CH (fp_cop_F ds) \<union> X'')"
from CD have "X'' \<subseteq> fp_cop_F ds" by (rule X''_closed)
then have X: "CH (fp_cop_F ds) \<union> X'' \<subseteq> fp_cop_F ds" using CH_range by simp
from C NE Ch_CH_irc_idem[of h] show False
using consistency_onD[OF Ch_consistency _ X] CH_domain Ch_domain by blast
qed
theorem Theorem_16:
assumes h: "(UNIV::'c set) = {h}"
shows "stable_on ds (cop ds)" (is "stable_on ds ?fp")
proof(rule stable_onI)
show "individually_rational_on ds ?fp"
proof(rule individually_rational_onI)
from h have "allocation ?fp" by (simp add: Ch_singular CH_Ch_singular)
then show "CD_on ds ?fp = ?fp"
by (rule CD_on_closed) (blast dest: CH_range' fp_cop_F_range_inv')
show "CH (CH (fp_cop_F ds)) = CH (fp_cop_F ds)" by (simp add: CH_irc_idem)
qed
show "stable_no_blocking_on ds ?fp" by (rule cop_stable_no_blocking_on)
qed
end
subsection\<open> Concluding remarks \<close>
text\<open>
From \citet{HatfieldMilgrom:2005}, we have not shown Theorems~2, 7, 13
and~14, all of which are intended to position their results against
prior work in this space. We delay establishing their strategic
results (Theorems~10, 11 and~12) to \S\ref{sec:strategic}, after we
have developed more useful invariants for the COP.
By assuming \isa{irc}, \citet{AygunSonmez:2012-WP2} are essentially
trading on Plott's path independence condition
(\S\ref{sec:cf-path-independence}), as observed by
\citet{ChambersYenmez:2013}. The latter show that these results
generalize naturally to many-to-many matches, where doctors also use
path-independent choice functions; see also \citet{Fleiner:2003}.
For many applications, however, @{const "substitutes"} proves to be
too strong a condition. The COP of \S\ref{sec:contracts-cop} provides
a way forward, as we discuss in the next section.
\<close>
(*<*)
end
(*>*)
diff --git a/thys/Stirling_Formula/Gamma_Asymptotics.thy b/thys/Stirling_Formula/Gamma_Asymptotics.thy
new file mode 100644
--- /dev/null
+++ b/thys/Stirling_Formula/Gamma_Asymptotics.thy
@@ -0,0 +1,2105 @@
+(*
+ File: Gamma_Asymptotics.thy
+ Author: Manuel Eberl
+
+ The complete asymptotics of the real and complex logarithmic Gamma functions.
+ Also of the real Polygamma functions (could be extended to the complex ones fairly easily
+ if needed).
+*)
+section \<open>Complete asymptotics of the logarithmic Gamma function\<close>
+theory Gamma_Asymptotics
+imports
+ "HOL-Complex_Analysis.Complex_Analysis"
+ "HOL-Real_Asymp.Real_Asymp"
+ Bernoulli.Bernoulli_FPS
+ Bernoulli.Periodic_Bernpoly
+ Stirling_Formula
+begin
+
+subsection \<open>Auxiliary Facts\<close>
+
+(* TODO Move *)
+lemma arg_of_real [simp]:
+ "x > 0 \<Longrightarrow> arg (complex_of_real x) = 0"
+ "x < 0 \<Longrightarrow> arg (complex_of_real x) = pi"
+ by (rule arg_unique; simp add: complex_sgn_def scaleR_conv_of_real)+
+
+lemma arg_conv_arctan:
+ assumes "Re z > 0"
+ shows "arg z = arctan (Im z / Re z)"
+proof (rule arg_unique)
+ show "sgn z = cis (arctan (Im z / Re z))"
+ proof (rule complex_eqI)
+ have "Re (cis (arctan (Im z / Re z))) = 1 / sqrt (1 + (Im z)\<^sup>2 / (Re z)\<^sup>2)"
+ by (simp add: cos_arctan power_divide)
+ also have "1 + Im z ^ 2 / Re z ^ 2 = norm z ^ 2 / Re z ^ 2"
+ using assms by (simp add: cmod_def field_simps)
+ also have "1 / sqrt \<dots> = Re z / norm z"
+ using assms by (simp add: real_sqrt_divide)
+ finally show "Re (sgn z) = Re (cis (arctan (Im z / Re z)))"
+ by simp
+ next
+ have "Im (cis (arctan (Im z / Re z))) = Im z / (Re z * sqrt (1 + (Im z)\<^sup>2 / (Re z)\<^sup>2))"
+ by (simp add: sin_arctan field_simps)
+ also have "1 + Im z ^ 2 / Re z ^ 2 = norm z ^ 2 / Re z ^ 2"
+ using assms by (simp add: cmod_def field_simps)
+ also have "Im z / (Re z * sqrt \<dots>) = Im z / norm z"
+ using assms by (simp add: real_sqrt_divide)
+ finally show "Im (sgn z) = Im (cis (arctan (Im z / Re z)))"
+ by simp
+ qed
+next
+ show "arctan (Im z / Re z) > -pi"
+ by (rule le_less_trans[OF _ arctan_lbound]) auto
+next
+ have "arctan (Im z / Re z) < pi / 2"
+ by (rule arctan_ubound)
+ also have "\<dots> \<le> pi" by simp
+ finally show "arctan (Im z / Re z) \<le> pi"
+ by simp
+qed
+
+lemma mult_indicator_cong:
+ fixes f g :: "_ \<Rightarrow> 'a :: semiring_1"
+ shows "(\<And>x. x \<in> A \<Longrightarrow> f x = g x) \<Longrightarrow> indicator A x * f x = indicator A x * g x"
+ by (auto simp: indicator_def)
+
+lemma has_absolute_integral_change_of_variables_1':
+ fixes f :: "real \<Rightarrow> real" and g :: "real \<Rightarrow> real"
+ assumes S: "S \<in> sets lebesgue"
+ and der_g: "\<And>x. x \<in> S \<Longrightarrow> (g has_field_derivative g' x) (at x within S)"
+ and inj: "inj_on g S"
+ shows "(\<lambda>x. \<bar>g' x\<bar> *\<^sub>R f(g x)) absolutely_integrable_on S \<and>
+ integral S (\<lambda>x. \<bar>g' x\<bar> *\<^sub>R f(g x)) = b
+ \<longleftrightarrow> f absolutely_integrable_on (g ` S) \<and> integral (g ` S) f = b"
+proof -
+ have "(\<lambda>x. \<bar>g' x\<bar> *\<^sub>R vec (f(g x)) :: real ^ 1) absolutely_integrable_on S \<and>
+ integral S (\<lambda>x. \<bar>g' x\<bar> *\<^sub>R vec (f(g x))) = (vec b :: real ^ 1)
+ \<longleftrightarrow> (\<lambda>x. vec (f x) :: real ^ 1) absolutely_integrable_on (g ` S) \<and>
+ integral (g ` S) (\<lambda>x. vec (f x)) = (vec b :: real ^ 1)"
+ using assms unfolding has_field_derivative_iff_has_vector_derivative
+ by (intro has_absolute_integral_change_of_variables_1 assms) auto
+ thus ?thesis
+ by (simp add: absolutely_integrable_on_1_iff integral_on_1_eq)
+qed
+
+corollary Ln_times_of_nat:
+ "\<lbrakk>r > 0; z \<noteq> 0\<rbrakk> \<Longrightarrow> Ln(of_nat r * z :: complex) = ln (of_nat r) + Ln(z)"
+ using Ln_times_of_real[of "of_nat r" z] by simp
+
+lemma tendsto_of_real_0_I:
+ "(f \<longlongrightarrow> 0) G \<Longrightarrow> ((\<lambda>x. (of_real (f x))) \<longlongrightarrow> (0 ::'a::real_normed_div_algebra)) G"
+ by (subst (asm) tendsto_of_real_iff [symmetric]) simp
+
+lemma negligible_atLeastAtMostI: "b \<le> a \<Longrightarrow> negligible {a..(b::real)}"
+ by (cases "b < a") auto
+
+lemma vector_derivative_cong_eq:
+ assumes "eventually (\<lambda>x. x \<in> A \<longrightarrow> f x = g x) (nhds x)" "x = y" "A = B" "x \<in> A"
+ shows "vector_derivative f (at x within A) = vector_derivative g (at y within B)"
+proof -
+ from eventually_nhds_x_imp_x[OF assms(1)] assms(4) have "f x = g x" by blast
+ hence "(\<lambda>D. (f has_vector_derivative D) (at x within A)) =
+ (\<lambda>D. (g has_vector_derivative D) (at x within A))" using assms
+ by (intro ext has_vector_derivative_cong_ev refl assms) simp_all
+ thus ?thesis by (simp add: vector_derivative_def assms)
+qed
+
+lemma differentiable_of_real [simp]: "of_real differentiable at x within A"
+proof -
+ have "(of_real has_vector_derivative 1) (at x within A)"
+ by (auto intro!: derivative_eq_intros)
+ thus ?thesis by (rule differentiableI_vector)
+qed
+
+lemma higher_deriv_cong_ev:
+ assumes "eventually (\<lambda>x. f x = g x) (nhds x)" "x = y"
+ shows "(deriv ^^ n) f x = (deriv ^^ n) g y"
+proof -
+ from assms(1) have "eventually (\<lambda>x. (deriv ^^ n) f x = (deriv ^^ n) g x) (nhds x)"
+ proof (induction n arbitrary: f g)
+ case (Suc n)
+ from Suc.prems have "eventually (\<lambda>y. eventually (\<lambda>z. f z = g z) (nhds y)) (nhds x)"
+ by (simp add: eventually_eventually)
+ hence "eventually (\<lambda>x. deriv f x = deriv g x) (nhds x)"
+ by eventually_elim (rule deriv_cong_ev, simp_all)
+ thus ?case by (auto intro!: deriv_cong_ev Suc simp: funpow_Suc_right simp del: funpow.simps)
+ qed auto
+ from eventually_nhds_x_imp_x[OF this] assms(2) show ?thesis by simp
+qed
+
+lemma deriv_of_real [simp]:
+ "at x within A \<noteq> bot \<Longrightarrow> vector_derivative of_real (at x within A) = 1"
+ by (auto intro!: vector_derivative_within derivative_eq_intros)
+
+lemma deriv_Re [simp]: "deriv Re = (\<lambda>_. 1)"
+ by (auto intro!: DERIV_imp_deriv simp: fun_eq_iff)
+
+lemma vector_derivative_of_real_left:
+ assumes "f differentiable at x"
+ shows "vector_derivative (\<lambda>x. of_real (f x)) (at x) = of_real (deriv f x)"
+proof -
+ have "vector_derivative (of_real \<circ> f) (at x) = (of_real (deriv f x))"
+ by (subst vector_derivative_chain_at)
+ (simp_all add: scaleR_conv_of_real field_derivative_eq_vector_derivative assms)
+ thus ?thesis by (simp add: o_def)
+qed
+
+lemma vector_derivative_of_real_right:
+ assumes "f field_differentiable at (of_real x)"
+ shows "vector_derivative (\<lambda>x. f (of_real x)) (at x) = deriv f (of_real x)"
+proof -
+ have "vector_derivative (f \<circ> of_real) (at x) = deriv f (of_real x)"
+ using assms by (subst vector_derivative_chain_at_general) simp_all
+ thus ?thesis by (simp add: o_def)
+qed
+
+lemma Ln_holomorphic [holomorphic_intros]:
+ assumes "A \<inter> \<real>\<^sub>\<le>\<^sub>0 = {}"
+ shows "Ln holomorphic_on (A :: complex set)"
+proof (intro holomorphic_onI)
+ fix z assume "z \<in> A"
+ with assms have "(Ln has_field_derivative inverse z) (at z within A)"
+ by (auto intro!: derivative_eq_intros)
+ thus "Ln field_differentiable at z within A" by (auto simp: field_differentiable_def)
+qed
+
+lemma higher_deriv_Polygamma:
+ assumes "z \<notin> \<int>\<^sub>\<le>\<^sub>0"
+ shows "(deriv ^^ n) (Polygamma m) z =
+ Polygamma (m + n) (z :: 'a :: {real_normed_field,euclidean_space})"
+proof -
+ have "eventually (\<lambda>u. (deriv ^^ n) (Polygamma m) u = Polygamma (m + n) u) (nhds z)"
+ proof (induction n)
+ case (Suc n)
+ from Suc.IH have "eventually (\<lambda>z. eventually (\<lambda>u. (deriv ^^ n) (Polygamma m) u = Polygamma (m + n) u) (nhds z)) (nhds z)"
+ by (simp add: eventually_eventually)
+ hence "eventually (\<lambda>z. deriv ((deriv ^^ n) (Polygamma m)) z =
+ deriv (Polygamma (m + n)) z) (nhds z)"
+ by eventually_elim (intro deriv_cong_ev refl)
+ moreover have "eventually (\<lambda>z. z \<in> UNIV - \<int>\<^sub>\<le>\<^sub>0) (nhds z)" using assms
+ by (intro eventually_nhds_in_open open_Diff open_UNIV) auto
+ ultimately show ?case by eventually_elim (simp_all add: deriv_Polygamma)
+ qed simp_all
+ thus ?thesis by (rule eventually_nhds_x_imp_x)
+qed
+
+lemma higher_deriv_cmult:
+ assumes "f holomorphic_on A" "x \<in> A" "open A"
+ shows "(deriv ^^ j) (\<lambda>x. c * f x) x = c * (deriv ^^ j) f x"
+ using assms
+proof (induction j arbitrary: f x)
+ case (Suc j f x)
+ have "deriv ((deriv ^^ j) (\<lambda>x. c * f x)) x = deriv (\<lambda>x. c * (deriv ^^ j) f x) x"
+ using eventually_nhds_in_open[of A x] assms(2,3) Suc.prems
+ by (intro deriv_cong_ev refl) (auto elim!: eventually_mono simp: Suc.IH)
+ also have "\<dots> = c * deriv ((deriv ^^ j) f) x" using Suc.prems assms(2,3)
+ by (intro deriv_cmult holomorphic_on_imp_differentiable_at holomorphic_higher_deriv) auto
+ finally show ?case by simp
+qed simp_all
+
+lemma higher_deriv_ln_Gamma_complex:
+ assumes "(x::complex) \<notin> \<real>\<^sub>\<le>\<^sub>0"
+ shows "(deriv ^^ j) ln_Gamma x = (if j = 0 then ln_Gamma x else Polygamma (j - 1) x)"
+proof (cases j)
+ case (Suc j')
+ have "(deriv ^^ j') (deriv ln_Gamma) x = (deriv ^^ j') Digamma x"
+ using eventually_nhds_in_open[of "UNIV - \<real>\<^sub>\<le>\<^sub>0" x] assms
+ by (intro higher_deriv_cong_ev refl)
+ (auto elim!: eventually_mono simp: open_Diff deriv_ln_Gamma_complex)
+ also have "\<dots> = Polygamma j' x" using assms
+ by (subst higher_deriv_Polygamma)
+ (auto elim!: nonpos_Ints_cases simp: complex_nonpos_Reals_iff)
+ finally show ?thesis using Suc by (simp del: funpow.simps add: funpow_Suc_right)
+qed simp_all
+
+lemma higher_deriv_ln_Gamma_real:
+ assumes "(x::real) > 0"
+ shows "(deriv ^^ j) ln_Gamma x = (if j = 0 then ln_Gamma x else Polygamma (j - 1) x)"
+proof (cases j)
+ case (Suc j')
+ have "(deriv ^^ j') (deriv ln_Gamma) x = (deriv ^^ j') Digamma x"
+ using eventually_nhds_in_open[of "{0<..}" x] assms
+ by (intro higher_deriv_cong_ev refl)
+ (auto elim!: eventually_mono simp: open_Diff deriv_ln_Gamma_real)
+ also have "\<dots> = Polygamma j' x" using assms
+ by (subst higher_deriv_Polygamma)
+ (auto elim!: nonpos_Ints_cases simp: complex_nonpos_Reals_iff)
+ finally show ?thesis using Suc by (simp del: funpow.simps add: funpow_Suc_right)
+qed simp_all
+
+lemma higher_deriv_ln_Gamma_complex_of_real:
+ assumes "(x :: real) > 0"
+ shows "(deriv ^^ j) ln_Gamma (complex_of_real x) = of_real ((deriv ^^ j) ln_Gamma x)"
+ using assms
+ by (auto simp: higher_deriv_ln_Gamma_real higher_deriv_ln_Gamma_complex
+ ln_Gamma_complex_of_real Polygamma_of_real)
+(* END TODO *)
+
+(* TODO: could be automated with Laurent series expansions in the future *)
+lemma stirling_limit_aux1:
+ "((\<lambda>y. Ln (1 + z * of_real y) / of_real y) \<longlongrightarrow> z) (at_right 0)" for z :: complex
+proof (cases "z = 0")
+ case True
+ then show ?thesis by simp
+next
+ case False
+ have "((\<lambda>y. ln (1 + z * of_real y)) has_vector_derivative 1 * z) (at 0)"
+ by (rule has_vector_derivative_real_field) (auto intro!: derivative_eq_intros)
+ then have "(\<lambda>y. (Ln (1 + z * of_real y) - of_real y * z) / of_real \<bar>y\<bar>) \<midarrow>0\<rightarrow> 0"
+ by (auto simp add: has_vector_derivative_def has_derivative_def netlimit_at
+ scaleR_conv_of_real field_simps)
+ then have "((\<lambda>y. (Ln (1 + z * of_real y) - of_real y * z) / of_real \<bar>y\<bar>) \<longlongrightarrow> 0) (at_right 0)"
+ by (rule filterlim_mono[OF _ _ at_le]) simp_all
+ also have "?this \<longleftrightarrow> ((\<lambda>y. Ln (1 + z * of_real y) / (of_real y) - z) \<longlongrightarrow> 0) (at_right 0)"
+ using eventually_at_right_less[of "0::real"]
+ by (intro filterlim_cong refl) (auto elim!: eventually_mono simp: field_simps)
+ finally show ?thesis by (simp only: LIM_zero_iff)
+qed
+
+lemma stirling_limit_aux2:
+ "((\<lambda>y. y * Ln (1 + z / of_real y)) \<longlongrightarrow> z) at_top" for z :: complex
+ using stirling_limit_aux1[of z] by (subst filterlim_at_top_to_right) (simp add: field_simps)
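+
+(* Informal annotation (not part of the original proof script): both limits are instances of
+   the first-order expansion Ln(1 + w) = w + O(w^2) near w = 0.  Taking w = z*y and dividing
+   by y gives stirling_limit_aux1; substituting y = 1/x (filterlim_at_top_to_right) turns it
+   into x * Ln(1 + z/x) --> z as x --> infinity, which is stirling_limit_aux2. *)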
+
+lemma Union_atLeastAtMost:
+ assumes "N > 0"
+ shows "(\<Union>n\<in>{0..<N}. {real n..real (n + 1)}) = {0..real N}"
+proof (intro equalityI subsetI)
+ fix x assume x: "x \<in> {0..real N}"
+ thus "x \<in> (\<Union>n\<in>{0..<N}. {real n..real (n + 1)})"
+ proof (cases "x = real N")
+ case True
+ with assms show ?thesis by (auto intro!: bexI[of _ "N - 1"])
+ next
+ case False
+ with x have x: "x \<ge> 0" "x < real N" by simp_all
+ hence "x \<ge> real (nat \<lfloor>x\<rfloor>)" "x \<le> real (nat \<lfloor>x\<rfloor> + 1)" by linarith+
+ moreover from x have "nat \<lfloor>x\<rfloor> < N" by linarith
+ ultimately have "\<exists>n\<in>{0..<N}. x \<in> {real n..real (n + 1)}"
+ by (intro bexI[of _ "nat \<lfloor>x\<rfloor>"]) simp_all
+ thus ?thesis by blast
+ qed
+qed auto
+
+
+subsection \<open>Cones in the complex plane\<close>
+
+definition complex_cone :: "real \<Rightarrow> real \<Rightarrow> complex set" where
+ "complex_cone a b = {z. \<exists>y\<in>{a..b}. z = rcis (norm z) y}"
+
+abbreviation complex_cone' :: "real \<Rightarrow> complex set" where
+ "complex_cone' a \<equiv> complex_cone (-a) a"
+
+lemma zero_in_complex_cone [simp, intro]: "a \<le> b \<Longrightarrow> 0 \<in> complex_cone a b"
+ by (auto simp: complex_cone_def)
+
+lemma complex_coneE:
+ assumes "z \<in> complex_cone a b"
+ obtains r \<alpha> where "r \<ge> 0" "\<alpha> \<in> {a..b}" "z = rcis r \<alpha>"
+proof -
+ from assms obtain y where "y \<in> {a..b}" "z = rcis (norm z) y"
+ unfolding complex_cone_def by auto
+ thus ?thesis using that[of "norm z" y] by auto
+qed
+
+lemma arg_cis [simp]:
+ assumes "x \<in> {-pi<..pi}"
+ shows "arg (cis x) = x"
+ using assms by (intro arg_unique) auto
+
+lemma arg_mult_of_real_left [simp]:
+ assumes "r > 0"
+ shows "arg (of_real r * z) = arg z"
+proof (cases "z = 0")
+ case False
+ thus ?thesis
+ using arg_bounded[of z] assms
+ by (intro arg_unique) (auto simp: sgn_mult sgn_of_real cis_arg)
+qed auto
+
+lemma arg_mult_of_real_right [simp]:
+ assumes "r > 0"
+ shows "arg (z * of_real r) = arg z"
+ by (subst mult.commute, subst arg_mult_of_real_left) (simp_all add: assms)
+
+lemma arg_rcis [simp]:
+ assumes "x \<in> {-pi<..pi}" "r > 0"
+ shows "arg (rcis r x) = x"
+ using assms by (simp add: rcis_def)
+
+lemma rcis_in_complex_cone [intro]:
+ assumes "\<alpha> \<in> {a..b}" "r \<ge> 0"
+ shows "rcis r \<alpha> \<in> complex_cone a b"
+ using assms by (auto simp: complex_cone_def)
+
+lemma arg_imp_in_complex_cone:
+ assumes "arg z \<in> {a..b}"
+ shows "z \<in> complex_cone a b"
+proof -
+ have "z = rcis (norm z) (arg z)"
+ by (simp add: rcis_cmod_arg)
+ also have "\<dots> \<in> complex_cone a b"
+ using assms by auto
+ finally show ?thesis .
+qed
+
+lemma complex_cone_altdef:
+ assumes "-pi < a" "a \<le> b" "b \<le> pi"
+ shows "complex_cone a b = insert 0 {z. arg z \<in> {a..b}}"
+proof (intro equalityI subsetI)
+ fix z assume "z \<in> complex_cone a b"
+ then obtain r \<alpha> where *: "r \<ge> 0" "\<alpha> \<in> {a..b}" "z = rcis r \<alpha>"
+ by (auto elim: complex_coneE)
+ have "arg z \<in> {a..b}" if [simp]: "z \<noteq> 0"
+ proof -
+ have "r > 0" using that * by (subst (asm) *) auto
+ hence "\<alpha> \<in> {a..b}"
+ using *(1,2) assms by (auto simp: *(1))
+ moreover from assms *(2) have "\<alpha> \<in> {-pi<..pi}"
+ by auto
+ ultimately show ?thesis using *(3) \<open>r > 0\<close>
+ by (subst *) auto
+ qed
+ thus "z \<in> insert 0 {z. arg z \<in> {a..b}}"
+ by auto
+qed (use assms in \<open>auto intro: arg_imp_in_complex_cone\<close>)
+
+lemma nonneg_of_real_in_complex_cone [simp, intro]:
+ assumes "x \<ge> 0" "a \<le> 0" "0 \<le> b"
+ shows "of_real x \<in> complex_cone a b"
+proof -
+ from assms have "rcis x 0 \<in> complex_cone a b"
+ by (intro rcis_in_complex_cone) auto
+ thus ?thesis by simp
+qed
+
+lemma one_in_complex_cone [simp, intro]: "a \<le> 0 \<Longrightarrow> 0 \<le> b \<Longrightarrow> 1 \<in> complex_cone a b"
+ using nonneg_of_real_in_complex_cone[of 1] by (simp del: nonneg_of_real_in_complex_cone)
+
+lemma of_nat_in_complex_cone [simp, intro]: "a \<le> 0 \<Longrightarrow> 0 \<le> b \<Longrightarrow> of_nat n \<in> complex_cone a b"
+ using nonneg_of_real_in_complex_cone[of "real n"] by (simp del: nonneg_of_real_in_complex_cone)
+
+
+subsection \<open>Another integral representation of the Beta function\<close>
+
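+(* Informal annotation, in conventional notation: the goal of this subsection is the
+   representation
+       Beta a b = \int_0^\infty u^(b-1) / (1+u)^(a+b) du        (for a, b > 0),
+   i.e. Euler's integral \int_0^1 t^(a-1) (1-t)^(b-1) dt after the substitution t = 1/(1+u).
+   It is obtained in has_integral_Beta_real' from the nn_integral identity
+   Gamma a * Gamma b = (that integral) * Gamma (a + b), and then specialised in
+   has_integral_Beta2 and has_integral_Beta3 to integrals of (1 + x^2) powr a and
+   (b + x^2) powr a over {0<..}. *)
+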
+lemma complex_cone_inter_nonpos_Reals:
+ assumes "-pi < a" "a \<le> b" "b < pi"
+ shows "complex_cone a b \<inter> \<real>\<^sub>\<le>\<^sub>0 = {0}"
+proof (safe elim!: nonpos_Reals_cases)
+ fix x :: real
+ assume "complex_of_real x \<in> complex_cone a b" "x \<le> 0"
+ hence "\<not>(x < 0)"
+ using assms by (intro notI) (auto simp: complex_cone_altdef)
+ with \<open>x \<le> 0\<close> show "complex_of_real x = 0" by auto
+qed (use assms in auto)
+
+theorem
+ assumes a: "a > 0" and b: "b > (0 :: real)"
+ shows has_integral_Beta_real':
+ "((\<lambda>u. u powr (b - 1) / (1 + u) powr (a + b)) has_integral Beta a b) {0<..}"
+ and Beta_conv_nn_integral:
+ "Beta a b = (\<integral>\<^sup>+u. ennreal (indicator {0<..} u * u powr (b - 1) / (1 + u) powr (a + b)) \<partial>lborel)"
+proof -
+ define I where
+ "I = (\<integral>\<^sup>+u. ennreal (indicator {0<..} u * u powr (b - 1) / (1 + u) powr (a + b)) \<partial>lborel)"
+ have "Gamma (a + b) > 0" "Beta a b > 0"
+ using assms by (simp_all add: add_pos_pos Beta_def)
+ from a b have "ennreal (Gamma a * Gamma b) =
+ (\<integral>\<^sup>+ t. ennreal (indicator {0..} t * t powr (a - 1) / exp t) \<partial>lborel) *
+ (\<integral>\<^sup>+ t. ennreal (indicator {0..} t * t powr (b - 1) / exp t) \<partial>lborel)"
+ by (subst ennreal_mult') (simp_all add: Gamma_conv_nn_integral_real)
+ also have "\<dots> = (\<integral>\<^sup>+t. \<integral>\<^sup>+u. ennreal (indicator {0..} t * t powr (a - 1) / exp t) *
+ ennreal (indicator {0..} u * u powr (b - 1) / exp u) \<partial>lborel \<partial>lborel)"
+ by (simp add: nn_integral_cmult nn_integral_multc)
+ also have "\<dots> = (\<integral>\<^sup>+t. indicator {0<..} t * (\<integral>\<^sup>+u. indicator {0..} u * t powr (a - 1) * u powr (b - 1)
+ / exp (t + u) \<partial>lborel) \<partial>lborel)"
+ by (intro nn_integral_cong_AE AE_I[of _ _ "{0}"])
+ (auto simp: indicator_def divide_ennreal ennreal_mult' [symmetric] exp_add mult_ac)
+ also have "\<dots> = (\<integral>\<^sup>+t. indicator {0<..} t * (\<integral>\<^sup>+u. indicator {0..} u * t powr (a - 1) * u powr (b - 1)
+ / exp (t + u)
+ \<partial>(density (distr lborel borel ((*) t)) (\<lambda>x. ennreal \<bar>t\<bar>))) \<partial>lborel)"
+ by (intro nn_integral_cong mult_indicator_cong, subst lborel_distr_mult' [symmetric]) auto
+ also have "\<dots> = (\<integral>\<^sup>+(t::real). indicator {0<..} t * (\<integral>\<^sup>+u.
+ indicator {0..} (u * t) * t powr a *
+ (u * t) powr (b - 1) / exp (t + t * u) \<partial>lborel) \<partial>lborel)"
+ by (intro nn_integral_cong mult_indicator_cong)
+ (auto simp: nn_integral_density nn_integral_distr algebra_simps powr_diff
+ simp flip: ennreal_mult)
+ also have "\<dots> = (\<integral>\<^sup>+(t::real). \<integral>\<^sup>+u. indicator ({0<..}\<times>{0..}) (t, u) *
+ t powr a * (u * t) powr (b - 1) / exp (t * (u + 1)) \<partial>lborel \<partial>lborel)"
+ by (subst nn_integral_cmult [symmetric], simp, intro nn_integral_cong)
+ (auto simp: indicator_def zero_le_mult_iff algebra_simps)
+ also have "\<dots> = (\<integral>\<^sup>+(t::real). \<integral>\<^sup>+u. indicator ({0<..}\<times>{0..}) (t, u) *
+ t powr (a + b - 1) * u powr (b - 1) / exp (t * (u + 1)) \<partial>lborel \<partial>lborel)"
+ by (intro nn_integral_cong) (auto simp: powr_add powr_diff indicator_def powr_mult field_simps)
+ also have "\<dots> = (\<integral>\<^sup>+(u::real). \<integral>\<^sup>+t. indicator ({0<..}\<times>{0..}) (t, u) *
+ t powr (a + b - 1) * u powr (b - 1) / exp (t * (u + 1)) \<partial>lborel \<partial>lborel)"
+ by (rule lborel_pair.Fubini') auto
+ also have "\<dots> = (\<integral>\<^sup>+(u::real). indicator {0..} u * (\<integral>\<^sup>+t. indicator {0<..} t *
+ t powr (a + b - 1) * u powr (b - 1) / exp (t * (u + 1)) \<partial>lborel) \<partial>lborel)"
+ by (intro nn_integral_cong mult_indicator_cong) (auto simp: indicator_def)
+ also have "\<dots> = (\<integral>\<^sup>+(u::real). indicator {0<..} u * (\<integral>\<^sup>+t. indicator {0<..} t *
+ t powr (a + b - 1) * u powr (b - 1) / exp (t * (u + 1)) \<partial>lborel) \<partial>lborel)"
+ by (intro nn_integral_cong_AE AE_I[of _ _ "{0}"]) (auto simp: indicator_def)
+ also have "\<dots> = (\<integral>\<^sup>+(u::real). indicator {0<..} u * (\<integral>\<^sup>+t. indicator {0<..} t *
+ t powr (a + b - 1) * u powr (b - 1) / exp (t * (u + 1))
+ \<partial>(density (distr lborel borel ((*) (1/(1+u)))) (\<lambda>x. ennreal \<bar>1/(1+u)\<bar>))) \<partial>lborel)"
+ by (intro nn_integral_cong mult_indicator_cong, subst lborel_distr_mult' [symmetric]) auto
+ also have "\<dots> = (\<integral>\<^sup>+(u::real). indicator {0<..} u *
+ (\<integral>\<^sup>+t. ennreal (1 / (u + 1)) * ennreal (indicator {0<..} (t / (u + 1)) *
+ (t / (1+u)) powr (a + b - 1) * u powr (b - 1) / exp t)
+ \<partial>lborel) \<partial>lborel)"
+ by (intro nn_integral_cong mult_indicator_cong)
+ (auto simp: nn_integral_distr nn_integral_density add_ac)
+ also have "\<dots> = (\<integral>\<^sup>+u. \<integral>\<^sup>+t. indicator ({0<..}\<times>{0<..}) (u, t) *
+ 1/(u+1) * (t / (u+1)) powr (a + b - 1) * u powr (b - 1) / exp t
+ \<partial>lborel \<partial>lborel)"
+ by (subst nn_integral_cmult [symmetric], simp, intro nn_integral_cong)
+ (auto simp: indicator_def field_simps divide_ennreal simp flip: ennreal_mult ennreal_mult')
+ also have "\<dots> = (\<integral>\<^sup>+u. \<integral>\<^sup>+t. ennreal (indicator {0<..} u * u powr (b - 1) / (1 + u) powr (a + b)) *
+ ennreal (indicator {0<..} t * t powr (a + b - 1) / exp t)
+ \<partial>lborel \<partial>lborel)"
+ by (intro nn_integral_cong)
+ (auto simp: indicator_def powr_add powr_diff powr_divide powr_minus divide_simps add_ac
+ simp flip: ennreal_mult)
+ also have "\<dots> = I * (\<integral>\<^sup>+t. indicator {0<..} t * t powr (a + b - 1) / exp t \<partial>lborel)"
+ by (simp add: nn_integral_cmult nn_integral_multc I_def)
+ also have "(\<integral>\<^sup>+t. indicator {0<..} t * t powr (a + b - 1) / exp t \<partial>lborel) =
+ ennreal (Gamma (a + b))"
+ using assms
+ by (subst Gamma_conv_nn_integral_real)
+ (auto intro!: nn_integral_cong_AE[OF AE_I[of _ _ "{0}"]]
+ simp: indicator_def split: if_splits)
+ finally have "ennreal (Gamma a * Gamma b) = I * ennreal (Gamma (a + b))" .
+ hence "ennreal (Gamma a * Gamma b) / ennreal (Gamma (a + b)) =
+ I * ennreal (Gamma (a + b)) / ennreal (Gamma (a + b))" by simp
+ also have "\<dots> = I"
+ using \<open>Gamma (a + b) > 0\<close> by (intro ennreal_mult_divide_eq) (auto simp: )
+ also have "ennreal (Gamma a * Gamma b) / ennreal (Gamma (a + b)) =
+ ennreal (Gamma a * Gamma b / Gamma (a + b))"
+ using assms by (intro divide_ennreal) auto
+ also have "\<dots> = ennreal (Beta a b)"
+ by (simp add: Beta_def)
+ finally show *: "ennreal (Beta a b) = I" .
+
+ define f where "f = (\<lambda>u. u powr (b - 1) / (1 + u) powr (a + b))"
+ have "((\<lambda>u. indicator {0<..} u * f u) has_integral Beta a b) UNIV"
+ using * \<open>Beta a b > 0\<close>
+ by (subst has_integral_iff_nn_integral_lebesgue)
+ (auto simp: f_def measurable_completion nn_integral_completion I_def mult_ac)
+ also have "(\<lambda>u. indicator {0<..} u * f u) = (\<lambda>u. if u \<in> {0<..} then f u else 0)"
+ by (auto simp: fun_eq_iff)
+ also have "(\<dots> has_integral Beta a b) UNIV \<longleftrightarrow> (f has_integral Beta a b) {0<..}"
+ by (rule has_integral_restrict_UNIV)
+ finally show \<dots> by (simp add: f_def)
+qed
+
+lemma has_integral_Beta2:
+ fixes a :: real
+ assumes "a < -1/2"
+ shows "((\<lambda>x. (1 + x ^ 2) powr a) has_integral Beta (- a - 1 / 2) (1 / 2) / 2) {0<..}"
+proof -
+ define f where "f = (\<lambda>u. u powr (-1/2) / (1 + u) powr (-a))"
+ define C where "C = Beta (-a-1/2) (1/2)"
+ have I: "(f has_integral C) {0<..}"
+ using has_integral_Beta_real'[of "-a-1/2" "1/2"] assms
+ by (simp_all add: diff_divide_distrib f_def C_def)
+
+ define g where "g = (\<lambda>x. x ^ 2 :: real)"
+ have bij: "bij_betw g {0<..} {0<..}"
+ by (intro bij_betwI[of _ _ _ sqrt]) (auto simp: g_def)
+
+ have "(f absolutely_integrable_on g ` {0<..} \<and> integral (g ` {0<..}) f = C)"
+ using I bij by (simp add: bij_betw_def has_integral_iff absolutely_integrable_on_def f_def)
+ also have "?this \<longleftrightarrow> ((\<lambda>x. \<bar>2 * x\<bar> *\<^sub>R f (g x)) absolutely_integrable_on {0<..} \<and>
+ integral {0<..} (\<lambda>x. \<bar>2 * x\<bar> *\<^sub>R f (g x)) = C)"
+ using bij by (intro has_absolute_integral_change_of_variables_1' [symmetric])
+ (auto intro!: derivative_eq_intros simp: g_def bij_betw_def)
+ finally have "((\<lambda>x. \<bar>2 * x\<bar> * f (g x)) has_integral C) {0<..}"
+ by (simp add: absolutely_integrable_on_def f_def has_integral_iff)
+ also have "?this \<longleftrightarrow> ((\<lambda>x::real. 2 * (1 + x\<^sup>2) powr a) has_integral C) {0<..}"
+ by (intro has_integral_cong) (auto simp: f_def g_def powr_def exp_minus ln_realpow field_simps)
+ finally have "((\<lambda>x::real. 1/2 * (2 * (1 + x\<^sup>2) powr a)) has_integral 1/2 * C) {0<..}"
+ by (intro has_integral_mult_right)
+ thus ?thesis by (simp add: C_def)
+qed
+
+lemma has_integral_Beta3:
+ fixes a b :: real
+ assumes "a < -1/2" and "b > 0"
+ shows "((\<lambda>x. (b + x ^ 2) powr a) has_integral
+ Beta (-a - 1/2) (1/2) / 2 * b powr (a + 1/2)) {0<..}"
+proof -
+ define C where "C = Beta (- a - 1 / 2) (1 / 2) / 2"
+ have int: "nn_integral lborel (\<lambda>x. indicator {0<..} x * (1 + x ^ 2) powr a) = C"
+ using nn_integral_has_integral_lebesgue[OF _ has_integral_Beta2[OF assms(1)]]
+ by (auto simp: C_def)
+ have "nn_integral lborel (\<lambda>x. indicator {0<..} x * (b + x ^ 2) powr a) =
+ (\<integral>\<^sup>+x. ennreal (indicat_real {0<..} (x * sqrt b) * (b + (x * sqrt b)\<^sup>2) powr a * sqrt b) \<partial>lborel)"
+ using assms
+ by (subst lborel_distr_mult'[of "sqrt b"])
+ (auto simp: nn_integral_density nn_integral_distr mult_ac simp flip: ennreal_mult)
+ also have "\<dots> = (\<integral>\<^sup>+x. ennreal (indicat_real {0<..} x * (b * (1 + x ^ 2)) powr a * sqrt b) \<partial>lborel)"
+ using assms
+ by (intro nn_integral_cong) (auto simp: indicator_def field_simps zero_less_mult_iff)
+ also have "\<dots> = (\<integral>\<^sup>+x. ennreal (indicat_real {0<..} x * b powr (a + 1/2) * (1 + x ^ 2) powr a) \<partial>lborel)"
+ using assms
+ by (intro nn_integral_cong) (auto simp: indicator_def powr_add powr_half_sqrt powr_mult)
+ also have "\<dots> = b powr (a + 1/2) * (\<integral>\<^sup>+x. ennreal (indicat_real {0<..} x * (1 + x ^ 2) powr a) \<partial>lborel)"
+ using assms by (subst nn_integral_cmult [symmetric]) (simp_all add: mult_ac flip: ennreal_mult)
+ also have "(\<integral>\<^sup>+x. ennreal (indicat_real {0<..} x * (1 + x ^ 2) powr a) \<partial>lborel) = C"
+ using int by simp
+ also have "ennreal (b powr (a + 1/2)) * ennreal C = ennreal (C * b powr (a + 1/2))"
+ using assms by (subst ennreal_mult) (auto simp: C_def mult_ac Beta_def)
+ finally have *: "(\<integral>\<^sup>+ x. ennreal (indicat_real {0<..} x * (b + x\<^sup>2) powr a) \<partial>lborel) = \<dots>" .
+ hence "((\<lambda>x. indicator {0<..} x * (b + x^2) powr a) has_integral C * b powr (a + 1/2)) UNIV"
+ using assms
+ by (subst has_integral_iff_nn_integral_lebesgue)
+ (auto simp: C_def measurable_completion nn_integral_completion Beta_def)
+ also have "(\<lambda>x. indicator {0<..} x * (b + x^2) powr a) =
+ (\<lambda>x. if x \<in> {0<..} then (b + x^2) powr a else 0)"
+ by (auto simp: fun_eq_iff)
+ finally show ?thesis
+ by (subst (asm) has_integral_restrict_UNIV) (auto simp: C_def)
+qed
+
+
+subsection \<open>Asymptotics of the real $\log\Gamma$ function and its derivatives\<close>
+
+text \<open>
+ This is the error term that occurs in the expansion of @{term ln_Gamma}. It can be shown to
+ be of order $O(s^{-n})$.
+\<close>
+definition stirling_integral :: "nat \<Rightarrow> 'a :: {real_normed_div_algebra, banach} \<Rightarrow> 'a" where
+ "stirling_integral n s =
+ lim (\<lambda>N. integral {0..N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n))"
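+
+(* Informal annotation: with B_n the Bernoulli polynomials and {x} = x - floor x, this reads
+       stirling_integral n s = \int_0^\infty B_n({x}) / (x + s)^n dx
+   (defined as the limit of the integrals over [0, N]).  The lemmas ln_Gamma_stirling_complex
+   and ln_Gamma_stirling_real below turn it into the Stirling expansion
+       ln Gamma(s) = (s - 1/2) ln s - s + ln(2 pi)/2
+                     + \sum_{k=1}^{m-1} B_{k+1} / (k (k+1) s^k) - stirling_integral m s / m,
+   so stirling_integral m s / m is precisely the remainder term described above. *)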
+
+context
+ fixes s :: complex assumes s: "s \<notin> \<real>\<^sub>\<le>\<^sub>0"
+ fixes approx :: "nat \<Rightarrow> complex"
+ defines "approx \<equiv> (\<lambda>N.
+ (\<Sum>n = 1..<N. s / of_nat n - ln (1 + s / of_nat n)) - (euler_mascheroni * s + ln s) - \<comment> \<open>\<open>\<longrightarrow> ln_Gamma s\<close>\<close>
+ (ln_Gamma (of_nat N) - ln (2 * pi / of_nat N) / 2 - of_nat N * ln (of_nat N) + of_nat N) - \<comment> \<open>\<open>\<longrightarrow> 0\<close>\<close>
+ s * (harm (N - 1) - ln (of_nat (N - 1)) - euler_mascheroni) + \<comment> \<open>\<open>\<longrightarrow> 0\<close>\<close>
+ s * (ln (of_nat N + s) - ln (of_nat (N - 1))) - \<comment> \<open>\<open>\<longrightarrow> 0\<close>\<close>
+ (1/2) * (ln (of_nat N + s) - ln (of_nat N)) + \<comment> \<open>\<open>\<longrightarrow> 0\<close>\<close>
+ of_nat N * (ln (of_nat N + s) - ln (of_nat N)) - \<comment> \<open>\<open>\<longrightarrow> s\<close>\<close>
+ (s - 1/2) * ln s - ln (2 * pi) / 2)"
+begin
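+
+(* Informal annotation: approx N is engineered so that it equals the integral of
+   -B_1({x}) / (x + s) over [0, N] (integral_pbernpoly_1_aux below), while each annotated
+   group converges to the limit indicated in its comment.  Adding these limits up,
+   approx N --> ln_Gamma s + s - (s - 1/2) * ln s - ln (2 * pi) / 2, which is exactly what
+   the proof of integral_pbernpoly_1 extracts. *)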
+
+qualified lemma
+ assumes N: "N > 0"
+ shows integrable_pbernpoly_1:
+ "(\<lambda>x. of_real (-pbernpoly 1 x) / (of_real x + s)) integrable_on {0..real N}"
+ and integral_pbernpoly_1_aux:
+ "integral {0..real N} (\<lambda>x. -of_real (pbernpoly 1 x) / (of_real x + s)) = approx N"
+ and has_integral_pbernpoly_1:
+ "((\<lambda>x. pbernpoly 1 x /(x + s)) has_integral
+ (\<Sum>m<N. (of_nat m + 1 / 2 + s) * (ln (of_nat m + s) -
+ ln (of_nat m + 1 + s)) + 1)) {0..real N}"
+proof -
+ let ?A = "(\<lambda>n. {of_nat n..of_nat (n+1)}) ` {0..<N}"
+ have has_integral:
+ "((\<lambda>x. -pbernpoly 1 x / (x + s)) has_integral
+ (of_nat n + 1/2 + s) * (ln (of_nat (n + 1) + s) - ln (of_nat n + s)) - 1)
+ {of_nat n..of_nat (n + 1)}" for n
+ proof (rule has_integral_spike)
+ have "((\<lambda>x. (of_nat n + 1/2 + s) * (1 / (x + s)) - 1) has_integral
+ (of_nat n + 1/2 + s) * (ln (of_real (real (n + 1)) + s) - ln (of_real (real n) + s)) - 1)
+ {of_nat n..of_nat (n + 1)}"
+ using s has_integral_const_real[of 1 "of_nat n" "of_nat (n + 1)"]
+ by (intro has_integral_diff has_integral_mult_right fundamental_theorem_of_calculus)
+ (auto intro!: derivative_eq_intros has_vector_derivative_real_field
+ simp: has_field_derivative_iff_has_vector_derivative [symmetric] field_simps
+ complex_nonpos_Reals_iff)
+ thus "((\<lambda>x. (of_nat n + 1/2 + s) * (1 / (x + s)) - 1) has_integral
+ (of_nat n + 1/2 + s) * (ln (of_nat (n + 1) + s) - ln (of_nat n + s)) - 1)
+ {of_nat n..of_nat (n + 1)}" by simp
+
+ show "-pbernpoly 1 x / (x + s) = (of_nat n + 1/2 + s) * (1 / (x + s)) - 1"
+ if "x \<in> {of_nat n..of_nat (n + 1)} - {of_nat (n + 1)}" for x
+ proof -
+ have x: "x \<ge> real n" "x < real (n + 1)" using that by simp_all
+ hence "floor x = int n" by linarith
+ moreover from s x have "complex_of_real x \<noteq> -s"
+ by (auto simp add: complex_eq_iff complex_nonpos_Reals_iff simp del: of_nat_Suc)
+ ultimately show "-pbernpoly 1 x / (x + s) = (of_nat n + 1/2 + s) * (1 / (x + s)) - 1"
+ by (auto simp: pbernpoly_def bernpoly_def frac_def divide_simps add_eq_0_iff2)
+ qed
+ qed simp_all
+ hence *: "\<And>I. I\<in>?A \<Longrightarrow> ((\<lambda>x. -pbernpoly 1 x / (x + s)) has_integral
+ (Inf I + 1/2 + s) * (ln (Inf I + 1 + s) - ln (Inf I + s)) - 1) I"
+ by (auto simp: add_ac)
+ have "((\<lambda>x. - pbernpoly 1 x / (x + s)) has_integral
+ (\<Sum>I\<in>?A. (Inf I + 1 / 2 + s) * (ln (Inf I + 1 + s) - ln (Inf I + s)) - 1))
+ (\<Union>n\<in>{0..<N}. {real n..real (n + 1)})" (is "(_ has_integral ?i) _")
+ apply (intro has_integral_Union * finite_imageI)
+ apply (force intro!: negligible_atLeastAtMostI pairwiseI)+
+ done
+ hence has_integral: "((\<lambda>x. - pbernpoly 1 x / (x + s)) has_integral ?i) {0..real N}"
+ by (subst has_integral_spike_set_eq)
+ (use Union_atLeastAtMost assms in \<open>auto simp: intro!: empty_imp_negligible\<close>)
+ hence "(\<lambda>x. - pbernpoly 1 x / (x + s)) integrable_on {0..real N}"
+ and integral: "integral {0..real N} (\<lambda>x. - pbernpoly 1 x / (x + s)) = ?i"
+ by (simp_all add: has_integral_iff)
+ show "(\<lambda>x. - pbernpoly 1 x / (x + s)) integrable_on {0..real N}" by fact
+
+ note has_integral_neg[OF has_integral]
+ also have "-?i = (\<Sum>x<N. (of_nat x + 1 / 2 + s) * (ln (of_nat x + s) - ln (of_nat x + 1 + s)) + 1)"
+ by (subst sum.reindex)
+ (simp_all add: inj_on_def atLeast0LessThan algebra_simps sum_negf [symmetric])
+ finally show has_integral:
+ "((\<lambda>x. of_real (pbernpoly 1 x) / (of_real x + s)) has_integral \<dots>) {0..real N}" by simp
+
+ note integral
+ also have "?i = (\<Sum>n<N. (of_nat n + 1 / 2 + s) *
+ (ln (of_nat n + 1 + s) - ln (of_nat n + s))) - N" (is "_ = ?S - _")
+ by (subst sum.reindex) (simp_all add: inj_on_def sum_subtractf atLeast0LessThan)
+ also have "?S = (\<Sum>n<N. of_nat n * (ln (of_nat n + 1 + s) - ln (of_nat n + s))) +
+ (s + 1 / 2) * (\<Sum>n<N. ln (of_nat (Suc n) + s) - ln (of_nat n + s))"
+ (is "_ = ?S1 + _ * ?S2") by (simp add: algebra_simps sum.distrib sum_subtractf sum_distrib_left)
+ also have "?S2 = ln (of_nat N + s) - ln s" by (subst sum_lessThan_telescope) simp
+ also have "?S1 = (\<Sum>n=1..<N. of_nat n * (ln (of_nat n + 1 + s) - ln (of_nat n + s)))"
+ by (intro sum.mono_neutral_right) auto
+ also have "\<dots> = (\<Sum>n=1..<N. of_nat n * ln (of_nat n + 1 + s)) - (\<Sum>n=1..<N. of_nat n * ln (of_nat n + s))"
+ by (simp add: algebra_simps sum_subtractf)
+ also have "(\<Sum>n=1..<N. of_nat n * ln (of_nat n + 1 + s)) =
+ (\<Sum>n=1..<N. (of_nat n - 1) * ln (of_nat n + s)) + (N - 1) * ln (of_nat N + s)"
+ by (induction N) (simp_all add: add_ac of_nat_diff)
+ also have "\<dots> - (\<Sum>n = 1..<N. of_nat n * ln (of_nat n + s)) =
+ -(\<Sum>n=1..<N. ln (of_nat n + s)) + (N - 1) * ln (of_nat N + s)"
+ by (induction N) (simp_all add: algebra_simps)
+ also from s have neq: "s + of_nat x \<noteq> 0" for x
+ by (auto simp: complex_nonpos_Reals_iff complex_eq_iff)
+ hence "(\<Sum>n=1..<N. ln (of_nat n + s)) = (\<Sum>n=1..<N. ln (of_nat n) + ln (1 + s/n))"
+ by (intro sum.cong refl, subst Ln_times_of_nat [symmetric]) (auto simp: divide_simps add_ac)
+ also have "\<dots> = ln (fact (N - 1)) + (\<Sum>n=1..<N. ln (1 + s/n))"
+ by (induction N) (simp_all add: Ln_times_of_nat fact_reduce add_ac)
+ also have "(\<Sum>n=1..<N. ln (1 + s/n)) = -(\<Sum>n=1..<N. s / n - ln (1 + s/n)) + s * (\<Sum>n=1..<N. 1 / of_nat n)"
+ by (simp add: sum_distrib_left sum_subtractf)
+ also from N have "ln (fact (N - 1)) = ln_Gamma (of_nat N :: complex)"
+ by (simp add: ln_Gamma_complex_conv_fact)
+ also have "{1..<N} = {1..N - 1}" by auto
+ hence "(\<Sum>n = 1..<N. 1 / of_nat n) = (harm (N - 1) :: complex)"
+ by (simp add: harm_def divide_simps)
+ also have "- (ln_Gamma (of_nat N) + (- (\<Sum>n = 1..<N. s / of_nat n - ln (1 + s / of_nat n)) +
+ s * harm (N - 1))) + of_nat (N - 1) * ln (of_nat N + s) +
+ (s + 1 / 2) * (ln (of_nat N + s) - ln s) - of_nat N = approx N"
+ using N by (simp add: field_simps of_nat_diff ln_div approx_def Ln_of_nat
+ ln_Gamma_complex_of_real [symmetric])
+ finally show "integral {0..of_nat N} (\<lambda>x. -of_real (pbernpoly 1 x) / (of_real x + s)) = \<dots>"
+ by simp
+qed
+
+lemma integrable_ln_Gamma_aux:
+ shows "(\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n) integrable_on {0..real N}"
+proof (cases "n = 1")
+ case True
+ with s show ?thesis using integrable_neg[OF integrable_pbernpoly_1[of N]]
+ by (cases "N = 0") (simp_all add: integrable_negligible)
+next
+ case False
+ from s have "of_real x + s \<noteq> 0" if "x \<ge> 0" for x using that
+ by (auto simp: complex_eq_iff add_eq_0_iff2 complex_nonpos_Reals_iff)
+ with False s show ?thesis
+ by (auto intro!: integrable_continuous_real continuous_intros)
+qed
+
+text \<open>
+ The following proof is based on ``Rudiments of the theory of the gamma function''
+ by Bruce Berndt~\cite{berndt}.
+\<close>
+qualified lemma integral_pbernpoly_1:
+ "(\<lambda>N. integral {0..real N} (\<lambda>x. pbernpoly 1 x / (x + s)))
+ \<longlonglongrightarrow> -ln_Gamma s - s + (s - 1 / 2) * ln s + ln (2 * pi) / 2"
+proof -
+ have neq: "s + of_real x \<noteq> 0" if "x \<ge> 0" for x :: real
+ using that s by (auto simp: complex_eq_iff complex_nonpos_Reals_iff)
+ have "(approx \<longlongrightarrow> ln_Gamma s - 0 - 0 + 0 - 0 + s - (s - 1/2) * ln s - ln (2 * pi) / 2) at_top"
+ unfolding approx_def
+ proof (intro tendsto_add tendsto_diff)
+ from s have s': "s \<notin> \<int>\<^sub>\<le>\<^sub>0" by (auto simp: complex_nonpos_Reals_iff elim!: nonpos_Ints_cases)
+ have "(\<lambda>n. \<Sum>i=1..<n. s / of_nat i - ln (1 + s / of_nat i)) \<longlonglongrightarrow>
+ ln_Gamma s + euler_mascheroni * s + ln s" (is "?f \<longlonglongrightarrow> _")
+ using ln_Gamma_series'_aux[OF s'] unfolding sums_def
+ by (subst filterlim_sequentially_Suc [symmetric], subst (asm) sum.atLeast1_atMost_eq [symmetric])
+ (simp add: atLeastLessThanSuc_atLeastAtMost)
+ thus "((\<lambda>n. ?f n - (euler_mascheroni * s + ln s)) \<longlongrightarrow> ln_Gamma s) at_top"
+ by (auto intro: tendsto_eq_intros)
+ next
+ show "(\<lambda>x. complex_of_real (ln_Gamma (real x) - ln (2 * pi / real x) / 2 -
+ real x * ln (real x) + real x)) \<longlonglongrightarrow> 0"
+ proof (intro tendsto_of_real_0_I
+ filterlim_compose[OF tendsto_sandwich filterlim_real_sequentially])
+ show "eventually (\<lambda>x::real. ln_Gamma x - ln (2 * pi / x) / 2 - x * ln x + x \<ge> 0) at_top"
+ using eventually_ge_at_top[of "1::real"]
+ by eventually_elim (insert ln_Gamma_bounds(1), simp add: algebra_simps)
+ show "eventually (\<lambda>x::real. ln_Gamma x - ln (2 * pi / x) / 2 - x * ln x + x \<le>
+ 1 / 12 * inverse x) at_top"
+ using eventually_ge_at_top[of "1::real"]
+ by eventually_elim (insert ln_Gamma_bounds(2), simp add: field_simps)
+ show "((\<lambda>x::real. 1 / 12 * inverse x) \<longlongrightarrow> 0) at_top"
+ by (intro tendsto_mult_right_zero tendsto_inverse_0_at_top filterlim_ident)
+ qed simp_all
+ next
+ have "(\<lambda>x. s * of_real (harm (x - 1) - ln (real (x - 1)) - euler_mascheroni)) \<longlonglongrightarrow>
+ s * of_real (euler_mascheroni - euler_mascheroni)"
+ by (subst filterlim_sequentially_Suc [symmetric], intro tendsto_intros)
+ (insert euler_mascheroni_LIMSEQ, simp_all)
+ also have "?this \<longleftrightarrow> (\<lambda>x. s * (harm (x - 1) - ln (of_nat (x - 1)) - euler_mascheroni)) \<longlonglongrightarrow> 0"
+ by (intro filterlim_cong refl eventually_mono[OF eventually_gt_at_top[of "1::nat"]])
+ (auto simp: Ln_of_nat of_real_harm)
+ finally show "(\<lambda>x. s * (harm (x - 1) - ln (of_nat (x - 1)) - euler_mascheroni)) \<longlonglongrightarrow> 0" .
+ next
+ have "((\<lambda>x. ln (1 + (s + 1) / of_real x)) \<longlongrightarrow> ln (1 + 0)) at_top" (is ?P)
+ by (intro tendsto_intros tendsto_divide_0[OF tendsto_const])
+ (simp_all add: filterlim_ident filterlim_at_infinity_conv_norm_at_top filterlim_abs_real)
+ also have "ln (of_real (x + 1) + s) - ln (complex_of_real x) = ln (1 + (s + 1) / of_real x)"
+ if "x > 1" for x using that s
+ using Ln_divide_of_real[of x "of_real (x + 1) + s", symmetric] neq[of "x+1"]
+ by (simp add: field_simps Ln_of_real)
+ hence "?P \<longleftrightarrow> ((\<lambda>x. ln (of_real (x + 1) + s) - ln (of_real x)) \<longlongrightarrow> 0) at_top"
+ by (intro filterlim_cong refl)
+ (auto intro: eventually_mono[OF eventually_gt_at_top[of "1::real"]])
+ finally have "((\<lambda>n. ln (of_real (real n + 1) + s) - ln (of_real (real n))) \<longlongrightarrow> 0) at_top"
+ by (rule filterlim_compose[OF _ filterlim_real_sequentially])
+ hence "((\<lambda>n. ln (of_nat n + s) - ln (of_nat (n - 1))) \<longlongrightarrow> 0) at_top"
+ by (subst filterlim_sequentially_Suc [symmetric]) (simp add: add_ac)
+ thus "(\<lambda>x. s * (ln (of_nat x + s) - ln (of_nat (x - 1)))) \<longlonglongrightarrow> 0"
+ by (rule tendsto_mult_right_zero)
+ next
+ have "((\<lambda>x. ln (1 + s / of_real x)) \<longlongrightarrow> ln (1 + 0)) at_top" (is ?P)
+ by (intro tendsto_intros tendsto_divide_0[OF tendsto_const])
+ (simp_all add: filterlim_ident filterlim_at_infinity_conv_norm_at_top filterlim_abs_real)
+ also have "ln (of_real x + s) - ln (of_real x) = ln (1 + s / of_real x)" if "x > 0" for x
+ using Ln_divide_of_real[of x "of_real x + s"] neq[of x] that
+ by (auto simp: field_simps Ln_of_real)
+ hence "?P \<longleftrightarrow> ((\<lambda>x. ln (of_real x + s) - ln (of_real x)) \<longlongrightarrow> 0) at_top"
+ using s by (intro filterlim_cong refl)
+ (auto intro: eventually_mono [OF eventually_gt_at_top[of "1::real"]])
+ finally have "(\<lambda>x. (1/2) * (ln (of_real (real x) + s) - ln (of_real (real x)))) \<longlonglongrightarrow> 0"
+ by (rule tendsto_mult_right_zero[OF filterlim_compose[OF _ filterlim_real_sequentially]])
+ thus "(\<lambda>x. (1/2) * (ln (of_nat x + s) - ln (of_nat x))) \<longlonglongrightarrow> 0" by simp
+ next
+ have "((\<lambda>x. x * (ln (1 + s / of_real x))) \<longlongrightarrow> s) at_top" (is ?P)
+ by (rule stirling_limit_aux2)
+ also have "ln (1 + s / of_real x) = ln (of_real x + s) - ln (of_real x)" if "x > 1" for x
+ using that s Ln_divide_of_real [of x "of_real x + s", symmetric] neq[of x]
+ by (auto simp: Ln_of_real field_simps)
+ hence "?P \<longleftrightarrow> ((\<lambda>x. of_real x * (ln (of_real x + s) - ln (of_real x))) \<longlongrightarrow> s) at_top"
+ by (intro filterlim_cong refl)
+ (auto intro: eventually_mono[OF eventually_gt_at_top[of "1::real"]])
+ finally have "(\<lambda>n. of_real (real n) * (ln (of_real (real n) + s) - ln (of_real (real n)))) \<longlonglongrightarrow> s"
+ by (rule filterlim_compose[OF _ filterlim_real_sequentially])
+ thus "(\<lambda>n. of_nat n * (ln (of_nat n + s) - ln (of_nat n))) \<longlonglongrightarrow> s" by simp
+ qed simp_all
+ also have "?this \<longleftrightarrow> ((\<lambda>N. integral {0..real N} (\<lambda>x. -pbernpoly 1 x / (x + s))) \<longlongrightarrow>
+ ln_Gamma s + s - (s - 1/2) * ln s - ln (2 * pi) / 2) at_top"
+ using integral_pbernpoly_1_aux
+ by (intro filterlim_cong refl)
+ (auto intro: eventually_mono[OF eventually_gt_at_top[of "0::nat"]])
+ also have "(\<lambda>N. integral {0..real N} (\<lambda>x. -pbernpoly 1 x / (x + s))) =
+ (\<lambda>N. -integral {0..real N} (\<lambda>x. pbernpoly 1 x / (x + s)))"
+ by (simp add: fun_eq_iff)
+ finally show ?thesis by (simp add: tendsto_minus_cancel_left [symmetric] algebra_simps)
+qed
+
+
+qualified lemma pbernpoly_integral_conv_pbernpoly_integral_Suc:
+ assumes "n \<ge> 1"
+ shows "integral {0..real N} (\<lambda>x. pbernpoly n x / (x + s) ^ n) =
+ of_real (pbernpoly (Suc n) (real N)) / (of_nat (Suc n) * (s + of_nat N) ^ n) -
+ of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n) + of_nat n / of_nat (Suc n) *
+ integral {0..real N} (\<lambda>x. of_real (pbernpoly (Suc n) x) / (of_real x + s) ^ Suc n)"
+proof -
+ note [derivative_intros] = has_field_derivative_pbernpoly_Suc'
+ define I where "I = -of_real (pbernpoly (Suc n) (of_nat N)) / (of_nat (Suc n) * (of_nat N + s) ^ n) +
+ of_real (bernoulli (Suc n) / real (Suc n)) / s ^ n +
+ integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)"
+ have "((\<lambda>x. (-of_nat n * inverse (of_real x + s) ^ Suc n) *
+ (of_real (pbernpoly (Suc n) x) / (of_nat (Suc n))))
+ has_integral -I) {0..real N}"
+ proof (rule integration_by_parts_interior_strong[OF bounded_bilinear_mult])
+ fix x :: real assume x: "x \<in> {0<..<real N} - real ` {0..N}"
+ have "x \<notin> \<int>"
+ proof
+ assume "x \<in> \<int>"
+ then obtain n where "x = of_int n" by (auto elim!: Ints_cases)
+ with x have x': "x = of_nat (nat n)" by simp
+ from x show False by (auto simp: x')
+ qed
+ hence "((\<lambda>x. of_real (pbernpoly (Suc n) x / of_nat (Suc n))) has_vector_derivative
+ complex_of_real (pbernpoly n x)) (at x)"
+ by (intro has_vector_derivative_of_real) (auto intro!: derivative_eq_intros)
+ thus "((\<lambda>x. of_real (pbernpoly (Suc n) x) / of_nat (Suc n)) has_vector_derivative
+ complex_of_real (pbernpoly n x)) (at x)" by simp
+ from x s have "complex_of_real x + s \<noteq> 0"
+ by (auto simp: complex_eq_iff complex_nonpos_Reals_iff)
+ thus "((\<lambda>x. inverse (of_real x + s) ^ n) has_vector_derivative
+ - of_nat n * inverse (of_real x + s) ^ Suc n) (at x)" using x s assms
+ by (auto intro!: derivative_eq_intros has_vector_derivative_real_field simp: divide_simps power_add [symmetric]
+ simp del: power_Suc)
+ next
+ have "complex_of_real x + s \<noteq> 0" if "x \<ge> 0" for x
+ using that s by (auto simp: complex_eq_iff complex_nonpos_Reals_iff)
+ thus "continuous_on {0..real N} (\<lambda>x. inverse (of_real x + s) ^ n)"
+ "continuous_on {0..real N} (\<lambda>x. complex_of_real (pbernpoly (Suc n) x) / of_nat (Suc n))"
+ using assms s by (auto intro!: continuous_intros simp del: of_nat_Suc)
+ next
+ have "((\<lambda>x. inverse (of_real x + s) ^ n * of_real (pbernpoly n x)) has_integral
+ pbernpoly (Suc n) (of_nat N) / (of_nat (Suc n) * (of_nat N + s) ^ n) -
+ of_real (bernoulli (Suc n) / real (Suc n)) / s ^ n - -I) {0..real N}"
+ using integrable_ln_Gamma_aux[of n N] assms
+ by (auto simp: I_def has_integral_integral divide_simps)
+ thus "((\<lambda>x. inverse (of_real x + s) ^ n * of_real (pbernpoly n x)) has_integral
+ inverse (of_real (real N) + s) ^ n * (of_real (pbernpoly (Suc n) (real N)) /
+ of_nat (Suc n)) -
+ inverse (of_real 0 + s) ^ n * (of_real (pbernpoly (Suc n) 0) / of_nat (Suc n)) - - I)
+ {0..real N}" by (simp_all add: field_simps)
+ qed simp_all
+ also have "(\<lambda>x. - of_nat n * inverse (of_real x + s) ^ Suc n * (of_real (pbernpoly (Suc n) x) /
+ of_nat (Suc n))) =
+ (\<lambda>x. - (of_nat n / of_nat (Suc n) * of_real (pbernpoly (Suc n) x) /
+ (of_real x + s) ^ Suc n))"
+ by (simp add: divide_simps fun_eq_iff)
+ finally have "((\<lambda>x. - (of_nat n / of_nat (Suc n) * of_real (pbernpoly (Suc n) x) /
+ (of_real x + s) ^ Suc n)) has_integral - I) {0..real N}" .
+ from has_integral_neg[OF this] show ?thesis
+ by (auto simp add: I_def has_integral_iff algebra_simps integral_mult_right [symmetric]
+ simp del: power_Suc of_nat_Suc )
+qed
+
+lemma pbernpoly_over_power_tendsto_0:
+ assumes "n > 0"
+ shows "(\<lambda>x. of_real (pbernpoly (Suc n) (real x)) / (of_nat (Suc n) * (s + of_nat x) ^ n)) \<longlonglongrightarrow> 0"
+proof -
+ from s have neq: "s + of_nat n \<noteq> 0" for n
+ by (auto simp: complex_eq_iff complex_nonpos_Reals_iff)
+ from bounded_pbernpoly[of "Suc n"] guess c . note c = this
+ have "eventually (\<lambda>x. real x + Re s > 0) at_top"
+ by real_asymp
+ hence "eventually (\<lambda>x. norm (of_real (pbernpoly (Suc n) (real x)) /
+ (of_nat (Suc n) * (s + of_nat x) ^ n)) \<le>
+ (c / real (Suc n)) / (real x + Re s) ^ n) at_top"
+ using eventually_gt_at_top[of "0::nat"]
+ proof eventually_elim
+ case (elim x)
+ have "norm (of_real (pbernpoly (Suc n) (real x)) /
+ (of_nat (Suc n) * (s + of_nat x) ^ n)) \<le>
+ (c / real (Suc n)) / norm (s + of_nat x) ^ n" (is "_ \<le> ?rhs") using c[of x]
+ by (auto simp: norm_divide norm_mult norm_power neq field_simps simp del: of_nat_Suc)
+ also have "(real x + Re s) \<le> cmod (s + of_nat x)"
+ using complex_Re_le_cmod[of "s + of_nat x"] s by (auto simp add: complex_nonpos_Reals_iff)
+ hence "?rhs \<le> (c / real (Suc n)) / (real x + Re s) ^ n" using s elim c[of 0] neq[of x]
+ by (intro divide_left_mono power_mono mult_pos_pos divide_nonneg_pos zero_less_power) auto
+ finally show ?case .
+ qed
+ moreover have "(\<lambda>x. (c / real (Suc n)) / (real x + Re s) ^ n) \<longlonglongrightarrow> 0"
+ using \<open>n > 0\<close> by real_asymp
+ ultimately show ?thesis by (rule Lim_null_comparison)
+qed
+
+lemma convergent_stirling_integral:
+ assumes "n > 0"
+ shows "convergent (\<lambda>N. integral {0..real N}
+ (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n))" (is "convergent (?f n)")
+proof -
+ have "convergent (?f (Suc n))" for n
+ proof (induction n)
+ case 0
+ thus ?case using integral_pbernpoly_1 by (auto intro!: convergentI)
+ next
+ case (Suc n)
+ have "convergent (\<lambda>N. ?f (Suc n) N -
+ of_real (pbernpoly (Suc (Suc n)) (real N)) /
+ (of_nat (Suc (Suc n)) * (s + of_nat N) ^ Suc n) +
+ of_real (bernoulli (Suc (Suc n)) / (real (Suc (Suc n)))) / s ^ Suc n)"
+ (is "convergent ?g")
+ by (intro convergent_add convergent_diff Suc
+ convergent_const convergentI[OF pbernpoly_over_power_tendsto_0]) simp_all
+ also have "?g = (\<lambda>N. of_nat (Suc n) / of_nat (Suc (Suc n)) * ?f (Suc (Suc n)) N)" using s
+ by (subst pbernpoly_integral_conv_pbernpoly_integral_Suc)
+ (auto simp: fun_eq_iff field_simps simp del: of_nat_Suc power_Suc)
+ also have "convergent \<dots> \<longleftrightarrow> convergent (?f (Suc (Suc n)))"
+ by (intro convergent_mult_const_iff) (simp_all del: of_nat_Suc)
+ finally show ?case .
+ qed
+ from this[of "n - 1"] assms show ?thesis by simp
+qed
+
+lemma stirling_integral_conv_stirling_integral_Suc:
+ assumes "n > 0"
+ shows "stirling_integral n s =
+ of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s -
+ of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n)"
+proof -
+ have "(\<lambda>N. of_real (pbernpoly (Suc n) (real N)) / (of_nat (Suc n) * (s + of_nat N) ^ n) -
+ of_real (bernoulli (Suc n)) / (real (Suc n) * s ^ n) +
+ integral {0..real N} (\<lambda>x. of_nat n / of_nat (Suc n) *
+ (of_real (pbernpoly (Suc n) x) / (of_real x + s) ^ Suc n)))
+ \<longlonglongrightarrow> 0 - of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n) +
+ of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s" (is "?f \<longlonglongrightarrow> _")
+ unfolding stirling_integral_def integral_mult_right
+ using convergent_stirling_integral[of "Suc n"] assms s
+ by (intro tendsto_intros pbernpoly_over_power_tendsto_0)
+ (auto simp: convergent_LIMSEQ_iff simp del: of_nat_Suc)
+ also have "?this \<longleftrightarrow> (\<lambda>N. integral {0..real N}
+ (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<longlonglongrightarrow>
+ of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s -
+ of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n)"
+ using eventually_gt_at_top[of "0::nat"] pbernpoly_integral_conv_pbernpoly_integral_Suc[of n]
+ assms unfolding integral_mult_right
+ by (intro filterlim_cong refl) (auto elim!: eventually_mono simp del: power_Suc)
+ finally show ?thesis unfolding stirling_integral_def[of n] by (rule limI)
+qed
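+
+(* Informal annotation: writing J_n(s) for stirling_integral n s, this is the one-step
+   recursion
+       J_n(s) = n/(n+1) * J_{n+1}(s) - B_{n+1} / ((n+1) * s^n),
+   obtained from the integration by parts in pbernpoly_integral_conv_pbernpoly_integral_Suc
+   by letting N tend to infinity.  Unfolding it from n = 1 up to m gives
+   stirling_integral_1_unfold below. *)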
+
+lemma stirling_integral_1_unfold:
+ assumes "m > 0"
+ shows "stirling_integral 1 s = stirling_integral m s / of_nat m -
+ (\<Sum>k=1..<m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k))"
+proof -
+ have "stirling_integral 1 s = stirling_integral (Suc m) s / of_nat (Suc m) -
+ (\<Sum>k=1..<Suc m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k))" for m
+ proof (induction m)
+ case (Suc m)
+ let ?C = "(\<Sum>k = 1..<Suc m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k))"
+ note Suc.IH
+ also have "stirling_integral (Suc m) s / of_nat (Suc m) =
+ stirling_integral (Suc (Suc m)) s / of_nat (Suc (Suc m)) -
+ of_real (bernoulli (Suc (Suc m))) /
+ (of_nat (Suc m) * of_nat (Suc (Suc m)) * s ^ Suc m)"
+ (is "_ = ?A - ?B") by (subst stirling_integral_conv_stirling_integral_Suc)
+ (simp_all del: of_nat_Suc power_Suc add: divide_simps)
+ also have "?A - ?B - ?C = ?A - (?B + ?C)" by (rule diff_diff_eq)
+ also have "?B + ?C = (\<Sum>k = 1..<Suc (Suc m). of_real (bernoulli (Suc k)) /
+ (of_nat k * of_nat (Suc k) * s ^ k))"
+ using s by (simp add: divide_simps)
+ finally show ?case .
+ qed simp_all
+ note this[of "m - 1"]
+ also from assms have "Suc (m - 1) = m" by simp
+ finally show ?thesis .
+qed
+
+lemma ln_Gamma_stirling_complex:
+ assumes "m > 0"
+ shows "ln_Gamma s = (s - 1 / 2) * ln s - s + ln (2 * pi) / 2 +
+ (\<Sum>k=1..<m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k)) -
+ stirling_integral m s / of_nat m"
+proof -
+ have "ln_Gamma s = (s - 1 / 2) * ln s - s + ln (2 * pi) / 2 - stirling_integral 1 s"
+ using limI[OF integral_pbernpoly_1] by (simp add: stirling_integral_def algebra_simps)
+ also have "stirling_integral 1 s = stirling_integral m s / of_nat m -
+ (\<Sum>k = 1..<m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k))"
+ using assms by (rule stirling_integral_1_unfold)
+ finally show ?thesis by simp
+qed
+
+lemma LIMSEQ_stirling_integral:
+ "n > 0 \<Longrightarrow> (\<lambda>x. integral {0..real x} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n))
+ \<longlonglongrightarrow> stirling_integral n s" unfolding stirling_integral_def
+ using convergent_stirling_integral[of n] by (simp only: convergent_LIMSEQ_iff)
+
+end
+
+lemmas has_integral_of_real = has_integral_linear[OF _ bounded_linear_of_real, unfolded o_def]
+lemmas integral_of_real = integral_linear[OF _ bounded_linear_of_real, unfolded o_def]
+
+lemma integrable_ln_Gamma_aux_real:
+ assumes "0 < s"
+ shows "(\<lambda>x. pbernpoly n x / (x + s) ^ n) integrable_on {0..real N}"
+proof -
+ have "(\<lambda>x. complex_of_real (pbernpoly n x / (x + s) ^ n)) integrable_on {0..real N}"
+ using integrable_ln_Gamma_aux[of "of_real s" n N] assms by simp
+ from integrable_linear[OF this bounded_linear_Re] show ?thesis
+ by (simp only: o_def Re_complex_of_real)
+qed
+
+lemma
+ assumes "x > 0" "n > 0"
+ shows stirling_integral_complex_of_real:
+ "stirling_integral n (complex_of_real x) = of_real (stirling_integral n x)"
+ and LIMSEQ_stirling_integral_real:
+ "(\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))
+ \<longlonglongrightarrow> stirling_integral n x"
+ and stirling_integral_real_convergent:
+ "convergent (\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))"
+proof -
+ have "(\<lambda>N. integral {0..real N} (\<lambda>t. of_real (pbernpoly n t / (t + x) ^ n)))
+ \<longlonglongrightarrow> stirling_integral n (complex_of_real x)"
+ using LIMSEQ_stirling_integral[of "complex_of_real x" n] assms by simp
+ hence "(\<lambda>N. of_real (integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n)))
+ \<longlonglongrightarrow> stirling_integral n (complex_of_real x)"
+ using integrable_ln_Gamma_aux_real[OF assms(1), of n]
+ by (subst (asm) integral_of_real) simp
+ from tendsto_Re[OF this]
+ have "(\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))
+ \<longlonglongrightarrow> Re (stirling_integral n (complex_of_real x))" by simp
+ thus "convergent (\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))"
+ by (rule convergentI)
+ thus "(\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))
+ \<longlonglongrightarrow> stirling_integral n x" unfolding stirling_integral_def
+ by (simp add: convergent_LIMSEQ_iff)
+ from tendsto_of_real[OF this, where 'a = complex]
+ integrable_ln_Gamma_aux_real[OF assms(1), of n]
+ have "(\<lambda>xa. integral {0..real xa}
+ (\<lambda>xa. complex_of_real (pbernpoly n xa) / (complex_of_real xa + x) ^ n))
+ \<longlonglongrightarrow> complex_of_real (stirling_integral n x)"
+ by (subst (asm) integral_of_real [symmetric]) simp_all
+ from LIMSEQ_unique[OF this LIMSEQ_stirling_integral[of "complex_of_real x" n]] assms
+ show "stirling_integral n (complex_of_real x) = of_real (stirling_integral n x)" by simp
+qed
+
+lemma ln_Gamma_stirling_real:
+ assumes "x > (0 :: real)" "m > (0::nat)"
+ shows "ln_Gamma x = (x - 1 / 2) * ln x - x + ln (2 * pi) / 2 +
+ (\<Sum>k = 1..<m. bernoulli (Suc k) / (of_nat k * of_nat (Suc k) * x ^ k)) -
+ stirling_integral m x / of_nat m"
+proof -
+ from assms have "complex_of_real (ln_Gamma x) = ln_Gamma (complex_of_real x)"
+ by (simp add: ln_Gamma_complex_of_real)
+ also have "ln_Gamma (complex_of_real x) = complex_of_real (
+ (x - 1 / 2) * ln x - x + ln (2 * pi) / 2 +
+ (\<Sum>k = 1..<m. bernoulli (Suc k) / (of_nat k * of_nat (Suc k) * x ^ k)) -
+ stirling_integral m x / of_nat m)" using assms
+ by (subst ln_Gamma_stirling_complex[of _ m])
+ (simp_all add: Ln_of_real stirling_integral_complex_of_real)
+ finally show ?thesis by (subst (asm) of_real_eq_iff)
+qed
+
+
+lemma stirling_integral_bound_aux:
+ assumes n: "n > (1::nat)"
+ obtains c where "\<And>s. Re s > 0 \<Longrightarrow> norm (stirling_integral n s) \<le> c / Re s ^ (n - 1)"
+proof -
+ obtain c where c: "norm (pbernpoly n x) \<le> c" for x by (rule bounded_pbernpoly[of n]) blast
+ have c': "pbernpoly n x \<le> c" for x using c[of x] by (simp add: abs_real_def split: if_splits)
+ from c[of 0] have c_nonneg: "c \<ge> 0" by simp
+ have "norm (stirling_integral n s) \<le> c / (real n - 1) / Re s ^ (n - 1)" if s: "Re s > 0" for s
+ proof (rule Lim_norm_ubound[OF _ LIMSEQ_stirling_integral])
+ have pos: "x + norm s > 0" if "x \<ge> 0" for x using s that by (intro add_nonneg_pos) auto
+ have nz: "of_real x + s \<noteq> 0" if "x \<ge> 0" for x using s that by (auto simp: complex_eq_iff)
+ let ?bound = "\<lambda>N. c / (Re s ^ (n - 1) * (real n - 1)) -
+ c / ((real N + Re s) ^ (n - 1) * (real n - 1))"
+ show "eventually (\<lambda>N. norm (integral {0..real N}
+ (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<le>
+ c / (real n - 1) / Re s ^ (n - 1)) at_top"
+ using eventually_gt_at_top[of "0::nat"]
+ proof eventually_elim
+ case (elim N)
+ let ?F = "\<lambda>x. -c / ((x + Re s) ^ (n - 1) * (real n - 1))"
+ from n s have "((\<lambda>x. c / (x + Re s) ^ n) has_integral (?F (real N) - ?F 0)) {0..real N}"
+ by (intro fundamental_theorem_of_calculus)
+ (auto intro!: derivative_eq_intros simp: divide_simps power_diff add_eq_0_iff2
+ has_field_derivative_iff_has_vector_derivative [symmetric])
+ also have "?F (real N) - ?F 0 = ?bound N" by simp
+ finally have *: "((\<lambda>x. c / (x + Re s) ^ n) has_integral ?bound N) {0..real N}" .
+ have "norm (integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<le>
+ integral {0..real N} (\<lambda>x. c / (x + Re s) ^ n)"
+ proof (intro integral_norm_bound_integral integrable_ln_Gamma_aux s ballI)
+ fix x assume x: "x \<in> {0..real N}"
+ have "norm (of_real (pbernpoly n x) / (of_real x + s) ^ n) \<le> c / norm (of_real x + s) ^ n"
+ unfolding norm_divide norm_power using c by (intro divide_right_mono) simp_all
+ also have "\<dots> \<le> c / (x + Re s) ^ n"
+ using x c c_nonneg s nz[of x] complex_Re_le_cmod[of "of_real x + s"]
+ by (intro divide_left_mono power_mono mult_pos_pos zero_less_power add_nonneg_pos) auto
+ finally show "norm (of_real (pbernpoly n x) / (of_real x + s) ^ n) \<le> \<dots>" .
+ qed (insert n s * pos nz c, auto simp: complex_nonpos_Reals_iff)
+ also have "\<dots> = ?bound N" using * by (simp add: has_integral_iff)
+ also have "\<dots> \<le> c / (Re s ^ (n - 1) * (real n - 1))" using c_nonneg elim s n by simp
+ also have "\<dots> = c / (real n - 1) / (Re s ^ (n - 1))" by simp
+ finally show "norm (integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) /
+ (of_real x + s) ^ n)) \<le> c / (real n - 1) / Re s ^ (n - 1)" .
+ qed
+ qed (insert s n, simp_all add: complex_nonpos_Reals_iff)
+ thus ?thesis by (rule that)
+qed
+
+lemma stirling_integral_bound_aux_integral1:
+ fixes a b c :: real and n :: nat
+ assumes "a \<ge> 0" "b > 0" "c \<ge> 0" "n > 1" "l < a - b" "r > a + b"
+ shows "((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral
+ 2*c*(n / (n - 1))/b^(n-1) - c/(n-1) * (1/(a-l)^(n-1) + 1/(r-a)^(n-1))) {l..r}"
+proof -
+ define x1 x2 where "x1 = a - b" and "x2 = a + b"
+ define F1 where "F1 = (\<lambda>x::real. c / (a - x) ^ (n - 1) / (n - 1))"
+ define F3 where "F3 = (\<lambda>x::real. -c / (x - a) ^ (n - 1) / (n - 1))"
+ have deriv: "(F1 has_vector_derivative (c / (a - x) ^ n)) (at x within A)"
+ "(F3 has_vector_derivative (c / (x - a) ^ n)) (at x within A)"
+ if "x \<noteq> a" for x :: real and A
+ unfolding F1_def F3_def using assms that
+ by (auto intro!: derivative_eq_intros simp: divide_simps power_diff add_eq_0_iff2
+ simp flip: has_field_derivative_iff_has_vector_derivative)
+
+ from assms have "((\<lambda>x. c / (a - x) ^ n) has_integral (F1 x1 - F1 l)) {l..x1}"
+ by (intro fundamental_theorem_of_calculus deriv) (auto simp: x1_def max_def split: if_splits)
+ also have "?this \<longleftrightarrow> ((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral (F1 x1 - F1 l)) {l..x1}"
+ using assms
+ by (intro has_integral_spike_finite_eq[of "{l}"]) (auto simp: x1_def max_def split: if_splits)
+ finally have I1: "((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral (F1 x1 - F1 l)) {l..x1}" .
+
+ have "((\<lambda>x. c / b ^ n) has_integral (x2 - x1) * c / b ^ n) {x1..x2}"
+ using has_integral_const_real[of "c / b ^ n" x1 x2] assms by (simp add: x1_def x2_def)
+ also have "?this \<longleftrightarrow> ((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral ((x2 - x1) * c / b ^ n)) {x1..x2}"
+ by (intro has_integral_spike_finite_eq[of "{x1, x2}"])
+ (auto simp: x1_def x2_def split: if_splits)
+ finally have I2: "((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral ((x2 - x1) * c / b ^ n)) {x1..x2}" .
+
+ from assms have I3: "((\<lambda>x. c / (x - a) ^ n) has_integral (F3 r - F3 x2)) {x2..r}"
+ by (intro fundamental_theorem_of_calculus deriv) (auto simp: x2_def min_def split: if_splits)
+ also have "?this \<longleftrightarrow> ((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral (F3 r - F3 x2)) {x2..r}"
+ using assms
+ by (intro has_integral_spike_finite_eq[of "{r}"]) (auto simp: x2_def min_def split: if_splits)
+ finally have I3: "((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral (F3 r - F3 x2)) {x2..r}" .
+
+ have "((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral (F1 x1 - F1 l) + ((x2 - x1) * c / b ^ n) + (F3 r - F3 x2)) {l..r}"
+ using assms
+ by (intro has_integral_combine[OF _ _ has_integral_combine[OF _ _ I1 I2] I3])
+ (auto simp: x1_def x2_def)
+ also have "(F1 x1 - F1 l) + ((x2 - x1) * c / b ^ n) + (F3 r - F3 x2) =
+ F1 x1 - F1 l + F3 r - F3 x2 + (x2 - x1) * c / b ^ n"
+ by (simp add: algebra_simps)
+ also have "x2 - x1 = 2 * b"
+ using assms by (simp add: x2_def x1_def min_def max_def)
+ also have "2 * b * c / b ^ n = 2 * c / b ^ (n - 1)"
+ using assms by (simp add: power_diff field_simps)
+ also have "F1 x1 - F1 l + F3 r - F3 x2 =
+ c/(n-1) * (2/b^(n-1) - 1/(a-l)^(n-1) - 1/(r-a)^(n-1))"
+ using assms by (simp add: x1_def x2_def F1_def F3_def field_simps)
+ also have "\<dots> + 2 * c / b ^ (n - 1) =
+ 2*c*(1 + 1/(n-1))/b^(n-1) - c/(n-1) * (1/(a-l)^(n-1) + 1/(r-a)^(n-1))"
+ using assms by (simp add: field_simps)
+ also have "1 + 1 / (n - 1) = n / (n - 1)"
+ using assms by (simp add: field_simps)
+ finally show ?thesis .
+qed
+
+lemma stirling_integral_bound_aux_integral2:
+ fixes a b c :: real and n :: nat
+ assumes "a \<ge> 0" "b > 0" "c \<ge> 0" "n > 1"
+ obtains I where "((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral I) {l..r}"
+ "I \<le> 2 * c * (n / (n - 1)) / b ^ (n-1)"
+proof -
+ define l' where "l' = min l (a - b - 1)"
+ define r' where "r' = max r (a + b + 1)"
+
+ define A where "A = 2 * c * (n / (n - 1)) / b ^ (n - 1)"
+ define B where "B = c / real (n - 1) * (1 / (a - l') ^ (n - 1) + 1 / (r' - a) ^ (n - 1))"
+
+ have has_int: "((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral (A - B)) {l'..r'}"
+ using assms unfolding A_def B_def
+ by (intro stirling_integral_bound_aux_integral1) (auto simp: l'_def r'_def)
+ have "(\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) integrable_on {l..r}"
+ by (rule integrable_on_subinterval[OF has_integral_integrable[OF has_int]])
+ (auto simp: l'_def r'_def)
+ then obtain I where has_int': "((\<lambda>x. c / max b \<bar>x - a\<bar> ^ n) has_integral I) {l..r}"
+ by (auto simp: integrable_on_def)
+
+ from assms have "I \<le> A - B"
+ by (intro has_integral_subset_le[OF _ has_int' has_int]) (auto simp: l'_def r'_def)
+ also have "\<dots> \<le> A"
+ using assms by (simp add: B_def l'_def r'_def)
+ finally show ?thesis using that[of I] has_int' unfolding A_def by blast
+qed
+
+lemma stirling_integral_bound_aux':
+ assumes n: "n > (1::nat)" and \<alpha>: "\<alpha> \<in> {0<..<pi}"
+ obtains c where "\<And>s::complex. s \<in> complex_cone' \<alpha> - {0} \<Longrightarrow>
+ norm (stirling_integral n s) \<le> c / norm s ^ (n - 1)"
+proof -
+ obtain c where c: "norm (pbernpoly n x) \<le> c" for x by (rule bounded_pbernpoly[of n]) blast
+ have c': "pbernpoly n x \<le> c" for x using c[of x] by (simp add: abs_real_def split: if_splits)
+ from c[of 0] have c_nonneg: "c \<ge> 0" by simp
+
+ define D where "D = c * Beta (- (real_of_int (- int n) / 2) - 1 / 2) (1 / 2) / 2"
+ define C where "C = max D (2*c*(n/(n-1))/sin \<alpha>^(n-1))"
+
+ have *: "norm (stirling_integral n s) \<le> C / norm s ^ (n - 1)"
+ if s: "s \<in> complex_cone' \<alpha> - {0}" for s :: complex
+ proof (rule Lim_norm_ubound[OF _ LIMSEQ_stirling_integral])
+ from s \<alpha> have arg: "\<bar>arg s\<bar> \<le> \<alpha>" by (auto simp: complex_cone_altdef)
+ have s': "s \<notin> \<real>\<^sub>\<le>\<^sub>0"
+ using complex_cone_inter_nonpos_Reals[of "-\<alpha>" \<alpha>] \<alpha> s by auto
+ from s have [simp]: "s \<noteq> 0" by auto
+
+ show "eventually (\<lambda>N. norm (integral {0..real N}
+ (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<le>
+ C / norm s ^ (n - 1)) at_top"
+ using eventually_gt_at_top[of "0::nat"]
+ proof eventually_elim
+ case (elim N)
+ show ?case
+ proof (cases "Re s > 0")
+ case True
+ have int: "((\<lambda>x. c * (x^2 + norm s^2) powr (-n / 2)) has_integral
+ D * (norm s ^ 2) powr (-n / 2 + 1 / 2)) {0<..}"
+ using has_integral_mult_left[OF has_integral_Beta3[of "-n/2" "norm s ^ 2"], of c] assms True
+ unfolding D_def by (simp add: algebra_simps)
+ hence int': "((\<lambda>x. c * (x^2 + norm s^2) powr (-n / 2)) has_integral
+ D * (norm s ^ 2) powr (-n / 2 + 1 / 2)) {0..}"
+ by (subst has_integral_interior [symmetric]) simp_all
+ hence integrable: "(\<lambda>x. c * (x^2 + norm s^2) powr (-n / 2)) integrable_on {0..}"
+ by (simp add: has_integral_iff)
+
+ have "norm (integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<le>
+ integral {0..real N} (\<lambda>x. c * (x^2 + norm s^2) powr (-n / 2))"
+ proof (intro integral_norm_bound_integral s ballI integrable_ln_Gamma_aux)
+ have [simp]: "{0<..} - {0::real..} = {}" "{0..} - {0<..} = {0::real}"
+ by auto
+ have "(\<lambda>x. c * (x\<^sup>2 + (cmod s)\<^sup>2) powr (real_of_int (- int n) / 2)) integrable_on {0<..}"
+ using int by (simp add: has_integral_iff)
+ also have "?this \<longleftrightarrow> (\<lambda>x. c * (x\<^sup>2 + (cmod s)\<^sup>2) powr (real_of_int (- int n) / 2)) integrable_on {0..}"
+ by (intro integrable_spike_set_eq) auto
+ finally show "(\<lambda>x. c * (x\<^sup>2 + (cmod s)\<^sup>2) powr (real_of_int (- int n) / 2)) integrable_on
+ {0..real N}" by (rule integrable_on_subinterval) auto
+ next
+ fix x assume x: "x \<in> {0..real N}"
+ have nz: "complex_of_real x + s \<noteq> 0"
+ using True x by (auto simp: complex_eq_iff)
+ have "norm (of_real (pbernpoly n x) / (of_real x + s) ^ n) \<le> c / norm (of_real x + s) ^ n"
+ unfolding norm_divide norm_power using c by (intro divide_right_mono) simp_all
+ also have "\<dots> \<le> c / sqrt (x ^ 2 + norm s ^ 2) ^ n"
+ proof (intro divide_left_mono mult_pos_pos zero_less_power power_mono)
+ show "sqrt (x\<^sup>2 + (cmod s)\<^sup>2) \<le> cmod (complex_of_real x + s)"
+ using x True by (simp add: cmod_def algebra_simps power2_eq_square)
+ qed (use x True c_nonneg assms nz in \<open>auto simp: add_nonneg_pos\<close>)
+ also have "sqrt (x ^ 2 + norm s ^ 2) ^ n = (x ^ 2 + norm s ^ 2) powr (1/2 * n)"
+ by (subst powr_powr [symmetric], subst powr_realpow)
+ (auto simp: powr_half_sqrt add_nonneg_pos)
+ also have "c / \<dots> = c * (x^2 + norm s^2) powr (-n / 2)"
+ by (simp add: powr_minus field_simps)
+ finally show "norm (complex_of_real (pbernpoly n x) / (complex_of_real x + s) ^ n) \<le> \<dots>" .
+ qed fact+
+ also have "\<dots> \<le> integral {0..} (\<lambda>x. c * (x^2 + norm s^2) powr (-n / 2))"
+ using c_nonneg
+ by (intro integral_subset_le integrable integrable_on_subinterval[OF integrable]) auto
+ also have "\<dots> = D * (norm s ^ 2) powr (-n / 2 + 1 / 2)"
+ using int' by (simp add: has_integral_iff)
+ also have "(norm s ^ 2) powr (-n / 2 + 1 / 2) = norm s powr (2 * (-n / 2 + 1 / 2))"
+ by (subst powr_powr [symmetric]) auto
+ also have "\<dots> = norm s powr (-real (n - 1))"
+ using assms by (simp add: of_nat_diff)
+ also have "D * \<dots> = D / norm s ^ (n - 1)"
+ by (auto simp: powr_minus powr_realpow field_simps)
+ also have "\<dots> \<le> C / norm s ^ (n - 1)"
+ by (intro divide_right_mono) (auto simp: C_def)
+ finally show "norm (integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<le> \<dots>" .
+
+ next
+
+ case False
+ have "cos \<bar>arg s\<bar> = cos (arg s)"
+ by (simp add: abs_if)
+ also have "cos (arg s) = Re (rcis (norm s) (arg s)) / norm s"
+ by (subst Re_rcis) auto
+ also have "\<dots> = Re s / norm s"
+ by (subst rcis_cmod_arg) auto
+ also have "\<dots> \<le> cos (pi / 2)"
+ using False by (auto simp: field_simps)
+ finally have "\<bar>arg s\<bar> \<ge> pi / 2"
+ using arg \<alpha> by (subst (asm) cos_mono_le_eq) auto
+
+ have "sin \<alpha> * norm s = sin (pi - \<alpha>) * norm s"
+ by simp
+ also have "\<dots> \<le> sin (pi - \<bar>arg s\<bar>) * norm s"
+ using \<alpha> arg \<open>\<bar>arg s\<bar> \<ge> pi / 2\<close>
+ by (intro mult_right_mono sin_monotone_2pi_le) auto
+ also have "sin \<bar>arg s\<bar> \<ge> 0"
+ using arg_bounded[of s] by (intro sin_ge_zero) auto
+ hence "sin (pi - \<bar>arg s\<bar>) = \<bar>sin \<bar>arg s\<bar>\<bar>"
+ by simp
+ also have "\<dots> = \<bar>sin (arg s)\<bar>"
+ by (simp add: abs_if)
+ also have "\<dots> * norm s = \<bar>Im (rcis (norm s) (arg s))\<bar>"
+ by (simp add: abs_mult)
+ also have "\<dots> = \<bar>Im s\<bar>"
+ by (subst rcis_cmod_arg) auto
+ finally have abs_Im_ge: "\<bar>Im s\<bar> \<ge> sin \<alpha> * norm s" .
+
+ have [simp]: "Im s \<noteq> 0" "s \<noteq> 0"
+ using s \<open>s \<notin> \<real>\<^sub>\<le>\<^sub>0\<close> False
+ by (auto simp: cmod_def zero_le_mult_iff complex_nonpos_Reals_iff)
+ have "sin \<alpha> > 0"
+ using assms by (intro sin_gt_zero) auto
+
+ obtain I where I: "((\<lambda>x. c / max \<bar>Im s\<bar> \<bar>x + Re s\<bar> ^ n) has_integral I) {0..real N}"
+ "I \<le> 2*c*(n/(n-1)) / \<bar>Im s\<bar> ^ (n - 1)"
+ using s c_nonneg assms False
+ stirling_integral_bound_aux_integral2[of "-Re s" "\<bar>Im s\<bar>" c n 0 "real N"] by auto
+
+ have "norm (integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<le>
+ integral {0..real N} (\<lambda>x. c / max \<bar>Im s\<bar> \<bar>x + Re s\<bar> ^ n)"
+ proof (intro integral_norm_bound_integral integrable_ln_Gamma_aux s ballI)
+ show "(\<lambda>x. c / max \<bar>Im s\<bar> \<bar>x + Re s\<bar> ^ n) integrable_on {0..real N}"
+ using I(1) by (simp add: has_integral_iff)
+ next
+ fix x assume x: "x \<in> {0..real N}"
+ have nz: "complex_of_real x + s \<noteq> 0"
+ by (auto simp: complex_eq_iff)
+ have "norm (complex_of_real (pbernpoly n x) / (complex_of_real x + s) ^ n) \<le>
+ c / norm (complex_of_real x + s) ^ n"
+ unfolding norm_divide norm_power using c[of x] by (intro divide_right_mono) simp_all
+ also have "\<dots> \<le> c / max \<bar>Im s\<bar> \<bar>x + Re s\<bar> ^ n"
+ using c_nonneg nz abs_Re_le_cmod[of "of_real x + s"] abs_Im_le_cmod[of "of_real x + s"]
+ by (intro divide_left_mono power_mono mult_pos_pos zero_less_power)
+ (auto simp: less_max_iff_disj)
+ finally show "norm (complex_of_real (pbernpoly n x) / (complex_of_real x + s) ^ n) \<le> \<dots>" .
+ qed (auto simp: complex_nonpos_Reals_iff)
+ also have "\<dots> \<le> 2*c*(n/(n-1)) / \<bar>Im s\<bar> ^ (n - 1)"
+ using I by (simp add: has_integral_iff)
+ also have "\<dots> \<le> 2*c*(n/(n-1)) / (sin \<alpha> * norm s) ^ (n - 1)"
+ using \<open>sin \<alpha> > 0\<close> s c_nonneg abs_Im_ge
+ by (intro divide_left_mono mult_pos_pos zero_less_power power_mono mult_nonneg_nonneg) auto
+ also have "\<dots> = 2*c*(n/(n-1))/sin \<alpha>^(n-1) / norm s ^ (n - 1)"
+ by (simp add: field_simps)
+ also have "\<dots> \<le> C / norm s ^ (n - 1)"
+ by (intro divide_right_mono) (auto simp: C_def)
+ finally show ?thesis .
+ qed
+ qed
+ qed (use that assms complex_cone_inter_nonpos_Reals[of "-\<alpha>" \<alpha>] \<alpha> in auto)
+ thus ?thesis by (rule that)
+qed
+
+lemma stirling_integral_bound:
+ assumes "n > 0"
+ obtains c where
+ "\<And>s. Re s > 0 \<Longrightarrow> norm (stirling_integral n s) \<le> c / Re s ^ n"
+proof -
+ let ?f = "\<lambda>s. of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s -
+ of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n)"
+ from stirling_integral_bound_aux[of "Suc n"] assms obtain c where
+ c: "\<And>s. Re s > 0 \<Longrightarrow> norm (stirling_integral (Suc n) s) \<le> c / Re s ^ n" by auto
+ define c1 where "c1 = real n / real (Suc n) * c"
+ define c2 where "c2 = \<bar>bernoulli (Suc n)\<bar> / real (Suc n)"
+ have c2_nonneg: "c2 \<ge> 0" by (simp add: c2_def)
+ show ?thesis
+ proof (rule that)
+ fix s :: complex assume s: "Re s > 0"
+ hence s': "s \<notin> \<real>\<^sub>\<le>\<^sub>0" by (auto simp: complex_nonpos_Reals_iff)
+ have "stirling_integral n s = ?f s" using s' assms
+ by (rule stirling_integral_conv_stirling_integral_Suc)
+ also have "norm \<dots> \<le> norm (of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s) +
+ norm (of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n))"
+ by (rule norm_triangle_ineq4)
+ also have "\<dots> = real n / real (Suc n) * norm (stirling_integral (Suc n) s) +
+ c2 / norm s ^ n" (is "_ = ?A + ?B")
+ by (simp add: norm_divide norm_mult norm_power c2_def field_simps del: of_nat_Suc)
+ also have "?A \<le> real n / real (Suc n) * (c / Re s ^ n)"
+ by (intro mult_left_mono c s) simp_all
+ also have "\<dots> = c1 / Re s ^ n" by (simp add: c1_def)
+ also have "c2 / norm s ^ n \<le> c2 / Re s ^ n" using s c2_nonneg
+ by (intro divide_left_mono power_mono complex_Re_le_cmod mult_pos_pos zero_less_power) auto
+ also have "c1 / Re s ^ n + c2 / Re s ^ n = (c1 + c2) / Re s ^ n"
+ using s by (simp add: field_simps)
+ finally show "norm (stirling_integral n s) \<le> (c1 + c2) / Re s ^ n" by - simp_all
+ qed
+qed
+
+lemma stirling_integral_bound':
+ assumes "n > 0" and "\<alpha> \<in> {0<..<pi}"
+ obtains c where
+ "\<And>s::complex. s \<in> complex_cone' \<alpha> - {0} \<Longrightarrow> norm (stirling_integral n s) \<le> c / norm s ^ n"
+proof -
+ let ?f = "\<lambda>s. of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s -
+ of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n)"
+ from stirling_integral_bound_aux'[of "Suc n"] assms obtain c where
+ c: "\<And>s::complex. s \<in> complex_cone' \<alpha> - {0} \<Longrightarrow>
+ norm (stirling_integral (Suc n) s) \<le> c / norm s ^ n" by auto
+ define c1 where "c1 = real n / real (Suc n) * c"
+ define c2 where "c2 = \<bar>bernoulli (Suc n)\<bar> / real (Suc n)"
+ have c2_nonneg: "c2 \<ge> 0" by (simp add: c2_def)
+ show ?thesis
+ proof (rule that)
+ fix s :: complex assume s: "s \<in> complex_cone' \<alpha> - {0}"
+ have s': "s \<notin> \<real>\<^sub>\<le>\<^sub>0"
+ using complex_cone_inter_nonpos_Reals[of "-\<alpha>" \<alpha>] assms s by auto
+
+ have "stirling_integral n s = ?f s" using s' assms
+ by (intro stirling_integral_conv_stirling_integral_Suc) auto
+ also have "norm \<dots> \<le> norm (of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s) +
+ norm (of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n))"
+ by (rule norm_triangle_ineq4)
+ also have "\<dots> = real n / real (Suc n) * norm (stirling_integral (Suc n) s) +
+ c2 / norm s ^ n" (is "_ = ?A + ?B")
+ by (simp add: norm_divide norm_mult norm_power c2_def field_simps del: of_nat_Suc)
+ also have "?A \<le> real n / real (Suc n) * (c / norm s ^ n)"
+ by (intro mult_left_mono c s) simp_all
+ also have "\<dots> = c1 / norm s ^ n" by (simp add: c1_def)
+ also have "c1 / norm s ^ n + c2 / norm s ^ n = (c1 + c2) / norm s ^ n"
+ using s by (simp add: divide_simps)
+ finally show "norm (stirling_integral n s) \<le> (c1 + c2) / norm s ^ n" by - simp_all
+ qed
+qed
+
+
+lemma stirling_integral_holomorphic [holomorphic_intros]:
+ assumes m: "m > 0" and "A \<inter> \<real>\<^sub>\<le>\<^sub>0 = {}"
+ shows "stirling_integral m holomorphic_on A"
+proof -
+ from assms have [simp]: "z \<notin> \<real>\<^sub>\<le>\<^sub>0" if "z \<in> A" for z
+ using that by auto
+ let ?f = "\<lambda>s::complex. of_nat m * ((s - 1 / 2) * Ln s - s + of_real (ln (2 * pi) / 2) +
+ (\<Sum>k=1..<m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k)) -
+ ln_Gamma s)"
+ have "?f holomorphic_on A" using assms
+ by (auto intro!: holomorphic_intros simp del: of_nat_Suc elim!: nonpos_Reals_cases)
+ also have "?this \<longleftrightarrow> stirling_integral m holomorphic_on A"
+ using assms by (intro holomorphic_cong refl)
+ (simp_all add: field_simps ln_Gamma_stirling_complex)
+ finally show "stirling_integral m holomorphic_on A" .
+qed
+
+lemma stirling_integral_continuous_on_complex [continuous_intros]:
+ assumes m: "m > 0" and "A \<inter> \<real>\<^sub>\<le>\<^sub>0 = {}"
+ shows "continuous_on A (stirling_integral m :: _ \<Rightarrow> complex)"
+ by (intro holomorphic_on_imp_continuous_on stirling_integral_holomorphic assms)
+
+lemma has_field_derivative_stirling_integral_complex:
+ fixes x :: complex
+ assumes "x \<notin> \<real>\<^sub>\<le>\<^sub>0" "n > 0"
+ shows "(stirling_integral n has_field_derivative deriv (stirling_integral n) x) (at x)"
+ using assms
+ by (intro holomorphic_derivI[OF stirling_integral_holomorphic, of n "-\<real>\<^sub>\<le>\<^sub>0"]) auto
+
+
+
+lemma
+ assumes n: "n > 0" and "x > 0"
+ shows deriv_stirling_integral_complex_of_real:
+ "(deriv ^^ j) (stirling_integral n) (complex_of_real x) =
+ complex_of_real ((deriv ^^ j) (stirling_integral n) x)" (is "?lhs x = ?rhs x")
+ and differentiable_stirling_integral_real:
+ "(deriv ^^ j) (stirling_integral n) field_differentiable at x" (is ?thesis2)
+proof -
+ let ?A = "{s. Re s > 0}"
+ let ?f = "\<lambda>j x. (deriv ^^ j) (stirling_integral n) (complex_of_real x)"
+ let ?f' = "\<lambda>j x. complex_of_real ((deriv ^^ j) (stirling_integral n) x)"
+
+ have [simp]: "open ?A" by (simp add: open_halfspace_Re_gt)
+
+ have "?lhs x = ?rhs x \<and> (deriv ^^ j) (stirling_integral n) field_differentiable at x"
+ if "x > 0" for x using that
+ proof (induction j arbitrary: x)
+ case 0
+ have "((\<lambda>x. Re (stirling_integral n (of_real x))) has_field_derivative
+ Re (deriv (\<lambda>x. stirling_integral n x) (of_real x))) (at x)" using 0 n
+ by (auto intro!: derivative_intros has_vector_derivative_real_field
+ field_differentiable_derivI holomorphic_on_imp_differentiable_at[of _ ?A]
+ stirling_integral_holomorphic simp: complex_nonpos_Reals_iff)
+ also have "?this \<longleftrightarrow> (stirling_integral n has_field_derivative
+ Re (deriv (\<lambda>x. stirling_integral n x) (of_real x))) (at x)"
+ using eventually_nhds_in_open[of "{0<..}" x] 0 n
+ by (intro has_field_derivative_cong_ev refl)
+ (auto elim!: eventually_mono simp: stirling_integral_complex_of_real)
+ finally have "stirling_integral n field_differentiable at x"
+ by (auto simp: field_differentiable_def)
+ with 0 n show ?case by (auto simp: stirling_integral_complex_of_real)
+ next
+ case (Suc j x)
+ note IH = conjunct1[OF Suc.IH] conjunct2[OF Suc.IH]
+ have *: "(deriv ^^ Suc j) (stirling_integral n) (complex_of_real x) =
+ of_real ((deriv ^^ Suc j) (stirling_integral n) x)" if x: "x > 0" for x
+ proof -
+ have "deriv ((deriv ^^ j) (stirling_integral n)) (complex_of_real x) =
+ vector_derivative (\<lambda>x. (deriv ^^ j) (stirling_integral n) (of_real x)) (at x)"
+ using n x
+ by (intro vector_derivative_of_real_right [symmetric]
+ holomorphic_on_imp_differentiable_at[of _ ?A] holomorphic_higher_deriv
+ stirling_integral_holomorphic) (auto simp: complex_nonpos_Reals_iff)
+ also have "\<dots> = vector_derivative (\<lambda>x. of_real ((deriv ^^ j) (stirling_integral n) x)) (at x)"
+ using eventually_nhds_in_open[of "{0<..}" x] x
+ by (intro vector_derivative_cong_eq) (auto elim!: eventually_mono simp: IH(1))
+ also have "\<dots> = of_real (deriv ((deriv ^^ j) (stirling_integral n)) x)"
+ by (intro vector_derivative_of_real_left holomorphic_on_imp_differentiable_at[of _ ?A]
+ field_differentiable_imp_differentiable IH(2) x)
+ finally show ?thesis by simp
+ qed
+ have "((\<lambda>x. Re ((deriv ^^ Suc j) (stirling_integral n) (of_real x))) has_field_derivative
+ Re (deriv ((deriv ^^ Suc j) (stirling_integral n)) (of_real x))) (at x)"
+ using Suc.prems n
+ by (intro derivative_intros has_vector_derivative_real_field field_differentiable_derivI
+ holomorphic_on_imp_differentiable_at[of _ ?A] stirling_integral_holomorphic
+ holomorphic_higher_deriv) (auto simp: complex_nonpos_Reals_iff)
+ also have "?this \<longleftrightarrow> ((deriv ^^ Suc j) (stirling_integral n) has_field_derivative
+ Re (deriv ((deriv ^^ Suc j) (stirling_integral n)) (of_real x))) (at x)"
+ using eventually_nhds_in_open[of "{0<..}" x] Suc.prems *
+ by (intro has_field_derivative_cong_ev refl) (auto elim!: eventually_mono)
+ finally have "(deriv ^^ Suc j) (stirling_integral n) field_differentiable at x"
+ by (auto simp: field_differentiable_def)
+ with *[OF Suc.prems] show ?case by blast
+ qed
+ from this[OF assms(2)] show "?lhs x = ?rhs x" ?thesis2 by blast+
+qed
+
+text \<open>
+ Unfortunately, asymptotic power series cannot, in general, be differentiated. However, since
+ @{term ln_Gamma} is holomorphic on the entire positive real half-space, we can differentiate
+ its asymptotic expansion after all.
+
+ To do this, we use an ad-hoc version of the more general approach outlined in Erdelyi's
+ ``Asymptotic Expansions'' for holomorphic functions: We bound the value of the $j$-th derivative
+ of the remainder term at some value $x$ by applying Cauchy's integral formula along a circle
+ centred at $x$ with radius $\frac{1}{2} x$.
+\<close>
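+
+text \<open>
+ A brief informal sketch of this estimate (the next lemma is the formal version): write $f$ for
+ \<open>stirling_integral m\<close> and assume a constant $c$ with $\left\|f(s)\right\| \leq c / \text{Re}(s)^m$
+ whenever $\text{Re}(s) > 0$. Cauchy's integral formula for higher derivatives along the circle of
+ radius $x/2$ around a point $x > 0$ gives
+ \[ f^{(j)}(x) = \frac{j!}{2\pi i} \oint_{|u - x| = x/2} \frac{f(u)}{(u - x)^{j+1}}\,du . \]
+ On that circle we have $\text{Re}(u) \geq x/2$, so the integrand is bounded in norm by
+ $c / ((x/2)^{m} (x/2)^{j+1})$, and the path has length $\pi x$; the standard estimate therefore
+ yields
+ \[ \left\|f^{(j)}(x)\right\| \leq \frac{j!}{2\pi} \cdot \frac{c}{(x/2)^{m} (x/2)^{j+1}} \cdot \pi x
+ = \frac{j!\, c\, 2^{m+j}}{x^{m+j}} , \]
+ which is the bound $B' / x^{m+j}$ used in the proof below.
+\<close>
+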
+lemma deriv_stirling_integral_real_bound:
+ assumes m: "m > 0"
+ shows "(deriv ^^ j) (stirling_integral m) \<in> O(\<lambda>x::real. 1 / x ^ (m + j))"
+proof -
+ from stirling_integral_bound[OF m] guess c . note c = this
+ have "0 \<le> cmod (stirling_integral m 1)" by simp
+ also have "\<dots> \<le> c" using c[of 1] by simp
+ finally have c_nonneg: "c \<ge> 0" .
+ define B where "B = c * 2 ^ (m + Suc j)"
+ define B' where "B' = B * fact j / 2"
+
+ have "eventually (\<lambda>x::real. norm ((deriv ^^ j) (stirling_integral m) x) \<le>
+ B' * norm (1 / x ^ (m+ j))) at_top"
+ using eventually_gt_at_top[of "0::real"]
+ proof eventually_elim
+ case (elim x)
+ have "s \<notin> \<real>\<^sub>\<le>\<^sub>0" if "s \<in> cball (of_real x) (x/2)" for s :: complex
+ proof -
+ have "x - Re s \<le> norm (of_real x - s)" using complex_Re_le_cmod[of "of_real x - s"] by simp
+ also from that have "\<dots> \<le> x/2" by (simp add: dist_complex_def)
+ finally show ?thesis using elim by (auto simp: complex_nonpos_Reals_iff)
+ qed
+ hence "((\<lambda>u. stirling_integral m u / (u - of_real x) ^ Suc j) has_contour_integral
+ complex_of_real (2 * pi) * \<i> / fact j *
+ (deriv ^^ j) (stirling_integral m) (of_real x)) (circlepath (of_real x) (x/2))"
+ using m elim
+ by (intro Cauchy_has_contour_integral_higher_derivative_circlepath
+ stirling_integral_continuous_on_complex stirling_integral_holomorphic) auto
+ hence "norm (of_real (2 * pi) * \<i> / fact j * (deriv ^^ j) (stirling_integral m) (of_real x)) \<le>
+ B / x ^ (m + Suc j) * (2 * pi * (x / 2))"
+ proof (rule has_contour_integral_bound_circlepath)
+ fix u :: complex assume dist: "norm (u - of_real x) = x / 2"
+ have "Re (of_real x - u) \<le> norm (of_real x - u)" by (rule complex_Re_le_cmod)
+ also have "\<dots> = x / 2" using dist by (simp add: norm_minus_commute)
+ finally have Re_u: "Re u \<ge> x/2" using elim by simp
+ have "norm (stirling_integral m u / (u - of_real x) ^ Suc j) \<le>
+ c / Re u ^ m / (x / 2) ^ Suc j" using Re_u elim
+ unfolding norm_divide norm_power dist
+ by (intro divide_right_mono zero_le_power c) simp_all
+ also have "\<dots> \<le> c / (x/2) ^ m / (x / 2) ^ Suc j" using c_nonneg elim Re_u
+ by (intro divide_right_mono divide_left_mono power_mono) simp_all
+ also have "\<dots> = B / x ^ (m + Suc j)" using elim by (simp add: B_def field_simps power_add)
+ finally show "norm (stirling_integral m u / (u - of_real x) ^ Suc j) \<le> B / x ^ (m + Suc j)" .
+ qed (insert elim c_nonneg, auto simp: B_def simp del: power_Suc)
+ hence "cmod ((deriv ^^ j) (stirling_integral m) (of_real x)) \<le> B' / x ^ (j + m)"
+ using elim by (simp add: field_simps norm_divide norm_mult norm_power B'_def)
+ with elim m show ?case by (simp_all add: add_ac deriv_stirling_integral_complex_of_real)
+ qed
+ thus ?thesis by (rule bigoI)
+qed
+
+definition stirling_sum where
+ "stirling_sum j m x =
+ (-1) ^ j * (\<Sum>k = 1..<m. (of_real (bernoulli (Suc k)) * pochhammer (of_nat k) j / (of_nat k *
+ of_nat (Suc k))) * inverse x ^ (k + j))"
+
+definition stirling_sum' where
+ "stirling_sum' j m x =
+ (-1) ^ (Suc j) * (\<Sum>k\<le>m. (of_real (bernoulli' k) *
+ pochhammer (of_nat (Suc k)) (j - 1) * inverse x ^ (k + j)))"
+
+lemma stirling_sum_complex_of_real:
+ "stirling_sum j m (complex_of_real x) = complex_of_real (stirling_sum j m x)"
+ by (simp add: stirling_sum_def pochhammer_of_real [symmetric] del: of_nat_Suc)
+
+lemma stirling_sum'_complex_of_real:
+ "stirling_sum' j m (complex_of_real x) = complex_of_real (stirling_sum' j m x)"
+ by (simp add: stirling_sum'_def pochhammer_of_real [symmetric] del: of_nat_Suc)
+
+lemma has_field_derivative_stirling_sum_complex [derivative_intros]:
+ "Re x > 0 \<Longrightarrow> (stirling_sum j m has_field_derivative stirling_sum (Suc j) m x) (at x)"
+ unfolding stirling_sum_def [abs_def] sum_distrib_left
+ by (rule DERIV_sum) (auto intro!: derivative_eq_intros simp del: of_nat_Suc
+ simp: pochhammer_Suc power_diff)
+
+lemma has_field_derivative_stirling_sum_real [derivative_intros]:
+ "x > (0::real) \<Longrightarrow> (stirling_sum j m has_field_derivative stirling_sum (Suc j) m x) (at x)"
+ unfolding stirling_sum_def [abs_def] sum_distrib_left
+ by (rule DERIV_sum) (auto intro!: derivative_eq_intros simp del: of_nat_Suc
+ simp: pochhammer_Suc power_diff)
+
+lemma has_field_derivative_stirling_sum'_complex [derivative_intros]:
+ assumes "j > 0" "Re x > 0"
+ shows "(stirling_sum' j m has_field_derivative stirling_sum' (Suc j) m x) (at x)"
+proof (cases j)
+ case (Suc j')
+ from assms have [simp]: "x \<noteq> 0" by auto
+ define c where "c = (\<lambda>n. (-1) ^ Suc j * complex_of_real (bernoulli' n) *
+ pochhammer (of_nat (Suc n)) j')"
+ define T where "T = (\<lambda>n x. c n * inverse x ^ (j + n))"
+ define T' where "T' = (\<lambda>n x. - (of_nat (j + n)) * c n * inverse x ^ (Suc (j + n)))"
+ have "((\<lambda>x. \<Sum>k\<le>m. T k x) has_field_derivative (\<Sum>k\<le>m. T' k x)) (at x)" using assms Suc
+ by (intro DERIV_sum)
+ (auto simp: T_def T'_def intro!: derivative_eq_intros
+ simp: field_simps power_add [symmetric] simp del: of_nat_Suc power_Suc of_nat_add)
+ also have "(\<lambda>x. (\<Sum>k\<le>m. T k x)) = stirling_sum' j m"
+ by (simp add: Suc T_def c_def stirling_sum'_def fun_eq_iff add_ac mult.assoc sum_distrib_left)
+ also have "(\<Sum>k\<le>m. T' k x) = stirling_sum' (Suc j) m x"
+ by (simp add: T'_def c_def Suc stirling_sum'_def sum_distrib_left
+ sum_distrib_right algebra_simps pochhammer_Suc)
+ finally show ?thesis .
+qed (insert assms, simp_all)
+
+lemma has_field_derivative_stirling_sum'_real [derivative_intros]:
+ assumes "j > 0" "x > (0::real)"
+ shows "(stirling_sum' j m has_field_derivative stirling_sum' (Suc j) m x) (at x)"
+proof (cases j)
+ case (Suc j')
+ from assms have [simp]: "x \<noteq> 0" by auto
+ define c where "c = (\<lambda>n. (-1) ^ Suc j * (bernoulli' n) * pochhammer (of_nat (Suc n)) j')"
+ define T where "T = (\<lambda>n x. c n * inverse x ^ (j + n))"
+ define T' where "T' = (\<lambda>n x. - (of_nat (j + n)) * c n * inverse x ^ (Suc (j + n)))"
+ have "((\<lambda>x. \<Sum>k\<le>m. T k x) has_field_derivative (\<Sum>k\<le>m. T' k x)) (at x)" using assms Suc
+ by (intro DERIV_sum)
+ (auto simp: T_def T'_def intro!: derivative_eq_intros
+ simp: field_simps power_add [symmetric] simp del: of_nat_Suc power_Suc of_nat_add)
+ also have "(\<lambda>x. (\<Sum>k\<le>m. T k x)) = stirling_sum' j m"
+ by (simp add: Suc T_def c_def stirling_sum'_def fun_eq_iff add_ac mult.assoc sum_distrib_left)
+ also have "(\<Sum>k\<le>m. T' k x) = stirling_sum' (Suc j) m x"
+ by (simp add: T'_def c_def Suc stirling_sum'_def sum_distrib_left
+ sum_distrib_right algebra_simps pochhammer_Suc)
+ finally show ?thesis .
+qed (insert assms, simp_all)
+
+lemma higher_deriv_stirling_sum_complex:
+ "Re x > 0 \<Longrightarrow> (deriv ^^ i) (stirling_sum j m) x = stirling_sum (i + j) m x"
+proof (induction i arbitrary: x)
+ case (Suc i)
+ have "deriv ((deriv ^^ i) (stirling_sum j m)) x = deriv (stirling_sum (i + j) m) x"
+ using eventually_nhds_in_open[of "{x. Re x > 0}" x] Suc.prems
+ by (intro deriv_cong_ev refl) (auto elim!: eventually_mono simp: open_halfspace_Re_gt Suc.IH)
+ also from Suc.prems have "\<dots> = stirling_sum (Suc (i + j)) m x"
+ by (intro DERIV_imp_deriv has_field_derivative_stirling_sum_complex)
+ finally show ?case by simp
+qed simp_all
+
+
+definition Polygamma_approx :: "nat \<Rightarrow> nat \<Rightarrow> 'a \<Rightarrow> 'a :: {real_normed_field, ln}" where
+ "Polygamma_approx j m =
+ (deriv ^^ j) (\<lambda>x. (x - 1 / 2) * ln x - x + of_real (ln (2 * pi)) / 2 + stirling_sum 0 m x)"
+
+lemma Polygamma_approx_Suc: "Polygamma_approx (Suc j) m = deriv (Polygamma_approx j m)"
+ by (simp add: Polygamma_approx_def)
+
+lemma Polygamma_approx_0:
+ "Polygamma_approx 0 m x = (x - 1/2) * ln x - x + of_real (ln (2*pi)) / 2 + stirling_sum 0 m x"
+ by (simp add: Polygamma_approx_def)
+
+lemma Polygamma_approx_1_complex:
+ "Re x > 0 \<Longrightarrow>
+ Polygamma_approx (Suc 0) m x = ln x - 1 / (2*x) + stirling_sum (Suc 0) m x"
+ unfolding Polygamma_approx_Suc Polygamma_approx_0
+ by (intro DERIV_imp_deriv)
+ (auto intro!: derivative_eq_intros elim!: nonpos_Reals_cases simp: field_simps)
+
+lemma Polygamma_approx_1_real:
+ "x > (0 :: real) \<Longrightarrow>
+ Polygamma_approx (Suc 0) m x = ln x - 1 / (2*x) + stirling_sum (Suc 0) m x"
+ unfolding Polygamma_approx_Suc Polygamma_approx_0
+ by (intro DERIV_imp_deriv)
+ (auto intro!: derivative_eq_intros elim!: nonpos_Reals_cases simp: field_simps)
+
+lemma stirling_sum_2_conv_stirling_sum'_1:
+ fixes x :: "'a :: {real_div_algebra, field_char_0}"
+ assumes "m > 0" "x \<noteq> 0"
+ shows "stirling_sum' 1 m x = 1 / x + 1 / (2 * x^2) + stirling_sum 2 m x"
+proof -
+ have pochhammer_2: "pochhammer (of_nat k) 2 = of_nat k * of_nat (Suc k)" for k
+ by (simp add: pochhammer_Suc eval_nat_numeral add_ac)
+ have "stirling_sum 2 m x =
+ (\<Sum>k = Suc 0..<m. of_real (bernoulli' (Suc k)) * inverse x ^ Suc (Suc k))"
+ unfolding stirling_sum_def pochhammer_2 power2_minus power_one mult_1_left
+ by (intro sum.cong refl)
+ (simp_all add: stirling_sum_def pochhammer_2 power2_eq_square divide_simps bernoulli'_def
+ del: of_nat_Suc power_Suc)
+ also have "1 / (2 * x^2) + \<dots> =
+ (\<Sum>k=0..<m. of_real (bernoulli' (Suc k)) * inverse x ^ Suc (Suc k))" using assms
+ by (subst (2) sum.atLeast_Suc_lessThan) (simp_all add: power2_eq_square field_simps)
+ also have "1 / x + \<dots> = (\<Sum>k=0..<Suc m. of_real (bernoulli' k) * inverse x ^ Suc k)"
+ by (subst sum.atLeast0_lessThan_Suc_shift) (simp_all add: bernoulli'_def divide_simps)
+ also have "\<dots> = (\<Sum>k\<le>m. of_real (bernoulli' k) * inverse x ^ Suc k)"
+ by (intro sum.cong) auto
+ also have "\<dots> = stirling_sum' 1 m x" by (simp add: stirling_sum'_def)
+ finally show ?thesis by (simp add: add_ac)
+qed
+
+lemma Polygamma_approx_2_real:
+ assumes "x > (0::real)" "m > 0"
+ shows "Polygamma_approx (Suc (Suc 0)) m x = stirling_sum' 1 m x"
+proof -
+ have "Polygamma_approx (Suc (Suc 0)) m x = deriv (Polygamma_approx (Suc 0) m) x"
+ by (simp add: Polygamma_approx_Suc)
+ also have "\<dots> = deriv (\<lambda>x. ln x - 1 / (2*x) + stirling_sum (Suc 0) m x) x"
+ using eventually_nhds_in_open[of "{0<..}" x] assms
+ by (intro deriv_cong_ev) (auto elim!: eventually_mono simp: Polygamma_approx_1_real)
+ also have "\<dots> = 1 / x + 1 / (2*x^2) + stirling_sum (Suc (Suc 0)) m x" using assms
+ by (intro DERIV_imp_deriv) (auto intro!: derivative_eq_intros
+ elim!: nonpos_Reals_cases simp: field_simps power2_eq_square)
+ also have "\<dots> = stirling_sum' 1 m x" using stirling_sum_2_conv_stirling_sum'_1[of m x] assms
+ by (simp add: eval_nat_numeral)
+ finally show ?thesis .
+qed
+
+lemma Polygamma_approx_2_complex:
+ assumes "Re x > 0" "m > 0"
+ shows "Polygamma_approx (Suc (Suc 0)) m x = stirling_sum' 1 m x"
+proof -
+ have "Polygamma_approx (Suc (Suc 0)) m x = deriv (Polygamma_approx (Suc 0) m) x"
+ by (simp add: Polygamma_approx_Suc)
+ also have "\<dots> = deriv (\<lambda>x. ln x - 1 / (2*x) + stirling_sum (Suc 0) m x) x"
+ using eventually_nhds_in_open[of "{s. Re s > 0}" x] assms
+ by (intro deriv_cong_ev)
+ (auto simp: open_halfspace_Re_gt elim!: eventually_mono simp: Polygamma_approx_1_complex)
+ also have "\<dots> = 1 / x + 1 / (2*x^2) + stirling_sum (Suc (Suc 0)) m x" using assms
+ by (intro DERIV_imp_deriv) (auto intro!: derivative_eq_intros
+ elim!: nonpos_Reals_cases simp: field_simps power2_eq_square)
+ also have "\<dots> = stirling_sum' 1 m x" using stirling_sum_2_conv_stirling_sum'_1[of m x] assms
+ by (subst stirling_sum_2_conv_stirling_sum'_1) (auto simp: eval_nat_numeral)
+ finally show ?thesis .
+qed
+
+lemma Polygamma_approx_ge_2_real:
+ assumes "x > (0::real)" "m > 0"
+ shows "Polygamma_approx (Suc (Suc j)) m x = stirling_sum' (Suc j) m x"
+using assms(1)
+proof (induction j arbitrary: x)
+ case (0 x)
+ with assms show ?case by (simp add: Polygamma_approx_2_real)
+next
+ case (Suc j x)
+ have "Polygamma_approx (Suc (Suc (Suc j))) m x = deriv (Polygamma_approx (Suc (Suc j)) m) x"
+ by (simp add: Polygamma_approx_Suc)
+ also have "\<dots> = deriv (stirling_sum' (Suc j) m) x"
+ using eventually_nhds_in_open[of "{0<..}" x] Suc.prems
+ by (intro deriv_cong_ev refl) (auto elim!: eventually_mono simp: Suc.IH)
+ also have "\<dots> = stirling_sum' (Suc (Suc j)) m x" using Suc.prems
+ by (intro DERIV_imp_deriv derivative_intros) simp_all
+ finally show ?case .
+qed
+
+lemma Polygamma_approx_ge_2_complex:
+ assumes "Re x > 0" "m > 0"
+ shows "Polygamma_approx (Suc (Suc j)) m x = stirling_sum' (Suc j) m x"
+using assms(1)
+proof (induction j arbitrary: x)
+ case (0 x)
+ with assms show ?case by (simp add: Polygamma_approx_2_complex)
+next
+ case (Suc j x)
+ have "Polygamma_approx (Suc (Suc (Suc j))) m x = deriv (Polygamma_approx (Suc (Suc j)) m) x"
+ by (simp add: Polygamma_approx_Suc)
+ also have "\<dots> = deriv (stirling_sum' (Suc j) m) x"
+ using eventually_nhds_in_open[of "{x. Re x > 0}" x] Suc.prems
+ by (intro deriv_cong_ev refl) (auto elim!: eventually_mono simp: Suc.IH open_halfspace_Re_gt)
+ also have "\<dots> = stirling_sum' (Suc (Suc j)) m x" using Suc.prems
+ by (intro DERIV_imp_deriv derivative_intros) simp_all
+ finally show ?case .
+qed
+
+lemma Polygamma_approx_complex_of_real:
+ assumes "x > 0" "m > 0"
+ shows "Polygamma_approx j m (complex_of_real x) = of_real (Polygamma_approx j m x)"
+proof (cases j)
+ case 0
+ with assms show ?thesis by (simp add: Polygamma_approx_0 Ln_of_real stirling_sum_complex_of_real)
+next
+ case [simp]: (Suc j')
+ thus ?thesis
+ proof (cases j')
+ case 0
+ with assms show ?thesis
+ by (simp add: Polygamma_approx_1_complex
+ Polygamma_approx_1_real stirling_sum_complex_of_real Ln_of_real)
+ next
+ case (Suc j'')
+ with assms show ?thesis
+ by (simp add: Polygamma_approx_ge_2_complex Polygamma_approx_ge_2_real
+ stirling_sum'_complex_of_real)
+ qed
+qed
+
+lemma higher_deriv_Polygamma_approx [simp]:
+ "(deriv ^^ j) (Polygamma_approx i m) = Polygamma_approx (j + i) m"
+ by (simp add: Polygamma_approx_def funpow_add)
+
+lemma stirling_sum_holomorphic [holomorphic_intros]:
+ "0 \<notin> A \<Longrightarrow> stirling_sum j m holomorphic_on A"
+ unfolding stirling_sum_def by (intro holomorphic_intros) auto
+
+lemma Polygamma_approx_holomorphic [holomorphic_intros]:
+ "Polygamma_approx j m holomorphic_on {s. Re s > 0}"
+ unfolding Polygamma_approx_def
+ by (intro holomorphic_intros) (auto simp: open_halfspace_Re_gt elim!: nonpos_Reals_cases)
+
+lemma higher_deriv_lnGamma_stirling:
+ assumes m: "m > 0"
+ shows "(\<lambda>x::real. (deriv ^^ j) ln_Gamma x - Polygamma_approx j m x) \<in> O(\<lambda>x. 1 / x ^ (m + j))"
+proof -
+ have "eventually (\<lambda>x. \<bar>(deriv ^^ j) ln_Gamma x - Polygamma_approx j m x\<bar> =
+ inverse (real m) * \<bar>(deriv ^^ j) (stirling_integral m) x\<bar>) at_top"
+ using eventually_gt_at_top[of "0::real"]
+ proof eventually_elim
+ case (elim x)
+ note x = this
+ have "\<forall>\<^sub>F y in nhds (complex_of_real x). y \<in> - \<real>\<^sub>\<le>\<^sub>0"
+ using elim by (intro eventually_nhds_in_open) auto
+ hence "(deriv ^^ j) (\<lambda>x. ln_Gamma x - Polygamma_approx 0 m x) (complex_of_real x) =
+ (deriv ^^ j) (\<lambda>x. (-inverse (of_nat m)) * stirling_integral m x) (complex_of_real x)"
+ using x m
+ by (intro higher_deriv_cong_ev refl)
+ (auto elim!: eventually_mono simp: ln_Gamma_stirling_complex Polygamma_approx_def
+ field_simps open_halfspace_Re_gt stirling_sum_def)
+ also have "\<dots> = - inverse (of_nat m) * (deriv ^^ j) (stirling_integral m) (of_real x)" using x m
+ by (intro higher_deriv_cmult[of _ "-\<real>\<^sub>\<le>\<^sub>0"] stirling_integral_holomorphic)
+ (auto simp: open_halfspace_Re_gt)
+ also have "(deriv ^^ j) (\<lambda>x. ln_Gamma x - Polygamma_approx 0 m x) (complex_of_real x) =
+ (deriv ^^ j) ln_Gamma (of_real x) - (deriv ^^ j) (Polygamma_approx 0 m) (of_real x)"
+ using x
+ by (intro higher_deriv_diff[of _ "{s. Re s > 0}"])
+ (auto intro!: holomorphic_intros elim!: nonpos_Reals_cases simp: open_halfspace_Re_gt)
+ also have "(deriv ^^ j) (Polygamma_approx 0 m) (complex_of_real x) =
+ of_real (Polygamma_approx j m x)" using x m
+ by (simp add: Polygamma_approx_complex_of_real)
+ also have "norm (- inverse (of_nat m) * (deriv ^^ j) (stirling_integral m) (complex_of_real x)) =
+ inverse (real m) * \<bar>(deriv ^^ j) (stirling_integral m) x\<bar>"
+ using x m by (simp add: norm_mult norm_inverse deriv_stirling_integral_complex_of_real)
+ also have "(deriv ^^ j) ln_Gamma (complex_of_real x) = of_real ((deriv ^^ j) ln_Gamma x)" using x
+ by (simp add: higher_deriv_ln_Gamma_complex_of_real)
+ also have "norm (\<dots> - of_real (Polygamma_approx j m x)) =
+ \<bar>(deriv ^^ j) ln_Gamma x - Polygamma_approx j m x\<bar>"
+ by (simp only: of_real_diff [symmetric] norm_of_real)
+ finally show ?case .
+ qed
+ from bigthetaI_cong[OF this] m
+ have "(\<lambda>x::real. (deriv ^^ j) ln_Gamma x - Polygamma_approx j m x) \<in>
+ \<Theta>(\<lambda>x. (deriv ^^ j) (stirling_integral m) x)" by simp
+ also have "(\<lambda>x::real. (deriv ^^ j) (stirling_integral m) x) \<in> O(\<lambda>x. 1 / x ^ (m + j))" using m
+ by (rule deriv_stirling_integral_real_bound)
+ finally show ?thesis .
+qed
+
+lemma Polygamma_approx_1_real':
+ assumes x: "(x::real) > 0" and m: "m > 0"
+ shows "Polygamma_approx 1 m x = ln x - (\<Sum>k = Suc 0..m. bernoulli' k * inverse x ^ k / real k)"
+proof -
+ have "Polygamma_approx 1 m x = ln x - (1 / (2 * x) +
+ (\<Sum>k=Suc 0..<m. bernoulli (Suc k) * inverse x ^ Suc k / real (Suc k)))"
+ (is "_ = _ - (_ + ?S)") using x by (simp add: Polygamma_approx_1_real stirling_sum_def)
+ also have "?S = (\<Sum>k=Suc 0..<m. bernoulli' (Suc k) * inverse x ^ Suc k / real (Suc k))"
+ by (intro sum.cong refl) (simp_all add: bernoulli'_def)
+ also have "1 / (2 * x) + \<dots> =
+ (\<Sum>k=0..<m. bernoulli' (Suc k) * inverse x ^ Suc k / real (Suc k))" using m
+ by (subst (2) sum.atLeast_Suc_lessThan) (simp_all add: field_simps)
+ also have "\<dots> = (\<Sum>k = Suc 0..m. bernoulli' k * inverse x ^ k / real k)" using assms
+ by (subst sum.shift_bounds_Suc_ivl [symmetric]) (simp add: atLeastLessThanSuc_atLeastAtMost)
+ finally show ?thesis .
+qed
+
+theorem
+ assumes m: "m > 0"
+ shows ln_Gamma_real_asymptotics:
+ "(\<lambda>x. ln_Gamma x - ((x - 1 / 2) * ln x - x + ln (2 * pi) / 2 +
+ (\<Sum>k = 1..<m. bernoulli (Suc k) / (real k * real (Suc k)) / x^k)))
+ \<in> O(\<lambda>x. 1 / x ^ m)" (is ?th1)
+ and Digamma_real_asymptotics:
+ "(\<lambda>x. Digamma x - (ln x - (\<Sum>k=1..m. bernoulli' k / real k / x ^ k)))
+ \<in> O(\<lambda>x. 1 / (x ^ Suc m))" (is ?th2)
+ and Polygamma_real_asymptotics: "j > 0 \<Longrightarrow>
+ (\<lambda>x. Polygamma j x - (- 1) ^ Suc j * (\<Sum>k\<le>m. bernoulli' k *
+ pochhammer (real (Suc k)) (j - 1) / x ^ (k + j)))
+ \<in> O(\<lambda>x. 1 / x ^ (m+j+1))" (is "_ \<Longrightarrow> ?th3")
+proof -
+ define G :: "nat \<Rightarrow> real \<Rightarrow> real" where
+ "G = (\<lambda>m. if m = 0 then ln_Gamma else Polygamma (m - 1))"
+ have *: "(\<lambda>x. G j x - h x) \<in> O(\<lambda>x. 1 / x ^ (m + j))"
+ if "\<And>x::real. x > 0 \<Longrightarrow> Polygamma_approx j m x = h x" for j h
+ proof -
+ have "(\<lambda>x. G j x - h x) \<in>
+ \<Theta>(\<lambda>x. (deriv ^^ j) ln_Gamma x - Polygamma_approx j m x)" (is "_ \<in> \<Theta>(?f)")
+ using that
+ by (intro bigthetaI_cong) (auto intro: eventually_mono[OF eventually_gt_at_top[of "0::real"]]
+ simp del: funpow.simps simp: higher_deriv_ln_Gamma_real G_def)
+ also have "?f \<in> O(\<lambda>x::real. 1 / x ^ (m + j))" using m
+ by (rule higher_deriv_lnGamma_stirling)
+ finally show ?thesis .
+ qed
+
+ note [[simproc del: simplify_landau_sum]]
+ from *[OF Polygamma_approx_0] assms show ?th1
+ by (simp add: G_def Polygamma_approx_0 stirling_sum_def field_simps)
+ from *[OF Polygamma_approx_1_real'] assms show ?th2 by (simp add: G_def field_simps)
+
+ assume j: "j > 0"
+ from *[OF Polygamma_approx_ge_2_real, of "j - 1"] assms j show ?th3
+ by (simp add: G_def stirling_sum'_def power_add power_diff field_simps)
+qed
+
+
+
+
+subsection \<open>Asymptotics of the complex Gamma function\<close>
+
+text \<open>
+ The \<open>m\<close>-th order remainder of Stirling's formula for $\log\Gamma$ is $O(s^{-m})$ uniformly over
+ any complex cone $|\text{arg}(z)| \leq \alpha$, $z \neq 0$, for any angle
+ $\alpha\in(0, \pi)$. This means that it is bounded in norm by $c\,|z|^{-m}$ for some constant $c$
+ for all $z$ in this cone.
+\<close>
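+
+text \<open>
+ Spelled out in conventional notation, this is the following claim (an informal paraphrase of the
+ explicit theorem proved at the end of this subsection): for every $\alpha \in (0, \pi)$ there is
+ a constant $c$ such that for all $z \neq 0$ with $|\text{arg}(z)| \leq \alpha$,
+ \[ \left\| \ln\Gamma(z) - \left((z - \frac{1}{2})\ln z - z + \frac{\ln(2\pi)}{2}
+ + \sum_{k=1}^{m-1} \frac{B_{k+1}}{k\,(k+1)\,z^k}\right)\right\| \leq \frac{c}{|z|^m} , \]
+ where $B_{k+1}$ denotes the $(k+1)$-th Bernoulli number.
+\<close>
+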
+context
+ fixes F and \<alpha>
+ assumes \<alpha>: "\<alpha> \<in> {0<..<pi}"
+ defines "F \<equiv> principal (complex_cone' \<alpha> - {0})"
+begin
+
+lemma stirling_integral_bigo:
+ fixes m :: nat
+ assumes m: "m > 0"
+ shows "stirling_integral m \<in> O[F](\<lambda>s. 1 / s ^ m)"
+proof -
+ obtain c where c: "\<And>s. s \<in> complex_cone' \<alpha> - {0} \<Longrightarrow> norm (stirling_integral m s) \<le> c / norm s ^ m"
+ using stirling_integral_bound'[OF \<open>m > 0\<close> \<alpha>] by blast
+ have "0 \<le> norm (stirling_integral m 1 :: complex)"
+ by simp
+ also have "\<dots> \<le> c"
+ using c[of 1] \<alpha> by simp
+ finally have "c \<ge> 0" .
+
+ have "eventually (\<lambda>s. s \<in> complex_cone' \<alpha> - {0}) F"
+ unfolding F_def by (auto simp: eventually_principal)
+ hence "eventually (\<lambda>s. norm (stirling_integral m s) \<le>
+ c * norm (1 / s ^ m)) F"
+ by eventually_elim (use c in \<open>simp add: norm_divide norm_power\<close>)
+ thus "stirling_integral m \<in> O[F](\<lambda>s. 1 / s ^ m)"
+ by (intro bigoI[of _ c]) auto
+qed
+
+end
+
+text \<open>
+ The following is a more explicit statement of this:
+\<close>
+theorem ln_Gamma_complex_asymptotics_explicit:
+ fixes m :: nat and \<alpha> :: real
+ assumes "m > 0" and "\<alpha> \<in> {0<..<pi}"
+ obtains C :: real and R :: "complex \<Rightarrow> complex"
+ where "\<forall>s::complex. s \<notin> \<real>\<^sub>\<le>\<^sub>0 \<longrightarrow>
+ ln_Gamma s = (s - 1/2) * ln s - s + ln (2 * pi) / 2 +
+ (\<Sum>k=1..<m. bernoulli (k+1) / (k * (k+1) * s ^ k)) - R s"
+ and "\<forall>s. s \<noteq> 0 \<and> \<bar>arg s\<bar> \<le> \<alpha> \<longrightarrow> norm (R s) \<le> C / norm s ^ m"
+proof -
+ obtain c where c: "\<And>s. s \<in> complex_cone' \<alpha> - {0} \<Longrightarrow> norm (stirling_integral m s) \<le> c / norm s ^ m"
+ using stirling_integral_bound'[OF assms] by blast
+ have "0 \<le> norm (stirling_integral m 1 :: complex)"
+ by simp
+ also have "\<dots> \<le> c"
+ using c[of 1] assms by simp
+ finally have "c \<ge> 0" .
+ define R where "R = (\<lambda>s::complex. stirling_integral m s / of_nat m)"
+ show ?thesis
+ proof (rule that)
+ from ln_Gamma_stirling_complex[of _ m] assms show
+ "\<forall>s::complex. s \<notin> \<real>\<^sub>\<le>\<^sub>0 \<longrightarrow>
+ ln_Gamma s = (s - 1 / 2) * ln s - s + ln (2 * pi) / 2 +
+ (\<Sum>k=1..<m. bernoulli (k+1) / (k * (k+1) * s ^ k)) - R s"
+ by (auto simp add: R_def algebra_simps)
+ show "\<forall>s. s \<noteq> 0 \<and> \<bar>arg s\<bar> \<le> \<alpha> \<longrightarrow> cmod (R s) \<le> c / real m / cmod s ^ m"
+ proof (safe, goal_cases)
+ case (1 s)
+ show ?case
+ using 1 c[of s] assms
+ by (auto simp: complex_cone_altdef abs_le_iff R_def norm_divide field_simps)
+ qed
+ qed
+qed
+
+
+text \<open>
+ Lastly, we can also derive the asymptotics of $\Gamma$ itself:
+ \[\Gamma(z) \sim \sqrt{2\pi / z} \left(\frac{z}{e}\right)^z\]
+ uniformly for $|z|\to\infty$ within the cone $|\text{arg}(z)| \leq \alpha$ for $\alpha\in(0,\pi)$:
+\<close>
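+
+text \<open>
+ A short informal sketch of how the proof below proceeds: writing $I$ for the first-order
+ remainder \<open>stirling_integral 1\<close>, Stirling's formula for $m = 1$ gives, for $s$ in the cone,
+ \[ \Gamma(s) = e^{\ln\Gamma(s)}
+ = e^{(s - \frac{1}{2})\ln s \,-\, s \,+\, \frac{\ln(2\pi)}{2} \,-\, I(s)}
+ = \sqrt{2\pi}\,\left(\frac{s}{e}\right)^{s} s^{-1/2}\, e^{-I(s)} , \]
+ and since $I(s) \in O(1/s)$ on the cone, $e^{-I(s)} \to 1$, which yields the claimed asymptotic
+ equivalence.
+\<close>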
+
+context
+ fixes F and \<alpha>
+ assumes \<alpha>: "\<alpha> \<in> {0<..<pi}"
+ defines "F \<equiv> inf at_infinity (principal (complex_cone' \<alpha>))"
+begin
+
+lemma Gamma_complex_asymp_equiv:
+ "Gamma \<sim>[F] (\<lambda>s. sqrt (2 * pi) * (s / exp 1) powr s / s powr (1 / 2))"
+proof -
+ define I :: "complex \<Rightarrow> complex" where "I = stirling_integral 1"
+ have "eventually (\<lambda>s. s \<in> complex_cone' \<alpha>) F"
+ by (auto simp: eventually_inf_principal F_def)
+ moreover have "eventually (\<lambda>s. s \<noteq> 0) F"
+ unfolding F_def eventually_inf_principal
+ using eventually_not_equal_at_infinity by eventually_elim auto
+ ultimately have "eventually (\<lambda>s. Gamma s =
+ sqrt (2 * pi) * (s / exp 1) powr s / s powr (1 / 2) / exp (I s)) F"
+ proof eventually_elim
+ case (elim s)
+ from elim have s': "s \<notin> \<real>\<^sub>\<le>\<^sub>0"
+ using complex_cone_inter_nonpos_Reals[of "-\<alpha>" \<alpha>] \<alpha> by auto
+ from elim have [simp]: "s \<noteq> 0" by auto
+ from s' have "Gamma s = exp (ln_Gamma s)"
+ unfolding Gamma_complex_altdef using nonpos_Ints_subset_nonpos_Reals by auto
+ also from s' have "ln_Gamma s = (s-1/2) * Ln s - s + complex_of_real (ln (2 * pi) / 2) - I s"
+ by (subst ln_Gamma_stirling_complex[of _ 1]) (simp_all add: exp_add exp_diff I_def)
+ also have "exp \<dots> = exp ((s - 1 / 2) * Ln s) / exp s *
+ exp (complex_of_real (ln (2 * pi) / 2)) / exp (I s)"
+ unfolding exp_diff exp_add by (simp add: exp_diff exp_add)
+ also have "exp ((s - 1 / 2) * Ln s) = s powr (s - 1 / 2)"
+ by (simp add: powr_def)
+ also have "exp (complex_of_real (ln (2 * pi) / 2)) = sqrt (2 * pi)"
+ by (subst exp_of_real) (auto simp: powr_def simp flip: powr_half_sqrt)
+ also have "exp s = exp 1 powr s"
+ by (simp add: powr_def)
+ also have "s powr (s - 1 / 2) / exp 1 powr s = (s powr s / exp 1 powr s) / s powr (1/2)"
+ by (subst powr_diff) auto
+ also have *: "Ln (s / exp 1) = Ln s - 1"
+ using Ln_divide_of_real[of "exp 1" s] by (simp flip: exp_of_real)
+ hence "s powr s / exp 1 powr s = (s / exp 1) powr s"
+ unfolding powr_def by (subst *) (auto simp: exp_diff field_simps)
+ finally show "Gamma s = sqrt (2 * pi) * (s / exp 1) powr s / s powr (1 / 2) / exp (I s)"
+ by (simp add: algebra_simps)
+ qed
+ hence "Gamma \<sim>[F] (\<lambda>s. sqrt (2 * pi) * (s / exp 1) powr s / s powr (1 / 2) / exp (I s))"
+ by (rule asymp_equiv_refl_ev)
+ also have "\<dots> \<sim>[F] (\<lambda>s. sqrt (2 * pi) * (s / exp 1) powr s / s powr (1 / 2) / 1)"
+ proof (intro asymp_equiv_intros)
+ have "F \<le> principal (complex_cone' \<alpha> - {0})"
+ unfolding le_principal F_def eventually_inf_principal
+ using eventually_not_equal_at_infinity by eventually_elim auto
+ moreover have "I \<in> O[principal (complex_cone' \<alpha> - {0})](\<lambda>s. 1 / s)"
+ using stirling_integral_bigo[of \<alpha> 1] \<alpha> unfolding F_def by (simp add: I_def)
+ ultimately have "I \<in> O[F](\<lambda>s. 1 / s)"
+ by (rule landau_o.big.filter_mono)
+ also have "(\<lambda>s. 1 / s) \<in> o[F](\<lambda>s. 1)"
+ proof (rule landau_o.smallI)
+ fix c :: real
+ assume c: "c > 0"
+ hence "eventually (\<lambda>z::complex. norm z \<ge> 1 / c) at_infinity"
+ by (auto simp: eventually_at_infinity)
+ moreover have "eventually (\<lambda>z::complex. z \<noteq> 0) at_infinity"
+ by (rule eventually_not_equal_at_infinity)
+ ultimately show "eventually (\<lambda>z::complex. norm (1 / z) \<le> c * norm (1 :: complex)) F"
+ unfolding F_def eventually_inf_principal
+ by eventually_elim (use \<open>c > 0\<close> in \<open>auto simp: norm_divide field_simps\<close>)
+ qed
+ finally have "I \<in> o[F](\<lambda>s. 1)" .
+ from smalloD_tendsto[OF this] have [tendsto_intros]: "(I \<longlongrightarrow> 0) F"
+ by simp
+ show "(\<lambda>x. exp (I x)) \<sim>[F] (\<lambda>x. 1)"
+ by (rule asymp_equivI' tendsto_eq_intros refl | simp)+
+ qed
+ finally show ?thesis by simp
+qed
+
+end
+
+end
diff --git a/thys/Stirling_Formula/Ln_Gamma_Asymptotics.thy b/thys/Stirling_Formula/Ln_Gamma_Asymptotics.thy
deleted file mode 100644
--- a/thys/Stirling_Formula/Ln_Gamma_Asymptotics.thy
+++ /dev/null
@@ -1,1328 +0,0 @@
-(*
- File: Ln_Gamma_Asymptotics.thy
- Author: Manuel Eberl
-
- The complete asymptotics of the logarithmic Gamma function and the Polygamma functions.
-*)
-section \<open>Complete asymptotics of the logarithmic Gamma function\<close>
-theory Ln_Gamma_Asymptotics
-imports
- "HOL-Complex_Analysis.Complex_Analysis"
- Bernoulli.Bernoulli_FPS
- Bernoulli.Periodic_Bernpoly
- Stirling_Formula
-begin
-
-subsection \<open>Auxiliary Facts\<close>
-
-(* TODO Move *)
-lemma filterlim_at_infinity_conv_norm_at_top:
- "filterlim f at_infinity G \<longleftrightarrow> filterlim (\<lambda>x. norm (f x)) at_top G"
- by (auto simp: filterlim_at_infinity[OF order.refl] filterlim_at_top_gt[of _ _ 0])
-
-corollary Ln_times_of_nat:
- "\<lbrakk>r > 0; z \<noteq> 0\<rbrakk> \<Longrightarrow> Ln(of_nat r * z :: complex) = ln (of_nat r) + Ln(z)"
- using Ln_times_of_real[of "of_nat r" z] by simp
-
-lemma tendsto_of_real_0_I:
- "(f \<longlongrightarrow> 0) G \<Longrightarrow> ((\<lambda>x. (of_real (f x))) \<longlongrightarrow> (0 ::'a::real_normed_div_algebra)) G"
- by (subst (asm) tendsto_of_real_iff [symmetric]) simp
-
-lemma negligible_atLeastAtMostI: "b \<le> a \<Longrightarrow> negligible {a..(b::real)}"
- by (cases "b < a") auto
-
-lemma integrable_on_negligible:
- "negligible A \<Longrightarrow> (f :: 'n :: euclidean_space \<Rightarrow> 'a :: banach) integrable_on A"
- by (subst integrable_spike_set_eq[of _ "{}"]) (simp_all add: integrable_on_empty)
-
-lemma vector_derivative_cong_eq:
- assumes "eventually (\<lambda>x. x \<in> A \<longrightarrow> f x = g x) (nhds x)" "x = y" "A = B" "x \<in> A"
- shows "vector_derivative f (at x within A) = vector_derivative g (at y within B)"
-proof -
- from eventually_nhds_x_imp_x[OF assms(1)] assms(4) have "f x = g x" by blast
- hence "(\<lambda>D. (f has_vector_derivative D) (at x within A)) =
- (\<lambda>D. (g has_vector_derivative D) (at x within A))" using assms
- by (intro ext has_vector_derivative_cong_ev refl assms) simp_all
- thus ?thesis by (simp add: vector_derivative_def assms)
-qed
-
-lemma differentiable_of_real [simp]: "of_real differentiable at x within A"
-proof -
- have "(of_real has_vector_derivative 1) (at x within A)"
- by (auto intro!: derivative_eq_intros)
- thus ?thesis by (rule differentiableI_vector)
-qed
-
-lemma higher_deriv_cong_ev:
- assumes "eventually (\<lambda>x. f x = g x) (nhds x)" "x = y"
- shows "(deriv ^^ n) f x = (deriv ^^ n) g y"
-proof -
- from assms(1) have "eventually (\<lambda>x. (deriv ^^ n) f x = (deriv ^^ n) g x) (nhds x)"
- proof (induction n arbitrary: f g)
- case (Suc n)
- from Suc.prems have "eventually (\<lambda>y. eventually (\<lambda>z. f z = g z) (nhds y)) (nhds x)"
- by (simp add: eventually_eventually)
- hence "eventually (\<lambda>x. deriv f x = deriv g x) (nhds x)"
- by eventually_elim (rule deriv_cong_ev, simp_all)
- thus ?case by (auto intro!: deriv_cong_ev Suc simp: funpow_Suc_right simp del: funpow.simps)
- qed auto
- from eventually_nhds_x_imp_x[OF this] assms(2) show ?thesis by simp
-qed
-
-lemma deriv_of_real [simp]:
- "at x within A \<noteq> bot \<Longrightarrow> vector_derivative of_real (at x within A) = 1"
- by (auto intro!: vector_derivative_within derivative_eq_intros)
-
-lemma deriv_Re [simp]: "deriv Re = (\<lambda>_. 1)"
- by (auto intro!: DERIV_imp_deriv simp: fun_eq_iff)
-
-lemma vector_derivative_of_real_left:
- assumes "f differentiable at x"
- shows "vector_derivative (\<lambda>x. of_real (f x)) (at x) = of_real (deriv f x)"
-proof -
- have "vector_derivative (of_real \<circ> f) (at x) = (of_real (deriv f x))"
- by (subst vector_derivative_chain_at)
- (simp_all add: scaleR_conv_of_real field_derivative_eq_vector_derivative assms)
- thus ?thesis by (simp add: o_def)
-qed
-
-lemma vector_derivative_of_real_right:
- assumes "f field_differentiable at (of_real x)"
- shows "vector_derivative (\<lambda>x. f (of_real x)) (at x) = deriv f (of_real x)"
-proof -
- have "vector_derivative (f \<circ> of_real) (at x) = deriv f (of_real x)"
- using assms by (subst vector_derivative_chain_at_general) simp_all
- thus ?thesis by (simp add: o_def)
-qed
-
-lemma Ln_holomorphic [holomorphic_intros]:
- assumes "A \<inter> \<real>\<^sub>\<le>\<^sub>0 = {}"
- shows "Ln holomorphic_on (A :: complex set)"
-proof (intro holomorphic_onI)
- fix z assume "z \<in> A"
- with assms have "(Ln has_field_derivative inverse z) (at z within A)"
- by (auto intro!: derivative_eq_intros)
- thus "Ln field_differentiable at z within A" by (auto simp: field_differentiable_def)
-qed
-
-lemma ln_Gamma_holomorphic [holomorphic_intros]:
- assumes "A \<inter> \<real>\<^sub>\<le>\<^sub>0 = {}"
- shows "ln_Gamma holomorphic_on (A :: complex set)"
-proof (intro holomorphic_onI)
- fix z assume "z \<in> A"
- with assms have "(ln_Gamma has_field_derivative Digamma z) (at z within A)"
- by (auto intro!: derivative_eq_intros)
- thus "ln_Gamma field_differentiable at z within A" by (auto simp: field_differentiable_def)
-qed
-
-lemma higher_deriv_Polygamma:
- assumes "z \<notin> \<int>\<^sub>\<le>\<^sub>0"
- shows "(deriv ^^ n) (Polygamma m) z =
- Polygamma (m + n) (z :: 'a :: {real_normed_field,euclidean_space})"
-proof -
- have "eventually (\<lambda>u. (deriv ^^ n) (Polygamma m) u = Polygamma (m + n) u) (nhds z)"
- proof (induction n)
- case (Suc n)
- from Suc.IH have "eventually (\<lambda>z. eventually (\<lambda>u. (deriv ^^ n) (Polygamma m) u = Polygamma (m + n) u) (nhds z)) (nhds z)"
- by (simp add: eventually_eventually)
- hence "eventually (\<lambda>z. deriv ((deriv ^^ n) (Polygamma m)) z =
- deriv (Polygamma (m + n)) z) (nhds z)"
- by eventually_elim (intro deriv_cong_ev refl)
- moreover have "eventually (\<lambda>z. z \<in> UNIV - \<int>\<^sub>\<le>\<^sub>0) (nhds z)" using assms
- by (intro eventually_nhds_in_open open_Diff open_UNIV) auto
- ultimately show ?case by eventually_elim (simp_all add: deriv_Polygamma)
- qed simp_all
- thus ?thesis by (rule eventually_nhds_x_imp_x)
-qed
-
-lemma higher_deriv_cmult:
- assumes "f holomorphic_on A" "x \<in> A" "open A"
- shows "(deriv ^^ j) (\<lambda>x. c * f x) x = c * (deriv ^^ j) f x"
- using assms
-proof (induction j arbitrary: f x)
- case (Suc j f x)
- have "deriv ((deriv ^^ j) (\<lambda>x. c * f x)) x = deriv (\<lambda>x. c * (deriv ^^ j) f x) x"
- using eventually_nhds_in_open[of A x] assms(2,3) Suc.prems
- by (intro deriv_cong_ev refl) (auto elim!: eventually_mono simp: Suc.IH)
- also have "\<dots> = c * deriv ((deriv ^^ j) f) x" using Suc.prems assms(2,3)
- by (intro deriv_cmult holomorphic_on_imp_differentiable_at holomorphic_higher_deriv) auto
- finally show ?case by simp
-qed simp_all
-
-lemma higher_deriv_ln_Gamma_complex:
- assumes "(x::complex) \<notin> \<real>\<^sub>\<le>\<^sub>0"
- shows "(deriv ^^ j) ln_Gamma x = (if j = 0 then ln_Gamma x else Polygamma (j - 1) x)"
-proof (cases j)
- case (Suc j')
- have "(deriv ^^ j') (deriv ln_Gamma) x = (deriv ^^ j') Digamma x"
- using eventually_nhds_in_open[of "UNIV - \<real>\<^sub>\<le>\<^sub>0" x] assms
- by (intro higher_deriv_cong_ev refl)
- (auto elim!: eventually_mono simp: open_Diff deriv_ln_Gamma_complex)
- also have "\<dots> = Polygamma j' x" using assms
- by (subst higher_deriv_Polygamma)
- (auto elim!: nonpos_Ints_cases simp: complex_nonpos_Reals_iff)
- finally show ?thesis using Suc by (simp del: funpow.simps add: funpow_Suc_right)
-qed simp_all
-
-lemma higher_deriv_ln_Gamma_real:
- assumes "(x::real) > 0"
- shows "(deriv ^^ j) ln_Gamma x = (if j = 0 then ln_Gamma x else Polygamma (j - 1) x)"
-proof (cases j)
- case (Suc j')
- have "(deriv ^^ j') (deriv ln_Gamma) x = (deriv ^^ j') Digamma x"
- using eventually_nhds_in_open[of "{0<..}" x] assms
- by (intro higher_deriv_cong_ev refl)
- (auto elim!: eventually_mono simp: open_Diff deriv_ln_Gamma_real)
- also have "\<dots> = Polygamma j' x" using assms
- by (subst higher_deriv_Polygamma)
- (auto elim!: nonpos_Ints_cases simp: complex_nonpos_Reals_iff)
- finally show ?thesis using Suc by (simp del: funpow.simps add: funpow_Suc_right)
-qed simp_all
-
-lemma higher_deriv_ln_Gamma_complex_of_real:
- assumes "(x :: real) > 0"
- shows "(deriv ^^ j) ln_Gamma (complex_of_real x) = of_real ((deriv ^^ j) ln_Gamma x)"
- using assms
- by (auto simp: higher_deriv_ln_Gamma_real higher_deriv_ln_Gamma_complex
- ln_Gamma_complex_of_real Polygamma_of_real)
-(* END TODO *)
-
-lemma stirling_limit_aux1:
- "((\<lambda>y. Ln (1 + z * of_real y) / of_real y) \<longlongrightarrow> z) (at_right 0)" for z :: complex
-proof (cases "z = 0")
- case True
- then show ?thesis by simp
-next
- case False
- have "((\<lambda>y. ln (1 + z * of_real y)) has_vector_derivative 1 * z) (at 0)"
- by (rule has_vector_derivative_real_field) (auto intro!: derivative_eq_intros)
- then have "(\<lambda>y. (Ln (1 + z * of_real y) - of_real y * z) / of_real \<bar>y\<bar>) \<midarrow>0\<rightarrow> 0"
- by (auto simp add: has_vector_derivative_def has_derivative_def netlimit_at
- scaleR_conv_of_real field_simps)
- then have "((\<lambda>y. (Ln (1 + z * of_real y) - of_real y * z) / of_real \<bar>y\<bar>) \<longlongrightarrow> 0) (at_right 0)"
- by (rule filterlim_mono[OF _ _ at_le]) simp_all
- also have "?this \<longleftrightarrow> ((\<lambda>y. Ln (1 + z * of_real y) / (of_real y) - z) \<longlongrightarrow> 0) (at_right 0)"
- using eventually_at_right_less[of "0::real"]
- by (intro filterlim_cong refl) (auto elim!: eventually_mono simp: field_simps)
- finally show ?thesis by (simp only: LIM_zero_iff)
-qed
-
-lemma stirling_limit_aux2:
- "((\<lambda>y. y * Ln (1 + z / of_real y)) \<longlongrightarrow> z) at_top" for z :: complex
- using stirling_limit_aux1[of z] by (subst filterlim_at_top_to_right) (simp add: field_simps)
-
-lemma Union_atLeastAtMost:
- assumes "N > 0"
- shows "(\<Union>n\<in>{0..<N}. {real n..real (n + 1)}) = {0..real N}"
-proof (intro equalityI subsetI)
- fix x assume x: "x \<in> {0..real N}"
- thus "x \<in> (\<Union>n\<in>{0..<N}. {real n..real (n + 1)})"
- proof (cases "x = real N")
- case True
- with assms show ?thesis by (auto intro!: bexI[of _ "N - 1"])
- next
- case False
- with x have x: "x \<ge> 0" "x < real N" by simp_all
- hence "x \<ge> real (nat \<lfloor>x\<rfloor>)" "x \<le> real (nat \<lfloor>x\<rfloor> + 1)" by linarith+
- moreover from x have "nat \<lfloor>x\<rfloor> < N" by linarith
- ultimately have "\<exists>n\<in>{0..<N}. x \<in> {real n..real (n + 1)}"
- by (intro bexI[of _ "nat \<lfloor>x\<rfloor>"]) simp_all
- thus ?thesis by blast
- qed
-qed auto
-
-
-subsection \<open>Asymptotics of @{term ln_Gamma}\<close>
-
-text \<open>
- This is the error term that occurs in the expansion of @{term ln_Gamma}. It can be shown to
- be of order $O(s^{-n})$.
-\<close>
-definition stirling_integral :: "nat \<Rightarrow> 'a :: {real_normed_div_algebra, banach} \<Rightarrow> 'a" where
- "stirling_integral n s =
- lim (\<lambda>N. integral {0..N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n))"
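-
-text \<open>
-  As an informal sketch (a restatement of the definition above, not an additional fact),
-  @{term "stirling_integral n s"} corresponds to the improper integral
-  \[ \int_0^\infty \frac{B_n(\{x\})}{(x + s)^n}\,\mathrm{d}x
-       \;=\; \lim_{N\to\infty} \int_0^N \frac{B_n(\{x\})}{(x + s)^n}\,\mathrm{d}x , \]
-  where $B_n(\{x\})$ denotes the periodic Bernoulli polynomial @{term "pbernpoly n x"}.
-\<close>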
-
-text \<open>
-
-\<close>
-
-context
- fixes s :: complex assumes s: "Re s > 0"
- fixes approx :: "nat \<Rightarrow> complex"
- defines "approx \<equiv> (\<lambda>N.
- (\<Sum>n = 1..<N. s / of_nat n - ln (1 + s / of_nat n)) - (euler_mascheroni * s + ln s) - \<comment> \<open>\<open>\<longrightarrow> ln_Gamma s\<close>\<close>
- (ln_Gamma (of_nat N) - ln (2 * pi / of_nat N) / 2 - of_nat N * ln (of_nat N) + of_nat N) - \<comment> \<open>\<open>\<longrightarrow> 0\<close>\<close>
- s * (harm (N - 1) - ln (of_nat (N - 1)) - euler_mascheroni) + \<comment> \<open>\<open>\<longrightarrow> 0\<close>\<close>
- s * (ln (of_nat N + s) - ln (of_nat (N - 1))) - \<comment> \<open>\<open>\<longrightarrow> 0\<close>\<close>
- (1/2) * (ln (of_nat N + s) - ln (of_nat N)) + \<comment> \<open>\<open>\<longrightarrow> 0\<close>\<close>
- of_nat N * (ln (of_nat N + s) - ln (of_nat N)) - \<comment> \<open>\<open>\<longrightarrow> s\<close>\<close>
- (s - 1/2) * ln s - ln (2 * pi) / 2)"
-begin
-
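-text \<open>
-  Informally, @{term approx} is assembled so that each summand tends to the limit indicated
-  in the corresponding comment above; in particular,
-  \[ \lim_{N\to\infty} \mathit{approx}\;N
-       \;=\; \ln\Gamma(s) + s - \bigl(s - \tfrac{1}{2}\bigr)\ln s - \tfrac{1}{2}\ln(2\pi) , \]
-  which is made precise in the proof of @{text integral_pbernpoly_1} below.
-\<close>
-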
-qualified lemma
- assumes N: "N > 0"
- shows integrable_pbernpoly_1:
- "(\<lambda>x. of_real (-pbernpoly 1 x) / (of_real x + s)) integrable_on {0..real N}"
- and integral_pbernpoly_1_aux:
- "integral {0..real N} (\<lambda>x. -of_real (pbernpoly 1 x) / (of_real x + s)) = approx N"
- and has_integral_pbernpoly_1:
- "((\<lambda>x. pbernpoly 1 x /(x + s)) has_integral
- (\<Sum>m<N. (of_nat m + 1 / 2 + s) * (ln (of_nat m + s) -
- ln (of_nat m + 1 + s)) + 1)) {0..real N}"
-proof -
- let ?A = "(\<lambda>n. {of_nat n..of_nat (n+1)}) ` {0..<N}"
- have has_integral:
- "((\<lambda>x. -pbernpoly 1 x / (x + s)) has_integral
- (of_nat n + 1/2 + s) * (ln (of_nat (n + 1) + s) - ln (of_nat n + s)) - 1)
- {of_nat n..of_nat (n + 1)}" for n
- proof (rule has_integral_spike)
- have "((\<lambda>x. (of_nat n + 1/2 + s) * (1 / (x + s)) - 1) has_integral
- (of_nat n + 1/2 + s) * (ln (of_real (real (n + 1)) + s) - ln (of_real (real n) + s)) - 1)
- {of_nat n..of_nat (n + 1)}"
- using s has_integral_const_real[of 1 "of_nat n" "of_nat (n + 1)"]
- by (intro has_integral_diff has_integral_mult_right fundamental_theorem_of_calculus)
- (auto intro!: derivative_eq_intros has_vector_derivative_real_field
- simp: has_field_derivative_iff_has_vector_derivative [symmetric] field_simps
- complex_nonpos_Reals_iff)
- thus "((\<lambda>x. (of_nat n + 1/2 + s) * (1 / (x + s)) - 1) has_integral
- (of_nat n + 1/2 + s) * (ln (of_nat (n + 1) + s) - ln (of_nat n + s)) - 1)
- {of_nat n..of_nat (n + 1)}" by simp
-
- show "-pbernpoly 1 x / (x + s) = (of_nat n + 1/2 + s) * (1 / (x + s)) - 1"
- if "x \<in> {of_nat n..of_nat (n + 1)} - {of_nat (n + 1)}" for x
- proof -
- have x: "x \<ge> real n" "x < real (n + 1)" using that by simp_all
- hence "floor x = int n" by linarith
- moreover from s x have "complex_of_real x \<noteq> -s"
- by (auto simp add: complex_eq_iff simp del: of_nat_Suc)
- ultimately show "-pbernpoly 1 x / (x + s) = (of_nat n + 1/2 + s) * (1 / (x + s)) - 1"
- by (auto simp: pbernpoly_def bernpoly_def frac_def divide_simps add_eq_0_iff2)
- qed
- qed simp_all
- hence *: "\<And>I. I\<in>?A \<Longrightarrow> ((\<lambda>x. -pbernpoly 1 x / (x + s)) has_integral
- (Inf I + 1/2 + s) * (ln (Inf I + 1 + s) - ln (Inf I + s)) - 1) I"
- by (auto simp: add_ac)
- have "((\<lambda>x. - pbernpoly 1 x / (x + s)) has_integral
- (\<Sum>I\<in>?A. (Inf I + 1 / 2 + s) * (ln (Inf I + 1 + s) - ln (Inf I + s)) - 1))
- (\<Union>n\<in>{0..<N}. {real n..real (n + 1)})" (is "(_ has_integral ?i) _")
- apply (intro has_integral_Union * finite_imageI)
- apply (force intro!: negligible_atLeastAtMostI pairwiseI)+
- done
- hence has_integral: "((\<lambda>x. - pbernpoly 1 x / (x + s)) has_integral ?i) {0..real N}"
- by (subst has_integral_spike_set_eq)
- (use Union_atLeastAtMost assms in \<open>auto simp: intro!: empty_imp_negligible\<close>)
- hence "(\<lambda>x. - pbernpoly 1 x / (x + s)) integrable_on {0..real N}"
- and integral: "integral {0..real N} (\<lambda>x. - pbernpoly 1 x / (x + s)) = ?i"
- by (simp_all add: has_integral_iff)
- show "(\<lambda>x. - pbernpoly 1 x / (x + s)) integrable_on {0..real N}" by fact
-
- note has_integral_neg[OF has_integral]
- also have "-?i = (\<Sum>x<N. (of_nat x + 1 / 2 + s) * (ln (of_nat x + s) - ln (of_nat x + 1 + s)) + 1)"
- by (subst sum.reindex)
- (simp_all add: inj_on_def atLeast0LessThan algebra_simps sum_negf [symmetric])
- finally show has_integral:
- "((\<lambda>x. of_real (pbernpoly 1 x) / (of_real x + s)) has_integral \<dots>) {0..real N}" by simp
-
- note integral
- also have "?i = (\<Sum>n<N. (of_nat n + 1 / 2 + s) *
- (ln (of_nat n + 1 + s) - ln (of_nat n + s))) - N" (is "_ = ?S - _")
- by (subst sum.reindex) (simp_all add: inj_on_def sum_subtractf atLeast0LessThan)
- also have "?S = (\<Sum>n<N. of_nat n * (ln (of_nat n + 1 + s) - ln (of_nat n + s))) +
- (s + 1 / 2) * (\<Sum>n<N. ln (of_nat (Suc n) + s) - ln (of_nat n + s))"
- (is "_ = ?S1 + _ * ?S2") by (simp add: algebra_simps sum.distrib sum_subtractf sum_distrib_left)
- also have "?S2 = ln (of_nat N + s) - ln s" by (subst sum_lessThan_telescope) simp
- also have "?S1 = (\<Sum>n=1..<N. of_nat n * (ln (of_nat n + 1 + s) - ln (of_nat n + s)))"
- by (intro sum.mono_neutral_right) auto
- also have "\<dots> = (\<Sum>n=1..<N. of_nat n * ln (of_nat n + 1 + s)) - (\<Sum>n=1..<N. of_nat n * ln (of_nat n + s))"
- by (simp add: algebra_simps sum_subtractf)
- also have "(\<Sum>n=1..<N. of_nat n * ln (of_nat n + 1 + s)) =
- (\<Sum>n=1..<N. (of_nat n - 1) * ln (of_nat n + s)) + (N - 1) * ln (of_nat N + s)"
- by (induction N) (simp_all add: add_ac of_nat_diff)
- also have "\<dots> - (\<Sum>n = 1..<N. of_nat n * ln (of_nat n + s)) =
- -(\<Sum>n=1..<N. ln (of_nat n + s)) + (N - 1) * ln (of_nat N + s)"
- by (induction N) (simp_all add: algebra_simps)
- also from s have neq: "s + of_nat x \<noteq> 0" for x by (auto simp: complex_eq_iff)
- hence "(\<Sum>n=1..<N. ln (of_nat n + s)) = (\<Sum>n=1..<N. ln (of_nat n) + ln (1 + s/n))"
- by (intro sum.cong refl, subst Ln_times_of_nat [symmetric]) (auto simp: divide_simps add_ac)
- also have "\<dots> = ln (fact (N - 1)) + (\<Sum>n=1..<N. ln (1 + s/n))"
- by (induction N) (simp_all add: Ln_times_of_nat fact_reduce add_ac)
- also have "(\<Sum>n=1..<N. ln (1 + s/n)) = -(\<Sum>n=1..<N. s / n - ln (1 + s/n)) + s * (\<Sum>n=1..<N. 1 / of_nat n)"
- by (simp add: sum_distrib_left sum_subtractf)
- also from N have "ln (fact (N - 1)) = ln_Gamma (of_nat N :: complex)"
- by (simp add: ln_Gamma_complex_conv_fact)
- also have "{1..<N} = {1..N - 1}" by auto
- hence "(\<Sum>n = 1..<N. 1 / of_nat n) = (harm (N - 1) :: complex)"
- by (simp add: harm_def divide_simps)
- also have "- (ln_Gamma (of_nat N) + (- (\<Sum>n = 1..<N. s / of_nat n - ln (1 + s / of_nat n)) +
- s * harm (N - 1))) + of_nat (N - 1) * ln (of_nat N + s) +
- (s + 1 / 2) * (ln (of_nat N + s) - ln s) - of_nat N = approx N"
- using N by (simp add: field_simps of_nat_diff ln_div approx_def Ln_of_nat
- ln_Gamma_complex_of_real [symmetric])
- finally show "integral {0..of_nat N} (\<lambda>x. -of_real (pbernpoly 1 x) / (of_real x + s)) = \<dots>"
- by simp
-qed
-
-lemma integrable_ln_Gamma_aux:
- shows "(\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n) integrable_on {0..real N}"
-proof (cases "n = 1")
- case True
- with s show ?thesis using integrable_neg[OF integrable_pbernpoly_1[of N]]
- by (cases "N = 0") (simp_all add: integrable_on_negligible)
-next
- case False
- from s have "of_real x + s \<noteq> 0" if "x \<ge> 0" for x using that
- by (auto simp: complex_eq_iff add_eq_0_iff2)
- with False s show ?thesis
- by (auto intro!: integrable_continuous_real continuous_intros)
-qed
-
-text \<open>
- The following proof is based on ``Rudiments of the theory of the gamma function''
- by Bruce Berndt~\cite{berndt}.
-\<close>
-qualified lemma integral_pbernpoly_1:
- "(\<lambda>N. integral {0..real N} (\<lambda>x. pbernpoly 1 x / (x + s)))
- \<longlonglongrightarrow> -ln_Gamma s - s + (s - 1 / 2) * ln s + ln (2 * pi) / 2"
-proof -
- have neq: "s + of_real x \<noteq> 0" if "x \<ge> 0" for x :: real
- using that s by (auto simp: complex_eq_iff)
- have "(approx \<longlongrightarrow> ln_Gamma s - 0 - 0 + 0 - 0 + s - (s - 1/2) * ln s - ln (2 * pi) / 2) at_top"
- unfolding approx_def
- proof (intro tendsto_add tendsto_diff)
- from s have s': "s \<notin> \<int>\<^sub>\<le>\<^sub>0" by (auto elim!: nonpos_Ints_cases)
- have "(\<lambda>n. \<Sum>i=1..<n. s / of_nat i - ln (1 + s / of_nat i)) \<longlonglongrightarrow>
- ln_Gamma s + euler_mascheroni * s + ln s" (is "?f \<longlonglongrightarrow> _")
- using ln_Gamma_series'_aux[OF s'] unfolding sums_def
- by (subst LIMSEQ_Suc_iff [symmetric], subst (asm) sum.atLeast1_atMost_eq [symmetric])
- (simp add: atLeastLessThanSuc_atLeastAtMost)
- thus "((\<lambda>n. ?f n - (euler_mascheroni * s + ln s)) \<longlongrightarrow> ln_Gamma s) at_top"
- by (auto intro: tendsto_eq_intros)
- next
- show "(\<lambda>x. complex_of_real (ln_Gamma (real x) - ln (2 * pi / real x) / 2 -
- real x * ln (real x) + real x)) \<longlonglongrightarrow> 0"
- proof (intro tendsto_of_real_0_I
- filterlim_compose[OF tendsto_sandwich filterlim_real_sequentially])
- show "eventually (\<lambda>x::real. ln_Gamma x - ln (2 * pi / x) / 2 - x * ln x + x \<ge> 0) at_top"
- using eventually_ge_at_top[of "1::real"]
- by eventually_elim (insert ln_Gamma_bounds(1), simp add: algebra_simps)
- show "eventually (\<lambda>x::real. ln_Gamma x - ln (2 * pi / x) / 2 - x * ln x + x \<le>
- 1 / 12 * inverse x) at_top"
- using eventually_ge_at_top[of "1::real"]
- by eventually_elim (insert ln_Gamma_bounds(2), simp add: field_simps)
- show "((\<lambda>x::real. 1 / 12 * inverse x) \<longlongrightarrow> 0) at_top"
- by (intro tendsto_mult_right_zero tendsto_inverse_0_at_top filterlim_ident)
- qed simp_all
- next
- have "(\<lambda>x. s * of_real (harm (x - 1) - ln (real (x - 1)) - euler_mascheroni)) \<longlonglongrightarrow>
- s * of_real (euler_mascheroni - euler_mascheroni)"
- by (subst LIMSEQ_Suc_iff [symmetric], intro tendsto_intros)
- (insert euler_mascheroni_LIMSEQ, simp_all)
- also have "?this \<longleftrightarrow> (\<lambda>x. s * (harm (x - 1) - ln (of_nat (x - 1)) - euler_mascheroni)) \<longlonglongrightarrow> 0"
- by (intro filterlim_cong refl eventually_mono[OF eventually_gt_at_top[of "1::nat"]])
- (auto simp: Ln_of_nat of_real_harm)
- finally show "(\<lambda>x. s * (harm (x - 1) - ln (of_nat (x - 1)) - euler_mascheroni)) \<longlonglongrightarrow> 0" .
- next
- have "((\<lambda>x. ln (1 + (s + 1) / of_real x)) \<longlongrightarrow> ln (1 + 0)) at_top" (is ?P)
- by (intro tendsto_intros tendsto_divide_0[OF tendsto_const])
- (simp_all add: filterlim_ident filterlim_at_infinity_conv_norm_at_top filterlim_abs_real)
- also have "ln (of_real (x + 1) + s) - ln (complex_of_real x) = ln (1 + (s + 1) / of_real x)"
- if "x > 1" for x using that s
- using Ln_divide_of_real[of x "of_real (x + 1) + s", symmetric] neq[of "x+1"]
- by (simp add: field_simps Ln_of_real)
- hence "?P \<longleftrightarrow> ((\<lambda>x. ln (of_real (x + 1) + s) - ln (of_real x)) \<longlongrightarrow> 0) at_top"
- by (intro filterlim_cong refl)
- (auto intro: eventually_mono[OF eventually_gt_at_top[of "1::real"]])
- finally have "((\<lambda>n. ln (of_real (real n + 1) + s) - ln (of_real (real n))) \<longlongrightarrow> 0) at_top"
- by (rule filterlim_compose[OF _ filterlim_real_sequentially])
- hence "((\<lambda>n. ln (of_nat n + s) - ln (of_nat (n - 1))) \<longlongrightarrow> 0) at_top"
- by (subst LIMSEQ_Suc_iff [symmetric]) (simp add: add_ac)
- thus "(\<lambda>x. s * (ln (of_nat x + s) - ln (of_nat (x - 1)))) \<longlonglongrightarrow> 0"
- by (rule tendsto_mult_right_zero)
- next
- have "((\<lambda>x. ln (1 + s / of_real x)) \<longlongrightarrow> ln (1 + 0)) at_top" (is ?P)
- by (intro tendsto_intros tendsto_divide_0[OF tendsto_const])
- (simp_all add: filterlim_ident filterlim_at_infinity_conv_norm_at_top filterlim_abs_real)
- also have "ln (of_real x + s) - ln (of_real x) = ln (1 + s / of_real x)" if "x > 0" for x
- using Ln_divide_of_real[of x "of_real x + s"] neq[of x] that
- by (auto simp: field_simps Ln_of_real)
- hence "?P \<longleftrightarrow> ((\<lambda>x. ln (of_real x + s) - ln (of_real x)) \<longlongrightarrow> 0) at_top"
- using s by (intro filterlim_cong refl)
- (auto intro: eventually_mono [OF eventually_gt_at_top[of "1::real"]])
- finally have "(\<lambda>x. (1/2) * (ln (of_real (real x) + s) - ln (of_real (real x)))) \<longlonglongrightarrow> 0"
- by (rule tendsto_mult_right_zero[OF filterlim_compose[OF _ filterlim_real_sequentially]])
- thus "(\<lambda>x. (1/2) * (ln (of_nat x + s) - ln (of_nat x))) \<longlonglongrightarrow> 0" by simp
- next
- have "((\<lambda>x. x * (ln (1 + s / of_real x))) \<longlongrightarrow> s) at_top" (is ?P)
- by (rule stirling_limit_aux2)
- also have "ln (1 + s / of_real x) = ln (of_real x + s) - ln (of_real x)" if "x > 1" for x
- using that s Ln_divide_of_real [of x "of_real x + s", symmetric] neq[of x]
- by (auto simp: Ln_of_real field_simps)
- hence "?P \<longleftrightarrow> ((\<lambda>x. of_real x * (ln (of_real x + s) - ln (of_real x))) \<longlongrightarrow> s) at_top"
- by (intro filterlim_cong refl)
- (auto intro: eventually_mono[OF eventually_gt_at_top[of "1::real"]])
- finally have "(\<lambda>n. of_real (real n) * (ln (of_real (real n) + s) - ln (of_real (real n)))) \<longlonglongrightarrow> s"
- by (rule filterlim_compose[OF _ filterlim_real_sequentially])
- thus "(\<lambda>n. of_nat n * (ln (of_nat n + s) - ln (of_nat n))) \<longlonglongrightarrow> s" by simp
- qed simp_all
- also have "?this \<longleftrightarrow> ((\<lambda>N. integral {0..real N} (\<lambda>x. -pbernpoly 1 x / (x + s))) \<longlongrightarrow>
- ln_Gamma s + s - (s - 1/2) * ln s - ln (2 * pi) / 2) at_top"
- using integral_pbernpoly_1_aux
- by (intro filterlim_cong refl)
- (auto intro: eventually_mono[OF eventually_gt_at_top[of "0::nat"]])
- also have "(\<lambda>N. integral {0..real N} (\<lambda>x. -pbernpoly 1 x / (x + s))) =
- (\<lambda>N. -integral {0..real N} (\<lambda>x. pbernpoly 1 x / (x + s)))"
- by (simp add: fun_eq_iff)
- finally show ?thesis by (simp add: tendsto_minus_cancel_left [symmetric] algebra_simps)
-qed
-
-
-qualified lemma pbernpoly_integral_conv_pbernpoly_integral_Suc:
- assumes "n \<ge> 1"
- shows "integral {0..real N} (\<lambda>x. pbernpoly n x / (x + s) ^ n) =
- of_real (pbernpoly (Suc n) (real N)) / (of_nat (Suc n) * (s + of_nat N) ^ n) -
- of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n) + of_nat n / of_nat (Suc n) *
- integral {0..real N} (\<lambda>x. of_real (pbernpoly (Suc n) x) / (of_real x + s) ^ Suc n)"
-proof -
- note [derivative_intros] = has_field_derivative_pbernpoly_Suc'
- define I where "I = -of_real (pbernpoly (Suc n) (of_nat N)) / (of_nat (Suc n) * (of_nat N + s) ^ n) +
- of_real (bernoulli (Suc n) / real (Suc n)) / s ^ n +
- integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)"
- have "((\<lambda>x. (-of_nat n * inverse (of_real x + s) ^ Suc n) *
- (of_real (pbernpoly (Suc n) x) / (of_nat (Suc n))))
- has_integral -I) {0..real N}"
- proof (rule integration_by_parts_interior_strong[OF bounded_bilinear_mult])
- fix x :: real assume x: "x \<in> {0<..<real N} - real ` {0..N}"
- have "x \<notin> \<int>"
- proof
- assume "x \<in> \<int>"
- then obtain n where "x = of_int n" by (auto elim!: Ints_cases)
- with x have x': "x = of_nat (nat n)" by simp
- from x show False by (auto simp: x')
- qed
- hence "((\<lambda>x. of_real (pbernpoly (Suc n) x / of_nat (Suc n))) has_vector_derivative
- complex_of_real (pbernpoly n x)) (at x)"
- by (intro has_vector_derivative_of_real) (auto intro!: derivative_eq_intros)
- thus "((\<lambda>x. of_real (pbernpoly (Suc n) x) / of_nat (Suc n)) has_vector_derivative
- complex_of_real (pbernpoly n x)) (at x)" by simp
- from x s have "complex_of_real x + s \<noteq> 0" by (auto simp: complex_eq_iff)
- thus "((\<lambda>x. inverse (of_real x + s) ^ n) has_vector_derivative
- - of_nat n * inverse (of_real x + s) ^ Suc n) (at x)" using x s assms
- by (auto intro!: derivative_eq_intros has_vector_derivative_real_field simp: divide_simps power_add [symmetric]
- simp del: power_Suc)
- next
- have "complex_of_real x + s \<noteq> 0" if "x \<ge> 0" for x
- using that s by (auto simp: complex_eq_iff)
- thus "continuous_on {0..real N} (\<lambda>x. inverse (of_real x + s) ^ n)"
- "continuous_on {0..real N} (\<lambda>x. complex_of_real (pbernpoly (Suc n) x) / of_nat (Suc n))"
- using assms s by (auto intro!: continuous_intros simp del: of_nat_Suc)
- next
- have "((\<lambda>x. inverse (of_real x + s) ^ n * of_real (pbernpoly n x)) has_integral
- pbernpoly (Suc n) (of_nat N) / (of_nat (Suc n) * (of_nat N + s) ^ n) -
- of_real (bernoulli (Suc n) / real (Suc n)) / s ^ n - -I) {0..real N}"
- using integrable_ln_Gamma_aux[of n N] assms
- by (auto simp: I_def has_integral_integral divide_simps)
- thus "((\<lambda>x. inverse (of_real x + s) ^ n * of_real (pbernpoly n x)) has_integral
- inverse (of_real (real N) + s) ^ n * (of_real (pbernpoly (Suc n) (real N)) /
- of_nat (Suc n)) -
- inverse (of_real 0 + s) ^ n * (of_real (pbernpoly (Suc n) 0) / of_nat (Suc n)) - - I)
- {0..real N}" by (simp_all add: field_simps)
- qed simp_all
- also have "(\<lambda>x. - of_nat n * inverse (of_real x + s) ^ Suc n * (of_real (pbernpoly (Suc n) x) /
- of_nat (Suc n))) =
- (\<lambda>x. - (of_nat n / of_nat (Suc n) * of_real (pbernpoly (Suc n) x) /
- (of_real x + s) ^ Suc n))"
- by (simp add: divide_simps fun_eq_iff)
- finally have "((\<lambda>x. - (of_nat n / of_nat (Suc n) * of_real (pbernpoly (Suc n) x) /
- (of_real x + s) ^ Suc n)) has_integral - I) {0..real N}" .
- from has_integral_neg[OF this] show ?thesis
- by (auto simp add: I_def has_integral_iff algebra_simps integral_mult_right [symmetric]
- simp del: power_Suc of_nat_Suc )
-qed
-
-lemma pbernpoly_over_power_tendsto_0:
- assumes "n > 0"
- shows "(\<lambda>x. of_real (pbernpoly (Suc n) (real x)) / (of_nat (Suc n) * (s + of_nat x) ^ n)) \<longlonglongrightarrow> 0"
-proof -
- from s have neq: "s + of_nat n \<noteq> 0" for n by (auto simp: complex_eq_iff)
- from bounded_pbernpoly[of "Suc n"] guess c . note c = this
- have "eventually (\<lambda>x. norm (of_real (pbernpoly (Suc n) (real x)) /
- (of_nat (Suc n) * (s + of_nat x) ^ n)) \<le>
- (c / real (Suc n)) / real x ^ n) at_top"
- using eventually_gt_at_top[of "0::nat"]
- proof eventually_elim
- case (elim x)
- have "norm (of_real (pbernpoly (Suc n) (real x)) /
- (of_nat (Suc n) * (s + of_nat x) ^ n)) \<le>
- (c / real (Suc n)) / norm (s + of_nat x) ^ n" (is "_ \<le> ?rhs") using c[of x]
- by (auto simp: norm_divide norm_mult norm_power neq field_simps simp del: of_nat_Suc)
- also have "real x \<le> cmod (s + of_nat x)"
- using complex_Re_le_cmod[of "s + of_nat x"] s by simp
- hence "?rhs \<le> (c / real (Suc n)) / real x ^ n" using s elim c[of 0] neq[of x]
- by (intro divide_left_mono power_mono mult_pos_pos divide_nonneg_pos zero_less_power) auto
- finally show ?case .
- qed
- moreover have "(\<lambda>x. (c / real (Suc n)) / real x ^ n) \<longlonglongrightarrow> 0"
- by (intro real_tendsto_divide_at_top[OF tendsto_const] filterlim_pow_at_top assms
- filterlim_real_sequentially)
- ultimately show ?thesis by (rule Lim_null_comparison)
-qed
-
-lemma convergent_stirling_integral:
- assumes "n > 0"
- shows "convergent (\<lambda>N. integral {0..real N}
- (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n))" (is "convergent (?f n)")
-proof -
- have "convergent (?f (Suc n))" for n
- proof (induction n)
- case 0
- thus ?case using integral_pbernpoly_1 by (auto intro!: convergentI)
- next
- case (Suc n)
- have "convergent (\<lambda>N. ?f (Suc n) N -
- of_real (pbernpoly (Suc (Suc n)) (real N)) /
- (of_nat (Suc (Suc n)) * (s + of_nat N) ^ Suc n) +
- of_real (bernoulli (Suc (Suc n)) / (real (Suc (Suc n)))) / s ^ Suc n)"
- (is "convergent ?g")
- by (intro convergent_add convergent_diff Suc
- convergent_const convergentI[OF pbernpoly_over_power_tendsto_0]) simp_all
- also have "?g = (\<lambda>N. of_nat (Suc n) / of_nat (Suc (Suc n)) * ?f (Suc (Suc n)) N)" using s
- by (subst pbernpoly_integral_conv_pbernpoly_integral_Suc)
- (auto simp: fun_eq_iff field_simps simp del: of_nat_Suc power_Suc)
- also have "convergent \<dots> \<longleftrightarrow> convergent (?f (Suc (Suc n)))"
- by (intro convergent_mult_const_iff) (simp_all del: of_nat_Suc)
- finally show ?case .
- qed
- from this[of "n - 1"] assms show ?thesis by simp
-qed
-
-lemma stirling_integral_conv_stirling_integral_Suc:
- assumes "n > 0"
- shows "stirling_integral n s =
- of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s -
- of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n)"
-proof -
- have "(\<lambda>N. of_real (pbernpoly (Suc n) (real N)) / (of_nat (Suc n) * (s + of_nat N) ^ n) -
- of_real (bernoulli (Suc n)) / (real (Suc n) * s ^ n) +
- integral {0..real N} (\<lambda>x. of_nat n / of_nat (Suc n) *
- (of_real (pbernpoly (Suc n) x) / (of_real x + s) ^ Suc n)))
- \<longlonglongrightarrow> 0 - of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n) +
- of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s" (is "?f \<longlonglongrightarrow> _")
- unfolding stirling_integral_def integral_mult_right
- using convergent_stirling_integral[of "Suc n"] assms s
- by (intro tendsto_intros pbernpoly_over_power_tendsto_0)
- (auto simp: convergent_LIMSEQ_iff simp del: of_nat_Suc)
- also have "?this \<longleftrightarrow> (\<lambda>N. integral {0..real N}
- (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<longlonglongrightarrow>
- of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s -
- of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n)"
- using eventually_gt_at_top[of "0::nat"] pbernpoly_integral_conv_pbernpoly_integral_Suc[of n]
- assms unfolding integral_mult_right
- by (intro filterlim_cong refl) (auto elim!: eventually_mono simp del: power_Suc)
- finally show ?thesis unfolding stirling_integral_def[of n] by (rule limI)
-qed
-
-lemma stirling_integral_1_unfold:
- assumes "m > 0"
- shows "stirling_integral 1 s = stirling_integral m s / of_nat m -
- (\<Sum>k=1..<m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k))"
-proof -
- have "stirling_integral 1 s = stirling_integral (Suc m) s / of_nat (Suc m) -
- (\<Sum>k=1..<Suc m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k))" for m
- proof (induction m)
- case (Suc m)
- let ?C = "(\<Sum>k = 1..<Suc m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k))"
- note Suc.IH
- also have "stirling_integral (Suc m) s / of_nat (Suc m) =
- stirling_integral (Suc (Suc m)) s / of_nat (Suc (Suc m)) -
- of_real (bernoulli (Suc (Suc m))) /
- (of_nat (Suc m) * of_nat (Suc (Suc m)) * s ^ Suc m)"
- (is "_ = ?A - ?B") by (subst stirling_integral_conv_stirling_integral_Suc)
- (simp_all del: of_nat_Suc power_Suc add: divide_simps)
- also have "?A - ?B - ?C = ?A - (?B + ?C)" by (rule diff_diff_eq)
- also have "?B + ?C = (\<Sum>k = 1..<Suc (Suc m). of_real (bernoulli (Suc k)) /
- (of_nat k * of_nat (Suc k) * s ^ k))"
- using s by (simp add: divide_simps)
- finally show ?case .
- qed simp_all
- note this[of "m - 1"]
- also from assms have "Suc (m - 1) = m" by simp
- finally show ?thesis .
-qed
-
-lemma ln_Gamma_stirling_complex:
- assumes "m > 0"
- shows "ln_Gamma s = (s - 1 / 2) * ln s - s + ln (2 * pi) / 2 +
- (\<Sum>k=1..<m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k)) -
- stirling_integral m s / of_nat m"
-proof -
- have "ln_Gamma s = (s - 1 / 2) * ln s - s + ln (2 * pi) / 2 - stirling_integral 1 s"
- using limI[OF integral_pbernpoly_1] by (simp add: stirling_integral_def algebra_simps)
- also have "stirling_integral 1 s = stirling_integral m s / of_nat m -
- (\<Sum>k = 1..<m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k))"
- using assms by (rule stirling_integral_1_unfold)
- finally show ?thesis by simp
-qed
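-
-text \<open>
-  Purely as an informal restatement of the preceding lemma in conventional notation: for
-  $\mathrm{Re}\,s > 0$ and $m > 0$,
-  \[ \ln\Gamma(s) \;=\; \Bigl(s - \tfrac{1}{2}\Bigr)\ln s - s + \tfrac{1}{2}\ln(2\pi)
-       + \sum_{k=1}^{m-1} \frac{B_{k+1}}{k\,(k+1)\,s^k} - \frac{1}{m}\,\mathcal{S}_m(s) , \]
-  where $B_k$ are the Bernoulli numbers and $\mathcal{S}_m(s)$ abbreviates
-  @{term "stirling_integral m s"}.
-\<close>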
-
-lemma LIMSEQ_stirling_integral:
- "n > 0 \<Longrightarrow> (\<lambda>x. integral {0..real x} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n))
- \<longlonglongrightarrow> stirling_integral n s" unfolding stirling_integral_def
- using convergent_stirling_integral[of n] by (simp only: convergent_LIMSEQ_iff)
-
-end
-
-lemmas has_integral_of_real = has_integral_linear[OF _ bounded_linear_of_real, unfolded o_def]
-lemmas integral_of_real = integral_linear[OF _ bounded_linear_of_real, unfolded o_def]
-
-lemma integrable_ln_Gamma_aux_real:
- assumes "0 < s"
- shows "(\<lambda>x. pbernpoly n x / (x + s) ^ n) integrable_on {0..real N}"
-proof -
- have "(\<lambda>x. complex_of_real (pbernpoly n x / (x + s) ^ n)) integrable_on {0..real N}"
- using integrable_ln_Gamma_aux[of "of_real s" n N] assms by simp
- from integrable_linear[OF this bounded_linear_Re] show ?thesis
- by (simp only: o_def Re_complex_of_real)
-qed
-
-lemma
- assumes "x > 0" "n > 0"
- shows stirling_integral_complex_of_real:
- "stirling_integral n (complex_of_real x) = of_real (stirling_integral n x)"
- and LIMSEQ_stirling_integral_real:
- "(\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))
- \<longlonglongrightarrow> stirling_integral n x"
- and stirling_integral_real_convergent:
- "convergent (\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))"
-proof -
- have "(\<lambda>N. integral {0..real N} (\<lambda>t. of_real (pbernpoly n t / (t + x) ^ n)))
- \<longlonglongrightarrow> stirling_integral n (complex_of_real x)"
- using LIMSEQ_stirling_integral[of "complex_of_real x" n] assms by simp
- hence "(\<lambda>N. of_real (integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n)))
- \<longlonglongrightarrow> stirling_integral n (complex_of_real x)"
- using integrable_ln_Gamma_aux_real[OF assms(1), of n]
- by (subst (asm) integral_of_real) simp
- from tendsto_Re[OF this]
- have "(\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))
- \<longlonglongrightarrow> Re (stirling_integral n (complex_of_real x))" by simp
- thus "convergent (\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))"
- by (rule convergentI)
- thus "(\<lambda>N. integral {0..real N} (\<lambda>t. pbernpoly n t / (t + x) ^ n))
- \<longlonglongrightarrow> stirling_integral n x" unfolding stirling_integral_def
- by (simp add: convergent_LIMSEQ_iff)
- from tendsto_of_real[OF this, where 'a = complex]
- integrable_ln_Gamma_aux_real[OF assms(1), of n]
- have "(\<lambda>xa. integral {0..real xa}
- (\<lambda>xa. complex_of_real (pbernpoly n xa) / (complex_of_real xa + x) ^ n))
- \<longlonglongrightarrow> complex_of_real (stirling_integral n x)"
- by (subst (asm) integral_of_real [symmetric]) simp_all
- from LIMSEQ_unique[OF this LIMSEQ_stirling_integral[of "complex_of_real x" n]] assms
- show "stirling_integral n (complex_of_real x) = of_real (stirling_integral n x)" by simp
-qed
-
-lemma ln_Gamma_stirling_real:
- assumes "x > (0 :: real)" "m > (0::nat)"
- shows "ln_Gamma x = (x - 1 / 2) * ln x - x + ln (2 * pi) / 2 +
- (\<Sum>k = 1..<m. bernoulli (Suc k) / (of_nat k * of_nat (Suc k) * x ^ k)) -
- stirling_integral m x / of_nat m"
-proof -
- from assms have "complex_of_real (ln_Gamma x) = ln_Gamma (complex_of_real x)"
- by (simp add: ln_Gamma_complex_of_real)
- also have "ln_Gamma (complex_of_real x) = complex_of_real (
- (x - 1 / 2) * ln x - x + ln (2 * pi) / 2 +
- (\<Sum>k = 1..<m. bernoulli (Suc k) / (of_nat k * of_nat (Suc k) * x ^ k)) -
- stirling_integral m x / of_nat m)" using assms
- by (subst ln_Gamma_stirling_complex[of _ m])
- (simp_all add: Ln_of_real stirling_integral_complex_of_real)
- finally show ?thesis by (subst (asm) of_real_eq_iff)
-qed
-
-
-context
-begin
-
-private lemma stirling_integral_bound_aux:
- assumes n: "n > (1::nat)"
- obtains c where "\<And>s. Re s > 0 \<Longrightarrow> norm (stirling_integral n s) \<le> c / Re s ^ (n - 1)"
-proof -
- obtain c where c: "norm (pbernpoly n x) \<le> c" for x by (rule bounded_pbernpoly[of n]) blast
- have c': "pbernpoly n x \<le> c" for x using c[of x] by (simp add: abs_real_def split: if_splits)
- from c[of 0] have c_nonneg: "c \<ge> 0" by simp
- have "norm (stirling_integral n s) \<le> c / (real n - 1) / Re s ^ (n - 1)" if s: "Re s > 0" for s
- proof (rule Lim_norm_ubound[OF _ LIMSEQ_stirling_integral])
- have pos: "x + norm s > 0" if "x \<ge> 0" for x using s that by (intro add_nonneg_pos) auto
- have nz: "of_real x + s \<noteq> 0" if "x \<ge> 0" for x using s that by (auto simp: complex_eq_iff)
- let ?bound = "\<lambda>N. c / (Re s ^ (n - 1) * (real n - 1)) -
- c / ((real N + Re s) ^ (n - 1) * (real n - 1))"
- show "eventually (\<lambda>N. norm (integral {0..real N}
- (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<le>
- c / (real n - 1) / Re s ^ (n - 1)) at_top"
- using eventually_gt_at_top[of "0::nat"]
- proof eventually_elim
- case (elim N)
- let ?F = "\<lambda>x. -c / ((x + Re s) ^ (n - 1) * (real n - 1))"
- from n s have "((\<lambda>x. c / (x + Re s) ^ n) has_integral (?F (real N) - ?F 0)) {0..real N}"
- by (intro fundamental_theorem_of_calculus)
- (auto intro!: derivative_eq_intros simp: divide_simps power_diff add_eq_0_iff2
- has_field_derivative_iff_has_vector_derivative [symmetric])
- also have "?F (real N) - ?F 0 = ?bound N" by simp
- finally have *: "((\<lambda>x. c / (x + Re s) ^ n) has_integral ?bound N) {0..real N}" .
- have "norm (integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) / (of_real x + s) ^ n)) \<le>
- integral {0..real N} (\<lambda>x. c / (x + Re s) ^ n)"
- proof (intro integral_norm_bound_integral integrable_ln_Gamma_aux s ballI)
- fix x assume x: "x \<in> {0..real N}"
- have "norm (of_real (pbernpoly n x) / (of_real x + s) ^ n) \<le> c / norm (of_real x + s) ^ n"
- unfolding norm_divide norm_power using c by (intro divide_right_mono) simp_all
- also have "\<dots> \<le> c / (x + Re s) ^ n"
- using x c c_nonneg s nz[of x] complex_Re_le_cmod[of "of_real x + s"]
- by (intro divide_left_mono power_mono mult_pos_pos zero_less_power add_nonneg_pos) auto
- finally show "norm (of_real (pbernpoly n x) / (of_real x + s) ^ n) \<le> \<dots>" .
- qed (insert n s * pos nz c, auto)
- also have "\<dots> = ?bound N" using * by (simp add: has_integral_iff)
- also have "\<dots> \<le> c / (Re s ^ (n - 1) * (real n - 1))" using c_nonneg elim s n by simp
- also have "\<dots> = c / (real n - 1) / (Re s ^ (n - 1))" by simp
- finally show "norm (integral {0..real N} (\<lambda>x. of_real (pbernpoly n x) /
- (of_real x + s) ^ n)) \<le> c / (real n - 1) / Re s ^ (n - 1)" .
- qed
- qed (insert s n, simp_all)
- thus ?thesis by (rule that)
-qed
-
-lemma stirling_integral_bound:
- assumes "n > 0"
- obtains c where
- "\<And>s. Re s > 0 \<Longrightarrow> norm (stirling_integral n s) \<le> c / Re s ^ n"
-proof -
- let ?f = "\<lambda>s. of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s -
- of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n)"
- from stirling_integral_bound_aux[of "Suc n"] assms obtain c where
- c: "\<And>s. Re s > 0 \<Longrightarrow> norm (stirling_integral (Suc n) s) \<le> c / Re s ^ n" by auto
- define c1 where "c1 = real n / real (Suc n) * c"
- define c2 where "c2 = \<bar>bernoulli (Suc n)\<bar> / real (Suc n)"
- have c2_nonneg: "c2 \<ge> 0" by (simp add: c2_def)
- show ?thesis
- proof (rule that)
- fix s :: complex assume s: "Re s > 0"
- have "stirling_integral n s = ?f s" using s assms
- by (rule stirling_integral_conv_stirling_integral_Suc)
- also have "norm \<dots> \<le> norm (of_nat n / of_nat (Suc n) * stirling_integral (Suc n) s) +
- norm (of_real (bernoulli (Suc n)) / (of_nat (Suc n) * s ^ n))"
- by (rule norm_triangle_ineq4)
- also have "\<dots> = real n / real (Suc n) * norm (stirling_integral (Suc n) s) +
- c2 / norm s ^ n" (is "_ = ?A + ?B")
- by (simp add: norm_divide norm_mult norm_power c2_def field_simps del: of_nat_Suc)
- also have "?A \<le> real n / real (Suc n) * (c / Re s ^ n)"
- by (intro mult_left_mono c s) simp_all
- also have "\<dots> = c1 / Re s ^ n" by (simp add: c1_def)
- also have "c2 / norm s ^ n \<le> c2 / Re s ^ n" using s c2_nonneg
- by (intro divide_left_mono power_mono complex_Re_le_cmod mult_pos_pos zero_less_power) auto
- also have "c1 / Re s ^ n + c2 / Re s ^ n = (c1 + c2) / Re s ^ n"
- using s by (simp add: field_simps)
- finally show "norm (stirling_integral n s) \<le> (c1 + c2) / Re s ^ n" by - simp_all
- qed
-qed
-
-end
-
-
-lemma stirling_integral_holomorphic [holomorphic_intros]:
- assumes m: "m > 0" and "\<forall>s\<in>A. Re s > 0"
- shows "stirling_integral m holomorphic_on A"
-proof -
- let ?f = "\<lambda>s::complex. of_nat m * ((s - 1 / 2) * Ln s - s + of_real (ln (2 * pi) / 2) +
- (\<Sum>k=1..<m. of_real (bernoulli (Suc k)) / (of_nat k * of_nat (Suc k) * s ^ k)) -
- ln_Gamma s)"
- have "?f holomorphic_on A" using assms
- by (auto intro!: holomorphic_intros simp del: of_nat_Suc elim!: nonpos_Reals_cases)
- also have "?this \<longleftrightarrow> stirling_integral m holomorphic_on A"
- using assms by (intro holomorphic_cong refl)
- (simp_all add: field_simps ln_Gamma_stirling_complex)
- finally show "stirling_integral m holomorphic_on A" .
-qed
-
-lemma stirling_integral_continuous_on [continuous_intros]:
- assumes m: "m > 0" and "\<forall>s\<in>A. Re s > 0"
- shows "continuous_on A (stirling_integral m)"
- by (intro holomorphic_on_imp_continuous_on stirling_integral_holomorphic assms)
-
-lemma has_field_derivative_stirling_integral:
- assumes "Re x > 0" "n > 0"
- shows "(stirling_integral n has_field_derivative deriv (stirling_integral n) x) (at x)"
- using assms
- by (intro holomorphic_derivI[OF stirling_integral_holomorphic, of n "{s. Re s > 0}"]
- open_halfspace_Re_gt) auto
-
-
-
-lemma
- assumes n: "n > 0" and "x > 0"
- shows deriv_stirling_integral_complex_of_real:
- "(deriv ^^ j) (stirling_integral n) (complex_of_real x) =
- complex_of_real ((deriv ^^ j) (stirling_integral n) x)" (is "?lhs x = ?rhs x")
- and differentiable_stirling_integral_real:
- "(deriv ^^ j) (stirling_integral n) field_differentiable at x" (is ?thesis2)
-proof -
- let ?A = "{s. Re s > 0}"
- let ?f = "\<lambda>j x. (deriv ^^ j) (stirling_integral n) (complex_of_real x)"
- let ?f' = "\<lambda>j x. complex_of_real ((deriv ^^ j) (stirling_integral n) x)"
-
- have [simp]: "open ?A" by (simp add: open_halfspace_Re_gt)
-
- have "?lhs x = ?rhs x \<and> (deriv ^^ j) (stirling_integral n) field_differentiable at x"
- if "x > 0" for x using that
- proof (induction j arbitrary: x)
- case 0
- have "((\<lambda>x. Re (stirling_integral n (of_real x))) has_field_derivative
- Re (deriv (\<lambda>x. stirling_integral n x) (of_real x))) (at x)" using 0 n
- by (auto intro!: derivative_intros has_vector_derivative_real_field
- field_differentiable_derivI holomorphic_on_imp_differentiable_at[of _ ?A]
- stirling_integral_holomorphic)
- also have "?this \<longleftrightarrow> (stirling_integral n has_field_derivative
- Re (deriv (\<lambda>x. stirling_integral n x) (of_real x))) (at x)"
- using eventually_nhds_in_open[of "{0<..}" x] 0 n
- by (intro has_field_derivative_cong_ev refl)
- (auto elim!: eventually_mono simp: stirling_integral_complex_of_real)
- finally have "stirling_integral n field_differentiable at x"
- by (auto simp: field_differentiable_def)
- with 0 n show ?case by (auto simp: stirling_integral_complex_of_real)
- next
- case (Suc j x)
- note IH = conjunct1[OF Suc.IH] conjunct2[OF Suc.IH]
- have *: "(deriv ^^ Suc j) (stirling_integral n) (complex_of_real x) =
- of_real ((deriv ^^ Suc j) (stirling_integral n) x)" if x: "x > 0" for x
- proof -
- have "deriv ((deriv ^^ j) (stirling_integral n)) (complex_of_real x) =
- vector_derivative (\<lambda>x. (deriv ^^ j) (stirling_integral n) (of_real x)) (at x)"
- using n x
- by (intro vector_derivative_of_real_right [symmetric]
- holomorphic_on_imp_differentiable_at[of _ ?A] holomorphic_higher_deriv
- stirling_integral_holomorphic) auto
- also have "\<dots> = vector_derivative (\<lambda>x. of_real ((deriv ^^ j) (stirling_integral n) x)) (at x)"
- using eventually_nhds_in_open[of "{0<..}" x] x
- by (intro vector_derivative_cong_eq) (auto elim!: eventually_mono simp: IH(1))
- also have "\<dots> = of_real (deriv ((deriv ^^ j) (stirling_integral n)) x)"
- by (intro vector_derivative_of_real_left holomorphic_on_imp_differentiable_at[of _ ?A]
- field_differentiable_imp_differentiable IH(2) x)
- finally show ?thesis by simp
- qed
- have "((\<lambda>x. Re ((deriv ^^ Suc j) (stirling_integral n) (of_real x))) has_field_derivative
- Re (deriv ((deriv ^^ Suc j) (stirling_integral n)) (of_real x))) (at x)"
- using Suc.prems n
- by (intro derivative_intros has_vector_derivative_real_field field_differentiable_derivI
- holomorphic_on_imp_differentiable_at[of _ ?A] stirling_integral_holomorphic
- holomorphic_higher_deriv) auto
- also have "?this \<longleftrightarrow> ((deriv ^^ Suc j) (stirling_integral n) has_field_derivative
- Re (deriv ((deriv ^^ Suc j) (stirling_integral n)) (of_real x))) (at x)"
- using eventually_nhds_in_open[of "{0<..}" x] Suc.prems *
- by (intro has_field_derivative_cong_ev refl) (auto elim!: eventually_mono)
- finally have "(deriv ^^ Suc j) (stirling_integral n) field_differentiable at x"
- by (auto simp: field_differentiable_def)
- with *[OF Suc.prems] show ?case by blast
- qed
- from this[OF assms(2)] show "?lhs x = ?rhs x" ?thesis2 by blast+
-qed
-
-text \<open>
- Unfortunately, asymptotic power series cannot, in general, be differentiated. However, since
- @{term ln_Gamma} is holomorphic on the entire half-plane of positive real part, we can
- differentiate its asymptotic expansion after all.
-
- To do this, we use an ad-hoc version of the more general approach for holomorphic functions
- outlined in Erdelyi's ``Asymptotic Expansions'': we bound the $j$-th derivative of the
- remainder term at a point $x$ by applying Cauchy's integral formula along a circle centred
- at $x$ with radius $\frac{1}{2} x$.
-\<close>
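-text \<open>
-  As a rough sketch of the estimate carried out formally below: if $f$ is holomorphic on the
-  right half-plane and satisfies $|f(u)| \leq c / \mathrm{Re}(u)^m$ there, then for $x > 0$,
-  Cauchy's integral formula for derivatives along the circle of radius $x/2$ around $x$ yields
-  \[ f^{(j)}(x) \;=\; \frac{j!}{2\pi i} \oint_{|u - x| = x/2} \frac{f(u)}{(u - x)^{j+1}}\,\mathrm{d}u,
-     \qquad
-     \bigl|f^{(j)}(x)\bigr| \;\leq\; \frac{j!}{2\pi} \cdot 2\pi\,\frac{x}{2} \cdot
-       \frac{c\,(x/2)^{-m}}{(x/2)^{j+1}} \;=\; \frac{j!\,2^{m+j}\,c}{x^{m+j}} \;\in\; O\bigl(x^{-(m+j)}\bigr). \]
-\<close>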
-lemma deriv_stirling_integral_real_bound:
- assumes m: "m > 0"
- shows "(deriv ^^ j) (stirling_integral m) \<in> O(\<lambda>x::real. 1 / x ^ (m + j))"
-proof -
- from stirling_integral_bound[OF m] guess c . note c = this
- have "0 \<le> cmod (stirling_integral m 1)" by simp
- also have "\<dots> \<le> c" using c[of 1] by simp
- finally have c_nonneg: "c \<ge> 0" .
- define B where "B = c * 2 ^ (m + Suc j)"
- define B' where "B' = B * fact j / 2"
-
- have "eventually (\<lambda>x::real. norm ((deriv ^^ j) (stirling_integral m) x) \<le>
- B' * norm (1 / x ^ (m+ j))) at_top"
- using eventually_gt_at_top[of "0::real"]
- proof eventually_elim
- case (elim x)
- have "Re s > 0" if "s \<in> cball (of_real x) (x/2)" for s :: complex
- proof -
- have "x - Re s \<le> norm (of_real x - s)" using complex_Re_le_cmod[of "of_real x - s"] by simp
- also from that have "\<dots> \<le> x/2" by (simp add: dist_complex_def)
- finally show ?thesis using elim by simp
- qed
- hence "((\<lambda>u. stirling_integral m u / (u - of_real x) ^ Suc j) has_contour_integral
- complex_of_real (2 * pi) * \<i> / fact j *
- (deriv ^^ j) (stirling_integral m) (of_real x)) (circlepath (of_real x) (x/2))"
- using m elim
- by (intro Cauchy_has_contour_integral_higher_derivative_circlepath
- stirling_integral_continuous_on stirling_integral_holomorphic) auto
- hence "norm (of_real (2 * pi) * \<i> / fact j * (deriv ^^ j) (stirling_integral m) (of_real x)) \<le>
- B / x ^ (m + Suc j) * (2 * pi * (x / 2))"
- proof (rule has_contour_integral_bound_circlepath)
- fix u :: complex assume dist: "norm (u - of_real x) = x / 2"
- have "Re (of_real x - u) \<le> norm (of_real x - u)" by (rule complex_Re_le_cmod)
- also have "\<dots> = x / 2" using dist by (simp add: norm_minus_commute)
- finally have Re_u: "Re u \<ge> x/2" using elim by simp
- have "norm (stirling_integral m u / (u - of_real x) ^ Suc j) \<le>
- c / Re u ^ m / (x / 2) ^ Suc j" using Re_u elim
- unfolding norm_divide norm_power dist
- by (intro divide_right_mono zero_le_power c) simp_all
- also have "\<dots> \<le> c / (x/2) ^ m / (x / 2) ^ Suc j" using c_nonneg elim Re_u
- by (intro divide_right_mono divide_left_mono power_mono) simp_all
- also have "\<dots> = B / x ^ (m + Suc j)" using elim by (simp add: B_def field_simps power_add)
- finally show "norm (stirling_integral m u / (u - of_real x) ^ Suc j) \<le> B / x ^ (m + Suc j)" .
- qed (insert elim c_nonneg, auto simp: B_def simp del: power_Suc)
- hence "cmod ((deriv ^^ j) (stirling_integral m) (of_real x)) \<le> B' / x ^ (j + m)"
- using elim by (simp add: field_simps norm_divide norm_mult norm_power B'_def)
- with elim m show ?case by (simp_all add: add_ac deriv_stirling_integral_complex_of_real)
- qed
- thus ?thesis by (rule bigoI)
-qed
-
-definition stirling_sum where
- "stirling_sum j m x =
- (-1) ^ j * (\<Sum>k = 1..<m. (of_real (bernoulli (Suc k)) * pochhammer (of_nat k) j / (of_nat k *
- of_nat (Suc k))) * inverse x ^ (k + j))"
-
-definition stirling_sum' where
- "stirling_sum' j m x =
- (-1) ^ (Suc j) * (\<Sum>k\<le>m. (of_real (bernoulli' k) *
- pochhammer (of_nat (Suc k)) (j - 1) * inverse x ^ (k + j)))"
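-
-text \<open>
-  Written out informally (a restatement of the two definitions above, not additional facts):
-  \[ \mathrm{stirling\_sum}\;j\;m\;x \;=\;
-       (-1)^j \sum_{k=1}^{m-1} \frac{B_{k+1}\, k^{\overline{j}}}{k\,(k+1)}\, x^{-(k+j)}, \qquad
-     \mathrm{stirling\_sum'}\;j\;m\;x \;=\;
-       (-1)^{j+1} \sum_{k=0}^{m} B'_k\, (k+1)^{\overline{j-1}}\, x^{-(k+j)}, \]
-  where $a^{\overline{n}}$ is the rising factorial @{term "pochhammer a n"}, $B_k$ are the
-  Bernoulli numbers @{term "bernoulli k"}, $B'_k$ is @{term "bernoulli' k"}, and $j - 1$ is
-  natural-number subtraction.
-\<close>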
-
-lemma stirling_sum_complex_of_real:
- "stirling_sum j m (complex_of_real x) = complex_of_real (stirling_sum j m x)"
- by (simp add: stirling_sum_def pochhammer_of_real [symmetric] del: of_nat_Suc)
-
-lemma stirling_sum'_complex_of_real:
- "stirling_sum' j m (complex_of_real x) = complex_of_real (stirling_sum' j m x)"
- by (simp add: stirling_sum'_def pochhammer_of_real [symmetric] del: of_nat_Suc)
-
-lemma has_field_derivative_stirling_sum_complex [derivative_intros]:
- "Re x > 0 \<Longrightarrow> (stirling_sum j m has_field_derivative stirling_sum (Suc j) m x) (at x)"
- unfolding stirling_sum_def [abs_def] sum_distrib_left
- by (rule DERIV_sum) (auto intro!: derivative_eq_intros simp del: of_nat_Suc
- simp: pochhammer_Suc power_diff)
-
-lemma has_field_derivative_stirling_sum_real [derivative_intros]:
- "x > (0::real) \<Longrightarrow> (stirling_sum j m has_field_derivative stirling_sum (Suc j) m x) (at x)"
- unfolding stirling_sum_def [abs_def] sum_distrib_left
- by (rule DERIV_sum) (auto intro!: derivative_eq_intros simp del: of_nat_Suc
- simp: pochhammer_Suc power_diff)
-
-lemma has_field_derivative_stirling_sum'_complex [derivative_intros]:
- assumes "j > 0" "Re x > 0"
- shows "(stirling_sum' j m has_field_derivative stirling_sum' (Suc j) m x) (at x)"
-proof (cases j)
- case (Suc j')
- from assms have [simp]: "x \<noteq> 0" by auto
- define c where "c = (\<lambda>n. (-1) ^ Suc j * complex_of_real (bernoulli' n) *
- pochhammer (of_nat (Suc n)) j')"
- define T where "T = (\<lambda>n x. c n * inverse x ^ (j + n))"
- define T' where "T' = (\<lambda>n x. - (of_nat (j + n)) * c n * inverse x ^ (Suc (j + n)))"
- have "((\<lambda>x. \<Sum>k\<le>m. T k x) has_field_derivative (\<Sum>k\<le>m. T' k x)) (at x)" using assms Suc
- by (intro DERIV_sum)
- (auto simp: T_def T'_def intro!: derivative_eq_intros
- simp: field_simps power_add [symmetric] simp del: of_nat_Suc power_Suc of_nat_add)
- also have "(\<lambda>x. (\<Sum>k\<le>m. T k x)) = stirling_sum' j m"
- by (simp add: Suc T_def c_def stirling_sum'_def fun_eq_iff add_ac mult.assoc sum_distrib_left)
- also have "(\<Sum>k\<le>m. T' k x) = stirling_sum' (Suc j) m x"
- by (simp add: T'_def c_def Suc stirling_sum'_def sum_distrib_left
- sum_distrib_right algebra_simps pochhammer_Suc)
- finally show ?thesis .
-qed (insert assms, simp_all)
-
-lemma has_field_derivative_stirling_sum'_real [derivative_intros]:
- assumes "j > 0" "x > (0::real)"
- shows "(stirling_sum' j m has_field_derivative stirling_sum' (Suc j) m x) (at x)"
-proof (cases j)
- case (Suc j')
- from assms have [simp]: "x \<noteq> 0" by auto
- define c where "c = (\<lambda>n. (-1) ^ Suc j * (bernoulli' n) * pochhammer (of_nat (Suc n)) j')"
- define T where "T = (\<lambda>n x. c n * inverse x ^ (j + n))"
- define T' where "T' = (\<lambda>n x. - (of_nat (j + n)) * c n * inverse x ^ (Suc (j + n)))"
- have "((\<lambda>x. \<Sum>k\<le>m. T k x) has_field_derivative (\<Sum>k\<le>m. T' k x)) (at x)" using assms Suc
- by (intro DERIV_sum)
- (auto simp: T_def T'_def intro!: derivative_eq_intros
- simp: field_simps power_add [symmetric] simp del: of_nat_Suc power_Suc of_nat_add)
- also have "(\<lambda>x. (\<Sum>k\<le>m. T k x)) = stirling_sum' j m"
- by (simp add: Suc T_def c_def stirling_sum'_def fun_eq_iff add_ac mult.assoc sum_distrib_left)
- also have "(\<Sum>k\<le>m. T' k x) = stirling_sum' (Suc j) m x"
- by (simp add: T'_def c_def Suc stirling_sum'_def sum_distrib_left
- sum_distrib_right algebra_simps pochhammer_Suc)
- finally show ?thesis .
-qed (insert assms, simp_all)
-
-lemma higher_deriv_stirling_sum_complex:
- "Re x > 0 \<Longrightarrow> (deriv ^^ i) (stirling_sum j m) x = stirling_sum (i + j) m x"
-proof (induction i arbitrary: x)
- case (Suc i)
- have "deriv ((deriv ^^ i) (stirling_sum j m)) x = deriv (stirling_sum (i + j) m) x"
- using eventually_nhds_in_open[of "{x. Re x > 0}" x] Suc.prems
- by (intro deriv_cong_ev refl) (auto elim!: eventually_mono simp: open_halfspace_Re_gt Suc.IH)
- also from Suc.prems have "\<dots> = stirling_sum (Suc (i + j)) m x"
- by (intro DERIV_imp_deriv has_field_derivative_stirling_sum_complex)
- finally show ?case by simp
-qed simp_all
-
-
-definition Polygamma_approx :: "nat \<Rightarrow> nat \<Rightarrow> 'a \<Rightarrow> 'a :: {real_normed_field, ln}" where
- "Polygamma_approx j m =
- (deriv ^^ j) (\<lambda>x. (x - 1 / 2) * ln x - x + of_real (ln (2 * pi)) / 2 + stirling_sum 0 m x)"
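-
-text \<open>
-  Informally, @{term "Polygamma_approx j m"} is the $j$-th derivative of the Stirling
-  approximation of @{term ln_Gamma}, i.e.\ of
-  $x \mapsto (x - \tfrac{1}{2})\ln x - x + \tfrac{1}{2}\ln(2\pi) + \mathrm{stirling\_sum}\;0\;m\;x$.
-  The lemmas below compute it in closed form and show (in @{text higher_deriv_lnGamma_stirling})
-  that it approximates @{term "(deriv ^^ j) ln_Gamma"} up to an error in $O\bigl(x^{-(m+j)}\bigr)$.
-\<close>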
-
-lemma Polygamma_approx_Suc: "Polygamma_approx (Suc j) m = deriv (Polygamma_approx j m)"
- by (simp add: Polygamma_approx_def)
-
-lemma Polygamma_approx_0:
- "Polygamma_approx 0 m x = (x - 1/2) * ln x - x + of_real (ln (2*pi)) / 2 + stirling_sum 0 m x"
- by (simp add: Polygamma_approx_def)
-
-lemma Polygamma_approx_1_complex:
- "Re x > 0 \<Longrightarrow>
- Polygamma_approx (Suc 0) m x = ln x - 1 / (2*x) + stirling_sum (Suc 0) m x"
- unfolding Polygamma_approx_Suc Polygamma_approx_0
- by (intro DERIV_imp_deriv)
- (auto intro!: derivative_eq_intros elim!: nonpos_Reals_cases simp: field_simps)
-
-lemma Polygamma_approx_1_real:
- "x > (0 :: real) \<Longrightarrow>
- Polygamma_approx (Suc 0) m x = ln x - 1 / (2*x) + stirling_sum (Suc 0) m x"
- unfolding Polygamma_approx_Suc Polygamma_approx_0
- by (intro DERIV_imp_deriv)
- (auto intro!: derivative_eq_intros elim!: nonpos_Reals_cases simp: field_simps)
-
-lemma stirling_sum_2_conv_stirling_sum'_1:
- fixes x :: "'a :: {real_div_algebra, field_char_0}"
- assumes "m > 0" "x \<noteq> 0"
- shows "stirling_sum' 1 m x = 1 / x + 1 / (2 * x^2) + stirling_sum 2 m x"
-proof -
- have pochhammer_2: "pochhammer (of_nat k) 2 = of_nat k * of_nat (Suc k)" for k
- by (simp add: pochhammer_Suc eval_nat_numeral add_ac)
- have "stirling_sum 2 m x =
- (\<Sum>k = Suc 0..<m. of_real (bernoulli' (Suc k)) * inverse x ^ Suc (Suc k))"
- unfolding stirling_sum_def pochhammer_2 power2_minus power_one mult_1_left
- by (intro sum.cong refl)
- (simp_all add: stirling_sum_def pochhammer_2 power2_eq_square divide_simps bernoulli'_def
- del: of_nat_Suc power_Suc)
- also have "1 / (2 * x^2) + \<dots> =
- (\<Sum>k=0..<m. of_real (bernoulli' (Suc k)) * inverse x ^ Suc (Suc k))" using assms
- by (subst (2) sum.atLeast_Suc_lessThan) (simp_all add: power2_eq_square field_simps)
- also have "1 / x + \<dots> = (\<Sum>k=0..<Suc m. of_real (bernoulli' k) * inverse x ^ Suc k)"
- by (subst sum.atLeast0_lessThan_Suc_shift) (simp_all add: bernoulli'_def divide_simps)
- also have "\<dots> = (\<Sum>k\<le>m. of_real (bernoulli' k) * inverse x ^ Suc k)"
- by (intro sum.cong) auto
- also have "\<dots> = stirling_sum' 1 m x" by (simp add: stirling_sum'_def)
- finally show ?thesis by (simp add: add_ac)
-qed
-
-lemma Polygamma_approx_2_real:
- assumes "x > (0::real)" "m > 0"
- shows "Polygamma_approx (Suc (Suc 0)) m x = stirling_sum' 1 m x"
-proof -
- have "Polygamma_approx (Suc (Suc 0)) m x = deriv (Polygamma_approx (Suc 0) m) x"
- by (simp add: Polygamma_approx_Suc)
- also have "\<dots> = deriv (\<lambda>x. ln x - 1 / (2*x) + stirling_sum (Suc 0) m x) x"
- using eventually_nhds_in_open[of "{0<..}" x] assms
- by (intro deriv_cong_ev) (auto elim!: eventually_mono simp: Polygamma_approx_1_real)
- also have "\<dots> = 1 / x + 1 / (2*x^2) + stirling_sum (Suc (Suc 0)) m x" using assms
- by (intro DERIV_imp_deriv) (auto intro!: derivative_eq_intros
- elim!: nonpos_Reals_cases simp: field_simps power2_eq_square)
- also have "\<dots> = stirling_sum' 1 m x" using stirling_sum_2_conv_stirling_sum'_1[of m x] assms
- by (simp add: eval_nat_numeral)
- finally show ?thesis .
-qed
-
-lemma Polygamma_approx_2_complex:
- assumes "Re x > 0" "m > 0"
- shows "Polygamma_approx (Suc (Suc 0)) m x = stirling_sum' 1 m x"
-proof -
- have "Polygamma_approx (Suc (Suc 0)) m x = deriv (Polygamma_approx (Suc 0) m) x"
- by (simp add: Polygamma_approx_Suc)
- also have "\<dots> = deriv (\<lambda>x. ln x - 1 / (2*x) + stirling_sum (Suc 0) m x) x"
- using eventually_nhds_in_open[of "{s. Re s > 0}" x] assms
- by (intro deriv_cong_ev)
- (auto simp: open_halfspace_Re_gt elim!: eventually_mono simp: Polygamma_approx_1_complex)
- also have "\<dots> = 1 / x + 1 / (2*x^2) + stirling_sum (Suc (Suc 0)) m x" using assms
- by (intro DERIV_imp_deriv) (auto intro!: derivative_eq_intros
- elim!: nonpos_Reals_cases simp: field_simps power2_eq_square)
- also have "\<dots> = stirling_sum' 1 m x" using stirling_sum_2_conv_stirling_sum'_1[of m x] assms
- by (subst stirling_sum_2_conv_stirling_sum'_1) (auto simp: eval_nat_numeral)
- finally show ?thesis .
-qed
-
-lemma Polygamma_approx_ge_2_real:
- assumes "x > (0::real)" "m > 0"
- shows "Polygamma_approx (Suc (Suc j)) m x = stirling_sum' (Suc j) m x"
-using assms(1)
-proof (induction j arbitrary: x)
- case (0 x)
- with assms show ?case by (simp add: Polygamma_approx_2_real)
-next
- case (Suc j x)
- have "Polygamma_approx (Suc (Suc (Suc j))) m x = deriv (Polygamma_approx (Suc (Suc j)) m) x"
- by (simp add: Polygamma_approx_Suc)
- also have "\<dots> = deriv (stirling_sum' (Suc j) m) x"
- using eventually_nhds_in_open[of "{0<..}" x] Suc.prems
- by (intro deriv_cong_ev refl) (auto elim!: eventually_mono simp: Suc.IH)
- also have "\<dots> = stirling_sum' (Suc (Suc j)) m x" using Suc.prems
- by (intro DERIV_imp_deriv derivative_intros) simp_all
- finally show ?case .
-qed
-
-lemma Polygamma_approx_ge_2_complex:
- assumes "Re x > 0" "m > 0"
- shows "Polygamma_approx (Suc (Suc j)) m x = stirling_sum' (Suc j) m x"
-using assms(1)
-proof (induction j arbitrary: x)
- case (0 x)
- with assms show ?case by (simp add: Polygamma_approx_2_complex)
-next
- case (Suc j x)
- have "Polygamma_approx (Suc (Suc (Suc j))) m x = deriv (Polygamma_approx (Suc (Suc j)) m) x"
- by (simp add: Polygamma_approx_Suc)
- also have "\<dots> = deriv (stirling_sum' (Suc j) m) x"
- using eventually_nhds_in_open[of "{x. Re x > 0}" x] Suc.prems
- by (intro deriv_cong_ev refl) (auto elim!: eventually_mono simp: Suc.IH open_halfspace_Re_gt)
- also have "\<dots> = stirling_sum' (Suc (Suc j)) m x" using Suc.prems
- by (intro DERIV_imp_deriv derivative_intros) simp_all
- finally show ?case .
-qed
-
-lemma Polygamma_approx_complex_of_real:
- assumes "x > 0" "m > 0"
- shows "Polygamma_approx j m (complex_of_real x) = of_real (Polygamma_approx j m x)"
-proof (cases j)
- case 0
- with assms show ?thesis by (simp add: Polygamma_approx_0 Ln_of_real stirling_sum_complex_of_real)
-next
- case [simp]: (Suc j')
- thus ?thesis
- proof (cases j')
- case 0
- with assms show ?thesis
- by (simp add: Polygamma_approx_1_complex
- Polygamma_approx_1_real stirling_sum_complex_of_real Ln_of_real)
- next
- case (Suc j'')
- with assms show ?thesis
- by (simp add: Polygamma_approx_ge_2_complex Polygamma_approx_ge_2_real
- stirling_sum'_complex_of_real)
- qed
-qed
-
-lemma higher_deriv_Polygamma_approx [simp]:
- "(deriv ^^ j) (Polygamma_approx i m) = Polygamma_approx (j + i) m"
- by (simp add: Polygamma_approx_def funpow_add)
-
-lemma stirling_sum_holomorphic [holomorphic_intros]:
- "0 \<notin> A \<Longrightarrow> stirling_sum j m holomorphic_on A"
- unfolding stirling_sum_def by (intro holomorphic_intros) auto
-
-lemma Polygamma_approx_holomorphic [holomorphic_intros]:
- "Polygamma_approx j m holomorphic_on {s. Re s > 0}"
- unfolding Polygamma_approx_def
- by (intro holomorphic_intros) (auto simp: open_halfspace_Re_gt elim!: nonpos_Reals_cases)
-
-lemma higher_deriv_lnGamma_stirling:
- assumes m: "m > 0"
- shows "(\<lambda>x::real. (deriv ^^ j) ln_Gamma x - Polygamma_approx j m x) \<in> O(\<lambda>x. 1 / x ^ (m + j))"
-proof -
- have "eventually (\<lambda>x. \<bar>(deriv ^^ j) ln_Gamma x - Polygamma_approx j m x\<bar> =
- inverse (real m) * \<bar>(deriv ^^ j) (stirling_integral m) x\<bar>) at_top"
- using eventually_gt_at_top[of "0::real"]
- proof eventually_elim
- case (elim x)
- note x = this
- have "(deriv ^^ j) (\<lambda>x. ln_Gamma x - Polygamma_approx 0 m x) (complex_of_real x) =
- (deriv ^^ j) (\<lambda>x. (-inverse (of_nat m)) * stirling_integral m x) (complex_of_real x)"
- using eventually_nhds_in_open[of "{s. Re s > 0}" x] x m
- by (intro higher_deriv_cong_ev refl)
- (auto elim!: eventually_mono simp: ln_Gamma_stirling_complex Polygamma_approx_def
- field_simps open_halfspace_Re_gt stirling_sum_def)
- also have "\<dots> = - inverse (of_nat m) * (deriv ^^ j) (stirling_integral m) (of_real x)" using x m
- by (intro higher_deriv_cmult[of _ "{s. Re s > 0}"] stirling_integral_holomorphic)
- (auto simp: open_halfspace_Re_gt)
- also have "(deriv ^^ j) (\<lambda>x. ln_Gamma x - Polygamma_approx 0 m x) (complex_of_real x) =
- (deriv ^^ j) ln_Gamma (of_real x) - (deriv ^^ j) (Polygamma_approx 0 m) (of_real x)"
- using x
- by (intro higher_deriv_diff[of _ "{s. Re s > 0}"])
- (auto intro!: holomorphic_intros elim!: nonpos_Reals_cases simp: open_halfspace_Re_gt)
- also have "(deriv ^^ j) (Polygamma_approx 0 m) (complex_of_real x) =
- of_real (Polygamma_approx j m x)" using x m
- by (simp add: Polygamma_approx_complex_of_real)
- also have "norm (- inverse (of_nat m) * (deriv ^^ j) (stirling_integral m) (complex_of_real x)) =
- inverse (real m) * \<bar>(deriv ^^ j) (stirling_integral m) x\<bar>"
- using x m by (simp add: norm_mult norm_inverse deriv_stirling_integral_complex_of_real)
- also have "(deriv ^^ j) ln_Gamma (complex_of_real x) = of_real ((deriv ^^ j) ln_Gamma x)" using x
- by (simp add: higher_deriv_ln_Gamma_complex_of_real)
- also have "norm (\<dots> - of_real (Polygamma_approx j m x)) =
- \<bar>(deriv ^^ j) ln_Gamma x - Polygamma_approx j m x\<bar>"
- by (simp only: of_real_diff [symmetric] norm_of_real)
- finally show ?case .
- qed
- from bigthetaI_cong[OF this] m
- have "(\<lambda>x::real. (deriv ^^ j) ln_Gamma x - Polygamma_approx j m x) \<in>
- \<Theta>(\<lambda>x. (deriv ^^ j) (stirling_integral m) x)" by simp
- also have "(\<lambda>x::real. (deriv ^^ j) (stirling_integral m) x) \<in> O(\<lambda>x. 1 / x ^ (m + j))" using m
- by (rule deriv_stirling_integral_real_bound)
- finally show ?thesis .
-qed
-
-lemma Polygamma_approx_1_real':
- assumes x: "(x::real) > 0" and m: "m > 0"
- shows "Polygamma_approx 1 m x = ln x - (\<Sum>k = Suc 0..m. bernoulli' k * inverse x ^ k / real k)"
-proof -
- have "Polygamma_approx 1 m x = ln x - (1 / (2 * x) +
- (\<Sum>k=Suc 0..<m. bernoulli (Suc k) * inverse x ^ Suc k / real (Suc k)))"
- (is "_ = _ - (_ + ?S)") using x by (simp add: Polygamma_approx_1_real stirling_sum_def)
- also have "?S = (\<Sum>k=Suc 0..<m. bernoulli' (Suc k) * inverse x ^ Suc k / real (Suc k))"
- by (intro sum.cong refl) (simp_all add: bernoulli'_def)
- also have "1 / (2 * x) + \<dots> =
- (\<Sum>k=0..<m. bernoulli' (Suc k) * inverse x ^ Suc k / real (Suc k))" using m
- by (subst (2) sum.atLeast_Suc_lessThan) (simp_all add: field_simps)
- also have "\<dots> = (\<Sum>k = Suc 0..m. bernoulli' k * inverse x ^ k / real k)" using assms
- by (subst sum.shift_bounds_Suc_ivl [symmetric]) (simp add: atLeastLessThanSuc_atLeastAtMost)
- finally show ?thesis .
-qed
-
-theorem
- assumes m: "m > 0"
- shows ln_Gamma_real_asymptotics:
- "(\<lambda>x. ln_Gamma x - ((x - 1 / 2) * ln x - x + ln (2 * pi) / 2 +
- (\<Sum>k = 1..<m. bernoulli (Suc k) / (real k * real (Suc k)) / x^k)))
- \<in> O(\<lambda>x. 1 / x ^ m)" (is ?th1)
- and Digamma_real_asymptotics:
- "(\<lambda>x. Digamma x - (ln x - (\<Sum>k=1..m. bernoulli' k / real k / x ^ k)))
- \<in> O(\<lambda>x. 1 / (x ^ Suc m))" (is ?th2)
- and Polygamma_real_asymptotics: "j > 0 \<Longrightarrow>
- (\<lambda>x. Polygamma j x - (- 1) ^ Suc j * (\<Sum>k\<le>m. bernoulli' k *
- pochhammer (real (Suc k)) (j - 1) / x ^ (k + j)))
- \<in> O(\<lambda>x. 1 / x ^ (m+j+1))" (is "_ \<Longrightarrow> ?th3")
-proof -
- define G :: "nat \<Rightarrow> real \<Rightarrow> real" where
- "G = (\<lambda>m. if m = 0 then ln_Gamma else Polygamma (m - 1))"
- have *: "(\<lambda>x. G j x - h x) \<in> O(\<lambda>x. 1 / x ^ (m + j))"
- if "\<And>x::real. x > 0 \<Longrightarrow> Polygamma_approx j m x = h x" for j h
- proof -
- have "(\<lambda>x. G j x - h x) \<in>
- \<Theta>(\<lambda>x. (deriv ^^ j) ln_Gamma x - Polygamma_approx j m x)" (is "_ \<in> \<Theta>(?f)")
- using that
- by (intro bigthetaI_cong) (auto intro: eventually_mono[OF eventually_gt_at_top[of "0::real"]]
- simp del: funpow.simps simp: higher_deriv_ln_Gamma_real G_def)
- also have "?f \<in> O(\<lambda>x::real. 1 / x ^ (m + j))" using m
- by (rule higher_deriv_lnGamma_stirling)
- finally show ?thesis .
- qed
-
- note [[simproc del: simplify_landau_sum]]
- from *[OF Polygamma_approx_0] assms show ?th1
- by (simp add: G_def Polygamma_approx_0 stirling_sum_def field_simps)
- from *[OF Polygamma_approx_1_real'] assms show ?th2 by (simp add: G_def field_simps)
-
- assume j: "j > 0"
- from *[OF Polygamma_approx_ge_2_real, of "j - 1"] assms j show ?th3
- by (simp add: G_def stirling_sum'_def power_add power_diff field_simps)
-qed
-
-end
diff --git a/thys/Stirling_Formula/ROOT b/thys/Stirling_Formula/ROOT
--- a/thys/Stirling_Formula/ROOT
+++ b/thys/Stirling_Formula/ROOT
@@ -1,13 +1,14 @@
chapter AFP
session Stirling_Formula (AFP) = Bernoulli +
options [timeout = 600]
sessions
Bernoulli
Landau_Symbols
+ "HOL-Real_Asymp"
theories
Stirling_Formula
- Ln_Gamma_Asymptotics
+ Gamma_Asymptotics
document_files
"root.tex"
"root.bib"
diff --git a/thys/Stirling_Formula/document/root.tex b/thys/Stirling_Formula/document/root.tex
--- a/thys/Stirling_Formula/document/root.tex
+++ b/thys/Stirling_Formula/document/root.tex
@@ -1,63 +1,68 @@
\documentclass[11pt,a4paper]{article}
\usepackage{isabelle,isabellesym}
% further packages required for unusual symbols (see also
% isabellesym.sty), use only when needed
\usepackage{amssymb,amsmath}
%for \<leadsto>, \<box>, \<diamond>, \<sqsupset>, \<mho>, \<Join>,
%\<lhd>, \<lesssim>, \<greatersim>, \<lessapprox>, \<greaterapprox>,
%\<triangleq>, \<yen>, \<lozenge>
%\usepackage{eurosym}
%for \<euro>
%\usepackage[only,bigsqcap]{stmaryrd}
%for \<Sqinter>
%\usepackage{eufrak}
%for \<AA> ... \<ZZ>, \<aa> ... \<zz> (also included in amssymb)
%\usepackage{textcomp}
%for \<onequarter>, \<onehalf>, \<threequarters>, \<degree>, \<cent>,
%\<currency>
% this should be the last package used
\usepackage{pdfsetup}
% urls in roman style, theory text in math-similar italics
\urlstyle{rm}
\isabellestyle{it}
% for uniform font size
%\renewcommand{\isastyle}{\isastyleminor}
\begin{document}
\title{Stirling's formula}
\author{Manuel Eberl}
\maketitle
\begin{abstract}
This work contains a proof of Stirling's formula both for the factorial $n! \sim \sqrt{2\pi n} (n/e)^n$ on natural numbers and the real Gamma function $\Gamma(x)\sim \sqrt{2\pi/x} (x/e)^x$. The proof is based on work by Graham Jameson~\cite{jameson}.
-This is then used to derive the complete asymptotic expansion of the logarithmic Gamma function and its derivatives (the Polygamma functions) in terms of Bernoulli numbers.
+This is then extended to the full asymptotic expansion
+\begin{multline*}
+\log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi) + \sum_{k=1}^{n-1} \frac{B_{k+1}}{k(k+1)} z^{-k}\\
+{} - \frac{1}{n} \int_0^\infty B_n([t])(t + z)^{-n}\,\text{d}t
+\end{multline*}
+uniformly for all complex $z\neq 0$ in the cone $|\text{arg}(z)|\leq \alpha$ for any $\alpha\in(0,\pi)$, with which the above asymptotic relation for $\Gamma$ is also extended to complex arguments.
\end{abstract}
\tableofcontents
\parindent 0pt\parskip 0.5ex
\input{session}
\nocite{*}
\bibliographystyle{abbrv}
\bibliography{root}
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
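A worked instance of the expansion in the abstract above (an illustration, not part of the entry's text): for $n = 1$ the sum is empty and the formula specializes to

    \log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi)
                    - \int_0^\infty B_1([t])(t + z)^{-1}\,\text{d}t,

and for $n = 2$ the first correction term $\frac{B_2}{1\cdot 2}\,z^{-1} = \frac{1}{12z}$ appears, the familiar first-order refinement of Stirling's approximation.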
diff --git a/web/entries/ADS_Functor.html b/web/entries/ADS_Functor.html
new file mode 100644
--- /dev/null
+++ b/web/entries/ADS_Functor.html
@@ -0,0 +1,204 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="utf-8">
+<title>Authenticated Data Structures As Functors - Archive of Formal Proofs
+</title>
+<link rel="stylesheet" type="text/css" href="../front.css">
+<link rel="icon" href="../images/favicon.ico" type="image/icon">
+<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
+<!-- MathJax for LaTeX support in abstracts -->
+<script>
+MathJax = {
+ tex: {
+ inlineMath: [['$', '$'], ['\\(', '\\)']]
+ },
+ processEscapes: true,
+ svg: {
+ fontCache: 'global'
+ }
+};
+</script>
+<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
+</head>
+
+<body class="mathjax_ignore">
+
+<table width="100%">
+<tbody>
+<tr>
+
+<!-- Navigation -->
+<td width="20%" align="center" valign="top">
+ <p>&nbsp;</p>
+ <a href="https://www.isa-afp.org/">
+ <img src="../images/isabelle.png" width="100" height="88" border=0>
+ </a>
+ <p>&nbsp;</p>
+ <p>&nbsp;</p>
+ <table class="nav" width="80%">
+ <tr>
+ <td class="nav" width="100%"><a href="../index.html">Home</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../about.html">About</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../submitting.html">Submission</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../updating.html">Updating Entries</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../using.html">Using Entries</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../search.html">Search</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../statistics.html">Statistics</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../topics.html">Index</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../download.html">Download</a></td>
+ </tr>
+ </table>
+ <p>&nbsp;</p>
+ <p>&nbsp;</p>
+</td>
+
+
+<!-- Content -->
+<td width="80%" valign="top">
+<div align="center">
+ <p>&nbsp;</p>
+ <h1> <font class="first">A</font>uthenticated
+
+ <font class="first">D</font>ata
+
+ <font class="first">S</font>tructures
+
+ <font class="first">A</font>s
+
+ <font class="first">F</font>unctors
+
+</h1>
+ <p>&nbsp;</p>
+
+<table width="80%" class="data">
+<tbody>
+<tr>
+ <td class="datahead" width="20%">Title:</td>
+ <td class="data" width="80%">Authenticated Data Structures As Functors</td>
+</tr>
+
+<tr>
+ <td class="datahead">
+ Authors:
+ </td>
+ <td class="data">
+ <a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
+ Ognjen Marić (ogi /dot/ afp /at/ mynosefroze /dot/ com)
+ </td>
+</tr>
+
+
+
+<tr>
+ <td class="datahead">Submission date:</td>
+ <td class="data">2020-04-16</td>
+</tr>
+
+<tr>
+ <td class="datahead" valign="top">Abstract:</td>
+ <td class="abstract mathjax_process">
+Authenticated data structures allow several systems to convince each
+other that they are referring to the same data structure, even if each
+of them knows only a part of the data structure. Using inclusion
+proofs, knowledgeable systems can selectively share their knowledge
+with other systems and the latter can verify the authenticity of what
+is being shared. In this article, we show how to modularly define
+authenticated data structures, their inclusion proofs, and operations
+thereon as datatypes in Isabelle/HOL, using a shallow embedding.
+Modularity allows us to construct complicated trees from reusable
+building blocks, which we call Merkle functors. Merkle functors
+include sums, products, and function spaces and are closed under
+composition and least fixpoints. As a practical application, we model
+the hierarchical transactions of <a
+href="https://www.canton.io">Canton</a>, a
+practical interoperability protocol for distributed ledgers, as
+authenticated data structures. This is a first step towards
+formalizing the Canton protocol and verifying its integrity and
+security guarantees.</td>
+</tr>
+
+
+<tr>
+ <td class="datahead" valign="top">BibTeX:</td>
+ <td class="formatted">
+ <pre>@article{ADS_Functor-AFP,
+ author = {Andreas Lochbihler and Ognjen Marić},
+ title = {Authenticated Data Structures As Functors},
+ journal = {Archive of Formal Proofs},
+ month = apr,
+ year = 2020,
+ note = {\url{http://isa-afp.org/entries/ADS_Functor.html},
+ Formal proof development},
+ ISSN = {2150-914x},
+}</pre>
+ </td>
+</tr>
+
+ <tr><td class="datahead">License:</td>
+ <td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
+
+
+
+
+
+
+ </tbody>
+</table>
+
+<p></p>
+
+<table class="links">
+ <tbody>
+ <tr>
+ <td class="links">
+ <a href="../browser_info/current/AFP/ADS_Functor/outline.pdf">Proof outline</a><br>
+ <a href="../browser_info/current/AFP/ADS_Functor/document.pdf">Proof document</a>
+ </td>
+ </tr>
+ <tr>
+ <td class="links">
+ <a href="../browser_info/current/AFP/ADS_Functor/index.html">Browse theories</a>
+ </td></tr>
+ <tr>
+ <td class="links">
+ <a href="../release/afp-ADS_Functor-current.tar.gz">Download this entry</a>
+ </td>
+ </tr>
+
+
+ <tr><td class="links">Older releases:
+ None
+ </td></tr>
+
+ </tbody>
+</table>
+
+</div>
+</td>
+
+</tr>
+</tbody>
+</table>
+
+<script src="../jquery.min.js"></script>
+<script src="../script.js"></script>
+
+</body>
+</html>
\ No newline at end of file
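The abstract above describes Merkle functors only informally. The following is a minimal, hypothetical Isabelle sketch of the underlying idea; the theory name, the `hash` placeholder and the constructor names are illustrative and are not the entry's actual definitions:

    (* Hypothetical sketch: every subtree is either fully known or
       abstracted ("blinded") by its root hash, so an inclusion proof
       can reveal exactly the parts a peer needs in order to verify. *)
    theory Merkle_Sketch
      imports Main
    begin

    type_synonym hash = nat   (* placeholder for a real digest type *)

    datatype 'a merkle_tree =
        Blinded hash                                   (* only the hash is known *)
      | Leaf hash 'a                                   (* a known leaf *)
      | Node hash "'a merkle_tree" "'a merkle_tree"    (* a known inner node *)

    end

Sharing then amounts to sending a tree in which irrelevant subtrees are replaced by `Blinded` nodes; the recipient recomputes hashes bottom-up and compares them against the root hash it already trusts.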
diff --git a/web/entries/AODV.html b/web/entries/AODV.html
--- a/web/entries/AODV.html
+++ b/web/entries/AODV.html
@@ -1,252 +1,252 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Loop freedom of the (untimed) AODV routing protocol - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>oop
freedom
of
the
<font class="first">(</font>untimed)
<font class="first">A</font>ODV
routing
protocol
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Loop freedom of the (untimed) AODV routing protocol</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.tbrk.org">Timothy Bourke</a> and
<a href="http://www.hoefner-online.de/">Peter Höfner</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-10-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
The Ad hoc On-demand Distance Vector (AODV) routing protocol allows
the nodes in a Mobile Ad hoc Network (MANET) or a Wireless Mesh
Network (WMN) to know where to forward data packets. Such a protocol
is ‘loop free’ if it never leads to routing decisions that forward
packets in circles.
<p>
This development mechanises an existing pen-and-paper proof of loop
freedom of AODV. The protocol is modelled in the Algebra of
Wireless Networks (AWN), which is the subject of an earlier paper
and AFP mechanization. The proof relies on a novel compositional
approach for lifting invariants to networks of nodes.
</p><p>
We exploit the mechanization to analyse several variants of AODV and
show that Isabelle/HOL can re-establish most proof obligations
automatically and identify exactly the steps that are no longer valid.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{AODV-AFP,
author = {Timothy Bourke and Peter Höfner},
title = {Loop freedom of the (untimed) AODV routing protocol},
journal = {Archive of Formal Proofs},
month = oct,
year = 2014,
note = {\url{http://isa-afp.org/entries/AODV.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="AWN.html">AWN</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AODV/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/AODV/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AODV/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-AODV-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-AODV-2019-06-11.tar.gz">
afp-AODV-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-AODV-2018-08-16.tar.gz">
afp-AODV-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-AODV-2017-10-10.tar.gz">
afp-AODV-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-AODV-2016-12-17.tar.gz">
afp-AODV-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-AODV-2016-02-22.tar.gz">
afp-AODV-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-AODV-2015-05-27.tar.gz">
afp-AODV-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-AODV-2014-11-03.tar.gz">
afp-AODV-2014-11-03.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-AODV-2014-11-01.tar.gz">
afp-AODV-2014-11-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/AVL-Trees.html b/web/entries/AVL-Trees.html
--- a/web/entries/AVL-Trees.html
+++ b/web/entries/AVL-Trees.html
@@ -1,295 +1,295 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>AVL Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>VL
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">AVL Trees</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a> and
Cornelia Pusch
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-03-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Two formalizations of AVL trees with room for extensions. The first formalization is monolithic and shorter, the second one in two stages, longer and a bit simpler. The final implementation is the same. If you are interested in developing this further, please contact <tt>gerwin.klein@nicta.com.au</tt>.</div></td>
+ <td class="abstract mathjax_process">Two formalizations of AVL trees with room for extensions. The first formalization is monolithic and shorter, the second one in two stages, longer and a bit simpler. The final implementation is the same. If you are interested in developing this further, please contact <tt>gerwin.klein@nicta.com.au</tt>.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2011-04-11]: Ondrej Kuncar added delete function</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{AVL-Trees-AFP,
author = {Tobias Nipkow and Cornelia Pusch},
title = {AVL Trees},
journal = {Archive of Formal Proofs},
month = mar,
year = 2004,
note = {\url{http://isa-afp.org/entries/AVL-Trees.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AVL-Trees/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/AVL-Trees/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AVL-Trees/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-AVL-Trees-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-AVL-Trees-2019-06-11.tar.gz">
afp-AVL-Trees-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-AVL-Trees-2018-08-16.tar.gz">
afp-AVL-Trees-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-AVL-Trees-2017-10-10.tar.gz">
afp-AVL-Trees-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-AVL-Trees-2016-12-17.tar.gz">
afp-AVL-Trees-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-AVL-Trees-2016-02-22.tar.gz">
afp-AVL-Trees-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-AVL-Trees-2015-05-27.tar.gz">
afp-AVL-Trees-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-AVL-Trees-2014-08-28.tar.gz">
afp-AVL-Trees-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-AVL-Trees-2013-12-11.tar.gz">
afp-AVL-Trees-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-AVL-Trees-2013-11-17.tar.gz">
afp-AVL-Trees-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-AVL-Trees-2013-02-16.tar.gz">
afp-AVL-Trees-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-AVL-Trees-2012-05-24.tar.gz">
afp-AVL-Trees-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-AVL-Trees-2011-10-11.tar.gz">
afp-AVL-Trees-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-AVL-Trees-2011-02-11.tar.gz">
afp-AVL-Trees-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-AVL-Trees-2010-06-30.tar.gz">
afp-AVL-Trees-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-AVL-Trees-2009-12-12.tar.gz">
afp-AVL-Trees-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-AVL-Trees-2009-04-29.tar.gz">
afp-AVL-Trees-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-AVL-Trees-2008-06-10.tar.gz">
afp-AVL-Trees-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-AVL-Trees-2007-11-27.tar.gz">
afp-AVL-Trees-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-AVL-Trees-2005-10-14.tar.gz">
afp-AVL-Trees-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-AVL-Trees-2004-05-21.tar.gz">
afp-AVL-Trees-2004-05-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-AVL-Trees-2004-04-20.tar.gz">
afp-AVL-Trees-2004-04-20.tar.gz
</a>
</li>
<li>Isabelle 2003:
<a href="../release/afp-AVL-Trees-2004-03-19.tar.gz">
afp-AVL-Trees-2004-03-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/AWN.html b/web/entries/AWN.html
--- a/web/entries/AWN.html
+++ b/web/entries/AWN.html
@@ -1,253 +1,253 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Mechanization of the Algebra for Wireless Networks (AWN) - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>echanization
of
the
<font class="first">A</font>lgebra
for
<font class="first">W</font>ireless
<font class="first">N</font>etworks
<font class="first">(</font>AWN)
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Mechanization of the Algebra for Wireless Networks (AWN)</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.tbrk.org">Timothy Bourke</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-03-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
AWN is a process algebra developed for modelling and analysing
protocols for Mobile Ad hoc Networks (MANETs) and Wireless Mesh
Networks (WMNs). AWN models comprise five distinct layers:
sequential processes, local parallel compositions, nodes, partial
networks, and complete networks.</p>
<p>
This development mechanises the original operational semantics of
AWN and introduces a variant 'open' operational semantics that
enables the compositional statement and proof of invariants across
distinct network nodes. It supports labels (for weakening
invariants) and (abstract) data state manipulations. A framework for
compositional invariant proofs is developed, including a tactic
(inv_cterms) for inductive invariant proofs of sequential processes,
lifting rules for the open versions of the higher layers, and a rule
for transferring lifted properties back to the standard semantics. A
notion of 'control terms' reduces proof obligations to the subset of
subterms that act directly (in contrast to operators for combining
-terms and joining processes).</p></div></td>
+terms and joining processes).</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{AWN-AFP,
author = {Timothy Bourke},
title = {Mechanization of the Algebra for Wireless Networks (AWN)},
journal = {Archive of Formal Proofs},
month = mar,
year = 2014,
note = {\url{http://isa-afp.org/entries/AWN.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="AODV.html">AODV</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AWN/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/AWN/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AWN/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-AWN-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-AWN-2019-06-11.tar.gz">
afp-AWN-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-AWN-2018-08-16.tar.gz">
afp-AWN-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-AWN-2017-10-10.tar.gz">
afp-AWN-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-AWN-2016-12-17.tar.gz">
afp-AWN-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-AWN-2016-02-22.tar.gz">
afp-AWN-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-AWN-2015-05-27.tar.gz">
afp-AWN-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-AWN-2014-08-28.tar.gz">
afp-AWN-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-AWN-2014-03-15.tar.gz">
afp-AWN-2014-03-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Abortable_Linearizable_Modules.html b/web/entries/Abortable_Linearizable_Modules.html
--- a/web/entries/Abortable_Linearizable_Modules.html
+++ b/web/entries/Abortable_Linearizable_Modules.html
@@ -1,255 +1,255 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Abortable Linearizable Modules - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>bortable
<font class="first">L</font>inearizable
<font class="first">M</font>odules
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Abortable Linearizable Modules</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Rachid Guerraoui (rachid /dot/ guerraoui /at/ epfl /dot/ ch),
<a href="http://lara.epfl.ch/~kuncak/">Viktor Kuncak</a> and
Giuliano Losa (giuliano /at/ galois /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-03-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We define the Abortable Linearizable Module automaton (ALM for short)
and prove its key composition property using the IOA theory of
HOLCF. The ALM is at the heart of the Speculative Linearizability
framework. This framework simplifies devising correct speculative
algorithms by enabling their decomposition into independent modules
that can be analyzed and proved correct in isolation. It is
particularly useful when working in a distributed environment, where
the need to tolerate faults and asynchrony has made current
monolithic protocols so intricate that it is no longer tractable to
check their correctness. Our theory contains a typical example of a
-refinement proof in the I/O-automata framework of Lynch and Tuttle.</div></td>
+refinement proof in the I/O-automata framework of Lynch and Tuttle.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Abortable_Linearizable_Modules-AFP,
author = {Rachid Guerraoui and Viktor Kuncak and Giuliano Losa},
title = {Abortable Linearizable Modules},
journal = {Archive of Formal Proofs},
month = mar,
year = 2012,
note = {\url{http://isa-afp.org/entries/Abortable_Linearizable_Modules.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abortable_Linearizable_Modules/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Abortable_Linearizable_Modules/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abortable_Linearizable_Modules/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Abortable_Linearizable_Modules-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Abortable_Linearizable_Modules-2019-06-11.tar.gz">
afp-Abortable_Linearizable_Modules-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Abortable_Linearizable_Modules-2018-08-16.tar.gz">
afp-Abortable_Linearizable_Modules-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Abortable_Linearizable_Modules-2017-10-10.tar.gz">
afp-Abortable_Linearizable_Modules-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Abortable_Linearizable_Modules-2016-12-17.tar.gz">
afp-Abortable_Linearizable_Modules-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Abortable_Linearizable_Modules-2016-02-22.tar.gz">
afp-Abortable_Linearizable_Modules-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Abortable_Linearizable_Modules-2015-05-27.tar.gz">
afp-Abortable_Linearizable_Modules-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Abortable_Linearizable_Modules-2014-08-28.tar.gz">
afp-Abortable_Linearizable_Modules-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Abortable_Linearizable_Modules-2013-12-11.tar.gz">
afp-Abortable_Linearizable_Modules-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Abortable_Linearizable_Modules-2013-11-17.tar.gz">
afp-Abortable_Linearizable_Modules-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Abortable_Linearizable_Modules-2013-02-16.tar.gz">
afp-Abortable_Linearizable_Modules-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Abortable_Linearizable_Modules-2012-05-24.tar.gz">
afp-Abortable_Linearizable_Modules-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Abortable_Linearizable_Modules-2012-03-02.tar.gz">
afp-Abortable_Linearizable_Modules-2012-03-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Abs_Int_ITP2012.html b/web/entries/Abs_Int_ITP2012.html
--- a/web/entries/Abs_Int_ITP2012.html
+++ b/web/entries/Abs_Int_ITP2012.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Abstract Interpretation of Annotated Commands - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>bstract
<font class="first">I</font>nterpretation
of
<font class="first">A</font>nnotated
<font class="first">C</font>ommands
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Abstract Interpretation of Annotated Commands</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-11-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This is the Isabelle formalization of the material described in the
eponymous <a href="https://doi.org/10.1007/978-3-642-32347-8_9">ITP 2012 paper</a>.
It develops a generic abstract interpreter for a
while-language, including widening and narrowing. The collecting
semantics and the abstract interpreter operate on annotated commands:
the program is represented as a syntax tree with the semantic
information directly embedded, without auxiliary labels. The aim of
the formalization is simplicity, not efficiency or
precision. This is motivated by the inclusion of the material in a
theorem prover based course on semantics. A similar (but more
polished) development is covered in the book
-<a href="https://doi.org/10.1007/978-3-319-10542-0">Concrete Semantics</a>.</div></td>
+<a href="https://doi.org/10.1007/978-3-319-10542-0">Concrete Semantics</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Abs_Int_ITP2012-AFP,
author = {Tobias Nipkow},
title = {Abstract Interpretation of Annotated Commands},
journal = {Archive of Formal Proofs},
month = nov,
year = 2016,
note = {\url{http://isa-afp.org/entries/Abs_Int_ITP2012.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abs_Int_ITP2012/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Abs_Int_ITP2012/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abs_Int_ITP2012/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Abs_Int_ITP2012-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Abs_Int_ITP2012-2019-06-11.tar.gz">
afp-Abs_Int_ITP2012-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Abs_Int_ITP2012-2018-08-16.tar.gz">
afp-Abs_Int_ITP2012-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Abs_Int_ITP2012-2017-10-10.tar.gz">
afp-Abs_Int_ITP2012-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Abs_Int_ITP2012-2016-12-17.tar.gz">
afp-Abs_Int_ITP2012-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Abstract-Hoare-Logics.html b/web/entries/Abstract-Hoare-Logics.html
--- a/web/entries/Abstract-Hoare-Logics.html
+++ b/web/entries/Abstract-Hoare-Logics.html
@@ -1,272 +1,272 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Abstract Hoare Logics - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>bstract
<font class="first">H</font>oare
<font class="first">L</font>ogics
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Abstract Hoare Logics</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2006-08-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
-  <td class="abstract"><div class="mathjax_process">These theories describe Hoare logics for a number of imperative language constructs, from while-loops to mutually recursive procedures. Both partial and total correctness are treated. In particular a proof system for total correctness of recursive procedures in the presence of unbounded nondeterminism is presented.</div></td>
+  <td class="abstract mathjax_process">These theories describe Hoare logics for a number of imperative language constructs, from while-loops to mutually recursive procedures. Both partial and total correctness are treated. In particular a proof system for total correctness of recursive procedures in the presence of unbounded nondeterminism is presented.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Abstract-Hoare-Logics-AFP,
author = {Tobias Nipkow},
title = {Abstract Hoare Logics},
journal = {Archive of Formal Proofs},
month = aug,
year = 2006,
note = {\url{http://isa-afp.org/entries/Abstract-Hoare-Logics.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abstract-Hoare-Logics/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Abstract-Hoare-Logics/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abstract-Hoare-Logics/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Abstract-Hoare-Logics-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Abstract-Hoare-Logics-2019-06-11.tar.gz">
afp-Abstract-Hoare-Logics-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Abstract-Hoare-Logics-2018-08-16.tar.gz">
afp-Abstract-Hoare-Logics-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Abstract-Hoare-Logics-2017-10-10.tar.gz">
afp-Abstract-Hoare-Logics-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Abstract-Hoare-Logics-2016-12-17.tar.gz">
afp-Abstract-Hoare-Logics-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Abstract-Hoare-Logics-2016-02-22.tar.gz">
afp-Abstract-Hoare-Logics-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Abstract-Hoare-Logics-2015-05-27.tar.gz">
afp-Abstract-Hoare-Logics-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Abstract-Hoare-Logics-2014-08-28.tar.gz">
afp-Abstract-Hoare-Logics-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Abstract-Hoare-Logics-2013-12-11.tar.gz">
afp-Abstract-Hoare-Logics-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Abstract-Hoare-Logics-2013-11-17.tar.gz">
afp-Abstract-Hoare-Logics-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Abstract-Hoare-Logics-2013-02-16.tar.gz">
afp-Abstract-Hoare-Logics-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Abstract-Hoare-Logics-2012-05-24.tar.gz">
afp-Abstract-Hoare-Logics-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Abstract-Hoare-Logics-2011-10-11.tar.gz">
afp-Abstract-Hoare-Logics-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Abstract-Hoare-Logics-2011-02-11.tar.gz">
afp-Abstract-Hoare-Logics-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Abstract-Hoare-Logics-2010-06-30.tar.gz">
afp-Abstract-Hoare-Logics-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Abstract-Hoare-Logics-2009-12-12.tar.gz">
afp-Abstract-Hoare-Logics-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Abstract-Hoare-Logics-2009-04-29.tar.gz">
afp-Abstract-Hoare-Logics-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Abstract-Hoare-Logics-2008-06-10.tar.gz">
afp-Abstract-Hoare-Logics-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Abstract-Hoare-Logics-2007-11-27.tar.gz">
afp-Abstract-Hoare-Logics-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Abstract-Rewriting.html b/web/entries/Abstract-Rewriting.html
--- a/web/entries/Abstract-Rewriting.html
+++ b/web/entries/Abstract-Rewriting.html
@@ -1,277 +1,277 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Abstract Rewriting - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>bstract
<font class="first">R</font>ewriting
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Abstract Rewriting</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-06-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present an Isabelle formalization of abstract rewriting (see, e.g.,
the book by Baader and Nipkow). First, we define standard relations like
<i>joinability</i>, <i>meetability</i>, <i>conversion</i>, etc. Then, we
formalize important properties of abstract rewrite systems, e.g.,
confluence and strong normalization. Our main concern is on strong
normalization, since this formalization is the basis of <a
href="http://cl-informatik.uibk.ac.at/software/ceta">CeTA</a> (which is
mainly about strong normalization of term rewrite systems). Hence lemmas
involving strong normalization constitute by far the biggest part of this
-theory. One of those is Newman's lemma.</div></td>
+theory. One of those is Newman's lemma.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2010-09-17]: Added theories defining several (ordered)
semirings related to strong normalization and giving some standard
instances. <br>
[2013-10-16]: Generalized delta-orders from rationals to Archimedean fields.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Abstract-Rewriting-AFP,
author = {Christian Sternagel and René Thiemann},
title = {Abstract Rewriting},
journal = {Archive of Formal Proofs},
month = jun,
year = 2010,
note = {\url{http://isa-afp.org/entries/Abstract-Rewriting.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Decreasing-Diagrams.html">Decreasing-Diagrams</a>, <a href="Decreasing-Diagrams-II.html">Decreasing-Diagrams-II</a>, <a href="First_Order_Terms.html">First_Order_Terms</a>, <a href="Matrix.html">Matrix</a>, <a href="Minsky_Machines.html">Minsky_Machines</a>, <a href="Myhill-Nerode.html">Myhill-Nerode</a>, <a href="Polynomials.html">Polynomials</a>, <a href="Rewriting_Z.html">Rewriting_Z</a>, <a href="Well_Quasi_Orders.html">Well_Quasi_Orders</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abstract-Rewriting/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Abstract-Rewriting/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abstract-Rewriting/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Abstract-Rewriting-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Abstract-Rewriting-2019-06-11.tar.gz">
afp-Abstract-Rewriting-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Abstract-Rewriting-2018-08-16.tar.gz">
afp-Abstract-Rewriting-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Abstract-Rewriting-2017-10-10.tar.gz">
afp-Abstract-Rewriting-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Abstract-Rewriting-2016-12-17.tar.gz">
afp-Abstract-Rewriting-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Abstract-Rewriting-2016-02-22.tar.gz">
afp-Abstract-Rewriting-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Abstract-Rewriting-2015-05-27.tar.gz">
afp-Abstract-Rewriting-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Abstract-Rewriting-2014-08-28.tar.gz">
afp-Abstract-Rewriting-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Abstract-Rewriting-2013-12-11.tar.gz">
afp-Abstract-Rewriting-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Abstract-Rewriting-2013-11-17.tar.gz">
afp-Abstract-Rewriting-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Abstract-Rewriting-2013-02-16.tar.gz">
afp-Abstract-Rewriting-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Abstract-Rewriting-2012-05-24.tar.gz">
afp-Abstract-Rewriting-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Abstract-Rewriting-2011-10-11.tar.gz">
afp-Abstract-Rewriting-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Abstract-Rewriting-2011-02-11.tar.gz">
afp-Abstract-Rewriting-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Abstract-Rewriting-2010-06-30.tar.gz">
afp-Abstract-Rewriting-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Abstract-Rewriting-2010-06-17.tar.gz">
afp-Abstract-Rewriting-2010-06-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
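The abstract above singles out Newman's lemma among the strong-normalization results. For orientation only (standard textbook statement, not quoted from the entry), the lemma for an abstract rewrite relation \to reads:

    % Newman's lemma, standard formulation; \downarrow abbreviates joinability
    \text{If } \to \text{ is strongly normalizing and locally confluent, i.e. }
    \forall a\,b\,c.\; a \to b \;\wedge\; a \to c \;\Longrightarrow\; b \downarrow c,
    \quad\text{where}\quad b \downarrow c \;\equiv\; \exists d.\; b \to^{*} d \wedge c \to^{*} d,
    \text{ then } \to \text{ is confluent: }
    \forall a\,b\,c.\; a \to^{*} b \;\wedge\; a \to^{*} c \;\Longrightarrow\; b \downarrow c.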
diff --git a/web/entries/Abstract_Completeness.html b/web/entries/Abstract_Completeness.html
--- a/web/entries/Abstract_Completeness.html
+++ b/web/entries/Abstract_Completeness.html
@@ -1,227 +1,227 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Abstract Completeness - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>bstract
<font class="first">C</font>ompleteness
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Abstract Completeness</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl),
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk) and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">A formalization of an abstract property of possibly infinite derivation trees (modeled by a codatatype), representing the core of a proof (in Beth/Hintikka style) of the first-order logic completeness theorem, independent of the concrete syntax or inference rules. This work is described in detail in the IJCAR 2014 publication by the authors.
-The abstract proof can be instantiated for a wide range of Gentzen and tableau systems as well as various flavors of FOL---e.g., with or without predicates, equality, or sorts. Here, we give only a toy example instantiation with classical propositional logic. A more serious instance---many-sorted FOL with equality---is described elsewhere [Blanchette and Popescu, FroCoS 2013].</div></td>
+ <td class="abstract mathjax_process">A formalization of an abstract property of possibly infinite derivation trees (modeled by a codatatype), representing the core of a proof (in Beth/Hintikka style) of the first-order logic completeness theorem, independent of the concrete syntax or inference rules. This work is described in detail in the IJCAR 2014 publication by the authors.
+The abstract proof can be instantiated for a wide range of Gentzen and tableau systems as well as various flavors of FOL---e.g., with or without predicates, equality, or sorts. Here, we give only a toy example instantiation with classical propositional logic. A more serious instance---many-sorted FOL with equality---is described elsewhere [Blanchette and Popescu, FroCoS 2013].</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Abstract_Completeness-AFP,
author = {Jasmin Christian Blanchette and Andrei Popescu and Dmitriy Traytel},
title = {Abstract Completeness},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/Abstract_Completeness.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Collections.html">Collections</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Abstract_Soundness.html">Abstract_Soundness</a>, <a href="Incredible_Proof_Machine.html">Incredible_Proof_Machine</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abstract_Completeness/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Abstract_Completeness/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abstract_Completeness/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Abstract_Completeness-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Abstract_Completeness-2019-06-11.tar.gz">
afp-Abstract_Completeness-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Abstract_Completeness-2018-08-16.tar.gz">
afp-Abstract_Completeness-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Abstract_Completeness-2017-10-10.tar.gz">
afp-Abstract_Completeness-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Abstract_Completeness-2016-12-17.tar.gz">
afp-Abstract_Completeness-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Abstract_Completeness-2016-02-22.tar.gz">
afp-Abstract_Completeness-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Abstract_Completeness-2015-05-27.tar.gz">
afp-Abstract_Completeness-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Abstract_Completeness-2014-08-28.tar.gz">
afp-Abstract_Completeness-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Abstract_Completeness-2014-04-16.tar.gz">
afp-Abstract_Completeness-2014-04-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
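The recurring change in these generated pages moves the mathjax_process class from an inner <div> onto the abstract's <td> itself. Both variants rely on MathJax 3's default class options (ignoreHtmlClass: 'mathjax_ignore', processHtmlClass: 'mathjax_process'): the whole <body> opts out of typesetting and only elements carrying mathjax_process opt back in. A minimal self-contained sketch of the pattern, for illustration only (not taken from the AFP templates):

    <!-- assumes MathJax 3 with its default ignore/process classes -->
    <body class="mathjax_ignore">              <!-- nothing on the page is typeset -->
      <p>Inline $x^2$ here stays verbatim.</p>
      <div class="abstract mathjax_process">   <!-- this element opts back in -->
        Here $x^2$ is rendered by MathJax.
      </div>
    </body>

Putting the class directly on the table cell, as the new version does, drops one wrapper element per abstract without changing which text gets typeset. Incidentally, processEscapes sits at the top level of the config above; as a TeX input option it would normally live inside the tex block, though MathJax 3 appears to enable it by default in any case.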
diff --git a/web/entries/Abstract_Soundness.html b/web/entries/Abstract_Soundness.html
--- a/web/entries/Abstract_Soundness.html
+++ b/web/entries/Abstract_Soundness.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Abstract Soundness - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>bstract
<font class="first">S</font>oundness
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Abstract Soundness</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl),
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk) and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-02-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A formalized coinductive account of the abstract development of
Brotherston, Gorogiannis, and Petersen [APLAS 2012], in a slightly
more general form since we work with arbitrary infinite proofs, which
may be acyclic. This work is described in detail in an article by the
authors, published in 2017 in the <em>Journal of Automated
Reasoning</em>. The abstract proof can be instantiated for
various formalisms, including first-order logic with inductive
-predicates.</div></td>
+predicates.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Abstract_Soundness-AFP,
author = {Jasmin Christian Blanchette and Andrei Popescu and Dmitriy Traytel},
title = {Abstract Soundness},
journal = {Archive of Formal Proofs},
month = feb,
year = 2017,
note = {\url{http://isa-afp.org/entries/Abstract_Soundness.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract_Completeness.html">Abstract_Completeness</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abstract_Soundness/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Abstract_Soundness/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Abstract_Soundness/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Abstract_Soundness-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Abstract_Soundness-2019-06-11.tar.gz">
afp-Abstract_Soundness-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Abstract_Soundness-2018-08-16.tar.gz">
afp-Abstract_Soundness-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Abstract_Soundness-2017-10-10.tar.gz">
afp-Abstract_Soundness-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Abstract_Soundness-2017-02-13.tar.gz">
afp-Abstract_Soundness-2017-02-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Adaptive_State_Counting.html b/web/entries/Adaptive_State_Counting.html
--- a/web/entries/Adaptive_State_Counting.html
+++ b/web/entries/Adaptive_State_Counting.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalisation of an Adaptive State Counting Algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalisation
of
an
<font class="first">A</font>daptive
<font class="first">S</font>tate
<font class="first">C</font>ounting
<font class="first">A</font>lgorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalisation of an Adaptive State Counting Algorithm</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Robert Sachtleben (rob_sac /at/ uni-bremen /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-08-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides a formalisation of a refinement of an adaptive
state counting algorithm, used to test for reduction between finite
state machines. The algorithm was originally presented by Hierons
in the paper <a
href="https://doi.org/10.1109/TC.2004.85">Testing from a
Non-Deterministic Finite State Machine Using Adaptive State
Counting</a>. Definitions for finite state machines and
adaptive test cases are given and many useful theorems are derived
from these. The algorithm is formalised using mutually recursive
functions, for which it is proven that the generated test suite is
sufficient to test for reduction against finite state machines of a
certain fault domain. Additionally, the algorithm is specified in a
-simple WHILE-language and its correctness is shown using Hoare-logic.</div></td>
+simple WHILE-language and its correctness is shown using Hoare-logic.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Adaptive_State_Counting-AFP,
author = {Robert Sachtleben},
title = {Formalisation of an Adaptive State Counting Algorithm},
journal = {Archive of Formal Proofs},
month = aug,
year = 2019,
note = {\url{http://isa-afp.org/entries/Adaptive_State_Counting.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Transition_Systems_and_Automata.html">Transition_Systems_and_Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Adaptive_State_Counting/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Adaptive_State_Counting/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Adaptive_State_Counting/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Adaptive_State_Counting-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Adaptive_State_Counting-2019-08-19.tar.gz">
afp-Adaptive_State_Counting-2019-08-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
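The notion of reduction used in the abstract above is, in the FSM-testing literature, commonly defined as inclusion of output behaviour (sketch of the usual definition, not quoted from the entry):

    M' \preceq M \;\iff\; \forall \bar{x} \in I^{*}.\;\; \mathrm{out}(M', \bar{x}) \subseteq \mathrm{out}(M, \bar{x}),

i.e. for every input sequence, every output sequence the implementation M' can produce is also one the (possibly non-deterministic) specification M can produce. Sufficiency of the generated test suite then means it exposes M' \not\preceq M for every implementation M' in the stated fault domain.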
diff --git a/web/entries/Affine_Arithmetic.html b/web/entries/Affine_Arithmetic.html
--- a/web/entries/Affine_Arithmetic.html
+++ b/web/entries/Affine_Arithmetic.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Affine Arithmetic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>ffine
<font class="first">A</font>rithmetic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Affine Arithmetic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-02-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We give a formalization of affine forms as abstract representations of zonotopes.
We provide affine operations as well as overapproximations of some non-affine operations like multiplication and division.
Expressions involving those operations can automatically be turned into (executable) functions approximating the original
-expression in affine arithmetic.</div></td>
+expression in affine arithmetic.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-01-31]: added algorithm for zonotope/hyperplane intersection<br>
[2017-09-20]: linear approximations for all symbols from the floatarith data
type</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Affine_Arithmetic-AFP,
author = {Fabian Immler},
title = {Affine Arithmetic},
journal = {Archive of Formal Proofs},
month = feb,
year = 2014,
note = {\url{http://isa-afp.org/entries/Affine_Arithmetic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Deriving.html">Deriving</a>, <a href="List-Index.html">List-Index</a>, <a href="Show.html">Show</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Ordinary_Differential_Equations.html">Ordinary_Differential_Equations</a>, <a href="Taylor_Models.html">Taylor_Models</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Affine_Arithmetic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Affine_Arithmetic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Affine_Arithmetic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Affine_Arithmetic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Affine_Arithmetic-2019-06-11.tar.gz">
afp-Affine_Arithmetic-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Affine_Arithmetic-2018-08-16.tar.gz">
afp-Affine_Arithmetic-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Affine_Arithmetic-2017-10-10.tar.gz">
afp-Affine_Arithmetic-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Affine_Arithmetic-2016-12-17.tar.gz">
afp-Affine_Arithmetic-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Affine_Arithmetic-2016-02-22.tar.gz">
afp-Affine_Arithmetic-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Affine_Arithmetic-2015-05-27.tar.gz">
afp-Affine_Arithmetic-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Affine_Arithmetic-2014-08-28.tar.gz">
afp-Affine_Arithmetic-2014-08-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
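As background for the abstract above, an affine form over noise symbols is usually written as follows (textbook definition, not taken from the entry):

    \hat{x} \;=\; x_0 + \sum_{i=1}^{n} x_i\,\varepsilon_i, \qquad \varepsilon_i \in [-1, 1],

which denotes the zonotope \{\, x_0 + \sum_i x_i e_i \mid e_i \in [-1,1] \,\}. Affine operations are exact,

    \alpha\hat{x} + \beta\hat{y} + \gamma \;=\; (\alpha x_0 + \beta y_0 + \gamma) + \sum_{i} (\alpha x_i + \beta y_i)\,\varepsilon_i,

while non-affine operations such as multiplication and division introduce a fresh noise symbol bounding the linearisation error; this is the overapproximation the entry refers to.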
diff --git a/web/entries/Aggregation_Algebras.html b/web/entries/Aggregation_Algebras.html
--- a/web/entries/Aggregation_Algebras.html
+++ b/web/entries/Aggregation_Algebras.html
@@ -1,197 +1,197 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Aggregation Algebras - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>ggregation
<font class="first">A</font>lgebras
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Aggregation Algebras</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-09-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We develop algebras for aggregation and minimisation for weight
matrices and for edge weights in graphs. We verify the correctness of
Prim's and Kruskal's minimum spanning tree algorithms based
on these algebras. We also show numerous instances of these algebras
-based on linearly ordered commutative semigroups.</div></td>
+based on linearly ordered commutative semigroups.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Aggregation_Algebras-AFP,
author = {Walter Guttmann},
title = {Aggregation Algebras},
journal = {Archive of Formal Proofs},
month = sep,
year = 2018,
note = {\url{http://isa-afp.org/entries/Aggregation_Algebras.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Stone_Kleene_Relation_Algebras.html">Stone_Kleene_Relation_Algebras</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Aggregation_Algebras/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Aggregation_Algebras/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Aggregation_Algebras/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Aggregation_Algebras-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Aggregation_Algebras-2019-06-11.tar.gz">
afp-Aggregation_Algebras-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Aggregation_Algebras-2018-09-16.tar.gz">
afp-Aggregation_Algebras-2018-09-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
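The classical correctness argument for Prim's and Kruskal's algorithms mentioned above is the cut (exchange) property; whether the entry follows exactly this route is not stated in the abstract, but as a reminder of the standard fact (with the entry generalising weights from numbers to linearly ordered commutative semigroups):

    \text{For every cut } (S, V \setminus S) \text{ of a connected weighted graph, some}
    \text{minimum-weight edge crossing the cut belongs to a minimum spanning tree.}

Both greedy algorithms only ever add such cut-minimal edges, which is why they return a spanning tree of minimum aggregated weight.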
diff --git a/web/entries/Akra_Bazzi.html b/web/entries/Akra_Bazzi.html
--- a/web/entries/Akra_Bazzi.html
+++ b/web/entries/Akra_Bazzi.html
@@ -1,238 +1,238 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Akra-Bazzi theorem and the Master theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">A</font>kra-Bazzi
theorem
and
the
<font class="first">M</font>aster
theorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Akra-Bazzi theorem and the Master theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-07-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This article contains a formalisation of the Akra-Bazzi method
+ <td class="abstract mathjax_process">This article contains a formalisation of the Akra-Bazzi method
based on a proof by Leighton. It is a generalisation of the well-known
Master Theorem for analysing the complexity of Divide & Conquer algorithms.
We also include a generalised version of the Master theorem based on the
Akra-Bazzi theorem, which is easier to apply than the Akra-Bazzi theorem
itself.
<p>
Some proof methods that facilitate applying the Master theorem are also
included. For a more detailed explanation of the formalisation and the
-proof methods, see the accompanying paper (publication forthcoming).</div></td>
+proof methods, see the accompanying paper (publication forthcoming).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Akra_Bazzi-AFP,
author = {Manuel Eberl},
title = {The Akra-Bazzi theorem and the Master theorem},
journal = {Archive of Formal Proofs},
month = jul,
year = 2015,
note = {\url{http://isa-afp.org/entries/Akra_Bazzi.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Landau_Symbols.html">Landau_Symbols</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Closest_Pair_Points.html">Closest_Pair_Points</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Akra_Bazzi/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Akra_Bazzi/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Akra_Bazzi/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Akra_Bazzi-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Akra_Bazzi-2019-06-11.tar.gz">
afp-Akra_Bazzi-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Akra_Bazzi-2018-08-16.tar.gz">
afp-Akra_Bazzi-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Akra_Bazzi-2017-10-10.tar.gz">
afp-Akra_Bazzi-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Akra_Bazzi-2016-12-17.tar.gz">
afp-Akra_Bazzi-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Akra_Bazzi-2016-02-22.tar.gz">
afp-Akra_Bazzi-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Akra_Bazzi-2015-07-24.tar.gz">
afp-Akra_Bazzi-2015-07-24.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Akra_Bazzi-2015-07-15.tar.gz">
afp-Akra_Bazzi-2015-07-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
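For context, the Akra-Bazzi theorem referred to above is usually stated for recurrences of the shape (standard formulation, e.g. following Leighton; not quoted from the entry):

    T(x) \;=\; g(x) + \sum_{i=1}^{k} a_i\, T\!\bigl(b_i x + h_i(x)\bigr), \qquad a_i > 0,\; 0 < b_i < 1,

with g of polynomial growth and the perturbations small, e.g. |h_i(x)| \in O(x/\log^{2} x). Then

    T(x) \;\in\; \Theta\!\Bigl( x^{p}\bigl(1 + \textstyle\int_{1}^{x} \tfrac{g(u)}{u^{p+1}}\,du\bigr) \Bigr),
    \qquad\text{where } p \text{ is the unique real with } \sum_{i=1}^{k} a_i b_i^{\,p} = 1.

The classical Master theorem for T(n) = a\,T(n/b) + f(n) is the special case k = 1, a_1 = a, b_1 = 1/b, giving p = \log_b a.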
diff --git a/web/entries/Algebraic_Numbers.html b/web/entries/Algebraic_Numbers.html
--- a/web/entries/Algebraic_Numbers.html
+++ b/web/entries/Algebraic_Numbers.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Algebraic Numbers in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>lgebraic
<font class="first">N</font>umbers
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Algebraic Numbers in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>,
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a> and
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Based on existing libraries for matrices, factorization of rational polynomials, and Sturm's theorem, we formalized algebraic numbers in Isabelle/HOL. Our development serves as an implementation for real and complex numbers, and it admits to compute roots and completely factorize real and complex polynomials, provided that all coefficients are rational numbers. Moreover, we provide two implementations to display algebraic numbers, an injective and expensive one, or a faster but approximative version.
+ <td class="abstract mathjax_process">Based on existing libraries for matrices, factorization of rational polynomials, and Sturm's theorem, we formalized algebraic numbers in Isabelle/HOL. Our development serves as an implementation for real and complex numbers, and it admits to compute roots and completely factorize real and complex polynomials, provided that all coefficients are rational numbers. Moreover, we provide two implementations to display algebraic numbers, an injective and expensive one, or a faster but approximative version.
</p><p>
To this end, we mechanized several results on resultants, which also required us to prove that polynomials over a unique factorization domain form again a unique factorization domain.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2016-01-29]: Split off Polynomial Interpolation and Polynomial Factorization<br>
[2017-04-16]: Use certified Berlekamp-Zassenhaus factorization, use subresultant algorithm for computing resultants, improved bisection algorithm</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Algebraic_Numbers-AFP,
author = {René Thiemann and Akihisa Yamada and Sebastiaan Joosten},
title = {Algebraic Numbers in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Algebraic_Numbers.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Berlekamp_Zassenhaus.html">Berlekamp_Zassenhaus</a>, <a href="Sturm_Sequences.html">Sturm_Sequences</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="LLL_Basis_Reduction.html">LLL_Basis_Reduction</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Algebraic_Numbers/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Algebraic_Numbers/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Algebraic_Numbers/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Algebraic_Numbers-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Algebraic_Numbers-2019-06-11.tar.gz">
afp-Algebraic_Numbers-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Algebraic_Numbers-2018-08-16.tar.gz">
afp-Algebraic_Numbers-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Algebraic_Numbers-2017-10-10.tar.gz">
afp-Algebraic_Numbers-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Algebraic_Numbers-2016-12-17.tar.gz">
afp-Algebraic_Numbers-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Algebraic_Numbers-2016-02-22.tar.gz">
afp-Algebraic_Numbers-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Algebraic_Numbers-2015-12-22.tar.gz">
afp-Algebraic_Numbers-2015-12-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
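The role of resultants in the abstract above is the standard one for computing with algebraic numbers (sketch of the classical facts, not quoted from the entry): the resultant \mathrm{res}(f, g) of two non-constant polynomials vanishes exactly when f and g share a root in the algebraic closure, and for roots \alpha of f and \beta of g,

    \alpha + \beta \text{ is a root of } \mathrm{res}_y\bigl(f(x - y),\, g(y)\bigr) \in \mathbb{Q}[x],

so sums (and, by similar constructions, products and inverses) of algebraic numbers are again algebraic and come with an explicit defining polynomial.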
diff --git a/web/entries/Algebraic_VCs.html b/web/entries/Algebraic_VCs.html
--- a/web/entries/Algebraic_VCs.html
+++ b/web/entries/Algebraic_VCs.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Program Construction and Verification Components Based on Kleene Algebra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>rogram
<font class="first">C</font>onstruction
and
<font class="first">V</font>erification
<font class="first">C</font>omponents
<font class="first">B</font>ased
on
<font class="first">K</font>leene
<font class="first">A</font>lgebra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Program Construction and Verification Components Based on Kleene Algebra</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Victor B. F. Gomes (vb358 /at/ cl /dot/ cam /dot/ ac /dot/ uk) and
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Variants of Kleene algebra support program construction and
verification by algebraic reasoning. This entry provides a
verification component for Hoare logic based on Kleene algebra with
tests, verification components for weakest preconditions and strongest
postconditions based on Kleene algebra with domain and a component for
step-wise refinement based on refinement Kleene algebra with tests. In
addition to these components for the partial correctness of while
programs, a verification component for total correctness based on
divergence Kleene algebras and one for (partial correctness) of
recursive programs based on domain quantales are provided. Finally we
have integrated memory models for programs with pointers and a program
-trace semantics into the weakest precondition component.</div></td>
+trace semantics into the weakest precondition component.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Algebraic_VCs-AFP,
author = {Victor B. F. Gomes and Georg Struth},
title = {Program Construction and Verification Components Based on Kleene Algebra},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Algebraic_VCs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="KAD.html">KAD</a>, <a href="KAT_and_DRA.html">KAT_and_DRA</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Algebraic_VCs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Algebraic_VCs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Algebraic_VCs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Algebraic_VCs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Algebraic_VCs-2019-06-11.tar.gz">
afp-Algebraic_VCs-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Algebraic_VCs-2018-08-16.tar.gz">
afp-Algebraic_VCs-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Algebraic_VCs-2017-10-10.tar.gz">
afp-Algebraic_VCs-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Algebraic_VCs-2016-12-17.tar.gz">
afp-Algebraic_VCs-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Algebraic_VCs-2016-06-18.tar.gz">
afp-Algebraic_VCs-2016-06-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
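To make the abstract above concrete, Hoare-logic components over Kleene algebra with tests classically rest on Kozen's encoding of Hoare triples (whether the entry uses exactly this phrasing is not stated in the abstract):

    \{p\}\; x\; \{q\} \text{ is valid } \;\iff\; p\,x\,\neg q = 0 \;\iff\; p\,x = p\,x\,q \;\iff\; p\,x \le x\,q,

where p, q range over tests (the Boolean subalgebra) and x over programs; the usual Hoare rules then become algebraic (in)equalities that can be discharged by equational reasoning.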
diff --git a/web/entries/Allen_Calculus.html b/web/entries/Allen_Calculus.html
--- a/web/entries/Allen_Calculus.html
+++ b/web/entries/Allen_Calculus.html
@@ -1,233 +1,233 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Allen's Interval Calculus - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>llen's
<font class="first">I</font>nterval
<font class="first">C</font>alculus
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Allen's Interval Calculus</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Fadoua Ghourabi
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-09-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Allen’s interval calculus is a qualitative temporal representation of
time events. Allen introduced 13 binary relations that describe all
the possible arrangements between two events, i.e. intervals with
non-zero finite length. The compositions are pertinent to
reasoning about knowledge of time. In particular, a consistency
problem of relation constraints is commonly solved with a guideline
from these compositions. We formalize the relations together with an
axiomatic system. We prove the validity of the 169 compositions of
these relations. We also define nests as the sets of intervals that
share a meeting point. We prove that nests give the ordering
properties of points without introducing a new datatype for points.
[1] J.F. Allen. Maintaining Knowledge about Temporal Intervals. In
Commun. ACM, volume 26, pages 832–843, 1983. [2] J. F. Allen and P. J.
Hayes. A Common-sense Theory of Time. In Proceedings of the 9th
International Joint Conference on Artificial Intelligence (IJCAI’85),
-pages 528–531, 1985.</div></td>
+pages 528–531, 1985.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Allen_Calculus-AFP,
author = {Fadoua Ghourabi},
title = {Allen's Interval Calculus},
journal = {Archive of Formal Proofs},
month = sep,
year = 2016,
note = {\url{http://isa-afp.org/entries/Allen_Calculus.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Allen_Calculus/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Allen_Calculus/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Allen_Calculus/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Allen_Calculus-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Allen_Calculus-2019-06-28.tar.gz">
afp-Allen_Calculus-2019-06-28.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Allen_Calculus-2019-06-11.tar.gz">
afp-Allen_Calculus-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Allen_Calculus-2018-08-16.tar.gz">
afp-Allen_Calculus-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Allen_Calculus-2017-10-10.tar.gz">
afp-Allen_Calculus-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Allen_Calculus-2016-12-17.tar.gz">
afp-Allen_Calculus-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Allen_Calculus-2016-10-05.tar.gz">
afp-Allen_Calculus-2016-10-05.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Allen_Calculus-2016-09-29.tar.gz">
afp-Allen_Calculus-2016-09-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
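As a reminder of the combinatorics behind the abstract above (standard presentation of Allen's relations, not quoted from the entry): the thirteen basic relations are before, meets, overlaps, starts, during, finishes, their six inverses, and equality,

    6 \times 2 + 1 = 13, \qquad 13 \times 13 = 169,

which is where the 169 verified compositions come from.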
diff --git a/web/entries/Amortized_Complexity.html b/web/entries/Amortized_Complexity.html
--- a/web/entries/Amortized_Complexity.html
+++ b/web/entries/Amortized_Complexity.html
@@ -1,247 +1,247 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Amortized Complexity Verified - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>mortized
<font class="first">C</font>omplexity
<font class="first">V</font>erified
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Amortized Complexity Verified</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-07-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A framework for the analysis of the amortized complexity of functional
data structures is formalized in Isabelle/HOL and applied to a number of
standard examples and to the following non-trivial ones: skew heaps,
splay trees, splay heaps and pairing heaps.
<p>
A preliminary version of this work (without pairing heaps) is described
in a <a href="http://www21.in.tum.de/~nipkow/pubs/itp15.html">paper</a>
published in the proceedings of the conference on Interactive
Theorem Proving ITP 2015. An extended version of this publication
-is available <a href="http://www21.in.tum.de/~nipkow/pubs/jfp16.html">here</a>.</div></td>
+is available <a href="http://www21.in.tum.de/~nipkow/pubs/jfp16.html">here</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-03-17]: Added pairing heaps by Hauke Brinkop.<br>
[2016-07-12]: Moved splay heaps from here to Splay_Tree<br>
[2016-07-14]: Moved pairing heaps from here to the new Pairing_Heap</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Amortized_Complexity-AFP,
author = {Tobias Nipkow},
title = {Amortized Complexity Verified},
journal = {Archive of Formal Proofs},
month = jul,
year = 2014,
note = {\url{http://isa-afp.org/entries/Amortized_Complexity.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Pairing_Heap.html">Pairing_Heap</a>, <a href="Skew_Heap.html">Skew_Heap</a>, <a href="Splay_Tree.html">Splay_Tree</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Dynamic_Tables.html">Dynamic_Tables</a>, <a href="Root_Balanced_Tree.html">Root_Balanced_Tree</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Amortized_Complexity/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Amortized_Complexity/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Amortized_Complexity/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Amortized_Complexity-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Amortized_Complexity-2019-06-11.tar.gz">
afp-Amortized_Complexity-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Amortized_Complexity-2018-08-16.tar.gz">
afp-Amortized_Complexity-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Amortized_Complexity-2017-10-10.tar.gz">
afp-Amortized_Complexity-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Amortized_Complexity-2016-12-17.tar.gz">
afp-Amortized_Complexity-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Amortized_Complexity-2016-02-22.tar.gz">
afp-Amortized_Complexity-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Amortized_Complexity-2015-05-28.tar.gz">
afp-Amortized_Complexity-2015-05-28.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Amortized_Complexity-2015-05-27.tar.gz">
afp-Amortized_Complexity-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Amortized_Complexity-2015-05-19.tar.gz">
afp-Amortized_Complexity-2015-05-19.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Amortized_Complexity-2014-08-28.tar.gz">
afp-Amortized_Complexity-2014-08-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/AnselmGod.html b/web/entries/AnselmGod.html
--- a/web/entries/AnselmGod.html
+++ b/web/entries/AnselmGod.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Anselm's God in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>nselm's
<font class="first">G</font>od
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Anselm's God in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://philpapers.org/profile/805">Ben Blumson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-09-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Paul Oppenheimer and Edward Zalta's formalisation of
Anselm's ontological argument for the existence of God is
automated by embedding a free logic for definite descriptions within
-Isabelle/HOL.</div></td>
+Isabelle/HOL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{AnselmGod-AFP,
author = {Ben Blumson},
title = {Anselm's God in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = sep,
year = 2017,
note = {\url{http://isa-afp.org/entries/AnselmGod.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AnselmGod/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/AnselmGod/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AnselmGod/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-AnselmGod-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-AnselmGod-2019-06-11.tar.gz">
afp-AnselmGod-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-AnselmGod-2018-08-16.tar.gz">
afp-AnselmGod-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-AnselmGod-2017-10-10.tar.gz">
afp-AnselmGod-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-AnselmGod-2017-09-18.tar.gz">
afp-AnselmGod-2017-09-18.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-AnselmGod-2017-09-11.tar.gz">
afp-AnselmGod-2017-09-11.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-AnselmGod-2017-09-08.tar.gz">
afp-AnselmGod-2017-09-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Applicative_Lifting.html b/web/entries/Applicative_Lifting.html
--- a/web/entries/Applicative_Lifting.html
+++ b/web/entries/Applicative_Lifting.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Applicative Lifting - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>pplicative
<font class="first">L</font>ifting
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Applicative Lifting</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
Joshua Schneider
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Applicative functors augment computations with effects by lifting function application to types which model the effects. As the structure of the computation cannot depend on the effects, applicative expressions can be analysed statically. This allows us to lift universally quantified equations to the effectful types, as observed by Hinze. Thus, equational reasoning over effectful computations can be reduced to pure types.
+ <td class="abstract mathjax_process">Applicative functors augment computations with effects by lifting function application to types which model the effects. As the structure of the computation cannot depend on the effects, applicative expressions can be analysed statically. This allows us to lift universally quantified equations to the effectful types, as observed by Hinze. Thus, equational reasoning over effectful computations can be reduced to pure types.
</p><p>
This entry provides a package for registering applicative functors and two proof methods for lifting of equations over applicative functors. The first method normalises applicative expressions according to the laws of applicative functors. This way, equations whose two sides contain the same list of variables can be lifted to every applicative functor.
</p><p>
To lift larger classes of equations, the second method exploits a number of additional properties (e.g., commutativity of effects) provided the properties have been declared for the concrete applicative functor at hand upon registration.
</p><p>
We declare several types from the Isabelle library as applicative functors and illustrate the use of the methods with two examples: the lifting of the arithmetic type class hierarchy to streams and the verification of a relabelling function on binary trees. We also formalise and verify the normalisation algorithm used by the first proof method.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2016-03-03]: added formalisation of lifting with combinators<br>
[2016-06-10]:
implemented automatic derivation of lifted combinator reductions;
support arbitrary lifted relations using relators;
improved compatibility with locale interpretation
(revision ec336f354f37)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Applicative_Lifting-AFP,
author = {Andreas Lochbihler and Joshua Schneider},
title = {Applicative Lifting},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Applicative_Lifting.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CryptHOL.html">CryptHOL</a>, <a href="Free-Groups.html">Free-Groups</a>, <a href="Locally-Nameless-Sigma.html">Locally-Nameless-Sigma</a>, <a href="Stern_Brocot.html">Stern_Brocot</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Applicative_Lifting/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Applicative_Lifting/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Applicative_Lifting/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Applicative_Lifting-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Applicative_Lifting-2019-06-11.tar.gz">
afp-Applicative_Lifting-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Applicative_Lifting-2018-08-16.tar.gz">
afp-Applicative_Lifting-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Applicative_Lifting-2017-10-10.tar.gz">
afp-Applicative_Lifting-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Applicative_Lifting-2016-12-17.tar.gz">
afp-Applicative_Lifting-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Applicative_Lifting-2016-02-22.tar.gz">
afp-Applicative_Lifting-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Applicative_Lifting-2015-12-22.tar.gz">
afp-Applicative_Lifting-2015-12-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Approximation_Algorithms.html b/web/entries/Approximation_Algorithms.html
--- a/web/entries/Approximation_Algorithms.html
+++ b/web/entries/Approximation_Algorithms.html
@@ -1,193 +1,193 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verified Approximation Algorithms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erified
<font class="first">A</font>pproximation
<font class="first">A</font>lgorithms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verified Approximation Algorithms</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Robin Eßmann (robin /dot/ essmann /at/ tum /dot/ de),
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a> and
<a href="https://simon-robillard.net/">Simon Robillard</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-01-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present the first formal verification of approximation algorithms
for NP-complete optimization problems: vertex cover, independent set,
load balancing, and bin packing. The proofs correct incompletenesses
-in existing proofs and improve the approximation ratio in one case.</div></td>
+in existing proofs and improve the approximation ratio in one case.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Approximation_Algorithms-AFP,
author = {Robin Eßmann and Tobias Nipkow and Simon Robillard},
title = {Verified Approximation Algorithms},
journal = {Archive of Formal Proofs},
month = jan,
year = 2020,
note = {\url{http://isa-afp.org/entries/Approximation_Algorithms.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Approximation_Algorithms/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Approximation_Algorithms/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Approximation_Algorithms/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Approximation_Algorithms-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Approximation_Algorithms-2020-01-16.tar.gz">
afp-Approximation_Algorithms-2020-01-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Architectural_Design_Patterns.html b/web/entries/Architectural_Design_Patterns.html
--- a/web/entries/Architectural_Design_Patterns.html
+++ b/web/entries/Architectural_Design_Patterns.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Theory of Architectural Design Patterns - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">T</font>heory
of
<font class="first">A</font>rchitectural
<font class="first">D</font>esign
<font class="first">P</font>atterns
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Theory of Architectural Design Patterns</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://marmsoler.com">Diego Marmsoler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-03-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The following document formalizes and verifies several architectural
design patterns. Each pattern specification is formalized in terms of
a locale where the locale assumptions correspond to the assumptions
which a pattern poses on an architecture. Thus, pattern specifications
may build on top of each other by interpreting the corresponding
locale. A pattern is verified using the framework provided by the AFP
entry Dynamic Architectures. Currently, the document consists of
formalizations of 4 different patterns: the singleton, the publisher
subscriber, the blackboard pattern, and the blockchain pattern.
Thereby, the publisher component of the publisher subscriber pattern
is modeled as an instance of the singleton pattern and the blackboard
pattern is modeled as an instance of the publisher subscriber pattern.
In general, this entry provides the first steps towards an overall
-theory of architectural design patterns.</div></td>
+theory of architectural design patterns.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-05-25]: changing the major assumption for blockchain architectures from alternative minings to relative mining frequencies (revision 5043c5c71685)<br>
[2019-04-08]: adapting the terminology: honest instead of trusted, dishonest instead of untrusted (revision 7af3431a22ae)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Architectural_Design_Patterns-AFP,
author = {Diego Marmsoler},
title = {A Theory of Architectural Design Patterns},
journal = {Archive of Formal Proofs},
month = mar,
year = 2018,
note = {\url{http://isa-afp.org/entries/Architectural_Design_Patterns.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="DynamicArchitectures.html">DynamicArchitectures</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Architectural_Design_Patterns/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Architectural_Design_Patterns/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Architectural_Design_Patterns/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Architectural_Design_Patterns-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Architectural_Design_Patterns-2019-06-11.tar.gz">
afp-Architectural_Design_Patterns-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Architectural_Design_Patterns-2018-08-16.tar.gz">
afp-Architectural_Design_Patterns-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Architectural_Design_Patterns-2018-03-01.tar.gz">
afp-Architectural_Design_Patterns-2018-03-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Aristotles_Assertoric_Syllogistic.html b/web/entries/Aristotles_Assertoric_Syllogistic.html
--- a/web/entries/Aristotles_Assertoric_Syllogistic.html
+++ b/web/entries/Aristotles_Assertoric_Syllogistic.html
@@ -1,198 +1,198 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Aristotle's Assertoric Syllogistic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>ristotle's
<font class="first">A</font>ssertoric
<font class="first">S</font>yllogistic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Aristotle's Assertoric Syllogistic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~ak2110/">Angeliki Koutsoukou-Argyraki</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-10-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalise with Isabelle/HOL some basic elements of Aristotle's
assertoric syllogistic following the <a
href="https://plato.stanford.edu/entries/aristotle-logic/">article from the Stanford Encyclopedia of Philosophy by Robin Smith.</a> To
this end, we use a set theoretic formulation (covering both individual
and general predication). In particular, we formalise the deductions
in the Figures and after that we present Aristotle's
metatheoretical observation that all deductions in the Figures can in
fact be reduced to either Barbara or Celarent. As the formal proofs
prove to be straightforward, the interest of this entry lies in
illustrating the functionality of Isabelle and high efficiency of
-Sledgehammer for simple exercises in philosophy.</div></td>
+Sledgehammer for simple exercises in philosophy.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Aristotles_Assertoric_Syllogistic-AFP,
author = {Angeliki Koutsoukou-Argyraki},
title = {Aristotle's Assertoric Syllogistic},
journal = {Archive of Formal Proofs},
month = oct,
year = 2019,
note = {\url{http://isa-afp.org/entries/Aristotles_Assertoric_Syllogistic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Aristotles_Assertoric_Syllogistic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Aristotles_Assertoric_Syllogistic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Aristotles_Assertoric_Syllogistic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Aristotles_Assertoric_Syllogistic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Aristotles_Assertoric_Syllogistic-2019-10-17.tar.gz">
afp-Aristotles_Assertoric_Syllogistic-2019-10-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Arith_Prog_Rel_Primes.html b/web/entries/Arith_Prog_Rel_Primes.html
--- a/web/entries/Arith_Prog_Rel_Primes.html
+++ b/web/entries/Arith_Prog_Rel_Primes.html
@@ -1,198 +1,198 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Arithmetic progressions and relative primes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>rithmetic
progressions
and
relative
primes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Arithmetic progressions and relative primes</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://josephcmac.github.io/">José Manuel Rodríguez Caballero</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-02-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This article provides a formalization of the solution obtained by the
author of the Problem “ARITHMETIC PROGRESSIONS” from the
<a href="https://www.ocf.berkeley.edu/~wwu/riddles/putnam.shtml">
Putnam exam problems of 2002</a>. The statement of the problem is
as follows: For which integers <em>n</em> > 1 does the set of positive
integers less than and relatively prime to <em>n</em> constitute an
-arithmetic progression?</div></td>
+arithmetic progression?</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Arith_Prog_Rel_Primes-AFP,
author = {José Manuel Rodríguez Caballero},
title = {Arithmetic progressions and relative primes},
journal = {Archive of Formal Proofs},
month = feb,
year = 2020,
note = {\url{http://isa-afp.org/entries/Arith_Prog_Rel_Primes.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Arith_Prog_Rel_Primes/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Arith_Prog_Rel_Primes/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Arith_Prog_Rel_Primes/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Arith_Prog_Rel_Primes-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Arith_Prog_Rel_Primes-2020-02-10.tar.gz">
afp-Arith_Prog_Rel_Primes-2020-02-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/ArrowImpossibilityGS.html b/web/entries/ArrowImpossibilityGS.html
--- a/web/entries/ArrowImpossibilityGS.html
+++ b/web/entries/ArrowImpossibilityGS.html
@@ -1,272 +1,272 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Arrow and Gibbard-Satterthwaite - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>rrow
and
<font class="first">G</font>ibbard-Satterthwaite
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Arrow and Gibbard-Satterthwaite</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-09-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This article formalizes two proofs of Arrow's impossibility theorem due to Geanakoplos and derives the Gibbard-Satterthwaite theorem as a corollary. One formalization is based on utility functions, the other one on strict partial orders.<br><br>An article about these proofs is found <a href="http://www21.in.tum.de/~nipkow/pubs/arrow.html">here</a>.</div></td>
+ <td class="abstract mathjax_process">This article formalizes two proofs of Arrow's impossibility theorem due to Geanakoplos and derives the Gibbard-Satterthwaite theorem as a corollary. One formalization is based on utility functions, the other one on strict partial orders.<br><br>An article about these proofs is found <a href="http://www21.in.tum.de/~nipkow/pubs/arrow.html">here</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{ArrowImpossibilityGS-AFP,
author = {Tobias Nipkow},
title = {Arrow and Gibbard-Satterthwaite},
journal = {Archive of Formal Proofs},
month = sep,
year = 2008,
note = {\url{http://isa-afp.org/entries/ArrowImpossibilityGS.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ArrowImpossibilityGS/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/ArrowImpossibilityGS/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ArrowImpossibilityGS/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-ArrowImpossibilityGS-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-ArrowImpossibilityGS-2019-06-11.tar.gz">
afp-ArrowImpossibilityGS-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-ArrowImpossibilityGS-2018-08-16.tar.gz">
afp-ArrowImpossibilityGS-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-ArrowImpossibilityGS-2017-10-10.tar.gz">
afp-ArrowImpossibilityGS-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-ArrowImpossibilityGS-2016-12-17.tar.gz">
afp-ArrowImpossibilityGS-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-ArrowImpossibilityGS-2016-02-22.tar.gz">
afp-ArrowImpossibilityGS-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-ArrowImpossibilityGS-2015-05-27.tar.gz">
afp-ArrowImpossibilityGS-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-ArrowImpossibilityGS-2014-08-28.tar.gz">
afp-ArrowImpossibilityGS-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-ArrowImpossibilityGS-2013-12-11.tar.gz">
afp-ArrowImpossibilityGS-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-ArrowImpossibilityGS-2013-11-17.tar.gz">
afp-ArrowImpossibilityGS-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-ArrowImpossibilityGS-2013-03-02.tar.gz">
afp-ArrowImpossibilityGS-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-ArrowImpossibilityGS-2013-02-16.tar.gz">
afp-ArrowImpossibilityGS-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-ArrowImpossibilityGS-2012-05-24.tar.gz">
afp-ArrowImpossibilityGS-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-ArrowImpossibilityGS-2011-10-11.tar.gz">
afp-ArrowImpossibilityGS-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-ArrowImpossibilityGS-2011-02-11.tar.gz">
afp-ArrowImpossibilityGS-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-ArrowImpossibilityGS-2010-06-30.tar.gz">
afp-ArrowImpossibilityGS-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-ArrowImpossibilityGS-2009-12-12.tar.gz">
afp-ArrowImpossibilityGS-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-ArrowImpossibilityGS-2009-09-29.tar.gz">
afp-ArrowImpossibilityGS-2009-09-29.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-ArrowImpossibilityGS-2009-04-29.tar.gz">
afp-ArrowImpossibilityGS-2009-04-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Attack_Trees.html b/web/entries/Attack_Trees.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Attack_Trees.html
@@ -0,0 +1,211 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="utf-8">
+<title>Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems - Archive of Formal Proofs
+</title>
+<link rel="stylesheet" type="text/css" href="../front.css">
+<link rel="icon" href="../images/favicon.ico" type="image/icon">
+<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
+<!-- MathJax for LaTeX support in abstracts -->
+<script>
+MathJax = {
+ tex: {
+ inlineMath: [['$', '$'], ['\\(', '\\)']]
+ },
+ processEscapes: true,
+ svg: {
+ fontCache: 'global'
+ }
+};
+</script>
+<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
+</head>
+
+<body class="mathjax_ignore">
+
+<table width="100%">
+<tbody>
+<tr>
+
+<!-- Navigation -->
+<td width="20%" align="center" valign="top">
+ <p>&nbsp;</p>
+ <a href="https://www.isa-afp.org/">
+ <img src="../images/isabelle.png" width="100" height="88" border=0>
+ </a>
+ <p>&nbsp;</p>
+ <p>&nbsp;</p>
+ <table class="nav" width="80%">
+ <tr>
+ <td class="nav" width="100%"><a href="../index.html">Home</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../about.html">About</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../submitting.html">Submission</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../updating.html">Updating Entries</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../using.html">Using Entries</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../search.html">Search</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../statistics.html">Statistics</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../topics.html">Index</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../download.html">Download</a></td>
+ </tr>
+ </table>
+ <p>&nbsp;</p>
+ <p>&nbsp;</p>
+</td>
+
+
+<!-- Content -->
+<td width="80%" valign="top">
+<div align="center">
+ <p>&nbsp;</p>
+ <h1> <font class="first">A</font>ttack
+
+ <font class="first">T</font>rees
+
+ in
+
+ <font class="first">I</font>sabelle
+
+ for
+
+ <font class="first">G</font>DPR
+
+ compliance
+
+ of
+
+ <font class="first">I</font>oT
+
+ healthcare
+
+ systems
+
+</h1>
+ <p>&nbsp;</p>
+
+<table width="80%" class="data">
+<tbody>
+<tr>
+ <td class="datahead" width="20%">Title:</td>
+ <td class="data" width="80%">Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems</td>
+</tr>
+
+<tr>
+ <td class="datahead">
+ Author:
+ </td>
+ <td class="data">
+ <a href="http://www.cs.mdx.ac.uk/people/florian-kammueller/">Florian Kammueller</a>
+ </td>
+</tr>
+
+
+
+<tr>
+ <td class="datahead">Submission date:</td>
+ <td class="data">2020-04-27</td>
+</tr>
+
+<tr>
+ <td class="datahead" valign="top">Abstract:</td>
+ <td class="abstract mathjax_process">
+In this article, we present a proof theory for Attack Trees. Attack
+Trees are a well established and useful model for the construction of
+attacks on systems since they allow a stepwise exploration of high
+level attacks in application scenarios. Using the expressiveness of
+Higher Order Logic in Isabelle, we develop a generic
+theory of Attack Trees with a state-based semantics based on Kripke
+structures and CTL. The resulting framework
+allows mechanically supported logic analysis of the meta-theory of the
+proof calculus of Attack Trees and at the same time the developed
+proof theory enables application to case studies. A central
+correctness and completeness result proved in Isabelle establishes a
+connection between the notion of Attack Tree validity and CTL. The
+application is illustrated on the example of a healthcare IoT system
+and GDPR compliance verification.</td>
+</tr>
+
+
+<tr>
+ <td class="datahead" valign="top">BibTeX:</td>
+ <td class="formatted">
+ <pre>@article{Attack_Trees-AFP,
+ author = {Florian Kammueller},
+ title = {Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems},
+ journal = {Archive of Formal Proofs},
+ month = apr,
+ year = 2020,
+ note = {\url{http://isa-afp.org/entries/Attack_Trees.html},
+ Formal proof development},
+ ISSN = {2150-914x},
+}</pre>
+ </td>
+</tr>
+
+ <tr><td class="datahead">License:</td>
+ <td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
+
+
+
+
+
+
+ </tbody>
+</table>
+
+<p></p>
+
+<table class="links">
+ <tbody>
+ <tr>
+ <td class="links">
+ <a href="../browser_info/current/AFP/Attack_Trees/outline.pdf">Proof outline</a><br>
+ <a href="../browser_info/current/AFP/Attack_Trees/document.pdf">Proof document</a>
+ </td>
+ </tr>
+ <tr>
+ <td class="links">
+ <a href="../browser_info/current/AFP/Attack_Trees/index.html">Browse theories</a>
+ </td></tr>
+ <tr>
+ <td class="links">
+ <a href="../release/afp-Attack_Trees-current.tar.gz">Download this entry</a>
+ </td>
+ </tr>
+
+
+ <tr><td class="links">Older releases:
+ None
+ </td></tr>
+
+ </tbody>
+</table>
+
+</div>
+</td>
+
+</tr>
+</tbody>
+</table>
+
+<script src="../jquery.min.js"></script>
+<script src="../script.js"></script>
+
+</body>
+</html>
\ No newline at end of file
diff --git a/web/entries/Auto2_HOL.html b/web/entries/Auto2_HOL.html
--- a/web/entries/Auto2_HOL.html
+++ b/web/entries/Auto2_HOL.html
@@ -1,197 +1,197 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Auto2 Prover - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>uto2
<font class="first">P</font>rover
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Auto2 Prover</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://lcs.ios.ac.cn/~bzhan/">Bohua Zhan</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-11-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Auto2 is a saturation-based heuristic prover for higher-order logic,
implemented as a tactic in Isabelle. This entry contains the
instantiation of auto2 for Isabelle/HOL, along with two basic
examples: solutions to some of the Pelletier’s problems, and
-elementary number theory of primes.</div></td>
+elementary number theory of primes.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Auto2_HOL-AFP,
author = {Bohua Zhan},
title = {Auto2 Prover},
journal = {Archive of Formal Proofs},
month = nov,
year = 2018,
note = {\url{http://isa-afp.org/entries/Auto2_HOL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Auto2_Imperative_HOL.html">Auto2_Imperative_HOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Auto2_HOL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Auto2_HOL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Auto2_HOL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Auto2_HOL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Auto2_HOL-2019-06-11.tar.gz">
afp-Auto2_HOL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Auto2_HOL-2018-11-29.tar.gz">
afp-Auto2_HOL-2018-11-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Auto2_Imperative_HOL.html b/web/entries/Auto2_Imperative_HOL.html
--- a/web/entries/Auto2_Imperative_HOL.html
+++ b/web/entries/Auto2_Imperative_HOL.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verifying Imperative Programs using Auto2 - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erifying
<font class="first">I</font>mperative
<font class="first">P</font>rograms
using
<font class="first">A</font>uto2
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verifying Imperative Programs using Auto2</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://lcs.ios.ac.cn/~bzhan/">Bohua Zhan</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-12-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry contains the application of auto2 to verifying functional
and imperative programs. Algorithms and data structures that are
verified include linked lists, binary search trees, red-black trees,
interval trees, priority queue, quicksort, union-find, Dijkstra's
algorithm, and a sweep-line algorithm for detecting rectangle
intersection. The imperative verification is based on Imperative HOL
and its separation logic framework. A major goal of this work is to
set up automation in order to reduce the length of proof that the user
needs to provide, both for verifying functional programs and for
-working with separation logic.</div></td>
+working with separation logic.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Auto2_Imperative_HOL-AFP,
author = {Bohua Zhan},
title = {Verifying Imperative Programs using Auto2},
journal = {Archive of Formal Proofs},
month = dec,
year = 2018,
note = {\url{http://isa-afp.org/entries/Auto2_Imperative_HOL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Auto2_HOL.html">Auto2_HOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Auto2_Imperative_HOL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Auto2_Imperative_HOL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Auto2_Imperative_HOL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Auto2_Imperative_HOL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Auto2_Imperative_HOL-2019-06-11.tar.gz">
afp-Auto2_Imperative_HOL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Auto2_Imperative_HOL-2019-01-22.tar.gz">
afp-Auto2_Imperative_HOL-2019-01-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
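
The abstract above lists union-find among the imperative data structures verified with auto2. For orientation, here is a plain Python sketch of the kind of imperative program meant (union by size with path compression); it only illustrates the algorithm itself and is not the Imperative HOL code that the entry verifies.

    class UnionFind:
        # Imperative union-find with union by size and path compression (halving):
        # one of the data structures whose Imperative HOL counterpart the entry verifies.
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.size[rx] < self.size[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            self.size[rx] += self.size[ry]
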
diff --git a/web/entries/AutoFocus-Stream.html b/web/entries/AutoFocus-Stream.html
--- a/web/entries/AutoFocus-Stream.html
+++ b/web/entries/AutoFocus-Stream.html
@@ -1,264 +1,264 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>utoFocus
<font class="first">S</font>tream
<font class="first">P</font>rocessing
for
<font class="first">S</font>ingle-Clocking
and
<font class="first">M</font>ulti-Clocking
<font class="first">S</font>emantics
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
David Trachtenherz
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-02-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize the AutoFocus Semantics (a time-synchronous subset of the Focus formalism) as stream processing functions on finite and infinite message streams represented as finite/infinite lists. The formalization comprises both the conventional single-clocking semantics (uniform global clock for all components and communications channels) and its extension to multi-clocking semantics (internal execution clocking of a component may be a multiple of the external communication clocking). The semantics is defined by generic stream processing functions making it suitable for simulation/code generation in Isabelle/HOL. Furthermore, a number of AutoFocus semantics properties are formalized using definitions from the IntervalLogic theories.</div></td>
+ <td class="abstract mathjax_process">We formalize the AutoFocus Semantics (a time-synchronous subset of the Focus formalism) as stream processing functions on finite and infinite message streams represented as finite/infinite lists. The formalization comprises both the conventional single-clocking semantics (uniform global clock for all components and communications channels) and its extension to multi-clocking semantics (internal execution clocking of a component may be a multiple of the external communication clocking). The semantics is defined by generic stream processing functions making it suitable for simulation/code generation in Isabelle/HOL. Furthermore, a number of AutoFocus semantics properties are formalized using definitions from the IntervalLogic theories.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{AutoFocus-Stream-AFP,
author = {David Trachtenherz},
title = {AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics},
journal = {Archive of Formal Proofs},
month = feb,
year = 2011,
note = {\url{http://isa-afp.org/entries/AutoFocus-Stream.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Nat-Interval-Logic.html">Nat-Interval-Logic</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AutoFocus-Stream/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/AutoFocus-Stream/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AutoFocus-Stream/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-AutoFocus-Stream-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-AutoFocus-Stream-2019-06-11.tar.gz">
afp-AutoFocus-Stream-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-AutoFocus-Stream-2018-08-16.tar.gz">
afp-AutoFocus-Stream-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-AutoFocus-Stream-2017-10-10.tar.gz">
afp-AutoFocus-Stream-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-AutoFocus-Stream-2016-12-17.tar.gz">
afp-AutoFocus-Stream-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-AutoFocus-Stream-2016-02-22.tar.gz">
afp-AutoFocus-Stream-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-AutoFocus-Stream-2015-05-27.tar.gz">
afp-AutoFocus-Stream-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-AutoFocus-Stream-2014-08-28.tar.gz">
afp-AutoFocus-Stream-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-AutoFocus-Stream-2013-12-11.tar.gz">
afp-AutoFocus-Stream-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-AutoFocus-Stream-2013-11-17.tar.gz">
afp-AutoFocus-Stream-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-AutoFocus-Stream-2013-03-08.tar.gz">
afp-AutoFocus-Stream-2013-03-08.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-AutoFocus-Stream-2013-02-16.tar.gz">
afp-AutoFocus-Stream-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-AutoFocus-Stream-2012-05-24.tar.gz">
afp-AutoFocus-Stream-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-AutoFocus-Stream-2011-10-11.tar.gz">
afp-AutoFocus-Stream-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-AutoFocus-Stream-2011-02-24.tar.gz">
afp-AutoFocus-Stream-2011-02-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
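
The abstract above distinguishes single-clocking from multi-clocking semantics, where a component's internal execution clock may be a multiple of the external communication clock. As a rough hand-rolled picture of such a clock ratio (not the entry's stream-processing semantics; the function and parameter names are made up for this sketch), one can think of a transformer that consumes one message per external tick but produces an output only on every k-th tick:

    def multi_clock(stream, k, step=sum):
        # Toy picture of a clock ratio k between communication and execution:
        # a message arrives on every external tick, the component "fires" on every
        # k-th tick and maps the buffered sub-stream to one output. The step
        # function is a placeholder for the component's transition behaviour.
        buffer = []
        for tick, msg in enumerate(stream, start=1):
            buffer.append(msg)
            if tick % k == 0:
                yield step(buffer)
                buffer.clear()

    print(list(multi_clock(range(1, 7), 2)))   # [3, 7, 11]
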
diff --git a/web/entries/Automatic_Refinement.html b/web/entries/Automatic_Refinement.html
--- a/web/entries/Automatic_Refinement.html
+++ b/web/entries/Automatic_Refinement.html
@@ -1,241 +1,241 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Automatic Data Refinement - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>utomatic
<font class="first">D</font>ata
<font class="first">R</font>efinement
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Automatic Data Refinement</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-10-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present the Autoref tool for Isabelle/HOL, which automatically
+ <td class="abstract mathjax_process">We present the Autoref tool for Isabelle/HOL, which automatically
refines algorithms specified over abstract concepts like maps
and sets to algorithms over concrete implementations like red-black-trees,
and produces a refinement theorem. It is based on ideas borrowed from
relational parametricity due to Reynolds and Wadler.
The tool allows for rapid prototyping of verified, executable algorithms.
Moreover, it can be configured to fine-tune the result to the user's needs.
Our tool is able to automatically instantiate generic algorithms, which
greatly simplifies the implementation of executable data structures.
<p>
This AFP-entry provides the basic tool, which is then used by the
Refinement and Collection Framework to provide automatic data refinement for
-the nondeterminism monad and various collection datastructures.</div></td>
+the nondeterminism monad and various collection data structures.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Automatic_Refinement-AFP,
author = {Peter Lammich},
title = {Automatic Data Refinement},
journal = {Archive of Formal Proofs},
month = oct,
year = 2013,
note = {\url{http://isa-afp.org/entries/Automatic_Refinement.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Containers.html">Containers</a>, <a href="Dict_Construction.html">Dict_Construction</a>, <a href="IP_Addresses.html">IP_Addresses</a>, <a href="JinjaThreads.html">JinjaThreads</a>, <a href="Network_Security_Policy_Verification.html">Network_Security_Policy_Verification</a>, <a href="Refine_Monadic.html">Refine_Monadic</a>, <a href="ROBDD.html">ROBDD</a>, <a href="Separation_Logic_Imperative_HOL.html">Separation_Logic_Imperative_HOL</a>, <a href="UpDown_Scheme.html">UpDown_Scheme</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Automatic_Refinement/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Automatic_Refinement/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Automatic_Refinement/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Automatic_Refinement-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Automatic_Refinement-2019-06-11.tar.gz">
afp-Automatic_Refinement-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Automatic_Refinement-2018-08-16.tar.gz">
afp-Automatic_Refinement-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Automatic_Refinement-2017-10-10.tar.gz">
afp-Automatic_Refinement-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Automatic_Refinement-2016-12-17.tar.gz">
afp-Automatic_Refinement-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Automatic_Refinement-2016-02-22.tar.gz">
afp-Automatic_Refinement-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Automatic_Refinement-2015-05-27.tar.gz">
afp-Automatic_Refinement-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Automatic_Refinement-2014-08-28.tar.gz">
afp-Automatic_Refinement-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Automatic_Refinement-2013-12-11.tar.gz">
afp-Automatic_Refinement-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Automatic_Refinement-2013-11-17.tar.gz">
afp-Automatic_Refinement-2013-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
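
The Autoref abstract above is about refining an algorithm phrased over abstract sets and maps to one over concrete data structures, together with a refinement theorem. The hand-rolled Python analogy below (a sorted list standing in for the red-black trees Autoref would pick; nothing here is the tool's API, and all names are invented for the sketch) shows the shape of such a data refinement: an abstract operation, a concrete one, the refinement relation between the two representations, and the correctness statement checked on an example.

    from bisect import bisect_left

    # Abstract level: a specification phrased over Python sets.
    def abstract_member(s: set, x) -> bool:
        return x in s

    # Concrete level: the same operation over a sorted, duplicate-free list.
    def concrete_member(xs: list, x) -> bool:
        i = bisect_left(xs, x)
        return i < len(xs) and xs[i] == x

    # Refinement relation: the list represents exactly the elements of the set.
    def refines(xs: list, s: set) -> bool:
        return xs == sorted(s)

    # The "refinement theorem" for this pair of operations, checked on an example.
    s = {3, 1, 7}
    xs = sorted(s)
    assert refines(xs, s)
    assert all(concrete_member(xs, x) == abstract_member(s, x) for x in range(10))
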
diff --git a/web/entries/AxiomaticCategoryTheory.html b/web/entries/AxiomaticCategoryTheory.html
--- a/web/entries/AxiomaticCategoryTheory.html
+++ b/web/entries/AxiomaticCategoryTheory.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Axiom Systems for Category Theory in Free Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>xiom
<font class="first">S</font>ystems
for
<font class="first">C</font>ategory
<font class="first">T</font>heory
in
<font class="first">F</font>ree
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Axiom Systems for Category Theory in Free Logic</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://christoph-benzmueller.de">Christoph Benzmüller</a> and
<a href="http://www.cs.cmu.edu/~scott/">Dana Scott</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-05-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This document provides a concise overview on the core results of our
previous work on the exploration of axiom systems for category
theory. Extending the previous studies
(http://arxiv.org/abs/1609.01493) we include one further axiomatic
theory in our experiments. This additional theory has been suggested
by Mac Lane in 1948. We show that the axioms proposed by Mac Lane are
equivalent to the ones we studied before, which include an axiom set
suggested by Scott in the 1970s and another axiom set proposed by
Freyd and Scedrov in 1990, which we slightly modified to remedy a
-minor technical issue.</div></td>
+minor technical issue.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{AxiomaticCategoryTheory-AFP,
author = {Christoph Benzmüller and Dana Scott},
title = {Axiom Systems for Category Theory in Free Logic},
journal = {Archive of Formal Proofs},
month = may,
year = 2018,
note = {\url{http://isa-afp.org/entries/AxiomaticCategoryTheory.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AxiomaticCategoryTheory/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/AxiomaticCategoryTheory/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/AxiomaticCategoryTheory/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-AxiomaticCategoryTheory-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-AxiomaticCategoryTheory-2019-06-11.tar.gz">
afp-AxiomaticCategoryTheory-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-AxiomaticCategoryTheory-2018-08-16.tar.gz">
afp-AxiomaticCategoryTheory-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-AxiomaticCategoryTheory-2018-05-23.tar.gz">
afp-AxiomaticCategoryTheory-2018-05-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/BDD.html b/web/entries/BDD.html
--- a/web/entries/BDD.html
+++ b/web/entries/BDD.html
@@ -1,273 +1,273 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>BDD Normalisation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>DD
<font class="first">N</font>ormalisation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">BDD Normalisation</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Veronika Ortner and
Norbert Schirmer (norbert /dot/ schirmer /at/ web /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-02-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present the verification of the normalisation of a binary decision diagram (BDD). The normalisation follows the original algorithm presented by Bryant in 1986 and transforms an ordered BDD in a reduced, ordered and shared BDD. The verification is based on Hoare logics.</div></td>
+ <td class="abstract mathjax_process">We present the verification of the normalisation of a binary decision diagram (BDD). The normalisation follows the original algorithm presented by Bryant in 1986 and transforms an ordered BDD into a reduced, ordered and shared BDD. The verification is based on Hoare logics.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{BDD-AFP,
author = {Veronika Ortner and Norbert Schirmer},
title = {BDD Normalisation},
journal = {Archive of Formal Proofs},
month = feb,
year = 2008,
note = {\url{http://isa-afp.org/entries/BDD.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Simpl.html">Simpl</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BDD/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/BDD/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BDD/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-BDD-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-BDD-2019-06-11.tar.gz">
afp-BDD-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-BDD-2018-08-16.tar.gz">
afp-BDD-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-BDD-2017-10-10.tar.gz">
afp-BDD-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-BDD-2016-12-17.tar.gz">
afp-BDD-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-BDD-2016-02-22.tar.gz">
afp-BDD-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-BDD-2015-05-27.tar.gz">
afp-BDD-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-BDD-2014-08-28.tar.gz">
afp-BDD-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-BDD-2013-12-11.tar.gz">
afp-BDD-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-BDD-2013-11-17.tar.gz">
afp-BDD-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-BDD-2013-02-16.tar.gz">
afp-BDD-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-BDD-2012-05-24.tar.gz">
afp-BDD-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-BDD-2011-10-11.tar.gz">
afp-BDD-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-BDD-2011-02-11.tar.gz">
afp-BDD-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-BDD-2010-06-30.tar.gz">
afp-BDD-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-BDD-2009-12-12.tar.gz">
afp-BDD-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-BDD-2009-04-29.tar.gz">
afp-BDD-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-BDD-2008-06-10.tar.gz">
afp-BDD-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-BDD-2008-03-07.tar.gz">
afp-BDD-2008-03-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
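
Bryant's normalisation, which the BDD entry above verifies, turns an ordered BDD into a reduced, ordered and shared one by eliminating redundant tests and sharing structurally equal sub-BDDs. A minimal Python sketch of that reduction idea (terminals are the node ids 0 and 1, and a hash-consing constructor does the work; this is only an illustration, not the entry's Simpl/Hoare-logic development):

    def reducer():
        table = {}          # (var, low, high) -> node id (enforces sharing)
        nodes = {}          # node id -> (var, low, high)
        counter = [2]       # ids 0 and 1 are reserved for the terminals

        def mk(var, low, high):
            if low == high:                 # redundant test: both branches identical
                return low
            key = (var, low, high)
            if key not in table:            # share structurally equal sub-BDDs
                table[key] = counter[0]
                nodes[table[key]] = key
                counter[0] += 1
            return table[key]

        return mk, nodes

    mk, nodes = reducer()
    x0 = mk(0, 0, 1)        # the variable x0 as a BDD node
    same = mk(1, x0, x0)    # an unreduced test on x1 collapses back to x0
    assert same == x0 and len(nodes) == 1
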
diff --git a/web/entries/BNF_CC.html b/web/entries/BNF_CC.html
--- a/web/entries/BNF_CC.html
+++ b/web/entries/BNF_CC.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Bounded Natural Functors with Covariance and Contravariance - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>ounded
<font class="first">N</font>atural
<font class="first">F</font>unctors
with
<font class="first">C</font>ovariance
and
<font class="first">C</font>ontravariance
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Bounded Natural Functors with Covariance and Contravariance</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
Joshua Schneider
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-04-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Bounded natural functors (BNFs) provide a modular framework for the
construction of (co)datatypes in higher-order logic. Their functorial
operations, the mapper and relator, are restricted to a subset of the
parameters, namely those where recursion can take place. For certain
applications, such as free theorems, data refinement, quotients, and
generalised rewriting, it is desirable that these operations do not
ignore the other parameters. In this article, we formalise the
generalisation BNF<sub>CC</sub> that extends the mapper
and relator to covariant and contravariant parameters. We show that
<ol> <li> BNF<sub>CC</sub>s are closed under
functor composition and least and greatest fixpoints,</li>
<li> subtypes inherit the BNF<sub>CC</sub> structure
under conditions that generalise those for the BNF case,
and</li> <li> BNF<sub>CC</sub>s preserve
quotients under mild conditions.</li> </ol> These proofs
are carried out for abstract BNF<sub>CC</sub>s similar to
the AFP entry BNF Operations. In addition, we apply the
-BNF<sub>CC</sub> theory to several concrete functors.</div></td>
+BNF<sub>CC</sub> theory to several concrete functors.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{BNF_CC-AFP,
author = {Andreas Lochbihler and Joshua Schneider},
title = {Bounded Natural Functors with Covariance and Contravariance},
journal = {Archive of Formal Proofs},
month = apr,
year = 2018,
note = {\url{http://isa-afp.org/entries/BNF_CC.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BNF_CC/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/BNF_CC/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BNF_CC/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-BNF_CC-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-BNF_CC-2019-06-11.tar.gz">
afp-BNF_CC-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-BNF_CC-2018-08-16.tar.gz">
afp-BNF_CC-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-BNF_CC-2018-04-25.tar.gz">
afp-BNF_CC-2018-04-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
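
The BNF_CC abstract above extends the mapper and relator to covariant and contravariant parameters. The standard example of a type constructor with both kinds of parameters is the function space A -> B, contravariant in A and covariant in B; the small Python caricature below only conveys what the extended mapper does on that example (the names are made up here, and this is not the HOL formalisation).

    def map_fun(f, g):
        # Mapper of the function-space "functor" A -> B: given f : A' -> A
        # (contravariant side) and g : B -> B' (covariant side), lift
        # h : A -> B to a function A' -> B'.
        return lambda h: (lambda a: g(h(f(a))))

    # len : str -> int, lifted along str : int -> str and (+1) : int -> int.
    length_of_str = map_fun(str, lambda n: n + 1)(len)
    assert length_of_str(1234) == 5   # len("1234") + 1
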
diff --git a/web/entries/BNF_Operations.html b/web/entries/BNF_Operations.html
--- a/web/entries/BNF_Operations.html
+++ b/web/entries/BNF_Operations.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Operations on Bounded Natural Functors - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>perations
on
<font class="first">B</font>ounded
<font class="first">N</font>atural
<font class="first">F</font>unctors
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Operations on Bounded Natural Functors</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl),
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk) and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-12-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry formalizes the closure property of bounded natural functors
(BNFs) under seven operations. These operations and the corresponding
proofs constitute the core of Isabelle's (co)datatype package. To
be close to the implemented tactics, the proofs are deliberately
formulated as detailed apply scripts. The (co)datatypes together with
(co)induction principles and (co)recursors are byproducts of the
fixpoint operations LFP and GFP. Composition of BNFs is subdivided
into four simpler operations: Compose, Kill, Lift, and Permute. The
N2M operation provides mutual (co)induction principles and
-(co)recursors for nested (co)datatypes.</div></td>
+(co)recursors for nested (co)datatypes.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{BNF_Operations-AFP,
author = {Jasmin Christian Blanchette and Andrei Popescu and Dmitriy Traytel},
title = {Operations on Bounded Natural Functors},
journal = {Archive of Formal Proofs},
month = dec,
year = 2017,
note = {\url{http://isa-afp.org/entries/BNF_Operations.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BNF_Operations/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/BNF_Operations/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BNF_Operations/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-BNF_Operations-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-BNF_Operations-2019-06-11.tar.gz">
afp-BNF_Operations-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-BNF_Operations-2018-08-16.tar.gz">
afp-BNF_Operations-2018-08-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Bell_Numbers_Spivey.html b/web/entries/Bell_Numbers_Spivey.html
--- a/web/entries/Bell_Numbers_Spivey.html
+++ b/web/entries/Bell_Numbers_Spivey.html
@@ -1,230 +1,230 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Spivey's Generalized Recurrence for Bell Numbers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>pivey's
<font class="first">G</font>eneralized
<font class="first">R</font>ecurrence
for
<font class="first">B</font>ell
<font class="first">N</font>umbers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Spivey's Generalized Recurrence for Bell Numbers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry defines the Bell numbers as the cardinality of set partitions for
a carrier set of given size, and derives Spivey's generalized recurrence
relation for Bell numbers following his elegant and intuitive combinatorial
proof.
<p>
As the set construction for the combinatorial proof requires construction of
three intermediate structures, the main difficulty of the formalization is
handling the overall combinatorial argument in a structured way.
The introduced proof structure allows us to compose the combinatorial argument
from its subparts, and helps to keep track of how the detailed proof steps are
related to the overall argument. To obtain this structure, this entry uses set
monad notation for the set construction's definition, introduces suitable
-predicates and rules, and follows a repeating structure in its Isar proof.</div></td>
+predicates and rules, and follows a repeating structure in its Isar proof.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Bell_Numbers_Spivey-AFP,
author = {Lukas Bulwahn},
title = {Spivey's Generalized Recurrence for Bell Numbers},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/Bell_Numbers_Spivey.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Card_Partitions.html">Card_Partitions</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Card_Equiv_Relations.html">Card_Equiv_Relations</a>, <a href="Twelvefold_Way.html">Twelvefold_Way</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bell_Numbers_Spivey/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Bell_Numbers_Spivey/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bell_Numbers_Spivey/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Bell_Numbers_Spivey-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Bell_Numbers_Spivey-2019-06-11.tar.gz">
afp-Bell_Numbers_Spivey-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Bell_Numbers_Spivey-2018-08-16.tar.gz">
afp-Bell_Numbers_Spivey-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Bell_Numbers_Spivey-2017-10-10.tar.gz">
afp-Bell_Numbers_Spivey-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Bell_Numbers_Spivey-2016-12-17.tar.gz">
afp-Bell_Numbers_Spivey-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Bell_Numbers_Spivey-2016-05-04.tar.gz">
afp-Bell_Numbers_Spivey-2016-05-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
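
Spivey's generalized recurrence, which the entry above formalises, expresses B(n+m) in terms of Bell numbers, Stirling numbers of the second kind and binomial coefficients. The Python check below uses the recurrence as it is commonly stated, B(n+m) = sum over k <= n and j <= m of k^(m-j) * C(m,j) * S(n,k) * B(j); the transcription of the formula is mine, not quoted from the entry.

    from math import comb
    from functools import lru_cache

    @lru_cache(None)
    def stirling2(n, k):
        # Stirling numbers of the second kind: partitions of an n-set into k blocks.
        if n == k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

    def bell(n):
        # Bell number as the row sum of Stirling numbers of the second kind.
        return sum(stirling2(n, k) for k in range(n + 1))

    def spivey(n, m):
        # Spivey's generalized recurrence (transcription assumed, see above).
        return sum(k ** (m - j) * comb(m, j) * stirling2(n, k) * bell(j)
                   for k in range(n + 1) for j in range(m + 1))

    for n in range(5):
        for m in range(5):
            assert spivey(n, m) == bell(n + m)
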
diff --git a/web/entries/Berlekamp_Zassenhaus.html b/web/entries/Berlekamp_Zassenhaus.html
--- a/web/entries/Berlekamp_Zassenhaus.html
+++ b/web/entries/Berlekamp_Zassenhaus.html
@@ -1,236 +1,236 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Factorization Algorithm of Berlekamp and Zassenhaus - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">F</font>actorization
<font class="first">A</font>lgorithm
of
<font class="first">B</font>erlekamp
and
<font class="first">Z</font>assenhaus
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Factorization Algorithm of Berlekamp and Zassenhaus</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>,
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a> and
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-10-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>We formalize the Berlekamp-Zassenhaus algorithm for factoring
square-free integer polynomials in Isabelle/HOL. We further adapt an
existing formalization of Yun’s square-free factorization algorithm to
integer polynomials, and thus provide an efficient and certified
factorization algorithm for arbitrary univariate polynomials.
</p>
<p>The algorithm first performs a factorization in the prime field GF(p) and
then performs computations in the integer ring modulo p^k, where both
p and k are determined at runtime. Since a natural modeling of these
structures via dependent types is not possible in Isabelle/HOL, we
formalize the whole algorithm using Isabelle’s recent addition of
local type definitions.
</p>
<p>Through experiments we verify that our algorithm factors polynomials of degree
100 within seconds.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Berlekamp_Zassenhaus-AFP,
author = {Jose Divasón and Sebastiaan Joosten and René Thiemann and Akihisa Yamada},
title = {The Factorization Algorithm of Berlekamp and Zassenhaus},
journal = {Archive of Formal Proofs},
month = oct,
year = 2016,
note = {\url{http://isa-afp.org/entries/Berlekamp_Zassenhaus.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Algebraic_Numbers.html">Algebraic_Numbers</a>, <a href="LLL_Basis_Reduction.html">LLL_Basis_Reduction</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Berlekamp_Zassenhaus/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Berlekamp_Zassenhaus/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Berlekamp_Zassenhaus/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Berlekamp_Zassenhaus-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Berlekamp_Zassenhaus-2019-06-11.tar.gz">
afp-Berlekamp_Zassenhaus-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Berlekamp_Zassenhaus-2018-09-07.tar.gz">
afp-Berlekamp_Zassenhaus-2018-09-07.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Berlekamp_Zassenhaus-2018-08-16.tar.gz">
afp-Berlekamp_Zassenhaus-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Berlekamp_Zassenhaus-2017-10-10.tar.gz">
afp-Berlekamp_Zassenhaus-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Berlekamp_Zassenhaus-2016-12-17.tar.gz">
afp-Berlekamp_Zassenhaus-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
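
The Berlekamp–Zassenhaus abstract above explains that the algorithm first factors the square-free polynomial over the prime field GF(p) and then lifts the result modulo p^k. The naive Python stand-in below only illustrates the first step on toy inputs, by trial division with all monic polynomials of smaller degree over GF(p); it has none of the efficiency or certification of the formalised algorithm, and the helper names are invented for this sketch.

    from itertools import product

    # Polynomials over GF(p) as coefficient lists, lowest degree first.
    def polydiv_mod(a, b, p):
        # Divide a by b over GF(p); return (quotient, remainder).
        a = a[:]
        q = [0] * max(len(a) - len(b) + 1, 1)
        inv = pow(b[-1], -1, p)
        while len(a) >= len(b) and any(a):
            d = len(a) - len(b)
            c = (a[-1] * inv) % p
            q[d] = c
            for i, bc in enumerate(b):
                a[i + d] = (a[i + d] - c * bc) % p
            while len(a) > 1 and a[-1] == 0:
                a.pop()
        return q, a

    def factor_mod(f, p):
        # Factor a monic polynomial over GF(p) by trial division with all monic
        # polynomials of smaller degree (fine only for tiny examples).
        for deg in range(1, len(f) - 1):
            for tail in product(range(p), repeat=deg):
                g = list(tail) + [1]          # monic candidate of degree deg
                q, r = polydiv_mod(f, g, p)
                if not any(r):
                    return factor_mod(g, p) + factor_mod(q, p)
        return [f]

    print(factor_mod([1, 0, 0, 0, 1], 2))   # x^4 + 1 over GF(2): four copies of x + 1
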
diff --git a/web/entries/Bernoulli.html b/web/entries/Bernoulli.html
--- a/web/entries/Bernoulli.html
+++ b/web/entries/Bernoulli.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Bernoulli Numbers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>ernoulli
<font class="first">N</font>umbers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Bernoulli Numbers</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com) and
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-01-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>Bernoulli numbers were first discovered in the closed-form
expansion of the sum 1<sup>m</sup> +
2<sup>m</sup> + &hellip; + n<sup>m</sup>
for a fixed m and appear in many other places. This entry provides
three different definitions for them: a recursive one, an explicit
one, and one through their exponential generating function.</p>
<p>In addition, we prove some basic facts, e.g. their relation
to sums of powers of integers and that all odd Bernoulli numbers
except the first are zero, and some advanced facts like their
relationship to the Riemann zeta function on positive even
integers.</p>
<p>We also prove the correctness of the
Akiyama&ndash;Tanigawa algorithm for computing Bernoulli numbers
with reasonable efficiency, and we define the periodic Bernoulli
polynomials (which appear e.g. in the Euler&ndash;MacLaurin
summation formula and the expansion of the log-Gamma function) and
-prove their basic properties.</p></div></td>
+prove their basic properties.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Bernoulli-AFP,
author = {Lukas Bulwahn and Manuel Eberl},
title = {Bernoulli Numbers},
journal = {Archive of Formal Proofs},
month = jan,
year = 2017,
note = {\url{http://isa-afp.org/entries/Bernoulli.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Euler_MacLaurin.html">Euler_MacLaurin</a>, <a href="Stirling_Formula.html">Stirling_Formula</a>, <a href="Zeta_Function.html">Zeta_Function</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bernoulli/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Bernoulli/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bernoulli/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Bernoulli-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Bernoulli-2019-06-11.tar.gz">
afp-Bernoulli-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Bernoulli-2018-08-16.tar.gz">
afp-Bernoulli-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Bernoulli-2017-10-10.tar.gz">
afp-Bernoulli-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Bernoulli-2017-01-24.tar.gz">
afp-Bernoulli-2017-01-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
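
The Bernoulli entry above proves the correctness of the Akiyama–Tanigawa algorithm. For orientation, here is the usual short Python rendering of that algorithm with exact rational arithmetic; note that this variant produces the B_1 = +1/2 convention, which may differ from the convention used in the formalisation.

    from fractions import Fraction

    def bernoulli(n):
        # Akiyama-Tanigawa algorithm: repeatedly transform the row 1/1, 1/2, ..., 1/(n+1);
        # the value left in position 0 is the n-th Bernoulli number.
        a = [Fraction(0)] * (n + 1)
        for m in range(n + 1):
            a[m] = Fraction(1, m + 1)
            for j in range(m, 0, -1):
                a[j - 1] = j * (a[j - 1] - a[j])
        return a[0]

    # B_0 .. B_8: 1, 1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
    print([str(bernoulli(n)) for n in range(9)])
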
diff --git a/web/entries/Bertrands_Postulate.html b/web/entries/Bertrands_Postulate.html
--- a/web/entries/Bertrands_Postulate.html
+++ b/web/entries/Bertrands_Postulate.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Bertrand's postulate - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>ertrand's
postulate
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Bertrand's postulate</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Julian Biendarra and
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
<a href="http://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-01-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>Bertrand's postulate is an early result on the
distribution of prime numbers: For every integer n greater than 1, there
exists a prime number that lies strictly between n and 2n.
The proof is ported from John Harrison's formalisation
in HOL Light. It proceeds by first showing that the property is true
for all n greater than or equal to 600 and then showing that it also
-holds for all n below 600 by case distinction. </p></div></td>
+holds for all n below 600 by case distinction. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Bertrands_Postulate-AFP,
author = {Julian Biendarra and Manuel Eberl},
title = {Bertrand's postulate},
journal = {Archive of Formal Proofs},
month = jan,
year = 2017,
note = {\url{http://isa-afp.org/entries/Bertrands_Postulate.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Pratt_Certificate.html">Pratt_Certificate</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Dirichlet_L.html">Dirichlet_L</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bertrands_Postulate/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Bertrands_Postulate/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bertrands_Postulate/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Bertrands_Postulate-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Bertrands_Postulate-2019-06-11.tar.gz">
afp-Bertrands_Postulate-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Bertrands_Postulate-2018-08-16.tar.gz">
afp-Bertrands_Postulate-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Bertrands_Postulate-2017-10-10.tar.gz">
afp-Bertrands_Postulate-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Bertrands_Postulate-2017-01-18.tar.gz">
afp-Bertrands_Postulate-2017-01-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Bicategory.html b/web/entries/Bicategory.html
--- a/web/entries/Bicategory.html
+++ b/web/entries/Bicategory.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Bicategories - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>icategories
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Bicategories</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Eugene W. Stark (stark /at/ cs /dot/ stonybrook /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-01-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Taking as a starting point the author's previous work on
developing aspects of category theory in Isabelle/HOL, this article
gives a compatible formalization of the notion of
"bicategory" and develops a framework within which formal
proofs of facts about bicategories can be given. The framework
includes a number of basic results, including the Coherence Theorem,
the Strictness Theorem, pseudofunctors and biequivalence, and facts
about internal equivalences and adjunctions in a bicategory. As a
driving application and demonstration of the utility of the framework,
it is used to give a formal proof of a theorem, due to Carboni,
Kasangian, and Street, that characterizes up to biequivalence the
bicategories of spans in a category with pullbacks. The formalization
effort necessitated the filling-in of many details that were not
evident from the brief presentation in the original paper, as well as
-identifying a few minor corrections along the way.</div></td>
+identifying a few minor corrections along the way.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2020-02-15]:
Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically.
Make other minor improvements throughout.
(revision a51840d36867)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Bicategory-AFP,
author = {Eugene W. Stark},
title = {Bicategories},
journal = {Archive of Formal Proofs},
month = jan,
year = 2020,
note = {\url{http://isa-afp.org/entries/Bicategory.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="MonoidalCategory.html">MonoidalCategory</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bicategory/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Bicategory/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bicategory/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Bicategory-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Bicategory-2020-01-09.tar.gz">
afp-Bicategory-2020-01-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/BinarySearchTree.html b/web/entries/BinarySearchTree.html
--- a/web/entries/BinarySearchTree.html
+++ b/web/entries/BinarySearchTree.html
@@ -1,292 +1,292 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Binary Search Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>inary
<font class="first">S</font>earch
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Binary Search Trees</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://lara.epfl.ch/~kuncak/">Viktor Kuncak</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-04-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The correctness is shown of binary search tree operations (lookup, insert and remove) implementing a set. Two versions are given, for both structured and linear (tactic-style) proofs. An implementation of integer-indexed maps is also verified.</div></td>
+ <td class="abstract mathjax_process">The correctness is shown of binary search tree operations (lookup, insert and remove) implementing a set. Two versions are given, for both structured and linear (tactic-style) proofs. An implementation of integer-indexed maps is also verified.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{BinarySearchTree-AFP,
author = {Viktor Kuncak},
title = {Binary Search Trees},
journal = {Archive of Formal Proofs},
month = apr,
year = 2004,
note = {\url{http://isa-afp.org/entries/BinarySearchTree.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BinarySearchTree/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/BinarySearchTree/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BinarySearchTree/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-BinarySearchTree-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-BinarySearchTree-2019-06-11.tar.gz">
afp-BinarySearchTree-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-BinarySearchTree-2018-08-16.tar.gz">
afp-BinarySearchTree-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-BinarySearchTree-2017-10-10.tar.gz">
afp-BinarySearchTree-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-BinarySearchTree-2016-12-17.tar.gz">
afp-BinarySearchTree-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-BinarySearchTree-2016-02-22.tar.gz">
afp-BinarySearchTree-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-BinarySearchTree-2015-05-27.tar.gz">
afp-BinarySearchTree-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-BinarySearchTree-2014-08-28.tar.gz">
afp-BinarySearchTree-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-BinarySearchTree-2013-12-11.tar.gz">
afp-BinarySearchTree-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-BinarySearchTree-2013-11-17.tar.gz">
afp-BinarySearchTree-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-BinarySearchTree-2013-02-16.tar.gz">
afp-BinarySearchTree-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-BinarySearchTree-2012-05-24.tar.gz">
afp-BinarySearchTree-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-BinarySearchTree-2011-10-11.tar.gz">
afp-BinarySearchTree-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-BinarySearchTree-2011-02-11.tar.gz">
afp-BinarySearchTree-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-BinarySearchTree-2010-06-30.tar.gz">
afp-BinarySearchTree-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-BinarySearchTree-2009-12-12.tar.gz">
afp-BinarySearchTree-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-BinarySearchTree-2009-04-29.tar.gz">
afp-BinarySearchTree-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-BinarySearchTree-2008-06-10.tar.gz">
afp-BinarySearchTree-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-BinarySearchTree-2007-11-27.tar.gz">
afp-BinarySearchTree-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-BinarySearchTree-2005-10-14.tar.gz">
afp-BinarySearchTree-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-BinarySearchTree-2004-09-21.tar.gz">
afp-BinarySearchTree-2004-09-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-BinarySearchTree-2004-04-21.tar.gz">
afp-BinarySearchTree-2004-04-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-BinarySearchTree-2004-04-20.tar.gz">
afp-BinarySearchTree-2004-04-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Binding_Syntax_Theory.html b/web/entries/Binding_Syntax_Theory.html
--- a/web/entries/Binding_Syntax_Theory.html
+++ b/web/entries/Binding_Syntax_Theory.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A General Theory of Syntax with Bindings - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">G</font>eneral
<font class="first">T</font>heory
of
<font class="first">S</font>yntax
with
<font class="first">B</font>indings
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A General Theory of Syntax with Bindings</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Lorenzo Gheri (lor /dot/ gheri /at/ gmail /dot/ com) and
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-04-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize a theory of syntax with bindings that has been developed
and refined over the last decade to support several large
formalization efforts. Terms are defined for an arbitrary number of
constructors of varying numbers of inputs, quotiented to
alpha-equivalence and sorted according to a binding signature. The
theory includes many properties of the standard operators on terms:
substitution, swapping and freshness. It also includes bindings-aware
induction and recursion principles and support for semantic
interpretation. This work has been presented in the ITP 2017 paper “A
-Formalized General Theory of Syntax with Bindings”.</div></td>
+Formalized General Theory of Syntax with Bindings”.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Binding_Syntax_Theory-AFP,
author = {Lorenzo Gheri and Andrei Popescu},
title = {A General Theory of Syntax with Bindings},
journal = {Archive of Formal Proofs},
month = apr,
year = 2019,
note = {\url{http://isa-afp.org/entries/Binding_Syntax_Theory.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Binding_Syntax_Theory/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Binding_Syntax_Theory/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Binding_Syntax_Theory/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Binding_Syntax_Theory-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Binding_Syntax_Theory-2019-06-11.tar.gz">
afp-Binding_Syntax_Theory-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Binding_Syntax_Theory-2019-04-08.tar.gz">
afp-Binding_Syntax_Theory-2019-04-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Binomial-Heaps.html b/web/entries/Binomial-Heaps.html
--- a/web/entries/Binomial-Heaps.html
+++ b/web/entries/Binomial-Heaps.html
@@ -1,277 +1,277 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Binomial Heaps and Skew Binomial Heaps - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>inomial
<font class="first">H</font>eaps
and
<font class="first">S</font>kew
<font class="first">B</font>inomial
<font class="first">H</font>eaps
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Binomial Heaps and Skew Binomial Heaps</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Rene Meis (rene /dot/ meis /at/ uni-due /dot/ de),
Finn Nielsen (finn /dot/ nielsen /at/ uni-muenster /dot/ de) and
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-10-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We implement and prove correct binomial heaps and skew binomial heaps.
Both are data-structures for priority queues.
While binomial heaps have logarithmic <em>findMin</em>, <em>deleteMin</em>,
<em>insert</em>, and <em>meld</em> operations,
skew binomial heaps have constant time <em>findMin</em>, <em>insert</em>,
and <em>meld</em> operations, and only the <em>deleteMin</em>-operation is
logarithmic. This is achieved by using <em>skew links</em> to avoid
cascading linking on <em>insert</em>-operations, and <em>data-structural
bootstrapping</em> to get constant-time <em>findMin</em> and <em>meld</em>
-operations. Our implementation follows the paper by Brodal and Okasaki.</div></td>
+operations. Our implementation follows the paper by Brodal and Okasaki.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Binomial-Heaps-AFP,
author = {Rene Meis and Finn Nielsen and Peter Lammich},
title = {Binomial Heaps and Skew Binomial Heaps},
journal = {Archive of Formal Proofs},
month = oct,
year = 2010,
note = {\url{http://isa-afp.org/entries/Binomial-Heaps.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Collections.html">Collections</a>, <a href="JinjaThreads.html">JinjaThreads</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Binomial-Heaps/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Binomial-Heaps/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Binomial-Heaps/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Binomial-Heaps-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Binomial-Heaps-2019-06-11.tar.gz">
afp-Binomial-Heaps-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Binomial-Heaps-2018-08-16.tar.gz">
afp-Binomial-Heaps-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Binomial-Heaps-2017-10-10.tar.gz">
afp-Binomial-Heaps-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Binomial-Heaps-2016-12-17.tar.gz">
afp-Binomial-Heaps-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Binomial-Heaps-2016-02-22.tar.gz">
afp-Binomial-Heaps-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Binomial-Heaps-2015-05-27.tar.gz">
afp-Binomial-Heaps-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Binomial-Heaps-2014-08-28.tar.gz">
afp-Binomial-Heaps-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Binomial-Heaps-2013-12-11.tar.gz">
afp-Binomial-Heaps-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Binomial-Heaps-2013-11-17.tar.gz">
afp-Binomial-Heaps-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Binomial-Heaps-2013-03-02.tar.gz">
afp-Binomial-Heaps-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Binomial-Heaps-2013-02-16.tar.gz">
afp-Binomial-Heaps-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Binomial-Heaps-2012-05-24.tar.gz">
afp-Binomial-Heaps-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Binomial-Heaps-2011-10-11.tar.gz">
afp-Binomial-Heaps-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Binomial-Heaps-2011-02-11.tar.gz">
afp-Binomial-Heaps-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Binomial-Heaps-2010-10-28.tar.gz">
afp-Binomial-Heaps-2010-10-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Binomial-Queues.html b/web/entries/Binomial-Queues.html
--- a/web/entries/Binomial-Queues.html
+++ b/web/entries/Binomial-Queues.html
@@ -1,247 +1,247 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Functional Binomial Queues - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>unctional
<font class="first">B</font>inomial
<font class="first">Q</font>ueues
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Functional Binomial Queues</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
René Neumann (rene /dot/ neumann /at/ in /dot/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-10-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Priority queues are an important data structure and efficient implementations of them are crucial. We implement a functional variant of binomial queues in Isabelle/HOL and show its functional correctness. A verification against an abstract reference specification of priority queues has also been attempted, but could not be achieved to the full extent.</div></td>
+ <td class="abstract mathjax_process">Priority queues are an important data structure and efficient implementations of them are crucial. We implement a functional variant of binomial queues in Isabelle/HOL and show its functional correctness. A verification against an abstract reference specification of priority queues has also been attempted, but could not be achieved to the full extent.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Binomial-Queues-AFP,
author = {René Neumann},
title = {Functional Binomial Queues},
journal = {Archive of Formal Proofs},
month = oct,
year = 2010,
note = {\url{http://isa-afp.org/entries/Binomial-Queues.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Binomial-Queues/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Binomial-Queues/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Binomial-Queues/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Binomial-Queues-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Binomial-Queues-2019-06-11.tar.gz">
afp-Binomial-Queues-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Binomial-Queues-2018-08-16.tar.gz">
afp-Binomial-Queues-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Binomial-Queues-2017-10-10.tar.gz">
afp-Binomial-Queues-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Binomial-Queues-2016-12-17.tar.gz">
afp-Binomial-Queues-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Binomial-Queues-2016-02-22.tar.gz">
afp-Binomial-Queues-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Binomial-Queues-2015-05-27.tar.gz">
afp-Binomial-Queues-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Binomial-Queues-2014-08-28.tar.gz">
afp-Binomial-Queues-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Binomial-Queues-2013-12-11.tar.gz">
afp-Binomial-Queues-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Binomial-Queues-2013-11-17.tar.gz">
afp-Binomial-Queues-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Binomial-Queues-2013-02-16.tar.gz">
afp-Binomial-Queues-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Binomial-Queues-2012-05-24.tar.gz">
afp-Binomial-Queues-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Binomial-Queues-2011-10-11.tar.gz">
afp-Binomial-Queues-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Binomial-Queues-2011-02-11.tar.gz">
afp-Binomial-Queues-2011-02-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Bondy.html b/web/entries/Bondy.html
--- a/web/entries/Bondy.html
+++ b/web/entries/Bondy.html
@@ -1,236 +1,236 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Bondy's Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>ondy's
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Bondy's Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.andrew.cmu.edu/user/avigad/">Jeremy Avigad</a> and
<a href="http://www.logic.at/people/hetzl/">Stefan Hetzl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-10-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
-                  <td class="abstract"><div class="mathjax_process">A proof of Bondy's theorem following B. Bollobás, Combinatorics, 1986, Cambridge University Press.</div></td>
+                  <td class="abstract mathjax_process">A proof of Bondy's theorem following B. Bollobás, Combinatorics, 1986, Cambridge University Press.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Bondy-AFP,
author = {Jeremy Avigad and Stefan Hetzl},
title = {Bondy's Theorem},
journal = {Archive of Formal Proofs},
month = oct,
year = 2012,
note = {\url{http://isa-afp.org/entries/Bondy.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bondy/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Bondy/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bondy/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Bondy-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Bondy-2019-06-11.tar.gz">
afp-Bondy-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Bondy-2018-08-16.tar.gz">
afp-Bondy-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Bondy-2017-10-10.tar.gz">
afp-Bondy-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Bondy-2016-12-17.tar.gz">
afp-Bondy-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Bondy-2016-02-22.tar.gz">
afp-Bondy-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Bondy-2015-05-27.tar.gz">
afp-Bondy-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Bondy-2014-08-28.tar.gz">
afp-Bondy-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Bondy-2013-12-11.tar.gz">
afp-Bondy-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Bondy-2013-11-17.tar.gz">
afp-Bondy-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Bondy-2013-02-16.tar.gz">
afp-Bondy-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Bondy-2012-10-27.tar.gz">
afp-Bondy-2012-10-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Boolean_Expression_Checkers.html b/web/entries/Boolean_Expression_Checkers.html
--- a/web/entries/Boolean_Expression_Checkers.html
+++ b/web/entries/Boolean_Expression_Checkers.html
@@ -1,232 +1,232 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Boolean Expression Checkers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>oolean
<font class="first">E</font>xpression
<font class="first">C</font>heckers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Boolean Expression Checkers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-06-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides executable checkers for the following properties of
boolean expressions: satisfiability, tautology and equivalence. Internally,
the checkers operate on binary decision trees and are reasonably efficient
-(for purely functional algorithms).</div></td>
+(for purely functional algorithms).</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-09-23]: Salomon Sickert added an interface that does not require the usage of the Boolean formula datatype. Furthermore the general Mapping type is used instead of an association list.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Boolean_Expression_Checkers-AFP,
author = {Tobias Nipkow},
title = {Boolean Expression Checkers},
journal = {Archive of Formal Proofs},
month = jun,
year = 2014,
note = {\url{http://isa-afp.org/entries/Boolean_Expression_Checkers.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="LTL.html">LTL</a>, <a href="LTL_to_DRA.html">LTL_to_DRA</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Boolean_Expression_Checkers/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Boolean_Expression_Checkers/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Boolean_Expression_Checkers/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Boolean_Expression_Checkers-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Boolean_Expression_Checkers-2019-06-11.tar.gz">
afp-Boolean_Expression_Checkers-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Boolean_Expression_Checkers-2018-08-16.tar.gz">
afp-Boolean_Expression_Checkers-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Boolean_Expression_Checkers-2017-10-10.tar.gz">
afp-Boolean_Expression_Checkers-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Boolean_Expression_Checkers-2016-12-17.tar.gz">
afp-Boolean_Expression_Checkers-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Boolean_Expression_Checkers-2016-02-22.tar.gz">
afp-Boolean_Expression_Checkers-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Boolean_Expression_Checkers-2015-05-27.tar.gz">
afp-Boolean_Expression_Checkers-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Boolean_Expression_Checkers-2014-08-28.tar.gz">
afp-Boolean_Expression_Checkers-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Boolean_Expression_Checkers-2014-06-08.tar.gz">
afp-Boolean_Expression_Checkers-2014-06-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Bounded_Deducibility_Security.html b/web/entries/Bounded_Deducibility_Security.html
--- a/web/entries/Bounded_Deducibility_Security.html
+++ b/web/entries/Bounded_Deducibility_Security.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Bounded-Deducibility Security - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>ounded-Deducibility
<font class="first">S</font>ecurity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Bounded-Deducibility Security</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk) and
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This is a formalization of bounded-deducibility security (BD
+ <td class="abstract mathjax_process">This is a formalization of bounded-deducibility security (BD
security), a flexible notion of information-flow security applicable
to arbitrary input-output automata. It generalizes Sutherland's
classic notion of nondeducibility by factoring in declassification
bounds and trigger. Whereas nondeducibility states that, in a
system, information cannot flow between specified sources and sinks,
BD security indicates upper bounds for the flow and triggers under
-which these upper bounds are no longer guaranteed.</div></td>
+which these upper bounds are no longer guaranteed.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Bounded_Deducibility_Security-AFP,
author = {Andrei Popescu and Peter Lammich},
title = {Bounded-Deducibility Security},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/Bounded_Deducibility_Security.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bounded_Deducibility_Security/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Bounded_Deducibility_Security/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Bounded_Deducibility_Security/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Bounded_Deducibility_Security-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Bounded_Deducibility_Security-2019-06-11.tar.gz">
afp-Bounded_Deducibility_Security-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Bounded_Deducibility_Security-2018-08-16.tar.gz">
afp-Bounded_Deducibility_Security-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Bounded_Deducibility_Security-2017-10-10.tar.gz">
afp-Bounded_Deducibility_Security-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Bounded_Deducibility_Security-2016-12-17.tar.gz">
afp-Bounded_Deducibility_Security-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Bounded_Deducibility_Security-2016-02-22.tar.gz">
afp-Bounded_Deducibility_Security-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Bounded_Deducibility_Security-2015-05-27.tar.gz">
afp-Bounded_Deducibility_Security-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Bounded_Deducibility_Security-2014-08-28.tar.gz">
afp-Bounded_Deducibility_Security-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Bounded_Deducibility_Security-2014-04-24.tar.gz">
afp-Bounded_Deducibility_Security-2014-04-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Buchi_Complementation.html b/web/entries/Buchi_Complementation.html
--- a/web/entries/Buchi_Complementation.html
+++ b/web/entries/Buchi_Complementation.html
@@ -1,206 +1,206 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Büchi Complementation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>üchi
<font class="first">C</font>omplementation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Büchi Complementation</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~brunnerj/">Julian Brunner</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides a verified implementation of rank-based Büchi
Complementation. The verification is done in three steps: <ol>
<li>Definition of odd rankings and proof that an automaton
rejects a word iff there exists an odd ranking for it.</li>
<li>Definition of the complement automaton and proof that it
accepts exactly those words for which there is an odd
ranking.</li> <li>Verified implementation of the
complement automaton using the Isabelle Collections
-Framework.</li> </ol></div></td>
+Framework.</li> </ol></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Buchi_Complementation-AFP,
author = {Julian Brunner},
title = {Büchi Complementation},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Buchi_Complementation.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Transition_Systems_and_Automata.html">Transition_Systems_and_Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Buchi_Complementation/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Buchi_Complementation/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Buchi_Complementation/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Buchi_Complementation-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Buchi_Complementation-2019-06-11.tar.gz">
afp-Buchi_Complementation-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Buchi_Complementation-2018-08-16.tar.gz">
afp-Buchi_Complementation-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Buchi_Complementation-2017-10-27.tar.gz">
afp-Buchi_Complementation-2017-10-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Budan_Fourier.html b/web/entries/Budan_Fourier.html
--- a/web/entries/Budan_Fourier.html
+++ b/web/entries/Budan_Fourier.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Budan-Fourier Theorem and Counting Real Roots with Multiplicity - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">B</font>udan-Fourier
<font class="first">T</font>heorem
and
<font class="first">C</font>ounting
<font class="first">R</font>eal
<font class="first">R</font>oots
with
<font class="first">M</font>ultiplicity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Budan-Fourier Theorem and Counting Real Roots with Multiplicity</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-09-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry is mainly about counting and approximating real roots (of a
polynomial) with multiplicity. We have first formalised the
Budan-Fourier theorem: given a polynomial with real coefficients, we
can calculate sign variations on Fourier sequences to over-approximate
the number of real roots (counting multiplicity) within an interval.
When all roots are known to be real, the over-approximation becomes
tight: we can utilise this theorem to count real roots exactly. It is
also worth noting that Descartes' rule of sign is a direct
consequence of the Budan-Fourier theorem, and has been included in
this entry. In addition, we have extended previous formalised
Sturm's theorem to count real roots with multiplicity, while the
original Sturm's theorem only counts distinct real roots.
Compared to the Budan-Fourier theorem, our extended Sturm's
theorem always counts roots exactly but may suffer from greater
-computational cost.</div></td>
+computational cost.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Budan_Fourier-AFP,
author = {Wenda Li},
title = {The Budan-Fourier Theorem and Counting Real Roots with Multiplicity},
journal = {Archive of Formal Proofs},
month = sep,
year = 2018,
note = {\url{http://isa-afp.org/entries/Budan_Fourier.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Sturm_Tarski.html">Sturm_Tarski</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Winding_Number_Eval.html">Winding_Number_Eval</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Budan_Fourier/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Budan_Fourier/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Budan_Fourier/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Budan_Fourier-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Budan_Fourier-2019-06-11.tar.gz">
afp-Budan_Fourier-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Budan_Fourier-2018-09-04.tar.gz">
afp-Budan_Fourier-2018-09-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Buffons_Needle.html b/web/entries/Buffons_Needle.html
--- a/web/entries/Buffons_Needle.html
+++ b/web/entries/Buffons_Needle.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Buffon's Needle Problem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>uffon's
<font class="first">N</font>eedle
<font class="first">P</font>roblem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Buffon's Needle Problem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-06-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In the 18th century, Georges-Louis Leclerc, Comte de Buffon posed and
later solved the following problem, which is often called the first
problem ever solved in geometric probability: Given a floor divided
into vertical strips of the same width, what is the probability that a
needle thrown onto the floor randomly will cross two strips? This
entry formally defines the problem in the case where the needle's
position is chosen uniformly at random in a single strip around the
origin (which is equivalent to larger arrangements due to symmetry).
It then provides proofs of the simple solution in the case where the
needle's length is no greater than the width of the strips and
-the more complicated solution in the opposite case.</div></td>
+the more complicated solution in the opposite case.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Buffons_Needle-AFP,
author = {Manuel Eberl},
title = {Buffon's Needle Problem},
journal = {Archive of Formal Proofs},
month = jun,
year = 2017,
note = {\url{http://isa-afp.org/entries/Buffons_Needle.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Buffons_Needle/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Buffons_Needle/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Buffons_Needle/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Buffons_Needle-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Buffons_Needle-2019-06-11.tar.gz">
afp-Buffons_Needle-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Buffons_Needle-2018-08-16.tar.gz">
afp-Buffons_Needle-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Buffons_Needle-2017-10-10.tar.gz">
afp-Buffons_Needle-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Buffons_Needle-2017-06-06.tar.gz">
afp-Buffons_Needle-2017-06-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Buildings.html b/web/entries/Buildings.html
--- a/web/entries/Buildings.html
+++ b/web/entries/Buildings.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Chamber Complexes, Coxeter Systems, and Buildings - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>hamber
<font class="first">C</font>omplexes,
<font class="first">C</font>oxeter
<font class="first">S</font>ystems,
and
<font class="first">B</font>uildings
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Chamber Complexes, Coxeter Systems, and Buildings</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://ualberta.ca/~jsylvest/">Jeremy Sylvestre</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-07-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We provide a basic formal framework for the theory of chamber
complexes and Coxeter systems, and for buildings as thick chamber
complexes endowed with a system of apartments. Along the way, we
develop some of the general theory of abstract simplicial complexes
and of groups (relying on the <i>group_add</i> class for the basics),
including free groups and group presentations, and their universal
properties. The main results verified are that the deletion condition
is both necessary and sufficient for a group with a set of generators
of order two to be a Coxeter system, and that the apartments in a
-(thick) building are all uniformly Coxeter.</div></td>
+(thick) building are all uniformly Coxeter.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Buildings-AFP,
author = {Jeremy Sylvestre},
title = {Chamber Complexes, Coxeter Systems, and Buildings},
journal = {Archive of Formal Proofs},
month = jul,
year = 2016,
note = {\url{http://isa-afp.org/entries/Buildings.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Buildings/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Buildings/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Buildings/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Buildings-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Buildings-2019-06-11.tar.gz">
afp-Buildings-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Buildings-2018-08-16.tar.gz">
afp-Buildings-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Buildings-2017-10-10.tar.gz">
afp-Buildings-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Buildings-2016-12-17.tar.gz">
afp-Buildings-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Buildings-2016-07-01.tar.gz">
afp-Buildings-2016-07-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/BytecodeLogicJmlTypes.html b/web/entries/BytecodeLogicJmlTypes.html
--- a/web/entries/BytecodeLogicJmlTypes.html
+++ b/web/entries/BytecodeLogicJmlTypes.html
@@ -1,276 +1,276 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Bytecode Logic for JML and Types - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">B</font>ytecode
<font class="first">L</font>ogic
for
<font class="first">J</font>ML
and
<font class="first">T</font>ypes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Bytecode Logic for JML and Types</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Lennart Beringer and
<a href="http://www.tcs.informatik.uni-muenchen.de/~mhofmann">Martin Hofmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-12-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This document contains the Isabelle/HOL sources underlying the paper <i>A bytecode logic for JML and types</i> by Beringer and Hofmann, updated to Isabelle 2008. We present a program logic for a subset of sequential Java bytecode that is suitable for representing both, features found in high-level specification language JML as well as interpretations of high-level type systems. To this end, we introduce a fine-grained collection of assertions, including strong invariants, local annotations and VDM-reminiscent partial-correctness specifications. Thanks to a goal-oriented structure and interpretation of judgements, verification may proceed without recourse to an additional control flow analysis. The suitability for interpreting intensional type systems is illustrated by the proof-carrying-code style encoding of a type system for a first-order functional language which guarantees a constant upper bound on the number of objects allocated throughout an execution, be the execution terminating or non-terminating. Like the published paper, the formal development is restricted to a comparatively small subset of the JVML, lacking (among other features) exceptions, arrays, virtual methods, and static fields. This shortcoming has been overcome meanwhile, as our paper has formed the basis of the Mobius base logic, a program logic for the full sequential fragment of the JVML. Indeed, the present formalisation formed the basis of a subsequent formalisation of the Mobius base logic in the proof assistant Coq, which includes a proof of soundness with respect to the Bicolano operational semantics by Pichardie.</div></td>
+ <td class="abstract mathjax_process">This document contains the Isabelle/HOL sources underlying the paper <i>A bytecode logic for JML and types</i> by Beringer and Hofmann, updated to Isabelle 2008. We present a program logic for a subset of sequential Java bytecode that is suitable for representing both, features found in high-level specification language JML as well as interpretations of high-level type systems. To this end, we introduce a fine-grained collection of assertions, including strong invariants, local annotations and VDM-reminiscent partial-correctness specifications. Thanks to a goal-oriented structure and interpretation of judgements, verification may proceed without recourse to an additional control flow analysis. The suitability for interpreting intensional type systems is illustrated by the proof-carrying-code style encoding of a type system for a first-order functional language which guarantees a constant upper bound on the number of objects allocated throughout an execution, be the execution terminating or non-terminating. Like the published paper, the formal development is restricted to a comparatively small subset of the JVML, lacking (among other features) exceptions, arrays, virtual methods, and static fields. This shortcoming has been overcome meanwhile, as our paper has formed the basis of the Mobius base logic, a program logic for the full sequential fragment of the JVML. Indeed, the present formalisation formed the basis of a subsequent formalisation of the Mobius base logic in the proof assistant Coq, which includes a proof of soundness with respect to the Bicolano operational semantics by Pichardie.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{BytecodeLogicJmlTypes-AFP,
author = {Lennart Beringer and Martin Hofmann},
title = {A Bytecode Logic for JML and Types},
journal = {Archive of Formal Proofs},
month = dec,
year = 2008,
note = {\url{http://isa-afp.org/entries/BytecodeLogicJmlTypes.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BytecodeLogicJmlTypes/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/BytecodeLogicJmlTypes/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/BytecodeLogicJmlTypes/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-BytecodeLogicJmlTypes-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-BytecodeLogicJmlTypes-2019-06-11.tar.gz">
afp-BytecodeLogicJmlTypes-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-BytecodeLogicJmlTypes-2018-08-16.tar.gz">
afp-BytecodeLogicJmlTypes-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-BytecodeLogicJmlTypes-2017-10-10.tar.gz">
afp-BytecodeLogicJmlTypes-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-BytecodeLogicJmlTypes-2016-12-17.tar.gz">
afp-BytecodeLogicJmlTypes-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-BytecodeLogicJmlTypes-2016-02-22.tar.gz">
afp-BytecodeLogicJmlTypes-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-BytecodeLogicJmlTypes-2015-05-27.tar.gz">
afp-BytecodeLogicJmlTypes-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-BytecodeLogicJmlTypes-2014-08-28.tar.gz">
afp-BytecodeLogicJmlTypes-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-BytecodeLogicJmlTypes-2013-12-11.tar.gz">
afp-BytecodeLogicJmlTypes-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-BytecodeLogicJmlTypes-2013-11-17.tar.gz">
afp-BytecodeLogicJmlTypes-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-BytecodeLogicJmlTypes-2013-02-16.tar.gz">
afp-BytecodeLogicJmlTypes-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-BytecodeLogicJmlTypes-2012-05-24.tar.gz">
afp-BytecodeLogicJmlTypes-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-BytecodeLogicJmlTypes-2011-10-11.tar.gz">
afp-BytecodeLogicJmlTypes-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-BytecodeLogicJmlTypes-2011-02-11.tar.gz">
afp-BytecodeLogicJmlTypes-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-BytecodeLogicJmlTypes-2010-06-30.tar.gz">
afp-BytecodeLogicJmlTypes-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-BytecodeLogicJmlTypes-2009-12-12.tar.gz">
afp-BytecodeLogicJmlTypes-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-BytecodeLogicJmlTypes-2009-04-29.tar.gz">
afp-BytecodeLogicJmlTypes-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-BytecodeLogicJmlTypes-2008-12-22.tar.gz">
afp-BytecodeLogicJmlTypes-2008-12-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/C2KA_DistributedSystems.html b/web/entries/C2KA_DistributedSystems.html
--- a/web/entries/C2KA_DistributedSystems.html
+++ b/web/entries/C2KA_DistributedSystems.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Communicating Concurrent Kleene Algebra for Distributed Systems Specification - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ommunicating
<font class="first">C</font>oncurrent
<font class="first">K</font>leene
<font class="first">A</font>lgebra
for
<font class="first">D</font>istributed
<font class="first">S</font>ystems
<font class="first">S</font>pecification
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Communicating Concurrent Kleene Algebra for Distributed Systems Specification</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Maxime Buyse (maxime /dot/ buyse /at/ polytechnique /dot/ edu) and
<a href="https://carleton.ca/jaskolka/">Jason Jaskolka</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-08-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Communicating Concurrent Kleene Algebra (C²KA) is a mathematical
framework for capturing the communicating and concurrent behaviour of
agents in distributed systems. It extends Hoare et al.'s
Concurrent Kleene Algebra (CKA) with communication actions through the
notions of stimuli and shared environments. C²KA has applications in
studying system-level properties of distributed systems such as
safety, security, and reliability. In this work, we formalize results
about C²KA and its application for distributed systems specification.
We first formalize the stimulus structure and behaviour structure
(CKA). Next, we combine them to formalize C²KA and its properties.
Then, we formalize notions and properties related to the topology of
distributed systems and the potential for communication via stimuli
and via shared environments of agents, all within the algebraic
-setting of C²KA.</div></td>
+setting of C²KA.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{C2KA_DistributedSystems-AFP,
author = {Maxime Buyse and Jason Jaskolka},
title = {Communicating Concurrent Kleene Algebra for Distributed Systems Specification},
journal = {Archive of Formal Proofs},
month = aug,
year = 2019,
note = {\url{http://isa-afp.org/entries/C2KA_DistributedSystems.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/C2KA_DistributedSystems/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/C2KA_DistributedSystems/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/C2KA_DistributedSystems/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-C2KA_DistributedSystems-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-C2KA_DistributedSystems-2019-08-06.tar.gz">
afp-C2KA_DistributedSystems-2019-08-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CAVA_Automata.html b/web/entries/CAVA_Automata.html
--- a/web/entries/CAVA_Automata.html
+++ b/web/entries/CAVA_Automata.html
@@ -1,250 +1,250 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The CAVA Automata Library - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">C</font>AVA
<font class="first">A</font>utomata
<font class="first">L</font>ibrary
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The CAVA Automata Library</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-05-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We report on the graph and automata library that is used in the fully
verified LTL model checker CAVA.
As most components of CAVA use some type of graphs or automata, a common
automata library simplifies assembly of the components and reduces
redundancy.
<p>
The CAVA Automata Library provides a hierarchy of graph and automata
classes, together with some standard algorithms.
Its object oriented design allows for sharing of algorithms, theorems,
and implementations between its classes, and also simplifies extensions
of the library.
Moreover, it is integrated into the Automatic Refinement Framework,
supporting automatic refinement of the abstract automata types to
efficient data structures.
<p>
Note that the CAVA Automata Library is work in progress. Currently, it
is very specifically tailored towards the requirements of the CAVA model
checker.
Nevertheless, the formalization techniques presented here allow an
extension of the library to a wider scope. Moreover, they are not
limited to graph libraries, but apply to class hierarchies in general.
<p>
The CAVA Automata Library is described in the paper: Peter Lammich, The
-CAVA Automata Library, Isabelle Workshop 2014.</div></td>
+CAVA Automata Library, Isabelle Workshop 2014.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CAVA_Automata-AFP,
author = {Peter Lammich},
title = {The CAVA Automata Library},
journal = {Archive of Formal Proofs},
month = may,
year = 2014,
note = {\url{http://isa-afp.org/entries/CAVA_Automata.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="DFS_Framework.html">DFS_Framework</a>, <a href="Flow_Networks.html">Flow_Networks</a>, <a href="Formal_SSA.html">Formal_SSA</a>, <a href="Gabow_SCC.html">Gabow_SCC</a>, <a href="LTL_to_GBA.html">LTL_to_GBA</a>, <a href="Promela.html">Promela</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CAVA_Automata/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CAVA_Automata/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CAVA_Automata/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CAVA_Automata-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CAVA_Automata-2019-06-11.tar.gz">
afp-CAVA_Automata-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CAVA_Automata-2018-08-16.tar.gz">
afp-CAVA_Automata-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CAVA_Automata-2017-10-10.tar.gz">
afp-CAVA_Automata-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CAVA_Automata-2016-12-17.tar.gz">
afp-CAVA_Automata-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-CAVA_Automata-2016-02-22.tar.gz">
afp-CAVA_Automata-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-CAVA_Automata-2015-05-27.tar.gz">
afp-CAVA_Automata-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-CAVA_Automata-2014-08-28.tar.gz">
afp-CAVA_Automata-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CAVA_Automata-2014-05-29.tar.gz">
afp-CAVA_Automata-2014-05-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CAVA_LTL_Modelchecker.html b/web/entries/CAVA_LTL_Modelchecker.html
--- a/web/entries/CAVA_LTL_Modelchecker.html
+++ b/web/entries/CAVA_LTL_Modelchecker.html
@@ -1,256 +1,256 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Fully Verified Executable LTL Model Checker - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ully
<font class="first">V</font>erified
<font class="first">E</font>xecutable
<font class="first">L</font>TL
<font class="first">M</font>odel
<font class="first">C</font>hecker
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Fully Verified Executable LTL Model Checker</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www7.in.tum.de/~esparza/">Javier Esparza</a>,
Peter Lammich,
René Neumann (rene /dot/ neumann /at/ in /dot/ tum /dot/ de),
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>,
Alexander Schimpf (schimpfa /at/ informatik /dot/ uni-freiburg /dot/ de) and
<a href="http://www.irit.fr/~Jan-Georg.Smaus">Jan-Georg Smaus</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-05-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present an LTL model checker whose code has been completely verified
using the Isabelle theorem prover. The checker consists of over 4000
lines of ML code. The code is produced using the Isabelle Refinement
Framework, which allows us to split its correctness proof into (1) the
proof of an abstract version of the checker, consisting of a few hundred
lines of ``formalized pseudocode'', and (2) a verified refinement step
in which mathematical sets and other abstract structures are replaced by
implementations of efficient structures like red-black trees and
functional arrays. This leads to a checker that,
while still slower than unverified checkers, can already be used as a
trusted reference implementation against which advanced implementations
can be tested.
<p>
An early version of this model checker is described in the
<a href="http://www21.in.tum.de/~nipkow/pubs/cav13.html">CAV 2013 paper</a>
-with the same title.</div></td>
+with the same title.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CAVA_LTL_Modelchecker-AFP,
author = {Javier Esparza and Peter Lammich and René Neumann and Tobias Nipkow and Alexander Schimpf and Jan-Georg Smaus},
title = {A Fully Verified Executable LTL Model Checker},
journal = {Archive of Formal Proofs},
month = may,
year = 2014,
note = {\url{http://isa-afp.org/entries/CAVA_LTL_Modelchecker.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CAVA_LTL_Modelchecker/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CAVA_LTL_Modelchecker/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CAVA_LTL_Modelchecker/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CAVA_LTL_Modelchecker-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CAVA_LTL_Modelchecker-2019-06-11.tar.gz">
afp-CAVA_LTL_Modelchecker-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CAVA_LTL_Modelchecker-2018-08-16.tar.gz">
afp-CAVA_LTL_Modelchecker-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CAVA_LTL_Modelchecker-2017-10-10.tar.gz">
afp-CAVA_LTL_Modelchecker-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CAVA_LTL_Modelchecker-2016-12-17.tar.gz">
afp-CAVA_LTL_Modelchecker-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-CAVA_LTL_Modelchecker-2016-02-22.tar.gz">
afp-CAVA_LTL_Modelchecker-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-CAVA_LTL_Modelchecker-2015-05-27.tar.gz">
afp-CAVA_LTL_Modelchecker-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-CAVA_LTL_Modelchecker-2014-08-28.tar.gz">
afp-CAVA_LTL_Modelchecker-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CAVA_LTL_Modelchecker-2014-05-30.tar.gz">
afp-CAVA_LTL_Modelchecker-2014-05-30.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CAVA_LTL_Modelchecker-2014-05-29.tar.gz">
afp-CAVA_LTL_Modelchecker-2014-05-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CCS.html b/web/entries/CCS.html
--- a/web/entries/CCS.html
+++ b/web/entries/CCS.html
@@ -1,241 +1,241 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>CCS in nominal logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>CS
in
nominal
logic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">CCS in nominal logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.itu.dk/people/jebe">Jesper Bengtson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-05-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalise a large portion of CCS as described in Milner's book 'Communication and Concurrency' using the nominal datatype package in Isabelle. Our results include many of the standard theorems of bisimulation equivalence and congruence, for both weak and strong versions. One main goal of this formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterpart as possible.
+ <td class="abstract mathjax_process">We formalise a large portion of CCS as described in Milner's book 'Communication and Concurrency' using the nominal datatype package in Isabelle. Our results include many of the standard theorems of bisimulation equivalence and congruence, for both weak and strong versions. One main goal of this formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterpart as possible.
<p>
-This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.</div></td>
+This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CCS-AFP,
author = {Jesper Bengtson},
title = {CCS in nominal logic},
journal = {Archive of Formal Proofs},
month = may,
year = 2012,
note = {\url{http://isa-afp.org/entries/CCS.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CCS/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CCS/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CCS/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CCS-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CCS-2019-06-11.tar.gz">
afp-CCS-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CCS-2018-08-16.tar.gz">
afp-CCS-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CCS-2017-10-10.tar.gz">
afp-CCS-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CCS-2016-12-17.tar.gz">
afp-CCS-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-CCS-2016-02-22.tar.gz">
afp-CCS-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-CCS-2015-05-27.tar.gz">
afp-CCS-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-CCS-2014-08-28.tar.gz">
afp-CCS-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CCS-2013-12-11.tar.gz">
afp-CCS-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-CCS-2013-11-17.tar.gz">
afp-CCS-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-CCS-2013-02-16.tar.gz">
afp-CCS-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-CCS-2012-06-14.tar.gz">
afp-CCS-2012-06-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CISC-Kernel.html b/web/entries/CISC-Kernel.html
--- a/web/entries/CISC-Kernel.html
+++ b/web/entries/CISC-Kernel.html
@@ -1,253 +1,253 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formal Specification of a Generic Separation Kernel - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormal
<font class="first">S</font>pecification
of
a
<font class="first">G</font>eneric
<font class="first">S</font>eparation
<font class="first">K</font>ernel
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formal Specification of a Generic Separation Kernel</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Freek Verbeek (Freek /dot/ Verbeek /at/ ou /dot/ nl),
Sergey Tverdyshev (stv /at/ sysgo /dot/ com),
Oto Havle (oha /at/ sysgo /dot/ com),
Holger Blasum (holger /dot/ blasum /at/ sysgo /dot/ com),
Bruno Langenstein (langenstein /at/ dfki /dot/ de),
Werner Stephan (stephan /at/ dfki /dot/ de),
Yakoub Nemouchi (yakoub /dot/ nemouchi /at/ york /dot/ ac /dot/ uk),
Abderrahmane Feliachi (abderrahmane /dot/ feliachi /at/ lri /dot/ fr),
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a> and
Julien Schmaltz (Julien /dot/ Schmaltz /at/ ou /dot/ nl)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-07-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>Intransitive noninterference has been a widely studied topic in the last
few decades. Several well-established methodologies apply interactive
theorem proving to formulate a noninterference theorem over abstract
academic models. In joint work with several industrial and academic partners
throughout Europe, we are helping in the certification process of PikeOS, an
industrial separation kernel developed at SYSGO. In this process,
established theories could not be applied. We present a new generic model of
separation kernels and a new theory of intransitive noninterference. The
model is rich in detail, making it suitable for formal verification of
realistic and industrial systems such as PikeOS. Using a refinement-based
theorem proving approach, we ensure that proofs remain manageable.</p>
<p>
This document corresponds to the deliverable D31.1 of the EURO-MILS
-Project <a href="http://www.euromils.eu">http://www.euromils.eu</a>.</p></div></td>
+Project <a href="http://www.euromils.eu">http://www.euromils.eu</a>.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CISC-Kernel-AFP,
author = {Freek Verbeek and Sergey Tverdyshev and Oto Havle and Holger Blasum and Bruno Langenstein and Werner Stephan and Yakoub Nemouchi and Abderrahmane Feliachi and Burkhart Wolff and Julien Schmaltz},
title = {Formal Specification of a Generic Separation Kernel},
journal = {Archive of Formal Proofs},
month = jul,
year = 2014,
note = {\url{http://isa-afp.org/entries/CISC-Kernel.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CISC-Kernel/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CISC-Kernel/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CISC-Kernel/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CISC-Kernel-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CISC-Kernel-2019-06-11.tar.gz">
afp-CISC-Kernel-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CISC-Kernel-2018-08-16.tar.gz">
afp-CISC-Kernel-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CISC-Kernel-2017-10-10.tar.gz">
afp-CISC-Kernel-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CISC-Kernel-2016-12-17.tar.gz">
afp-CISC-Kernel-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-CISC-Kernel-2016-02-22.tar.gz">
afp-CISC-Kernel-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-CISC-Kernel-2015-05-27.tar.gz">
afp-CISC-Kernel-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-CISC-Kernel-2014-08-28.tar.gz">
afp-CISC-Kernel-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CISC-Kernel-2014-07-18.tar.gz">
afp-CISC-Kernel-2014-07-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CRDT.html b/web/entries/CRDT.html
--- a/web/entries/CRDT.html
+++ b/web/entries/CRDT.html
@@ -1,237 +1,237 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
framework
for
establishing
<font class="first">S</font>trong
<font class="first">E</font>ventual
<font class="first">C</font>onsistency
for
<font class="first">C</font>onflict-free
<font class="first">R</font>eplicated
<font class="first">D</font>atatypes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Victor B. F. Gomes (vb358 /at/ cl /dot/ cam /dot/ ac /dot/ uk),
Martin Kleppmann (mk428 /at/ cl /dot/ cam /dot/ ac /dot/ uk),
Dominic P. Mulligan (Dominic /dot/ Mulligan /at/ arm /dot/ com) and
Alastair R. Beresford (arb33 /at/ cl /dot/ cam /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-07-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In this work, we focus on the correctness of Conflict-free Replicated
Data Types (CRDTs), a class of algorithm that provides strong eventual
consistency guarantees for replicated data. We develop a modular and
reusable framework for verifying the correctness of CRDT algorithms.
We avoid correctness issues that have dogged previous mechanised
proofs in this area by including a network model in our formalisation,
and proving that our theorems hold in all possible network behaviours.
Our axiomatic network model is a standard abstraction that accurately
reflects the behaviour of real-world computer networks. Moreover, we
identify an abstract convergence theorem, a property of order
relations, which provides a formal definition of strong eventual
consistency. We then obtain the first machine-checked correctness
theorems for three concrete CRDTs: the Replicated Growable Array, the
-Observed-Remove Set, and an Increment-Decrement Counter.</div></td>
+Observed-Remove Set, and an Increment-Decrement Counter.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CRDT-AFP,
author = {Victor B. F. Gomes and Martin Kleppmann and Dominic P. Mulligan and Alastair R. Beresford},
title = {A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes},
journal = {Archive of Formal Proofs},
month = jul,
year = 2017,
note = {\url{http://isa-afp.org/entries/CRDT.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="IMAP-CRDT.html">IMAP-CRDT</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CRDT/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CRDT/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CRDT/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CRDT-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CRDT-2019-06-11.tar.gz">
afp-CRDT-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CRDT-2018-08-16.tar.gz">
afp-CRDT-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CRDT-2017-10-10.tar.gz">
afp-CRDT-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CRDT-2017-07-07.tar.gz">
afp-CRDT-2017-07-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CYK.html b/web/entries/CYK.html
--- a/web/entries/CYK.html
+++ b/web/entries/CYK.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A formalisation of the Cocke-Younger-Kasami algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
formalisation
of
the
<font class="first">C</font>ocke-Younger-Kasami
algorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A formalisation of the Cocke-Younger-Kasami algorithm</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Maksym Bortin
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-04-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The theory provides a formalisation of the Cocke-Younger-Kasami
algorithm (CYK for short), an approach to solving the word problem
for context-free languages. CYK decides if a word is in the
language generated by a context-free grammar in Chomsky normal form.
-The formalized algorithm is executable.</div></td>
+The formalized algorithm is executable.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CYK-AFP,
author = {Maksym Bortin},
title = {A formalisation of the Cocke-Younger-Kasami algorithm},
journal = {Archive of Formal Proofs},
month = apr,
year = 2016,
note = {\url{http://isa-afp.org/entries/CYK.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CYK/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CYK/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CYK/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CYK-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CYK-2019-06-11.tar.gz">
afp-CYK-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CYK-2018-08-16.tar.gz">
afp-CYK-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CYK-2017-10-10.tar.gz">
afp-CYK-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CYK-2016-12-17.tar.gz">
afp-CYK-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-CYK-2016-04-27.tar.gz">
afp-CYK-2016-04-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CakeML.html b/web/entries/CakeML.html
--- a/web/entries/CakeML.html
+++ b/web/entries/CakeML.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>CakeML - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>akeML
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">CakeML</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a> and
Yu Zhang
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
Johannes Åman Pohjola
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-03-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
CakeML is a functional programming language with a proven-correct
compiler and runtime system. This entry contains an unofficial version
of the CakeML semantics that has been exported from the Lem
specifications to Isabelle. Additionally, there are some hand-written
theory files that adapt the exported code to Isabelle and port proofs
-from the HOL4 formalization, e.g. termination and equivalence proofs.</div></td>
+from the HOL4 formalization, e.g. termination and equivalence proofs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CakeML-AFP,
author = {Lars Hupel and Yu Zhang},
title = {CakeML},
journal = {Archive of Formal Proofs},
month = mar,
year = 2018,
note = {\url{http://isa-afp.org/entries/CakeML.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a>, <a href="IEEE_Floating_Point.html">IEEE_Floating_Point</a>, <a href="Show.html">Show</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CakeML_Codegen.html">CakeML_Codegen</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CakeML/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CakeML/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CakeML/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CakeML-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CakeML-2019-06-11.tar.gz">
afp-CakeML-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CakeML-2018-08-16.tar.gz">
afp-CakeML-2018-08-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CakeML_Codegen.html b/web/entries/CakeML_Codegen.html
--- a/web/entries/CakeML_Codegen.html
+++ b/web/entries/CakeML_Codegen.html
@@ -1,205 +1,205 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Verified Code Generator from Isabelle/HOL to CakeML - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">V</font>erified
<font class="first">C</font>ode
<font class="first">G</font>enerator
from
<font class="first">I</font>sabelle/HOL
to
<font class="first">C</font>akeML
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Verified Code Generator from Isabelle/HOL to CakeML</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-07-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry contains the formalization that accompanies my PhD thesis
(see https://lars.hupel.info/research/codegen/). I develop a verified
compilation toolchain from executable specifications in Isabelle/HOL
to CakeML abstract syntax trees. This improves over the
state-of-the-art in Isabelle by providing a trustworthy procedure for
-code generation.</div></td>
+code generation.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CakeML_Codegen-AFP,
author = {Lars Hupel},
title = {A Verified Code Generator from Isabelle/HOL to CakeML},
journal = {Archive of Formal Proofs},
month = jul,
year = 2019,
note = {\url{http://isa-afp.org/entries/CakeML_Codegen.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CakeML.html">CakeML</a>, <a href="Constructor_Funs.html">Constructor_Funs</a>, <a href="Dict_Construction.html">Dict_Construction</a>, <a href="Higher_Order_Terms.html">Higher_Order_Terms</a>, <a href="Huffman.html">Huffman</a>, <a href="Pairing_Heap.html">Pairing_Heap</a>, <a href="Root_Balanced_Tree.html">Root_Balanced_Tree</a>, <a href="Show.html">Show</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CakeML_Codegen/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CakeML_Codegen/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CakeML_Codegen/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CakeML_Codegen-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CakeML_Codegen-2019-07-11.tar.gz">
afp-CakeML_Codegen-2019-07-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Call_Arity.html b/web/entries/Call_Arity.html
--- a/web/entries/Call_Arity.html
+++ b/web/entries/Call_Arity.html
@@ -1,255 +1,255 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Safety of Call Arity - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">S</font>afety
of
<font class="first">C</font>all
<font class="first">A</font>rity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Safety of Call Arity</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Joachim Breitner (joachim /at/ cis /dot/ upenn /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-02-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the Call Arity analysis, as implemented in GHC, and prove
both functional correctness and, more interestingly, safety (i.e. the
transformation does not increase allocation).
<p>
We use the syntax and denotational semantics from the entry
"Launchbury", where we formalized Launchbury's natural semantics for
lazy evaluation.
<p>
The functional correctness of Call Arity is proved with regard to that
denotational semantics. The operational properties are shown with
regard to a small-step semantics akin to Sestoft's mark 1 machine,
which we prove to be equivalent to Launchbury's semantics.
<p>
We use Christian Urban's Nominal2 package to define our terms and make
use of Brian Huffman's HOLCF package for the domain-theoretical
-aspects of the development.</div></td>
+aspects of the development.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-03-16]: This entry now builds on top of the Launchbury entry,
and the equivalency proof of the natural and the small-step semantics
was added.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Call_Arity-AFP,
author = {Joachim Breitner},
title = {The Safety of Call Arity},
journal = {Archive of Formal Proofs},
month = feb,
year = 2015,
note = {\url{http://isa-afp.org/entries/Call_Arity.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Launchbury.html">Launchbury</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Call_Arity/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Call_Arity/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Call_Arity/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Call_Arity-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Call_Arity-2019-06-11.tar.gz">
afp-Call_Arity-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Call_Arity-2018-08-16.tar.gz">
afp-Call_Arity-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Call_Arity-2017-10-10.tar.gz">
afp-Call_Arity-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Call_Arity-2016-12-17.tar.gz">
afp-Call_Arity-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Call_Arity-2016-02-22.tar.gz">
afp-Call_Arity-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Call_Arity-2015-05-27.tar.gz">
afp-Call_Arity-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Call_Arity-2015-05-11.tar.gz">
afp-Call_Arity-2015-05-11.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Call_Arity-2015-02-21.tar.gz">
afp-Call_Arity-2015-02-21.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Call_Arity-2015-02-20.tar.gz">
afp-Call_Arity-2015-02-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Card_Equiv_Relations.html b/web/entries/Card_Equiv_Relations.html
--- a/web/entries/Card_Equiv_Relations.html
+++ b/web/entries/Card_Equiv_Relations.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Cardinality of Equivalence Relations - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ardinality
of
<font class="first">E</font>quivalence
<font class="first">R</font>elations
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Cardinality of Equivalence Relations</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides formulae for counting the number of equivalence
relations and partial equivalence relations over a finite carrier set
with given cardinality. To count the number of equivalence relations,
we provide bijections between equivalence relations and set
partitions, and then transfer the main results of the two AFP entries,
Cardinality of Set Partitions and Spivey's Generalized Recurrence for
Bell Numbers, to theorems on equivalence relations. To count the
number of partial equivalence relations, we observe that counting
partial equivalence relations over a set A is equivalent to counting
all equivalence relations over all subsets of the set A. From this
observation and the results on equivalence relations, we show that the
cardinality of partial equivalence relations over a finite set of
-cardinality n is equal to the n+1-th Bell number.</div></td>
+cardinality n is equal to the n+1-th Bell number.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Card_Equiv_Relations-AFP,
author = {Lukas Bulwahn},
title = {Cardinality of Equivalence Relations},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/Card_Equiv_Relations.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Bell_Numbers_Spivey.html">Bell_Numbers_Spivey</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Card_Equiv_Relations/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Card_Equiv_Relations/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Card_Equiv_Relations/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Card_Equiv_Relations-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Card_Equiv_Relations-2019-06-11.tar.gz">
afp-Card_Equiv_Relations-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Card_Equiv_Relations-2018-08-16.tar.gz">
afp-Card_Equiv_Relations-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Card_Equiv_Relations-2017-10-10.tar.gz">
afp-Card_Equiv_Relations-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Card_Equiv_Relations-2016-12-17.tar.gz">
afp-Card_Equiv_Relations-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Card_Equiv_Relations-2016-05-24.tar.gz">
afp-Card_Equiv_Relations-2016-05-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Card_Multisets.html b/web/entries/Card_Multisets.html
--- a/web/entries/Card_Multisets.html
+++ b/web/entries/Card_Multisets.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Cardinality of Multisets - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ardinality
of
<font class="first">M</font>ultisets
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Cardinality of Multisets</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry provides three lemmas to count the number of multisets
of a given size and finite carrier set. The first lemma provides a
cardinality formula assuming that the multiset's elements are chosen
from the given carrier set. The latter two lemmas provide formulas
assuming that the multiset's elements also cover the given carrier
set, i.e., each element of the carrier set occurs in the multiset at
least once.</p> <p>The proof of the first lemma uses the argument of
the recurrence relation for counting multisets. The proof of the
second lemma is straightforward, and the proof of the third lemma is
easily obtained using the first cardinality lemma. A challenge for the
formalization is the derivation of the required induction rule, which
is a special combination of the induction rules for finite sets and
natural numbers. The induction rule is derived by defining a suitable
inductive predicate and transforming the predicate's induction
-rule.</p></div></td>
+rule.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Card_Multisets-AFP,
author = {Lukas Bulwahn},
title = {Cardinality of Multisets},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Card_Multisets.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Twelvefold_Way.html">Twelvefold_Way</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Card_Multisets/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Card_Multisets/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Card_Multisets/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Card_Multisets-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Card_Multisets-2019-06-11.tar.gz">
afp-Card_Multisets-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Card_Multisets-2018-08-16.tar.gz">
afp-Card_Multisets-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Card_Multisets-2017-10-10.tar.gz">
afp-Card_Multisets-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Card_Multisets-2016-12-17.tar.gz">
afp-Card_Multisets-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Card_Multisets-2016-06-26.tar.gz">
afp-Card_Multisets-2016-06-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Card_Number_Partitions.html b/web/entries/Card_Number_Partitions.html
--- a/web/entries/Card_Number_Partitions.html
+++ b/web/entries/Card_Number_Partitions.html
@@ -1,225 +1,225 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Cardinality of Number Partitions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ardinality
of
<font class="first">N</font>umber
<font class="first">P</font>artitions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Cardinality of Number Partitions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-01-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides a basic library for number partitions, defines the
two-argument partition function through its recurrence relation and relates
this partition function to the cardinality of number partitions. The main
proof shows that the recursively-defined partition function with arguments
n and k equals the cardinality of number partitions of n with exactly k parts.
The combinatorial proof follows the proof sketch of Theorem 2.4.1 in
Mazur's textbook `Combinatorics: A Guided Tour`. This entry can serve as
a starting point for various more intrinsic properties of number partitions,
-the partition function and related recurrence relations.</div></td>
+the partition function and related recurrence relations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Card_Number_Partitions-AFP,
author = {Lukas Bulwahn},
title = {Cardinality of Number Partitions},
journal = {Archive of Formal Proofs},
month = jan,
year = 2016,
note = {\url{http://isa-afp.org/entries/Card_Number_Partitions.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Euler_Partition.html">Euler_Partition</a>, <a href="Twelvefold_Way.html">Twelvefold_Way</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Card_Number_Partitions/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Card_Number_Partitions/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Card_Number_Partitions/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Card_Number_Partitions-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Card_Number_Partitions-2019-06-11.tar.gz">
afp-Card_Number_Partitions-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Card_Number_Partitions-2018-08-16.tar.gz">
afp-Card_Number_Partitions-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Card_Number_Partitions-2017-10-10.tar.gz">
afp-Card_Number_Partitions-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Card_Number_Partitions-2016-12-17.tar.gz">
afp-Card_Number_Partitions-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Card_Number_Partitions-2016-02-22.tar.gz">
afp-Card_Number_Partitions-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Card_Number_Partitions-2016-01-14.tar.gz">
afp-Card_Number_Partitions-2016-01-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Card_Partitions.html b/web/entries/Card_Partitions.html
--- a/web/entries/Card_Partitions.html
+++ b/web/entries/Card_Partitions.html
@@ -1,227 +1,227 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Cardinality of Set Partitions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ardinality
of
<font class="first">S</font>et
<font class="first">P</font>artitions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Cardinality of Set Partitions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The theory's main theorem states that the cardinality of set partitions of
size k on a carrier set of size n is expressed by Stirling numbers of the
second kind. In Isabelle, Stirling numbers of the second kind are defined
in the AFP entry `Discrete Summation` through their well-known recurrence
relation. The main theorem relates them to the alternative definition as
cardinality of set partitions. The proof follows the simple and short
explanation in Richard P. Stanley's `Enumerative Combinatorics: Volume 1`
and Wikipedia, and unravels the full details and implicit reasoning steps
-of these explanations.</div></td>
+of these explanations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Card_Partitions-AFP,
author = {Lukas Bulwahn},
title = {Cardinality of Set Partitions},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Card_Partitions.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Discrete_Summation.html">Discrete_Summation</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Bell_Numbers_Spivey.html">Bell_Numbers_Spivey</a>, <a href="Falling_Factorial_Sum.html">Falling_Factorial_Sum</a>, <a href="Twelvefold_Way.html">Twelvefold_Way</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Card_Partitions/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Card_Partitions/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Card_Partitions/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Card_Partitions-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Card_Partitions-2019-06-11.tar.gz">
afp-Card_Partitions-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Card_Partitions-2018-08-16.tar.gz">
afp-Card_Partitions-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Card_Partitions-2017-10-10.tar.gz">
afp-Card_Partitions-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Card_Partitions-2016-12-17.tar.gz">
afp-Card_Partitions-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Card_Partitions-2016-02-22.tar.gz">
afp-Card_Partitions-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Card_Partitions-2015-12-13.tar.gz">
afp-Card_Partitions-2015-12-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Cartan_FP.html b/web/entries/Cartan_FP.html
--- a/web/entries/Cartan_FP.html
+++ b/web/entries/Cartan_FP.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Cartan Fixed Point Theorems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">C</font>artan
<font class="first">F</font>ixed
<font class="first">P</font>oint
<font class="first">T</font>heorems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Cartan Fixed Point Theorems</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-03-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The Cartan fixed point theorems concern the group of holomorphic
automorphisms on a connected open set of C<sup>n</sup>. Ciolli et al.
have formalised the one-dimensional case of these theorems in HOL
Light. This entry contains their proofs, ported to Isabelle/HOL. Thus
it addresses the authors' remark that "it would be important to write
a formal proof in a language that can be read by both humans and
-machines".</div></td>
+machines".</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Cartan_FP-AFP,
author = {Lawrence C. Paulson},
title = {The Cartan Fixed Point Theorems},
journal = {Archive of Formal Proofs},
month = mar,
year = 2016,
note = {\url{http://isa-afp.org/entries/Cartan_FP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Cartan_FP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Cartan_FP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Cartan_FP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Cartan_FP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Cartan_FP-2019-06-11.tar.gz">
afp-Cartan_FP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Cartan_FP-2018-08-16.tar.gz">
afp-Cartan_FP-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Cartan_FP-2017-10-10.tar.gz">
afp-Cartan_FP-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Cartan_FP-2016-12-17.tar.gz">
afp-Cartan_FP-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Cartan_FP-2016-03-09.tar.gz">
afp-Cartan_FP-2016-03-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Case_Labeling.html b/web/entries/Case_Labeling.html
--- a/web/entries/Case_Labeling.html
+++ b/web/entries/Case_Labeling.html
@@ -1,239 +1,239 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Generating Cases from Labeled Subgoals - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>enerating
<font class="first">C</font>ases
from
<font class="first">L</font>abeled
<font class="first">S</font>ubgoals
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Generating Cases from Labeled Subgoals</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-07-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Isabelle/Isar provides named cases to structure proofs. This article
contains an implementation of a proof method <tt>casify</tt>, which can
be used to easily extend proof tools with support for named cases. Such
a proof tool must produce labeled subgoals, which are then interpreted
by <tt>casify</tt>.
<p>
As examples, this work contains verification condition generators
producing named cases for three languages: The Hoare language from
<tt>HOL/Library</tt>, a monadic language for computations with failure
(inspired by the AutoCorres tool), and a language of conditional
-expressions. These VCGs are demonstrated by a number of example programs.</div></td>
+expressions. These VCGs are demonstrated by a number of example programs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Case_Labeling-AFP,
author = {Lars Noschinski},
title = {Generating Cases from Labeled Subgoals},
journal = {Archive of Formal Proofs},
month = jul,
year = 2015,
note = {\url{http://isa-afp.org/entries/Case_Labeling.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Planarity_Certificates.html">Planarity_Certificates</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Case_Labeling/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Case_Labeling/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Case_Labeling/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Case_Labeling-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Case_Labeling-2019-06-11.tar.gz">
afp-Case_Labeling-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Case_Labeling-2018-08-16.tar.gz">
afp-Case_Labeling-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Case_Labeling-2017-10-10.tar.gz">
afp-Case_Labeling-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Case_Labeling-2016-12-17.tar.gz">
afp-Case_Labeling-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Case_Labeling-2016-02-22.tar.gz">
afp-Case_Labeling-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Case_Labeling-2015-08-17.tar.gz">
afp-Case_Labeling-2015-08-17.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Case_Labeling-2015-07-27.tar.gz">
afp-Case_Labeling-2015-07-27.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Case_Labeling-2015-07-24.tar.gz">
afp-Case_Labeling-2015-07-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Catalan_Numbers.html b/web/entries/Catalan_Numbers.html
--- a/web/entries/Catalan_Numbers.html
+++ b/web/entries/Catalan_Numbers.html
@@ -1,215 +1,215 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Catalan Numbers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>atalan
<font class="first">N</font>umbers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Catalan Numbers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>In this work, we define the Catalan numbers <em>C<sub>n</sub></em>
and prove several equivalent definitions (including some closed-form
formulae). We also show one of their applications (counting the number
of binary trees of size <em>n</em>), prove the asymptotic growth
approximation <em>C<sub>n</sub> &sim; 4<sup>n</sup> / (&radic;<span
style="text-decoration: overline">&pi;</span> &middot;
n<sup>1.5</sup>)</em>, and provide reasonably efficient executable
code to compute them.</p> <p>The derivation of the closed-form
formulae uses algebraic manipulations of the ordinary generating
function of the Catalan numbers, and the asymptotic approximation is
then done using generalised binomial coefficients and the Gamma
function. Thanks to these highly non-elementary mathematical tools,
-the proofs are very short and simple.</p></div></td>
+the proofs are very short and simple.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Catalan_Numbers-AFP,
author = {Manuel Eberl},
title = {Catalan Numbers},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Catalan_Numbers.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Landau_Symbols.html">Landau_Symbols</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Catalan_Numbers/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Catalan_Numbers/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Catalan_Numbers/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Catalan_Numbers-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Catalan_Numbers-2019-06-11.tar.gz">
afp-Catalan_Numbers-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Catalan_Numbers-2018-08-16.tar.gz">
afp-Catalan_Numbers-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Catalan_Numbers-2017-10-10.tar.gz">
afp-Catalan_Numbers-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Catalan_Numbers-2016-12-17.tar.gz">
afp-Catalan_Numbers-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Category.html b/web/entries/Category.html
--- a/web/entries/Category.html
+++ b/web/entries/Category.html
@@ -1,300 +1,300 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Category Theory to Yoneda's Lemma - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ategory
<font class="first">T</font>heory
to
<font class="first">Y</font>oneda's
<font class="first">L</font>emma
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Category Theory to Yoneda's Lemma</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://users.rsise.anu.edu.au/~okeefe/">Greg O'Keefe</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2005-04-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This development proves Yoneda's lemma and aims to be readable by humans. It only defines what is needed for the lemma: categories, functors and natural transformations. Limits, adjunctions and other important concepts are not included.</div></td>
+ <td class="abstract mathjax_process">This development proves Yoneda's lemma and aims to be readable by humans. It only defines what is needed for the lemma: categories, functors and natural transformations. Limits, adjunctions and other important concepts are not included.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2010-04-23]: The definition of the constant <tt>equinumerous</tt> was slightly too weak in the original submission and has been fixed in revision <a href="https://bitbucket.org/isa-afp/afp-devel/commits/8c2b5b3c995f">8c2b5b3c995f</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Category-AFP,
author = {Greg O'Keefe},
title = {Category Theory to Yoneda's Lemma},
journal = {Archive of Formal Proofs},
month = apr,
year = 2005,
note = {\url{http://isa-afp.org/entries/Category.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Category/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Category/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Category/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Category-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Category-2019-06-11.tar.gz">
afp-Category-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Category-2018-08-16.tar.gz">
afp-Category-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Category-2017-10-10.tar.gz">
afp-Category-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Category-2016-12-17.tar.gz">
afp-Category-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Category-2016-02-22.tar.gz">
afp-Category-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Category-2015-05-27.tar.gz">
afp-Category-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Category-2014-08-28.tar.gz">
afp-Category-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Category-2013-12-11.tar.gz">
afp-Category-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Category-2013-11-17.tar.gz">
afp-Category-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Category-2013-03-02.tar.gz">
afp-Category-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Category-2013-02-16.tar.gz">
afp-Category-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Category-2012-05-24.tar.gz">
afp-Category-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Category-2011-10-11.tar.gz">
afp-Category-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Category-2011-02-11.tar.gz">
afp-Category-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Category-2010-06-30.tar.gz">
afp-Category-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Category-2009-12-12.tar.gz">
afp-Category-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Category-2009-04-29.tar.gz">
afp-Category-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Category-2008-06-10.tar.gz">
afp-Category-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Category-2007-11-27.tar.gz">
afp-Category-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Category-2005-10-14.tar.gz">
afp-Category-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Category-2005-05-01.tar.gz">
afp-Category-2005-05-01.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Category-2005-04-21.tar.gz">
afp-Category-2005-04-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Category2.html b/web/entries/Category2.html
--- a/web/entries/Category2.html
+++ b/web/entries/Category2.html
@@ -1,260 +1,260 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Category Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ategory
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Category Theory</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Alexander Katovsky (apk32 /at/ cam /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-06-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This article presents a development of Category Theory in Isabelle/HOL. A Category is defined using records and locales. Functors and Natural Transformations are also defined. The main result that has been formalized is that the Yoneda functor is a full and faithful embedding. We also formalize the completeness of many sorted monadic equational logic. Extensive use is made of the HOLZF theory in both cases. For an informal description see <a href="http://www.srcf.ucam.org/~apk32/Isabelle/Category/Cat.pdf">here [pdf]</a>.</div></td>
+ <td class="abstract mathjax_process">This article presents a development of Category Theory in Isabelle/HOL. A Category is defined using records and locales. Functors and Natural Transformations are also defined. The main result that has been formalized is that the Yoneda functor is a full and faithful embedding. We also formalize the completeness of many sorted monadic equational logic. Extensive use is made of the HOLZF theory in both cases. For an informal description see <a href="http://www.srcf.ucam.org/~apk32/Isabelle/Category/Cat.pdf">here [pdf]</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Category2-AFP,
author = {Alexander Katovsky},
title = {Category Theory},
journal = {Archive of Formal Proofs},
month = jun,
year = 2010,
note = {\url{http://isa-afp.org/entries/Category2.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Category2/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Category2/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Category2/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Category2-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Category2-2019-06-11.tar.gz">
afp-Category2-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Category2-2018-08-16.tar.gz">
afp-Category2-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Category2-2017-10-10.tar.gz">
afp-Category2-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Category2-2016-12-17.tar.gz">
afp-Category2-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Category2-2016-02-22.tar.gz">
afp-Category2-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Category2-2015-05-27.tar.gz">
afp-Category2-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Category2-2014-08-28.tar.gz">
afp-Category2-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Category2-2013-12-11.tar.gz">
afp-Category2-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Category2-2013-11-17.tar.gz">
afp-Category2-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Category2-2013-03-02.tar.gz">
afp-Category2-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Category2-2013-02-16.tar.gz">
afp-Category2-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Category2-2012-05-24.tar.gz">
afp-Category2-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Category2-2011-10-11.tar.gz">
afp-Category2-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Category2-2011-02-11.tar.gz">
afp-Category2-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Category2-2010-06-30.tar.gz">
afp-Category2-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Category2-2010-06-21.tar.gz">
afp-Category2-2010-06-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Category3.html b/web/entries/Category3.html
--- a/web/entries/Category3.html
+++ b/web/entries/Category3.html
@@ -1,258 +1,258 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Category Theory with Adjunctions and Limits - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ategory
<font class="first">T</font>heory
with
<font class="first">A</font>djunctions
and
<font class="first">L</font>imits
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Category Theory with Adjunctions and Limits</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Eugene W. Stark (stark /at/ cs /dot/ stonybrook /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This article attempts to develop a usable framework for doing category
theory in Isabelle/HOL. Our point of view, which to some extent
differs from that of the previous AFP articles on the subject, is to
try to explore how category theory can be done efficaciously within
HOL, rather than trying to match exactly the way things are done using
a traditional approach. To this end, we define the notion of category
in an "object-free" style, in which a category is represented by a
single partial composition operation on arrows. This way of defining
categories provides some advantages in the context of HOL, including
the ability to avoid the use of records and the possibility of
defining functors and natural transformations simply as certain
functions on arrows, rather than as composite objects. We define
various constructions associated with the basic notions, including:
dual category, product category, functor category, discrete category,
free category, functor composition, and horizontal and vertical
composite of natural transformations. A "set category" locale is
defined that axiomatizes the notion "category of all sets at a type
and all functions between them," and a fairly extensive set of
properties of set categories is derived from the locale assumptions.
The notion of a set category is used to prove the Yoneda Lemma in a
general setting of a category equipped with a "hom embedding," which
maps arrows of the category to the "universe" of the set category. We
also give a treatment of adjunctions, defining adjunctions via left
and right adjoint functors, natural bijections between hom-sets, and
unit and counit natural transformations, and showing the equivalence
of these definitions. We also develop the theory of limits, including
representations of functors, diagrams and cones, and diagonal
functors. We show that right adjoint functors preserve limits, and
that limits can be constructed via products and equalizers. We
characterize the conditions under which limits exist in a set
category. We also examine the case of limits in a functor category,
ultimately culminating in a proof that the Yoneda embedding preserves
-limits.</div></td>
+limits.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-05-29]:
Revised axioms for the category locale. Introduced notation for composition and "in hom".
(revision 8318366d4575)<br>
[2020-02-15]:
Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically.
Make other minor improvements throughout.
(revision a51840d36867)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Category3-AFP,
author = {Eugene W. Stark},
title = {Category Theory with Adjunctions and Limits},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Category3.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="MonoidalCategory.html">MonoidalCategory</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Category3/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Category3/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Category3/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Category3-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Category3-2019-06-11.tar.gz">
afp-Category3-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Category3-2018-08-16.tar.gz">
afp-Category3-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Category3-2017-10-10.tar.gz">
afp-Category3-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Category3-2016-12-17.tar.gz">
afp-Category3-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Category3-2016-06-26.tar.gz">
afp-Category3-2016-06-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Cauchy.html b/web/entries/Cauchy.html
--- a/web/entries/Cauchy.html
+++ b/web/entries/Cauchy.html
@@ -1,287 +1,287 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>auchy's
<font class="first">M</font>ean
<font class="first">T</font>heorem
and
the
<font class="first">C</font>auchy-Schwarz
<font class="first">I</font>nequality
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Benjamin Porter
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2006-03-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This document presents the mechanised proofs of two popular theorems attributed to Augustin Louis Cauchy - Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality.</div></td>
+ <td class="abstract mathjax_process">This document presents the mechanised proofs of two popular theorems attributed to Augustin Louis Cauchy - Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Cauchy-AFP,
author = {Benjamin Porter},
title = {Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality},
journal = {Archive of Formal Proofs},
month = mar,
year = 2006,
note = {\url{http://isa-afp.org/entries/Cauchy.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Sqrt_Babylonian.html">Sqrt_Babylonian</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Cauchy/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Cauchy/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Cauchy/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Cauchy-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Cauchy-2019-06-11.tar.gz">
afp-Cauchy-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Cauchy-2018-08-16.tar.gz">
afp-Cauchy-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Cauchy-2017-10-10.tar.gz">
afp-Cauchy-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Cauchy-2016-12-17.tar.gz">
afp-Cauchy-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Cauchy-2016-02-22.tar.gz">
afp-Cauchy-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Cauchy-2015-05-27.tar.gz">
afp-Cauchy-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Cauchy-2014-08-28.tar.gz">
afp-Cauchy-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Cauchy-2013-12-11.tar.gz">
afp-Cauchy-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Cauchy-2013-11-17.tar.gz">
afp-Cauchy-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Cauchy-2013-02-16.tar.gz">
afp-Cauchy-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Cauchy-2012-05-24.tar.gz">
afp-Cauchy-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Cauchy-2011-10-11.tar.gz">
afp-Cauchy-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Cauchy-2011-02-11.tar.gz">
afp-Cauchy-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Cauchy-2010-06-30.tar.gz">
afp-Cauchy-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Cauchy-2009-12-12.tar.gz">
afp-Cauchy-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Cauchy-2009-04-29.tar.gz">
afp-Cauchy-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Cauchy-2008-06-10.tar.gz">
afp-Cauchy-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Cauchy-2007-11-27.tar.gz">
afp-Cauchy-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Cauchy-2006-03-14.tar.gz">
afp-Cauchy-2006-03-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Cayley_Hamilton.html b/web/entries/Cayley_Hamilton.html
--- a/web/entries/Cayley_Hamilton.html
+++ b/web/entries/Cayley_Hamilton.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Cayley-Hamilton Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">C</font>ayley-Hamilton
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Cayley-Hamilton Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://nm.wu.ac.at/nm/sadelsbe">Stephan Adelsberger</a>,
<a href="http://www.logic.at/people/hetzl/">Stefan Hetzl</a> and
Florian Pollak (florian /dot/ pollak /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-09-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This document contains a proof of the Cayley-Hamilton theorem
-based on the development of matrices in HOL/Multivariate Analysis.</div></td>
+based on the development of matrices in HOL/Multivariate Analysis.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Cayley_Hamilton-AFP,
author = {Stephan Adelsberger and Stefan Hetzl and Florian Pollak},
title = {The Cayley-Hamilton Theorem},
journal = {Archive of Formal Proofs},
month = sep,
year = 2014,
note = {\url{http://isa-afp.org/entries/Cayley_Hamilton.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Echelon_Form.html">Echelon_Form</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Cayley_Hamilton/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Cayley_Hamilton/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Cayley_Hamilton/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Cayley_Hamilton-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Cayley_Hamilton-2019-06-11.tar.gz">
afp-Cayley_Hamilton-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Cayley_Hamilton-2018-08-16.tar.gz">
afp-Cayley_Hamilton-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Cayley_Hamilton-2017-10-10.tar.gz">
afp-Cayley_Hamilton-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Cayley_Hamilton-2016-12-17.tar.gz">
afp-Cayley_Hamilton-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Cayley_Hamilton-2016-02-22.tar.gz">
afp-Cayley_Hamilton-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Cayley_Hamilton-2015-05-27.tar.gz">
afp-Cayley_Hamilton-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Cayley_Hamilton-2014-09-16.tar.gz">
afp-Cayley_Hamilton-2014-09-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Certification_Monads.html b/web/entries/Certification_Monads.html
--- a/web/entries/Certification_Monads.html
+++ b/web/entries/Certification_Monads.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Certification Monads - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ertification
<font class="first">M</font>onads
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Certification Monads</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-10-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
-          <td class="abstract"><div class="mathjax_process">This entry provides several monads intended for the development of stand-alone certifiers via code generation from Isabelle/HOL. More specifically, there are three flavors of error monads (the sum type, for the case where all monadic functions are total; an instance of the former, the so-called check monad, yielding either success without any further information or an error message; as well as a variant of the sum type that accommodates partial functions by providing an explicit bottom element) and a parser monad built on top. All of these monads are heavily used in the IsaFoR/CeTA project, which thus provides many examples of their usage.</div></td>
+          <td class="abstract mathjax_process">This entry provides several monads intended for the development of stand-alone certifiers via code generation from Isabelle/HOL. More specifically, there are three flavors of error monads (the sum type, for the case where all monadic functions are total; an instance of the former, the so-called check monad, yielding either success without any further information or an error message; as well as a variant of the sum type that accommodates partial functions by providing an explicit bottom element) and a parser monad built on top. All of these monads are heavily used in the IsaFoR/CeTA project, which thus provides many examples of their usage.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Certification_Monads-AFP,
author = {Christian Sternagel and René Thiemann},
title = {Certification Monads},
journal = {Archive of Formal Proofs},
month = oct,
year = 2014,
note = {\url{http://isa-afp.org/entries/Certification_Monads.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Partial_Function_MR.html">Partial_Function_MR</a>, <a href="Show.html">Show</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="WOOT_Strong_Eventual_Consistency.html">WOOT_Strong_Eventual_Consistency</a>, <a href="XML.html">XML</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Certification_Monads/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Certification_Monads/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Certification_Monads/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Certification_Monads-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Certification_Monads-2019-06-11.tar.gz">
afp-Certification_Monads-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Certification_Monads-2018-08-16.tar.gz">
afp-Certification_Monads-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Certification_Monads-2017-10-10.tar.gz">
afp-Certification_Monads-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Certification_Monads-2016-12-17.tar.gz">
afp-Certification_Monads-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Certification_Monads-2016-02-22.tar.gz">
afp-Certification_Monads-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Certification_Monads-2015-05-27.tar.gz">
afp-Certification_Monads-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Certification_Monads-2014-10-08.tar.gz">
afp-Certification_Monads-2014-10-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Chord_Segments.html b/web/entries/Chord_Segments.html
--- a/web/entries/Chord_Segments.html
+++ b/web/entries/Chord_Segments.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Intersecting Chords Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>ntersecting
<font class="first">C</font>hords
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Intersecting Chords Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-10-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides a geometric proof of the intersecting chords
theorem. The theorem states that when two chords intersect each other
inside a circle, the products of their segments are equal. After a
short review of existing proofs in the literature, I decided to use a
proof approach that employs reasoning about lengths of line segments,
the orthogonality of two lines and the Pythagoras Law. Hence, one can
understand the formalized proof easily with the knowledge of a few
general geometric facts that are commonly taught in high-school. This
-theorem is the 55th theorem of the Top 100 Theorems list.</div></td>
+theorem is the 55th theorem of the Top 100 Theorems list.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Chord_Segments-AFP,
author = {Lukas Bulwahn},
title = {Intersecting Chords Theorem},
journal = {Archive of Formal Proofs},
month = oct,
year = 2016,
note = {\url{http://isa-afp.org/entries/Chord_Segments.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Triangle.html">Triangle</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Chord_Segments/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Chord_Segments/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Chord_Segments/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Chord_Segments-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Chord_Segments-2019-06-11.tar.gz">
afp-Chord_Segments-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Chord_Segments-2018-08-16.tar.gz">
afp-Chord_Segments-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Chord_Segments-2017-10-10.tar.gz">
afp-Chord_Segments-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Chord_Segments-2016-12-17.tar.gz">
afp-Chord_Segments-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Chord_Segments-2016-10-11.tar.gz">
afp-Chord_Segments-2016-10-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Circus.html b/web/entries/Circus.html
--- a/web/entries/Circus.html
+++ b/web/entries/Circus.html
@@ -1,249 +1,249 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Isabelle/Circus - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>sabelle/Circus
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Isabelle/Circus</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Abderrahmane Feliachi (abderrahmane /dot/ feliachi /at/ lri /dot/ fr),
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a> and
Marie-Claude Gaudel (mcg /at/ lri /dot/ fr)
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
Makarius Wenzel (Makarius /dot/ wenzel /at/ lri /dot/ fr)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-05-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The Circus specification language combines elements for complex data and behavior specifications, using an integration of Z and CSP with a refinement calculus. Its semantics is based on Hoare and He's Unifying Theories of Programming (UTP). Isabelle/Circus is a formalization of the UTP and the Circus language in Isabelle/HOL. It contains proof rules and tactic support that allows for proofs of refinement for Circus processes (involving both data and behavioral aspects).
+ <td class="abstract mathjax_process">The Circus specification language combines elements for complex data and behavior specifications, using an integration of Z and CSP with a refinement calculus. Its semantics is based on Hoare and He's Unifying Theories of Programming (UTP). Isabelle/Circus is a formalization of the UTP and the Circus language in Isabelle/HOL. It contains proof rules and tactic support that allows for proofs of refinement for Circus processes (involving both data and behavioral aspects).
<p>
-The Isabelle/Circus environment supports a syntax for the semantic definitions which is close to textbook presentations of Circus. This article contains an extended version of the corresponding VSTTE paper together with the complete formal development of its underlying commented theories.</div></td>
+The Isabelle/Circus environment supports a syntax for the semantic definitions which is close to textbook presentations of Circus. This article contains an extended version of the corresponding VSTTE paper together with the complete formal development of its underlying commented theories.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2014-06-05]: More polishing, shorter proofs, added Circus syntax, added Makarius Wenzel as contributor.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Circus-AFP,
author = {Abderrahmane Feliachi and Burkhart Wolff and Marie-Claude Gaudel},
title = {Isabelle/Circus},
journal = {Archive of Formal Proofs},
month = may,
year = 2012,
note = {\url{http://isa-afp.org/entries/Circus.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Circus/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Circus/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Circus/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Circus-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Circus-2019-06-11.tar.gz">
afp-Circus-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Circus-2018-08-16.tar.gz">
afp-Circus-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Circus-2017-10-10.tar.gz">
afp-Circus-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Circus-2016-12-17.tar.gz">
afp-Circus-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Circus-2016-02-22.tar.gz">
afp-Circus-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Circus-2015-05-27.tar.gz">
afp-Circus-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Circus-2014-08-28.tar.gz">
afp-Circus-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Circus-2013-12-11.tar.gz">
afp-Circus-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Circus-2013-11-17.tar.gz">
afp-Circus-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Circus-2013-02-16.tar.gz">
afp-Circus-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Circus-2012-05-29.tar.gz">
afp-Circus-2012-05-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Clean.html b/web/entries/Clean.html
--- a/web/entries/Clean.html
+++ b/web/entries/Clean.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Clean - An Abstract Imperative Programming Language and its Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>lean
<font class="first">-</font>
<font class="first">A</font>n
<font class="first">A</font>bstract
<font class="first">I</font>mperative
<font class="first">P</font>rogramming
<font class="first">L</font>anguage
and
its
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Clean - An Abstract Imperative Programming Language and its Theory</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.lri.fr/~ftuong/">Frédéric Tuong</a> and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-10-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Clean is based on a simple, abstract execution model for an imperative
target language. “Abstract” is understood in contrast to “Concrete
Semantics”; alternatively, the term “shallow-style embedding” could be
used. It strives for a type-safe notion of program-variables, an
incremental construction of the typed state-space, support of
incremental verification, and open-world extensibility of new type
definitions being intertwined with the program definitions. Clean is
based on a “no-frills” state-exception monad with the usual
definitions of bind and unit for the compositional glue of state-based
computations. Clean offers conditionals and loops supporting C-like
control-flow operators such as break and return. The state-space
construction is based on the extensible record package. Direct
recursion of procedures is supported. Clean’s design strives for
extreme simplicity. It is geared towards symbolic execution and
proven-correct verification tools. The underlying libraries of this package,
however, deliberately restrict themselves to the most elementary
infrastructure for these tasks. The package is intended to serve as a
demonstrator semantic backend for Isabelle/C, or for the
-test-generation techniques.</div></td>
+test-generation techniques.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Clean-AFP,
author = {Frédéric Tuong and Burkhart Wolff},
title = {Clean - An Abstract Imperative Programming Language and its Theory},
journal = {Archive of Formal Proofs},
month = oct,
year = 2019,
note = {\url{http://isa-afp.org/entries/Clean.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Clean/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Clean/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Clean/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Clean-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Clean-2019-10-16.tar.gz">
afp-Clean-2019-10-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
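
The Clean abstract above revolves around a "no-frills" state-exception monad: a computation takes the current state and either aborts or returns a result together with an updated state, and bind threads the state through while propagating the abort. As rough orientation only, here is a minimal sketch of that bind/unit structure in plain Scala (illustrative names, not code from the AFP entry):

object StateExceptionSketch {
  // A computation reads a state and either fails (None) or returns a result
  // together with an updated state.
  type MON[A, S] = S => Option[(A, S)]

  def unit[A, S](a: A): MON[A, S] = s => Some((a, s))

  def bind[A, B, S](m: MON[A, S])(f: A => MON[B, S]): MON[B, S] =
    s => m(s) match {
      case Some((a, s1)) => f(a)(s1)   // continue with the updated state
      case None          => None       // exception: abort the whole computation
    }

  // Two primitive operations on a single integer "program variable".
  def get: MON[Int, Int]           = s => Some((s, s))
  def put(s1: Int): MON[Unit, Int] = _ => Some(((), s1))

  // Compositional glue in action: read the state, then write it back incremented.
  val incr: MON[Unit, Int] = bind(get)(n => put(n + 1))

  def main(args: Array[String]): Unit =
    println(incr(41))   // Some(((),42))
}
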
diff --git a/web/entries/ClockSynchInst.html b/web/entries/ClockSynchInst.html
--- a/web/entries/ClockSynchInst.html
+++ b/web/entries/ClockSynchInst.html
@@ -1,292 +1,292 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Instances of Schneider's generalized protocol of clock synchronization - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nstances
of
<font class="first">S</font>chneider's
generalized
protocol
of
clock
synchronization
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Instances of Schneider's generalized protocol of clock synchronization</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.cs.famaf.unc.edu.ar/~damian/">Damián Barsotti</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2006-03-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">F. B. Schneider ("Understanding protocols for Byzantine clock synchronization") generalizes a number of protocols for Byzantine fault-tolerant clock synchronization and presents a uniform proof for their correctness. In Schneider's schema, each processor maintains a local clock by periodically adjusting each value to one computed by a convergence function applied to the readings of all the clocks. Then, correctness of an algorithm, i.e. that the readings of two clocks at any time are within a fixed bound of each other, is based upon some conditions on the convergence function. To prove that a particular clock synchronization algorithm is correct it suffices to show that the convergence function used by the algorithm meets Schneider's conditions. Using the theorem prover Isabelle, we formalize the proofs that the convergence functions of two algorithms, namely, the Interactive Convergence Algorithm (ICA) of Lamport and Melliar-Smith and the Fault-tolerant Midpoint algorithm of Lundelius-Lynch, meet Schneider's conditions. Furthermore, we experiment on handling some parts of the proofs with fully automatic tools like ICS and CVC-lite. These theories are part of a joint work with Alwen Tiu and Leonor P. Nieto <a href="http://users.rsise.anu.edu.au/~tiu/clocksync.pdf">"Verification of Clock Synchronization Algorithms: Experiments on a combination of deductive tools"</a> in proceedings of AVOCS 2005. In this work the correctness of Schneider schema was also verified using Isabelle (entry <a href="GenClock.html">GenClock</a> in AFP).</div></td>
+ <td class="abstract mathjax_process">F. B. Schneider ("Understanding protocols for Byzantine clock synchronization") generalizes a number of protocols for Byzantine fault-tolerant clock synchronization and presents a uniform proof for their correctness. In Schneider's schema, each processor maintains a local clock by periodically adjusting each value to one computed by a convergence function applied to the readings of all the clocks. Then, correctness of an algorithm, i.e. that the readings of two clocks at any time are within a fixed bound of each other, is based upon some conditions on the convergence function. To prove that a particular clock synchronization algorithm is correct it suffices to show that the convergence function used by the algorithm meets Schneider's conditions. Using the theorem prover Isabelle, we formalize the proofs that the convergence functions of two algorithms, namely, the Interactive Convergence Algorithm (ICA) of Lamport and Melliar-Smith and the Fault-tolerant Midpoint algorithm of Lundelius-Lynch, meet Schneider's conditions. Furthermore, we experiment on handling some parts of the proofs with fully automatic tools like ICS and CVC-lite. These theories are part of a joint work with Alwen Tiu and Leonor P. Nieto <a href="http://users.rsise.anu.edu.au/~tiu/clocksync.pdf">"Verification of Clock Synchronization Algorithms: Experiments on a combination of deductive tools"</a> in proceedings of AVOCS 2005. In this work the correctness of Schneider schema was also verified using Isabelle (entry <a href="GenClock.html">GenClock</a> in AFP).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{ClockSynchInst-AFP,
author = {Damián Barsotti},
title = {Instances of Schneider's generalized protocol of clock synchronization},
journal = {Archive of Formal Proofs},
month = mar,
year = 2006,
note = {\url{http://isa-afp.org/entries/ClockSynchInst.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ClockSynchInst/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/ClockSynchInst/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ClockSynchInst/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-ClockSynchInst-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-ClockSynchInst-2019-06-11.tar.gz">
afp-ClockSynchInst-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-ClockSynchInst-2018-08-16.tar.gz">
afp-ClockSynchInst-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-ClockSynchInst-2017-10-10.tar.gz">
afp-ClockSynchInst-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-ClockSynchInst-2016-12-17.tar.gz">
afp-ClockSynchInst-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-ClockSynchInst-2016-02-22.tar.gz">
afp-ClockSynchInst-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-ClockSynchInst-2015-05-27.tar.gz">
afp-ClockSynchInst-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-ClockSynchInst-2014-08-28.tar.gz">
afp-ClockSynchInst-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-ClockSynchInst-2013-12-11.tar.gz">
afp-ClockSynchInst-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-ClockSynchInst-2013-11-17.tar.gz">
afp-ClockSynchInst-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-ClockSynchInst-2013-03-02.tar.gz">
afp-ClockSynchInst-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-ClockSynchInst-2013-02-16.tar.gz">
afp-ClockSynchInst-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-ClockSynchInst-2012-05-24.tar.gz">
afp-ClockSynchInst-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-ClockSynchInst-2011-10-11.tar.gz">
afp-ClockSynchInst-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-ClockSynchInst-2011-02-11.tar.gz">
afp-ClockSynchInst-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-ClockSynchInst-2010-06-30.tar.gz">
afp-ClockSynchInst-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-ClockSynchInst-2009-12-12.tar.gz">
afp-ClockSynchInst-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-ClockSynchInst-2009-04-29.tar.gz">
afp-ClockSynchInst-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-ClockSynchInst-2008-06-10.tar.gz">
afp-ClockSynchInst-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-ClockSynchInst-2007-11-27.tar.gz">
afp-ClockSynchInst-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-ClockSynchInst-2006-03-15.tar.gz">
afp-ClockSynchInst-2006-03-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Closest_Pair_Points.html b/web/entries/Closest_Pair_Points.html
--- a/web/entries/Closest_Pair_Points.html
+++ b/web/entries/Closest_Pair_Points.html
@@ -1,204 +1,204 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Closest Pair of Points Algorithms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>losest
<font class="first">P</font>air
of
<font class="first">P</font>oints
<font class="first">A</font>lgorithms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Closest Pair of Points Algorithms</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Martin Rau (martin /dot/ rau /at/ tum /dot/ de) and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-01-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides two related verified divide-and-conquer algorithms
solving the fundamental <em>Closest Pair of Points</em>
problem in Computational Geometry. Functional correctness and the
optimal running time of <em>O</em>(<em>n</em> log <em>n</em>) are
proved. Executable code is generated which is empirically competitive
-with handwritten reference implementations.</div></td>
+with handwritten reference implementations.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2020-14-04]: Incorporate Time_Monad of the AFP entry Root_Balanced_Tree.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Closest_Pair_Points-AFP,
author = {Martin Rau and Tobias Nipkow},
title = {Closest Pair of Points Algorithms},
journal = {Archive of Formal Proofs},
month = jan,
year = 2020,
note = {\url{http://isa-afp.org/entries/Closest_Pair_Points.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Akra_Bazzi.html">Akra_Bazzi</a>, <a href="Root_Balanced_Tree.html">Root_Balanced_Tree</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Closest_Pair_Points/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Closest_Pair_Points/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Closest_Pair_Points/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Closest_Pair_Points-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Closest_Pair_Points-2020-01-14.tar.gz">
afp-Closest_Pair_Points-2020-01-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
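
For orientation, the divide-and-conquer scheme referred to in the Closest_Pair_Points abstract splits the points at the median x-coordinate, recurses on both halves, and then only re-examines points inside a narrow strip around the dividing line. A rough, unverified Scala sketch of that scheme (illustrative names and simplifications, not the formalised algorithms):

object ClosestPairSketch {
  type P = (Double, Double)

  def dist(a: P, b: P): Double = math.hypot(a._1 - b._1, a._2 - b._2)

  def bruteForce(ps: Vector[P]): Double = {
    var d = Double.PositiveInfinity
    for (i <- ps.indices; j <- i + 1 until ps.length)
      d = math.min(d, dist(ps(i), ps(j)))
    d
  }

  // Precondition: ps is sorted by x-coordinate.
  def closest(ps: Vector[P]): Double =
    if (ps.length <= 3) bruteForce(ps)
    else {
      val (l, r) = ps.splitAt(ps.length / 2)
      val midX   = r.head._1
      val d      = math.min(closest(l), closest(r))
      // Only points within d of the dividing line can still improve on d;
      // sorting the strip by y keeps the number of candidate pairs small.
      val strip  = ps.filter(p => math.abs(p._1 - midX) < d).sortBy(_._2)
      var best   = d
      for (i <- strip.indices; j <- i + 1 until strip.length if strip(j)._2 - strip(i)._2 < d)
        best = math.min(best, dist(strip(i), strip(j)))
      best
    }

  def main(args: Array[String]): Unit = {
    val pts = Vector((0.0, 0.0), (3.0, 4.0), (1.0, 1.0), (5.0, 5.0)).sortBy(_._1)
    println(closest(pts))   // 1.4142135623730951, the pair (0,0)/(1,1)
  }
}
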
diff --git a/web/entries/CofGroups.html b/web/entries/CofGroups.html
--- a/web/entries/CofGroups.html
+++ b/web/entries/CofGroups.html
@@ -1,282 +1,282 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An Example of a Cofinitary Group in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>n
<font class="first">E</font>xample
of
a
<font class="first">C</font>ofinitary
<font class="first">G</font>roup
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">An Example of a Cofinitary Group in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://kasterma.net">Bart Kastermans</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-08-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize the usual proof that the group generated by the function k -> k + 1 on the integers gives rise to a cofinitary group.</div></td>
+ <td class="abstract mathjax_process">We formalize the usual proof that the group generated by the function k -> k + 1 on the integers gives rise to a cofinitary group.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CofGroups-AFP,
author = {Bart Kastermans},
title = {An Example of a Cofinitary Group in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = aug,
year = 2009,
note = {\url{http://isa-afp.org/entries/CofGroups.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CofGroups/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CofGroups/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CofGroups/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CofGroups-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CofGroups-2019-06-11.tar.gz">
afp-CofGroups-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CofGroups-2018-08-16.tar.gz">
afp-CofGroups-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CofGroups-2017-10-10.tar.gz">
afp-CofGroups-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CofGroups-2016-12-17.tar.gz">
afp-CofGroups-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-CofGroups-2016-02-22.tar.gz">
afp-CofGroups-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-CofGroups-2015-05-27.tar.gz">
afp-CofGroups-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-CofGroups-2014-08-28.tar.gz">
afp-CofGroups-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CofGroups-2013-12-11.tar.gz">
afp-CofGroups-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-CofGroups-2013-11-17.tar.gz">
afp-CofGroups-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-CofGroups-2013-03-02.tar.gz">
afp-CofGroups-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-CofGroups-2013-02-16.tar.gz">
afp-CofGroups-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-CofGroups-2012-05-24.tar.gz">
afp-CofGroups-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-CofGroups-2011-10-11.tar.gz">
afp-CofGroups-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-CofGroups-2011-02-11.tar.gz">
afp-CofGroups-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-CofGroups-2010-06-30.tar.gz">
afp-CofGroups-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-CofGroups-2009-12-12.tar.gz">
afp-CofGroups-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-CofGroups-2009-09-05.tar.gz">
afp-CofGroups-2009-09-05.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-CofGroups-2009-08-09.tar.gz">
afp-CofGroups-2009-08-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
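
For orientation, the group in the CofGroups abstract is generated by the shift s(k) = k + 1, so every element is a shift by some fixed integer n. The cofinitary property (every non-identity element has only finitely many fixed points) then comes down to the following elementary observation; this is a standard back-of-the-envelope version, not the Isabelle proof:

\[
  s^{n}(k) = k + n, \qquad
  n \neq 0 \;\Longrightarrow\;
  \{\, k \in \mathbb{Z} \mid s^{n}(k) = k \,\} = \{\, k \mid k + n = k \,\} = \emptyset ,
\]

so every non-identity element has zero, and in particular finitely many, fixed points.
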
diff --git a/web/entries/Coinductive.html b/web/entries/Coinductive.html
--- a/web/entries/Coinductive.html
+++ b/web/entries/Coinductive.html
@@ -1,310 +1,310 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Coinductive - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>oinductive
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Coinductive</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
Johannes Hölzl (hoelzl /at/ in /dot/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-02-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This article collects formalisations of general-purpose coinductive data types and sets. Currently, it contains coinductive natural numbers, coinductive lists, i.e. lazy lists or streams, infinite streams, coinductive terminated lists, coinductive resumptions, a library of operations on coinductive lists, and a version of König's lemma as an application for coinductive lists.<br>The initial theory was contributed by Paulson and Wenzel. Extensions and other coinductive formalisations of general interest are welcome.</div></td>
+ <td class="abstract mathjax_process">This article collects formalisations of general-purpose coinductive data types and sets. Currently, it contains coinductive natural numbers, coinductive lists, i.e. lazy lists or streams, infinite streams, coinductive terminated lists, coinductive resumptions, a library of operations on coinductive lists, and a version of König's lemma as an application for coinductive lists.<br>The initial theory was contributed by Paulson and Wenzel. Extensions and other coinductive formalisations of general interest are welcome.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2010-06-10]:
coinductive lists: setup for quotient package
(revision 015574f3bf3c)<br>
[2010-06-28]:
new codatatype terminated lazy lists
(revision e12de475c558)<br>
[2010-08-04]:
terminated lazy lists: setup for quotient package;
more lemmas
(revision 6ead626f1d01)<br>
[2010-08-17]:
Koenig's lemma as an example application for coinductive lists
(revision f81ce373fa96)<br>
[2011-02-01]:
lazy implementation of coinductive (terminated) lists for the code generator
(revision 6034973dce83)<br>
[2011-07-20]:
new codatatype resumption
(revision 811364c776c7)<br>
[2012-06-27]:
new codatatype stream with operations (with contributions by Peter Gammie)
(revision dd789a56473c)<br>
[2013-03-13]:
construct codatatypes with the BNF package and adjust the definitions and proofs,
setup for lifting and transfer packages
(revision f593eda5b2c0)<br>
[2013-09-20]:
stream theory uses type and operations from HOL/BNF/Examples/Stream
(revision 692809b2b262)<br>
[2014-04-03]:
ccpo structure on codatatypes used to define ldrop, ldropWhile, lfilter, lconcat as least fixpoint;
ccpo topology on coinductive lists contributed by Johannes Hölzl;
added examples
(revision 23cd8156bd42)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Coinductive-AFP,
author = {Andreas Lochbihler},
title = {Coinductive},
journal = {Archive of Formal Proofs},
month = feb,
year = 2010,
note = {\url{http://isa-afp.org/entries/Coinductive.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CakeML.html">CakeML</a>, <a href="CryptHOL.html">CryptHOL</a>, <a href="DynamicArchitectures.html">DynamicArchitectures</a>, <a href="JinjaThreads.html">JinjaThreads</a>, <a href="Lazy-Lists-II.html">Lazy-Lists-II</a>, <a href="Markov_Models.html">Markov_Models</a>, <a href="Ordered_Resolution_Prover.html">Ordered_Resolution_Prover</a>, <a href="Parity_Game.html">Parity_Game</a>, <a href="Partial_Order_Reduction.html">Partial_Order_Reduction</a>, <a href="Probabilistic_Noninterference.html">Probabilistic_Noninterference</a>, <a href="Stream_Fusion_Code.html">Stream_Fusion_Code</a>, <a href="Topology.html">Topology</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Coinductive/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Coinductive/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Coinductive/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Coinductive-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Coinductive-2019-06-11.tar.gz">
afp-Coinductive-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Coinductive-2018-08-16.tar.gz">
afp-Coinductive-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Coinductive-2017-10-10.tar.gz">
afp-Coinductive-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Coinductive-2016-12-17.tar.gz">
afp-Coinductive-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Coinductive-2016-02-22.tar.gz">
afp-Coinductive-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Coinductive-2015-05-27.tar.gz">
afp-Coinductive-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Coinductive-2014-08-28.tar.gz">
afp-Coinductive-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Coinductive-2013-12-11.tar.gz">
afp-Coinductive-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Coinductive-2013-11-17.tar.gz">
afp-Coinductive-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Coinductive-2013-03-02.tar.gz">
afp-Coinductive-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Coinductive-2013-02-16.tar.gz">
afp-Coinductive-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Coinductive-2012-05-24.tar.gz">
afp-Coinductive-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Coinductive-2011-10-11.tar.gz">
afp-Coinductive-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Coinductive-2011-02-11.tar.gz">
afp-Coinductive-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Coinductive-2010-06-30.tar.gz">
afp-Coinductive-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Coinductive-2010-02-15.tar.gz">
afp-Coinductive-2010-02-15.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Coinductive-2010-02-14.tar.gz">
afp-Coinductive-2010-02-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Coinductive_Languages.html b/web/entries/Coinductive_Languages.html
--- a/web/entries/Coinductive_Languages.html
+++ b/web/entries/Coinductive_Languages.html
@@ -1,251 +1,251 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Codatatype of Formal Languages - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">C</font>odatatype
of
<font class="first">F</font>ormal
<font class="first">L</font>anguages
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Codatatype of Formal Languages</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-11-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process"><p>We define formal languages as a codataype of infinite trees
+ <td class="abstract mathjax_process"><p>We define formal languages as a codataype of infinite trees
branching over the alphabet. Each node in such a tree indicates whether the
path to this node constitutes a word inside or outside of the language. This
codatatype is isomorphic to the set-of-lists representation of languages,
but caters for definitions by corecursion and proofs by coinduction.</p>
<p>Regular operations on languages are then defined by primitive corecursion.
A difficulty arises here, since the standard definitions of concatenation and
iteration from the coalgebraic literature are not primitively
corecursive: they require guardedness up-to union/concatenation.
Without support for up-to corecursion, these operations must be defined as a
composition of primitive ones (and proved equal to the standard
definitions). As an exercise in coinduction we also prove the axioms of
Kleene algebra for the defined regular operations.</p>
<p>Furthermore, a language for context-free grammars given by productions in
Greibach normal form and an initial nonterminal is constructed by primitive
corecursion, yielding an executable decision procedure for the word problem
-without further ado.</p></div></td>
+without further ado.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Coinductive_Languages-AFP,
author = {Dmitriy Traytel},
title = {A Codatatype of Formal Languages},
journal = {Archive of Formal Proofs},
month = nov,
year = 2013,
note = {\url{http://isa-afp.org/entries/Coinductive_Languages.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Formula_Derivatives.html">Formula_Derivatives</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Coinductive_Languages/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Coinductive_Languages/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Coinductive_Languages/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Coinductive_Languages-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Coinductive_Languages-2019-06-11.tar.gz">
afp-Coinductive_Languages-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Coinductive_Languages-2018-08-16.tar.gz">
afp-Coinductive_Languages-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Coinductive_Languages-2017-10-10.tar.gz">
afp-Coinductive_Languages-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Coinductive_Languages-2016-12-17.tar.gz">
afp-Coinductive_Languages-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Coinductive_Languages-2016-02-22.tar.gz">
afp-Coinductive_Languages-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Coinductive_Languages-2015-05-27.tar.gz">
afp-Coinductive_Languages-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Coinductive_Languages-2014-08-28.tar.gz">
afp-Coinductive_Languages-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Coinductive_Languages-2013-12-11.tar.gz">
afp-Coinductive_Languages-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Coinductive_Languages-2013-11-17.tar.gz">
afp-Coinductive_Languages-2013-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
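
The codatatype in the Coinductive_Languages abstract is the coalgebra with one Boolean marker per node ("is the word read so far in the language?") and one subtree per letter. A small, unverified Scala sketch of that view, with union defined corecursively (illustrative names, not the Isabelle theory):

object CoinductiveLanguageSketch {
  // A language is an infinite tree: a marker at the root and one subtree
  // (the derivative) per letter of the alphabet.
  final case class Lang(accepts: Boolean, deriv: Char => Lang) {
    // Membership: walk down the tree along the word, then read the marker.
    def contains(w: List[Char]): Boolean = w match {
      case Nil     => accepts
      case c :: cs => deriv(c).contains(cs)
    }
  }

  // Corecursive definitions, mirroring the coalgebraic ones.
  val empty: Lang           = Lang(false, _ => empty)
  val epsilon: Lang         = Lang(true, _ => empty)
  def single(c: Char): Lang = Lang(false, d => if (d == c) epsilon else empty)
  def union(a: Lang, b: Lang): Lang =
    Lang(a.accepts || b.accepts, c => union(a.deriv(c), b.deriv(c)))

  def main(args: Array[String]): Unit = {
    val ab = union(single('a'), single('b'))
    println(ab.contains(List('a')))       // true
    println(ab.contains(List('a', 'b')))  // false
  }
}
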
diff --git a/web/entries/Collections.html b/web/entries/Collections.html
--- a/web/entries/Collections.html
+++ b/web/entries/Collections.html
@@ -1,307 +1,307 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Collections Framework - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ollections
<font class="first">F</font>ramework
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Collections Framework</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">
Contributors:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
Thomas Tuerk
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-11-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This development provides an efficient, extensible, machine checked collections framework. The library adopts the concepts of interface, implementation and generic algorithm from object-oriented programming and implements them in Isabelle/HOL. The framework features the use of data refinement techniques to refine an abstract specification (using high-level concepts like sets) to a more concrete implementation (using collection datastructures, like red-black-trees). The code-generator of Isabelle/HOL can be used to generate efficient code.</div></td>
+ <td class="abstract mathjax_process">This development provides an efficient, extensible, machine checked collections framework. The library adopts the concepts of interface, implementation and generic algorithm from object-oriented programming and implements them in Isabelle/HOL. The framework features the use of data refinement techniques to refine an abstract specification (using high-level concepts like sets) to a more concrete implementation (using collection datastructures, like red-black-trees). The code-generator of Isabelle/HOL can be used to generate efficient code.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2010-10-08]: New Interfaces: OrderedSet, OrderedMap, List.
Fifo now implements the list interface; function names changed: put/get --> enqueue/dequeue.
New Implementations: ArrayList, ArrayHashMap, ArrayHashSet, TrieMap, TrieSet.
Invariant-free datastructures: Invariant implicitly hidden in typedef.
Record-interfaces: All operations of an interface encapsulated as record.
Examples moved to examples subdirectory.<br>
[2010-12-01]: New Interfaces: Priority Queues, Annotated Lists. Implemented by finger trees, (skew) binomial queues.<br>
[2011-10-10]: SetSpec: Added operations: sng, isSng, bexists, size_abort, diff, filter, iterate_rule_insertP
MapSpec: Added operations: sng, isSng, iterate_rule_insertP, bexists, size, size_abort, restrict,
map_image_filter, map_value_image_filter
Some maintenance changes<br>
[2012-04-25]: New iterator foundation by Tuerk. Various maintenance changes.<br>
[2012-08]: Collections V2. New features: Polymorphic iterators. Generic algorithm instantiation where required. Naming scheme changed from xx_opname to xx.opname.
A compatibility file CollectionsV1 tries to simplify porting of existing theories by providing the old naming scheme and the old monomorphic iterator locales.<br>
[2013-09]: Added Generic Collection Framework based on Autoref. The GenCF provides: Arbitrary nesting, full integration with Autoref.<br>
[2014-06]: Maintenance changes to GenCF: Optimized inj_image on list_set. op_set_cart (Cartesian product). big-Union operation. atLeastLessThan - operation ({a..&lt;b})<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Collections-AFP,
author = {Peter Lammich},
title = {Collections Framework},
journal = {Archive of Formal Proofs},
month = nov,
year = 2009,
note = {\url{http://isa-afp.org/entries/Collections.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Binomial-Heaps.html">Binomial-Heaps</a>, <a href="Finger-Trees.html">Finger-Trees</a>, <a href="Native_Word.html">Native_Word</a>, <a href="Refine_Monadic.html">Refine_Monadic</a>, <a href="Trie.html">Trie</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Abstract_Completeness.html">Abstract_Completeness</a>, <a href="Containers.html">Containers</a>, <a href="Deriving.html">Deriving</a>, <a href="Dijkstra_Shortest_Path.html">Dijkstra_Shortest_Path</a>, <a href="Formal_SSA.html">Formal_SSA</a>, <a href="JinjaThreads.html">JinjaThreads</a>, <a href="Kruskal.html">Kruskal</a>, <a href="ROBDD.html">ROBDD</a>, <a href="Separation_Logic_Imperative_HOL.html">Separation_Logic_Imperative_HOL</a>, <a href="Transition_Systems_and_Automata.html">Transition_Systems_and_Automata</a>, <a href="Transitive-Closure.html">Transitive-Closure</a>, <a href="Tree-Automata.html">Tree-Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Collections/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Collections/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Collections/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Collections-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Collections-2019-06-11.tar.gz">
afp-Collections-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Collections-2018-08-16.tar.gz">
afp-Collections-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Collections-2017-10-10.tar.gz">
afp-Collections-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Collections-2016-12-17.tar.gz">
afp-Collections-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Collections-2016-02-22.tar.gz">
afp-Collections-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Collections-2015-05-27.tar.gz">
afp-Collections-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Collections-2014-08-28.tar.gz">
afp-Collections-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Collections-2013-12-11.tar.gz">
afp-Collections-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Collections-2013-11-17.tar.gz">
afp-Collections-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Collections-2013-03-02.tar.gz">
afp-Collections-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Collections-2013-02-16.tar.gz">
afp-Collections-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Collections-2012-05-24.tar.gz">
afp-Collections-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Collections-2011-10-12.tar.gz">
afp-Collections-2011-10-12.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Collections-2011-10-11.tar.gz">
afp-Collections-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Collections-2011-02-11.tar.gz">
afp-Collections-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Collections-2010-06-30.tar.gz">
afp-Collections-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Collections-2009-12-13.tar.gz">
afp-Collections-2009-12-13.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Collections-2009-12-12.tar.gz">
afp-Collections-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Collections-2009-11-29.tar.gz">
afp-Collections-2009-11-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
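
The interface/implementation/generic-algorithm split described in the Collections abstract can be illustrated in miniature: specify the operations once, refine them with a concrete data structure, and write algorithms against the specification only. A small, unverified Scala sketch (illustrative names, a sorted list standing in for the red-black trees mentioned above):

object CollectionsSketch {
  // "Interface": an abstract specification of finite sets over A.
  trait SetSpec[S, A] {
    def empty: S
    def insert(a: A, s: S): S
    def member(a: A, s: S): Boolean
  }

  // "Implementation": a refinement by a sorted, duplicate-free list.
  def sortedListSet[A](implicit ord: Ordering[A]): SetSpec[List[A], A] =
    new SetSpec[List[A], A] {
      def empty: List[A] = Nil
      def insert(a: A, s: List[A]): List[A] = s match {
        case Nil                       => List(a)
        case h :: _ if ord.lt(a, h)    => a :: s
        case h :: _ if ord.equiv(a, h) => s
        case h :: t                    => h :: insert(a, t)
      }
      def member(a: A, s: List[A]): Boolean = s.contains(a)
    }

  // "Generic algorithm": written once against the interface,
  // usable with any implementation of it.
  def fromSeq[S, A](xs: Seq[A], ops: SetSpec[S, A]): S =
    xs.foldLeft(ops.empty)((s, a) => ops.insert(a, s))

  def main(args: Array[String]): Unit = {
    val ops = sortedListSet[Int]
    val s   = fromSeq(List(3, 1, 2, 3), ops)
    println(s)                 // List(1, 2, 3)
    println(ops.member(2, s))  // true
  }
}
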
diff --git a/web/entries/Comparison_Sort_Lower_Bound.html b/web/entries/Comparison_Sort_Lower_Bound.html
--- a/web/entries/Comparison_Sort_Lower_Bound.html
+++ b/web/entries/Comparison_Sort_Lower_Bound.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lower bound on comparison-based sorting algorithms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ower
bound
on
comparison-based
sorting
algorithms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lower bound on comparison-based sorting algorithms</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-03-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article contains a formal proof of the well-known fact
that the number of comparisons that a comparison-based sorting algorithm
needs to perform to sort a list of length <em>n</em> is at
least <em>log<sub>2</sub>&nbsp;(n!)</em>
in the worst case, i.&thinsp;e.&nbsp;<em>Ω(n log
n)</em>.</p> <p>For this purpose, a shallow
embedding for comparison-based sorting algorithms is defined: a
sorting algorithm is a recursive datatype containing either a HOL
function or a query of a comparison oracle with a continuation
containing the remaining computation. This makes it possible to force
the algorithm to use only comparisons and to track the number of
-comparisons made.</p></div></td>
+comparisons made.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Comparison_Sort_Lower_Bound-AFP,
author = {Manuel Eberl},
title = {Lower bound on comparison-based sorting algorithms},
journal = {Archive of Formal Proofs},
month = mar,
year = 2017,
note = {\url{http://isa-afp.org/entries/Comparison_Sort_Lower_Bound.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Landau_Symbols.html">Landau_Symbols</a>, <a href="List-Index.html">List-Index</a>, <a href="Stirling_Formula.html">Stirling_Formula</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Quick_Sort_Cost.html">Quick_Sort_Cost</a>, <a href="Treaps.html">Treaps</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Comparison_Sort_Lower_Bound/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Comparison_Sort_Lower_Bound/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Comparison_Sort_Lower_Bound/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Comparison_Sort_Lower_Bound-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Comparison_Sort_Lower_Bound-2019-06-11.tar.gz">
afp-Comparison_Sort_Lower_Bound-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Comparison_Sort_Lower_Bound-2018-08-16.tar.gz">
afp-Comparison_Sort_Lower_Bound-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Comparison_Sort_Lower_Bound-2017-10-10.tar.gz">
afp-Comparison_Sort_Lower_Bound-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Comparison_Sort_Lower_Bound-2017-03-16.tar.gz">
afp-Comparison_Sort_Lower_Bound-2017-03-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
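
The counting argument behind the bound in the Comparison_Sort_Lower_Bound abstract fits in one line: a strategy that only queries a comparison oracle is a binary decision tree, a tree of height h has at most 2^h leaves, and the tree must separate all n! input orders. Spelled out (the standard textbook derivation, not the formalised proof):

\[
  2^{h} \;\ge\; n! \quad\Longrightarrow\quad
  h \;\ge\; \log_{2}(n!) \;\ge\; \tfrac{n}{2}\,\log_{2}\tfrac{n}{2} \;\in\; \Omega(n \log n),
\]

using \(n! \ge (n/2)^{n/2}\), since the largest \(\lceil n/2\rceil\) factors of \(n!\) are each at least \(n/2\).
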
diff --git a/web/entries/Compiling-Exceptions-Correctly.html b/web/entries/Compiling-Exceptions-Correctly.html
--- a/web/entries/Compiling-Exceptions-Correctly.html
+++ b/web/entries/Compiling-Exceptions-Correctly.html
@@ -1,282 +1,282 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Compiling Exceptions Correctly - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ompiling
<font class="first">E</font>xceptions
<font class="first">C</font>orrectly
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Compiling Exceptions Correctly</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-07-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">An exception compilation scheme that dynamically creates and removes exception handler entries on the stack. A formalization of an article of the same name by <a href="http://www.cs.nott.ac.uk/~gmh/">Hutton</a> and Wright.</div></td>
+ <td class="abstract mathjax_process">An exception compilation scheme that dynamically creates and removes exception handler entries on the stack. A formalization of an article of the same name by <a href="http://www.cs.nott.ac.uk/~gmh/">Hutton</a> and Wright.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Compiling-Exceptions-Correctly-AFP,
author = {Tobias Nipkow},
title = {Compiling Exceptions Correctly},
journal = {Archive of Formal Proofs},
month = jul,
year = 2004,
note = {\url{http://isa-afp.org/entries/Compiling-Exceptions-Correctly.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Compiling-Exceptions-Correctly/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Compiling-Exceptions-Correctly/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Compiling-Exceptions-Correctly/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Compiling-Exceptions-Correctly-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Compiling-Exceptions-Correctly-2019-06-11.tar.gz">
afp-Compiling-Exceptions-Correctly-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Compiling-Exceptions-Correctly-2018-08-16.tar.gz">
afp-Compiling-Exceptions-Correctly-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Compiling-Exceptions-Correctly-2017-10-10.tar.gz">
afp-Compiling-Exceptions-Correctly-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Compiling-Exceptions-Correctly-2016-12-17.tar.gz">
afp-Compiling-Exceptions-Correctly-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Compiling-Exceptions-Correctly-2016-02-22.tar.gz">
afp-Compiling-Exceptions-Correctly-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Compiling-Exceptions-Correctly-2015-05-27.tar.gz">
afp-Compiling-Exceptions-Correctly-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Compiling-Exceptions-Correctly-2014-08-28.tar.gz">
afp-Compiling-Exceptions-Correctly-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Compiling-Exceptions-Correctly-2013-12-11.tar.gz">
afp-Compiling-Exceptions-Correctly-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Compiling-Exceptions-Correctly-2013-11-17.tar.gz">
afp-Compiling-Exceptions-Correctly-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Compiling-Exceptions-Correctly-2013-02-16.tar.gz">
afp-Compiling-Exceptions-Correctly-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Compiling-Exceptions-Correctly-2012-05-24.tar.gz">
afp-Compiling-Exceptions-Correctly-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Compiling-Exceptions-Correctly-2011-10-11.tar.gz">
afp-Compiling-Exceptions-Correctly-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Compiling-Exceptions-Correctly-2011-02-11.tar.gz">
afp-Compiling-Exceptions-Correctly-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Compiling-Exceptions-Correctly-2010-06-30.tar.gz">
afp-Compiling-Exceptions-Correctly-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Compiling-Exceptions-Correctly-2009-12-12.tar.gz">
afp-Compiling-Exceptions-Correctly-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Compiling-Exceptions-Correctly-2009-04-29.tar.gz">
afp-Compiling-Exceptions-Correctly-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Compiling-Exceptions-Correctly-2008-06-10.tar.gz">
afp-Compiling-Exceptions-Correctly-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Compiling-Exceptions-Correctly-2007-11-27.tar.gz">
afp-Compiling-Exceptions-Correctly-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Compiling-Exceptions-Correctly-2005-10-14.tar.gz">
afp-Compiling-Exceptions-Correctly-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Compiling-Exceptions-Correctly-2004-07-09.tar.gz">
afp-Compiling-Exceptions-Correctly-2004-07-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Complete_Non_Orders.html b/web/entries/Complete_Non_Orders.html
--- a/web/entries/Complete_Non_Orders.html
+++ b/web/entries/Complete_Non_Orders.html
@@ -1,205 +1,205 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Complete Non-Orders and Fixed Points - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>omplete
<font class="first">N</font>on-Orders
and
<font class="first">F</font>ixed
<font class="first">P</font>oints
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Complete Non-Orders and Fixed Points</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a> and
<a href="http://group-mmm.org/~dubut/">Jérémy Dubut</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-06-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We develop an Isabelle/HOL library of order-theoretic concepts, such
as various completeness conditions and fixed-point theorems. We keep
our formalization as general as possible: we reprove several
well-known results about complete orders, often without any properties
of ordering, thus complete non-orders. In particular, we generalize
the Knaster–Tarski theorem so that we ensure the existence of a
quasi-fixed point of monotone maps over complete non-orders, and show
that the set of quasi-fixed points is complete under a mild
condition—attractivity—which is implied by either antisymmetry or
transitivity. This result generalizes and strengthens a result by
Stauti and Maaden. Finally, we recover Kleene’s fixed-point theorem
for omega-complete non-orders, again using attractivity to prove that
-Kleene’s fixed points are least quasi-fixed points.</div></td>
+Kleene’s fixed points are least quasi-fixed points.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Complete_Non_Orders-AFP,
author = {Akihisa Yamada and Jérémy Dubut},
title = {Complete Non-Orders and Fixed Points},
journal = {Archive of Formal Proofs},
month = jun,
year = 2019,
note = {\url{http://isa-afp.org/entries/Complete_Non_Orders.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Complete_Non_Orders/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Complete_Non_Orders/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Complete_Non_Orders/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Complete_Non_Orders-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Complete_Non_Orders-2019-06-28.tar.gz">
afp-Complete_Non_Orders-2019-06-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Completeness.html b/web/entries/Completeness.html
--- a/web/entries/Completeness.html
+++ b/web/entries/Completeness.html
@@ -1,296 +1,296 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Completeness theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ompleteness
theorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Completeness theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
James Margetson and
Tom Ridge
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-09-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The completeness of first-order logic is proved, following the first five pages of Wainer and Wallen's chapter of the book <i>Proof Theory</i> by Aczel et al., CUP, 1992. Their presentation of formulas allows the proofs to use symmetry arguments. Margetson formalized this theorem by early 2000. The Isar conversion is thanks to Tom Ridge. A paper describing the formalization is available <a href="Completeness-paper.pdf">[pdf]</a>.</div></td>
+ <td class="abstract mathjax_process">The completeness of first-order logic is proved, following the first five pages of Wainer and Wallen's chapter of the book <i>Proof Theory</i> by Aczel et al., CUP, 1992. Their presentation of formulas allows the proofs to use symmetry arguments. Margetson formalized this theorem by early 2000. The Isar conversion is thanks to Tom Ridge. A paper describing the formalization is available <a href="Completeness-paper.pdf">[pdf]</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Completeness-AFP,
author = {James Margetson and Tom Ridge},
title = {Completeness theorem},
journal = {Archive of Formal Proofs},
month = sep,
year = 2004,
note = {\url{http://isa-afp.org/entries/Completeness.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Completeness/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Completeness/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Completeness/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Completeness-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Completeness-2019-06-11.tar.gz">
afp-Completeness-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Completeness-2018-08-16.tar.gz">
afp-Completeness-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Completeness-2017-10-10.tar.gz">
afp-Completeness-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Completeness-2016-12-17.tar.gz">
afp-Completeness-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Completeness-2016-02-22.tar.gz">
afp-Completeness-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Completeness-2015-05-27.tar.gz">
afp-Completeness-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Completeness-2014-08-28.tar.gz">
afp-Completeness-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Completeness-2013-12-11.tar.gz">
afp-Completeness-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Completeness-2013-11-17.tar.gz">
afp-Completeness-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Completeness-2013-03-02.tar.gz">
afp-Completeness-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Completeness-2013-02-16.tar.gz">
afp-Completeness-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Completeness-2012-05-24.tar.gz">
afp-Completeness-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Completeness-2011-10-11.tar.gz">
afp-Completeness-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Completeness-2011-02-11.tar.gz">
afp-Completeness-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Completeness-2010-06-30.tar.gz">
afp-Completeness-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Completeness-2009-12-12.tar.gz">
afp-Completeness-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Completeness-2009-04-29.tar.gz">
afp-Completeness-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Completeness-2008-06-10.tar.gz">
afp-Completeness-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Completeness-2007-11-27.tar.gz">
afp-Completeness-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Completeness-2005-10-14.tar.gz">
afp-Completeness-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Completeness-2005-07-22.tar.gz">
afp-Completeness-2005-07-22.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Completeness-2004-09-21.tar.gz">
afp-Completeness-2004-09-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Completeness-2004-09-20.tar.gz">
afp-Completeness-2004-09-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Complex_Geometry.html b/web/entries/Complex_Geometry.html
--- a/web/entries/Complex_Geometry.html
+++ b/web/entries/Complex_Geometry.html
@@ -1,195 +1,195 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Complex Geometry - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>omplex
<font class="first">G</font>eometry
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Complex Geometry</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Filip Marić (filip /at/ matf /dot/ bg /dot/ ac /dot/ rs) and
<a href="http://poincare.matf.bg.ac.rs/~danijela">Danijela Simić</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-12-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A formalization of geometry of complex numbers is presented.
Fundamental objects that are investigated are the complex plane
extended by a single infinite point, its objects (points, lines and
circles), and groups of transformations that act on them (e.g.,
inversions and Möbius transformations). Most objects are defined
algebraically, but correspondence with classical geometric definitions
-is shown.</div></td>
+is shown.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Complex_Geometry-AFP,
author = {Filip Marić and Danijela Simić},
title = {Complex Geometry},
journal = {Archive of Formal Proofs},
month = dec,
year = 2019,
note = {\url{http://isa-afp.org/entries/Complex_Geometry.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Poincare_Disc.html">Poincare_Disc</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Complex_Geometry/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Complex_Geometry/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Complex_Geometry/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Complex_Geometry-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Complex_Geometry-2020-01-17.tar.gz">
afp-Complex_Geometry-2020-01-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Complx.html b/web/entries/Complx.html
--- a/web/entries/Complx.html
+++ b/web/entries/Complx.html
@@ -1,245 +1,245 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>COMPLX: A Verification Framework for Concurrent Imperative Programs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>OMPLX:
<font class="first">A</font>
<font class="first">V</font>erification
<font class="first">F</font>ramework
for
<font class="first">C</font>oncurrent
<font class="first">I</font>mperative
<font class="first">P</font>rograms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">COMPLX: A Verification Framework for Concurrent Imperative Programs</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Sidney Amani,
June Andronick,
Maksym Bortin,
Corey Lewis,
Christine Rizkallah and
Joseph Tuong
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-11-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We propose a concurrency reasoning framework for imperative programs,
based on the Owicki-Gries (OG) foundational shared-variable
concurrency method. Our framework combines the approaches of
Hoare-Parallel, a formalisation of OG in Isabelle/HOL for a simple
while-language, and Simpl, a generic imperative language embedded in
Isabelle/HOL, allowing formal reasoning on C programs. We define the
Complx language, extending the syntax and semantics of Simpl with
support for parallel composition and synchronisation. We additionally
define an OG logic, which we prove sound w.r.t. the semantics, and a
verification condition generator, both supporting involved low-level
imperative constructs such as function calls and abrupt termination.
We illustrate our framework on an example that features exceptions,
guards and function calls. We aim to then target concurrent operating
systems, such as the interruptible eChronos embedded operating system
-for which we already have a model-level OG proof using Hoare-Parallel.</div></td>
+for which we already have a model-level OG proof using Hoare-Parallel.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2017-01-13]:
Improve VCG for nested parallels and sequential sections
(revision 30739dbc3dcb)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Complx-AFP,
author = {Sidney Amani and June Andronick and Maksym Bortin and Corey Lewis and Christine Rizkallah and Joseph Tuong},
title = {COMPLX: A Verification Framework for Concurrent Imperative Programs},
journal = {Archive of Formal Proofs},
month = nov,
year = 2016,
note = {\url{http://isa-afp.org/entries/Complx.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Word_Lib.html">Word_Lib</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Complx/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Complx/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Complx/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Complx-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Complx-2019-06-11.tar.gz">
afp-Complx-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Complx-2018-08-16.tar.gz">
afp-Complx-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Complx-2017-10-10.tar.gz">
afp-Complx-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Complx-2016-12-17.tar.gz">
afp-Complx-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Complx-2016-11-29.tar.gz">
afp-Complx-2016-11-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/ComponentDependencies.html b/web/entries/ComponentDependencies.html
--- a/web/entries/ComponentDependencies.html
+++ b/web/entries/ComponentDependencies.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalisation and Analysis of Component Dependencies - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalisation
and
<font class="first">A</font>nalysis
of
<font class="first">C</font>omponent
<font class="first">D</font>ependencies
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalisation and Analysis of Component Dependencies</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Maria Spichkova (maria /dot/ spichkova /at/ rmit /dot/ edu /dot/ au)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This set of theories presents a formalisation in Isabelle/HOL of data dependencies between components. The approach allows to analyse system structure oriented towards efficient checking of system: it aims at elaborating for a concrete system, which parts of the system are necessary to check a given property.</div></td>
+ <td class="abstract mathjax_process">This set of theories presents a formalisation in Isabelle/HOL of data dependencies between components. The approach allows to analyse system structure oriented towards efficient checking of system: it aims at elaborating for a concrete system, which parts of the system are necessary to check a given property.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{ComponentDependencies-AFP,
author = {Maria Spichkova},
title = {Formalisation and Analysis of Component Dependencies},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/ComponentDependencies.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ComponentDependencies/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/ComponentDependencies/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ComponentDependencies/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-ComponentDependencies-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-ComponentDependencies-2019-06-11.tar.gz">
afp-ComponentDependencies-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-ComponentDependencies-2018-08-16.tar.gz">
afp-ComponentDependencies-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-ComponentDependencies-2017-10-10.tar.gz">
afp-ComponentDependencies-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-ComponentDependencies-2016-12-17.tar.gz">
afp-ComponentDependencies-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-ComponentDependencies-2016-02-22.tar.gz">
afp-ComponentDependencies-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-ComponentDependencies-2015-05-27.tar.gz">
afp-ComponentDependencies-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-ComponentDependencies-2014-08-28.tar.gz">
afp-ComponentDependencies-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-ComponentDependencies-2014-04-29.tar.gz">
afp-ComponentDependencies-2014-04-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/ConcurrentGC.html b/web/entries/ConcurrentGC.html
--- a/web/entries/ConcurrentGC.html
+++ b/web/entries/ConcurrentGC.html
@@ -1,238 +1,238 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>elaxing
<font class="first">S</font>afely:
<font class="first">V</font>erified
<font class="first">O</font>n-the-Fly
<font class="first">G</font>arbage
<font class="first">C</font>ollection
for
x86-TSO
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a>,
<a href="https://www.cs.purdue.edu/homes/hosking/">Tony Hosking</a> and
Kai Engelhardt
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-04-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
We use ConcurrentIMP to model Schism, a state-of-the-art real-time
garbage collection scheme for weak memory, and show that it is safe
on x86-TSO.</p>
<p>
This development accompanies the PLDI 2015 paper of the same name.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{ConcurrentGC-AFP,
author = {Peter Gammie and Tony Hosking and Kai Engelhardt},
title = {Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO},
journal = {Archive of Formal Proofs},
month = apr,
year = 2015,
note = {\url{http://isa-afp.org/entries/ConcurrentGC.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="ConcurrentIMP.html">ConcurrentIMP</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ConcurrentGC/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/ConcurrentGC/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ConcurrentGC/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-ConcurrentGC-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-ConcurrentGC-2019-06-11.tar.gz">
afp-ConcurrentGC-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-ConcurrentGC-2018-08-16.tar.gz">
afp-ConcurrentGC-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-ConcurrentGC-2017-10-10.tar.gz">
afp-ConcurrentGC-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-ConcurrentGC-2016-12-17.tar.gz">
afp-ConcurrentGC-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-ConcurrentGC-2016-02-22.tar.gz">
afp-ConcurrentGC-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-ConcurrentGC-2015-05-27.tar.gz">
afp-ConcurrentGC-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-ConcurrentGC-2015-04-15.tar.gz">
afp-ConcurrentGC-2015-04-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/ConcurrentIMP.html b/web/entries/ConcurrentIMP.html
--- a/web/entries/ConcurrentIMP.html
+++ b/web/entries/ConcurrentIMP.html
@@ -1,219 +1,219 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Concurrent IMP - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>oncurrent
<font class="first">I</font>MP
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Concurrent IMP</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-04-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
ConcurrentIMP extends the small imperative language IMP with control
-non-determinism and constructs for synchronous message passing.</div></td>
+non-determinism and constructs for synchronous message passing.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{ConcurrentIMP-AFP,
author = {Peter Gammie},
title = {Concurrent IMP},
journal = {Archive of Formal Proofs},
month = apr,
year = 2015,
note = {\url{http://isa-afp.org/entries/ConcurrentIMP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="ConcurrentGC.html">ConcurrentGC</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ConcurrentIMP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/ConcurrentIMP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ConcurrentIMP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-ConcurrentIMP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-ConcurrentIMP-2019-06-11.tar.gz">
afp-ConcurrentIMP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-ConcurrentIMP-2018-08-16.tar.gz">
afp-ConcurrentIMP-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-ConcurrentIMP-2017-10-10.tar.gz">
afp-ConcurrentIMP-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-ConcurrentIMP-2016-12-17.tar.gz">
afp-ConcurrentIMP-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-ConcurrentIMP-2016-02-22.tar.gz">
afp-ConcurrentIMP-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-ConcurrentIMP-2015-05-27.tar.gz">
afp-ConcurrentIMP-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-ConcurrentIMP-2015-04-15.tar.gz">
afp-ConcurrentIMP-2015-04-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Concurrent_Ref_Alg.html b/web/entries/Concurrent_Ref_Alg.html
--- a/web/entries/Concurrent_Ref_Alg.html
+++ b/web/entries/Concurrent_Ref_Alg.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Concurrent Refinement Algebra and Rely Quotients - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>oncurrent
<font class="first">R</font>efinement
<font class="first">A</font>lgebra
and
<font class="first">R</font>ely
<font class="first">Q</font>uotients
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Concurrent Refinement Algebra and Rely Quotients</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Julian Fell (julian /dot/ fell /at/ uq /dot/ net /dot/ au),
Ian J. Hayes (ian /dot/ hayes /at/ itee /dot/ uq /dot/ edu /dot/ au) and
<a href="http://andrius.velykis.lt">Andrius Velykis</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-12-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The concurrent refinement algebra developed here is designed to
provide a foundation for rely/guarantee reasoning about concurrent
programs. The algebra builds on a complete lattice of commands by
providing sequential composition, parallel composition and a novel
weak conjunction operator. The weak conjunction operator coincides
with the lattice supremum providing its arguments are non-aborting,
but aborts if either of its arguments do. Weak conjunction provides an
abstract version of a guarantee condition as a guarantee process. We
distinguish between models that distribute sequential composition over
non-deterministic choice from the left (referred to as being
conjunctive in the refinement calculus literature) and those that
don't. Least and greatest fixed points of monotone functions are
provided to allow recursion and iteration operators to be added to the
language. Additional iteration laws are available for conjunctive
models. The rely quotient of processes <i>c</i> and
<i>i</i> is the process that, if executed in parallel with
<i>i</i> implements <i>c</i>. It represents an
-abstract version of a rely condition generalised to a process.</div></td>
+abstract version of a rely condition generalised to a process.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Concurrent_Ref_Alg-AFP,
author = {Julian Fell and Ian J. Hayes and Andrius Velykis},
title = {Concurrent Refinement Algebra and Rely Quotients},
journal = {Archive of Formal Proofs},
month = dec,
year = 2016,
note = {\url{http://isa-afp.org/entries/Concurrent_Ref_Alg.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Concurrent_Ref_Alg/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Concurrent_Ref_Alg/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Concurrent_Ref_Alg/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Concurrent_Ref_Alg-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Concurrent_Ref_Alg-2019-06-11.tar.gz">
afp-Concurrent_Ref_Alg-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Concurrent_Ref_Alg-2018-08-16.tar.gz">
afp-Concurrent_Ref_Alg-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Concurrent_Ref_Alg-2017-10-10.tar.gz">
afp-Concurrent_Ref_Alg-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Concurrent_Ref_Alg-2017-01-04.tar.gz">
afp-Concurrent_Ref_Alg-2017-01-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Concurrent_Revisions.html b/web/entries/Concurrent_Revisions.html
--- a/web/entries/Concurrent_Revisions.html
+++ b/web/entries/Concurrent_Revisions.html
@@ -1,203 +1,203 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Concurrent Revisions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">C</font>oncurrent
<font class="first">R</font>evisions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Concurrent Revisions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Roy Overbeek (Roy /dot/ Overbeek /at/ cwi /dot/ nl)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-12-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Concurrent revisions is a concurrency control model developed by
Microsoft Research. It has many interesting properties that
distinguish it from other well-known models such as transactional
memory. One of these properties is <em>determinacy</em>:
programs written within the model always produce the same outcome,
independent of scheduling activity. The concurrent revisions model has
an operational semantics, with an informal proof of determinacy. This
document contains an Isabelle/HOL formalization of this semantics and
-the proof of determinacy.</div></td>
+the proof of determinacy.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Concurrent_Revisions-AFP,
author = {Roy Overbeek},
title = {Formalization of Concurrent Revisions},
journal = {Archive of Formal Proofs},
month = dec,
year = 2018,
note = {\url{http://isa-afp.org/entries/Concurrent_Revisions.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Concurrent_Revisions/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Concurrent_Revisions/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Concurrent_Revisions/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Concurrent_Revisions-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Concurrent_Revisions-2019-06-11.tar.gz">
afp-Concurrent_Revisions-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Concurrent_Revisions-2019-01-03.tar.gz">
afp-Concurrent_Revisions-2019-01-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Consensus_Refined.html b/web/entries/Consensus_Refined.html
--- a/web/entries/Consensus_Refined.html
+++ b/web/entries/Consensus_Refined.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Consensus Refined - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>onsensus
<font class="first">R</font>efined
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Consensus Refined</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Ognjen Maric and
Christoph Sprenger (sprenger /at/ inf /dot/ ethz /dot/ ch)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-03-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Algorithms for solving the consensus problem are fundamental to
distributed computing. Despite their brevity, their
ability to operate in concurrent, asynchronous and failure-prone
environments comes at the cost of complex and subtle
behaviors. Accordingly, understanding how they work and proving
their correctness is a non-trivial endeavor where abstraction
is immensely helpful.
Moreover, research on consensus has yielded a large number of
algorithms, many of which appear to share common algorithmic
ideas. A natural question is whether and how these similarities can
be distilled and described in a precise, unified way.
In this work, we combine stepwise refinement and
lockstep models to provide an abstract and unified
view of a sizeable family of consensus algorithms. Our models
provide insights into the design choices underlying the different
-algorithms, and classify them based on those choices.</div></td>
+algorithms, and classify them based on those choices.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Consensus_Refined-AFP,
author = {Ognjen Maric and Christoph Sprenger},
title = {Consensus Refined},
journal = {Archive of Formal Proofs},
month = mar,
year = 2015,
note = {\url{http://isa-afp.org/entries/Consensus_Refined.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Heard_Of.html">Heard_Of</a>, <a href="Stuttering_Equivalence.html">Stuttering_Equivalence</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Consensus_Refined/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Consensus_Refined/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Consensus_Refined/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Consensus_Refined-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Consensus_Refined-2019-06-11.tar.gz">
afp-Consensus_Refined-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Consensus_Refined-2018-08-16.tar.gz">
afp-Consensus_Refined-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Consensus_Refined-2017-10-10.tar.gz">
afp-Consensus_Refined-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Consensus_Refined-2016-12-17.tar.gz">
afp-Consensus_Refined-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Consensus_Refined-2016-02-22.tar.gz">
afp-Consensus_Refined-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Consensus_Refined-2015-05-27.tar.gz">
afp-Consensus_Refined-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Consensus_Refined-2015-03-19.tar.gz">
afp-Consensus_Refined-2015-03-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Constructive_Cryptography.html b/web/entries/Constructive_Cryptography.html
--- a/web/entries/Constructive_Cryptography.html
+++ b/web/entries/Constructive_Cryptography.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Constructive Cryptography in HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>onstructive
<font class="first">C</font>ryptography
in
<font class="first">H</font>OL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Constructive Cryptography in HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
S. Reza Sefidgar
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-12-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Inspired by Abstract Cryptography, we extend CryptHOL, a framework for
formalizing game-based proofs, with an abstract model of Random
Systems and provide proof rules about their composition and equality.
This foundation facilitates the formalization of Constructive
Cryptography proofs, where the security of a cryptographic scheme is
realized as a special form of construction in which a complex random
system is built from simpler ones. This is a first step towards a
fully-featured compositional framework, similar to the Universal
Composability framework, that supports formalization of
-simulation-based proofs.</div></td>
+simulation-based proofs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Constructive_Cryptography-AFP,
author = {Andreas Lochbihler and S. Reza Sefidgar},
title = {Constructive Cryptography in HOL},
journal = {Archive of Formal Proofs},
month = dec,
year = 2018,
note = {\url{http://isa-afp.org/entries/Constructive_Cryptography.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CryptHOL.html">CryptHOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Constructive_Cryptography/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Constructive_Cryptography/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Constructive_Cryptography/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Constructive_Cryptography-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Constructive_Cryptography-2019-06-11.tar.gz">
afp-Constructive_Cryptography-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Constructive_Cryptography-2018-12-20.tar.gz">
afp-Constructive_Cryptography-2018-12-20.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Constructive_Cryptography-2018-12-19.tar.gz">
afp-Constructive_Cryptography-2018-12-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Constructor_Funs.html b/web/entries/Constructor_Funs.html
--- a/web/entries/Constructor_Funs.html
+++ b/web/entries/Constructor_Funs.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Constructor Functions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>onstructor
<font class="first">F</font>unctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Constructor Functions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-04-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Isabelle's code generator performs various adaptations for target
languages. Among others, constructor applications have to be fully
saturated. That means that for constructor calls occurring as arguments
to higher-order functions, synthetic lambdas have to be inserted. This
entry provides tooling to avoid this construction altogether by
-introducing constructor functions.</div></td>
+introducing constructor functions.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Constructor_Funs-AFP,
author = {Lars Hupel},
title = {Constructor Functions},
journal = {Archive of Formal Proofs},
month = apr,
year = 2017,
note = {\url{http://isa-afp.org/entries/Constructor_Funs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CakeML_Codegen.html">CakeML_Codegen</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Constructor_Funs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Constructor_Funs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Constructor_Funs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Constructor_Funs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Constructor_Funs-2019-06-11.tar.gz">
afp-Constructor_Funs-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Constructor_Funs-2018-08-16.tar.gz">
afp-Constructor_Funs-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Constructor_Funs-2017-10-10.tar.gz">
afp-Constructor_Funs-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Constructor_Funs-2017-04-20.tar.gz">
afp-Constructor_Funs-2017-04-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Containers.html b/web/entries/Containers.html
--- a/web/entries/Containers.html
+++ b/web/entries/Containers.html
@@ -1,258 +1,258 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Light-weight Containers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ight-weight
<font class="first">C</font>ontainers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Light-weight Containers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
René Thiemann (rene /dot/ thiemann /at/ uibk /dot/ ac /dot/ at)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-04-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This development provides a framework for container types like sets and maps such that generated code implements these containers with different (efficient) data structures.
Thanks to type classes and refinement during code generation, this light-weight approach can seamlessly replace Isabelle's default setup for code generation.
Heuristics automatically pick one of the available data structures depending on the type of elements to be stored, but users can also choose on their own.
The extensible design permits adding more implementations at any time.
<p>
To support arbitrary nesting of sets, we define a linear order on sets based on a linear order of the elements and provide efficient implementations.
-It even allows to compare complements with non-complements.</div></td>
+It even allows to compare complements with non-complements.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2013-07-11]: add pretty printing for sets (revision 7f3f52c5f5fa)<br>
[2013-09-20]:
provide generators for canonical type class instantiations
(revision 159f4401f4a8 by René Thiemann)<br>
[2014-07-08]: add support for going from partial functions to mappings (revision 7a6fc957e8ed)<br>
[2018-03-05]: add two application examples: depth-first search and 2SAT (revision e5e1a1da2411)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Containers-AFP,
author = {Andreas Lochbihler},
title = {Light-weight Containers},
journal = {Archive of Formal Proofs},
month = apr,
year = 2013,
note = {\url{http://isa-afp.org/entries/Containers.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a>, <a href="Collections.html">Collections</a>, <a href="Deriving.html">Deriving</a>, <a href="Finger-Trees.html">Finger-Trees</a>, <a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="MFODL_Monitor_Optimized.html">MFODL_Monitor_Optimized</a>, <a href="MFOTL_Monitor.html">MFOTL_Monitor</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Containers/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Containers/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Containers/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Containers-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Containers-2019-06-11.tar.gz">
afp-Containers-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Containers-2018-08-16.tar.gz">
afp-Containers-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Containers-2017-10-10.tar.gz">
afp-Containers-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Containers-2016-12-17.tar.gz">
afp-Containers-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Containers-2016-02-22.tar.gz">
afp-Containers-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Containers-2015-05-27.tar.gz">
afp-Containers-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Containers-2014-08-28.tar.gz">
afp-Containers-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Containers-2013-12-11.tar.gz">
afp-Containers-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Containers-2013-11-17.tar.gz">
afp-Containers-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Containers-2013-04-23.tar.gz">
afp-Containers-2013-04-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CoreC++.html b/web/entries/CoreC++.html
--- a/web/entries/CoreC++.html
+++ b/web/entries/CoreC++.html
@@ -1,278 +1,278 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>CoreC++ - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>oreC++
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">CoreC++</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2006-05-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present an operational semantics and type safety proof for multiple inheritance in C++. The semantics models the behavior of method calls, field accesses, and two forms of casts in C++ class hierarchies. For explanations see the OOPSLA 2006 paper by Wasserrab, Nipkow, Snelting and Tip.</div></td>
+ <td class="abstract mathjax_process">We present an operational semantics and type safety proof for multiple inheritance in C++. The semantics models the behavior of method calls, field accesses, and two forms of casts in C++ class hierarchies. For explanations see the OOPSLA 2006 paper by Wasserrab, Nipkow, Snelting and Tip.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CoreC++-AFP,
author = {Daniel Wasserrab},
title = {CoreC++},
journal = {Archive of Formal Proofs},
month = may,
year = 2006,
note = {\url{http://isa-afp.org/entries/CoreC++.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CoreC++/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CoreC++/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CoreC++/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CoreC++-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CoreC++-2019-06-11.tar.gz">
afp-CoreC++-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CoreC++-2018-08-16.tar.gz">
afp-CoreC++-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CoreC++-2017-10-10.tar.gz">
afp-CoreC++-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CoreC++-2016-12-17.tar.gz">
afp-CoreC++-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-CoreC++-2016-02-22.tar.gz">
afp-CoreC++-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-CoreC++-2015-05-27.tar.gz">
afp-CoreC++-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-CoreC++-2014-08-28.tar.gz">
afp-CoreC++-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CoreC++-2013-12-11.tar.gz">
afp-CoreC++-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-CoreC++-2013-11-17.tar.gz">
afp-CoreC++-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-CoreC++-2013-03-02.tar.gz">
afp-CoreC++-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-CoreC++-2013-02-16.tar.gz">
afp-CoreC++-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-CoreC++-2012-05-24.tar.gz">
afp-CoreC++-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-CoreC++-2011-10-11.tar.gz">
afp-CoreC++-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-CoreC++-2011-02-11.tar.gz">
afp-CoreC++-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-CoreC++-2010-06-30.tar.gz">
afp-CoreC++-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-CoreC++-2009-12-12.tar.gz">
afp-CoreC++-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-CoreC++-2009-04-29.tar.gz">
afp-CoreC++-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-CoreC++-2008-06-10.tar.gz">
afp-CoreC++-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-CoreC++-2007-11-27.tar.gz">
afp-CoreC++-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-CoreC++-2006-05-16.tar.gz">
afp-CoreC++-2006-05-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Core_DOM.html b/web/entries/Core_DOM.html
--- a/web/entries/Core_DOM.html
+++ b/web/entries/Core_DOM.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Formal Model of the Document Object Model - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ormal
<font class="first">M</font>odel
of
the
<font class="first">D</font>ocument
<font class="first">O</font>bject
<font class="first">M</font>odel
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Formal Model of the Document Object Model</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.brucker.ch/">Achim D. Brucker</a> and
<a href="http://www.dcs.shef.ac.uk/cgi-bin/makeperson?M.Herzberg">Michael Herzberg</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-12-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In this AFP entry, we formalize the core of the Document Object Model
(DOM). At its core, the DOM defines a tree-like data structure for
representing documents in general and HTML documents in particular. It
is the heart of any modern web browser. Formalizing the key concepts
of the DOM is a prerequisite for the formal reasoning over client-side
JavaScript programs and for the analysis of security concepts in
modern web browsers. We present a formalization of the core DOM, with
focus on the node-tree and the operations defined on node-trees, in
Isabelle/HOL. We use the formalization to verify the functional
correctness of the most important functions defined in the DOM
standard. Moreover, our formalization is 1) extensible, i.e., can be
extended without the need of re-proving already proven properties and
2) executable, i.e., we can generate executable code from our
-specification.</div></td>
+specification.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Core_DOM-AFP,
author = {Achim D. Brucker and Michael Herzberg},
title = {A Formal Model of the Document Object Model},
journal = {Archive of Formal Proofs},
month = dec,
year = 2018,
note = {\url{http://isa-afp.org/entries/Core_DOM.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Core_DOM/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Core_DOM/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Core_DOM/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Core_DOM-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Core_DOM-2019-06-11.tar.gz">
afp-Core_DOM-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Core_DOM-2019-01-07.tar.gz">
afp-Core_DOM-2019-01-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Count_Complex_Roots.html b/web/entries/Count_Complex_Roots.html
--- a/web/entries/Count_Complex_Roots.html
+++ b/web/entries/Count_Complex_Roots.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Count the Number of Complex Roots - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ount
the
<font class="first">N</font>umber
of
<font class="first">C</font>omplex
<font class="first">R</font>oots
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Count the Number of Complex Roots</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Based on evaluating Cauchy indices through remainder sequences, this
entry provides an effective procedure to count the number of complex
roots (with multiplicity) of a polynomial within a rectangle box or a
half-plane. Potential applications of this entry include certified
complex root isolation (of a polynomial) and testing the Routh-Hurwitz
stability criterion (i.e., to check whether all the roots of some
-characteristic polynomial have negative real parts).</div></td>
+characteristic polynomial have negative real parts).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Count_Complex_Roots-AFP,
author = {Wenda Li},
title = {Count the Number of Complex Roots},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Count_Complex_Roots.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Sturm_Tarski.html">Sturm_Tarski</a>, <a href="Winding_Number_Eval.html">Winding_Number_Eval</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Linear_Recurrences.html">Linear_Recurrences</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Count_Complex_Roots/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Count_Complex_Roots/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Count_Complex_Roots/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Count_Complex_Roots-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Count_Complex_Roots-2019-06-11.tar.gz">
afp-Count_Complex_Roots-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Count_Complex_Roots-2018-08-16.tar.gz">
afp-Count_Complex_Roots-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Count_Complex_Roots-2017-10-18.tar.gz">
afp-Count_Complex_Roots-2017-10-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CryptHOL.html b/web/entries/CryptHOL.html
--- a/web/entries/CryptHOL.html
+++ b/web/entries/CryptHOL.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>CryptHOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ryptHOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">CryptHOL</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>CryptHOL provides a framework for formalising cryptographic arguments
in Isabelle/HOL. It shallowly embeds a probabilistic functional
programming language in higher order logic. The language features
monadic sequencing, recursion, random sampling, failures and failure
handling, and black-box access to oracles. Oracles are probabilistic
functions which maintain hidden state between different invocations.
All operators are defined in the new semantic domain of
generative probabilistic values, a codatatype. We derive proof rules for
the operators and establish a connection with the theory of relational
parametricity. Thus, the resulting proofs are trustworthy and
comprehensible, and the framework is extensible and widely applicable.
</p><p>
The framework is used in the accompanying AFP entry "Game-based
Cryptography in HOL". There, we show-case our framework by formalizing
different game-based proofs from the literature. This formalisation
-continues the work described in the author's ESOP 2016 paper.</p></div></td>
+continues the work described in the author's ESOP 2016 paper.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CryptHOL-AFP,
author = {Andreas Lochbihler},
title = {CryptHOL},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/CryptHOL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Applicative_Lifting.html">Applicative_Lifting</a>, <a href="Coinductive.html">Coinductive</a>, <a href="Landau_Symbols.html">Landau_Symbols</a>, <a href="Monad_Normalisation.html">Monad_Normalisation</a>, <a href="Monomorphic_Monad.html">Monomorphic_Monad</a>, <a href="Probabilistic_While.html">Probabilistic_While</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Constructive_Cryptography.html">Constructive_Cryptography</a>, <a href="Game_Based_Crypto.html">Game_Based_Crypto</a>, <a href="Sigma_Commit_Crypto.html">Sigma_Commit_Crypto</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CryptHOL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CryptHOL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CryptHOL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CryptHOL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CryptHOL-2019-06-11.tar.gz">
afp-CryptHOL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CryptHOL-2018-08-16.tar.gz">
afp-CryptHOL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CryptHOL-2017-10-10.tar.gz">
afp-CryptHOL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CryptHOL-2017-05-11.tar.gz">
afp-CryptHOL-2017-05-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/CryptoBasedCompositionalProperties.html b/web/entries/CryptoBasedCompositionalProperties.html
--- a/web/entries/CryptoBasedCompositionalProperties.html
+++ b/web/entries/CryptoBasedCompositionalProperties.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Compositional Properties of Crypto-Based Components - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ompositional
<font class="first">P</font>roperties
of
<font class="first">C</font>rypto-Based
<font class="first">C</font>omponents
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Compositional Properties of Crypto-Based Components</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Maria Spichkova (maria /dot/ spichkova /at/ rmit /dot/ edu /dot/ au)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-01-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This paper presents an Isabelle/HOL set of theories which allows the specification of crypto-based components and the verification of their composition properties wrt. cryptographic aspects. We introduce a formalisation of the security property of data secrecy, the corresponding definitions and proofs. Please note that here we import the Isabelle/HOL theory ListExtras.thy, presented in the AFP entry FocusStreamsCaseStudies-AFP.</div></td>
+ <td class="abstract mathjax_process">This paper presents an Isabelle/HOL set of theories which allows the specification of crypto-based components and the verification of their composition properties wrt. cryptographic aspects. We introduce a formalisation of the security property of data secrecy, the corresponding definitions and proofs. Please note that here we import the Isabelle/HOL theory ListExtras.thy, presented in the AFP entry FocusStreamsCaseStudies-AFP.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{CryptoBasedCompositionalProperties-AFP,
author = {Maria Spichkova},
title = {Compositional Properties of Crypto-Based Components},
journal = {Archive of Formal Proofs},
month = jan,
year = 2014,
note = {\url{http://isa-afp.org/entries/CryptoBasedCompositionalProperties.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CryptoBasedCompositionalProperties/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/CryptoBasedCompositionalProperties/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/CryptoBasedCompositionalProperties/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-CryptoBasedCompositionalProperties-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-CryptoBasedCompositionalProperties-2019-06-11.tar.gz">
afp-CryptoBasedCompositionalProperties-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-CryptoBasedCompositionalProperties-2018-08-16.tar.gz">
afp-CryptoBasedCompositionalProperties-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-CryptoBasedCompositionalProperties-2017-10-10.tar.gz">
afp-CryptoBasedCompositionalProperties-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-CryptoBasedCompositionalProperties-2016-12-17.tar.gz">
afp-CryptoBasedCompositionalProperties-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-CryptoBasedCompositionalProperties-2016-02-22.tar.gz">
afp-CryptoBasedCompositionalProperties-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-CryptoBasedCompositionalProperties-2015-05-27.tar.gz">
afp-CryptoBasedCompositionalProperties-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-CryptoBasedCompositionalProperties-2014-08-28.tar.gz">
afp-CryptoBasedCompositionalProperties-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CryptoBasedCompositionalProperties-2014-01-14.tar.gz">
afp-CryptoBasedCompositionalProperties-2014-01-14.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-CryptoBasedCompositionalProperties-2014-01-11.tar.gz">
afp-CryptoBasedCompositionalProperties-2014-01-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/DFS_Framework.html b/web/entries/DFS_Framework.html
--- a/web/entries/DFS_Framework.html
+++ b/web/entries/DFS_Framework.html
@@ -1,241 +1,241 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Framework for Verifying Depth-First Search Algorithms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ramework
for
<font class="first">V</font>erifying
<font class="first">D</font>epth-First
<font class="first">S</font>earch
<font class="first">A</font>lgorithms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Framework for Verifying Depth-First Search Algorithms</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
René Neumann (rene /dot/ neumann /at/ in /dot/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-07-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
This entry presents a framework for the modular verification of
DFS-based algorithms, which is described in our [CPP-2015] paper. It
provides a generic DFS algorithm framework, that can be parameterized
with user-defined actions on certain events (e.g. discovery of a new
node). It comes with an extensible library of invariants, which can
be used to derive invariants of a specific parameterization. Using
refinement techniques, efficient implementations of the algorithms can
easily be derived. Here, the framework comes with templates for a
recursive and a tail-recursive implementation, and also with several
templates for implementing the data structures required by the DFS
algorithm. Finally, this entry contains a set of re-usable DFS-based
algorithms, which illustrate the application of the framework.
</p><p>
[CPP-2015] Peter Lammich, René Neumann: A Framework for Verifying
-Depth-First Search Algorithms. CPP 2015: 137-146</p></div></td>
+Depth-First Search Algorithms. CPP 2015: 137-146</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{DFS_Framework-AFP,
author = {Peter Lammich and René Neumann},
title = {A Framework for Verifying Depth-First Search Algorithms},
journal = {Archive of Formal Proofs},
month = jul,
year = 2016,
note = {\url{http://isa-afp.org/entries/DFS_Framework.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CAVA_Automata.html">CAVA_Automata</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Flow_Networks.html">Flow_Networks</a>, <a href="Refine_Imperative_HOL.html">Refine_Imperative_HOL</a>, <a href="Transition_Systems_and_Automata.html">Transition_Systems_and_Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DFS_Framework/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/DFS_Framework/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DFS_Framework/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-DFS_Framework-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-DFS_Framework-2020-01-14.tar.gz">
afp-DFS_Framework-2020-01-14.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-DFS_Framework-2019-06-11.tar.gz">
afp-DFS_Framework-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-DFS_Framework-2018-08-16.tar.gz">
afp-DFS_Framework-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-DFS_Framework-2017-10-10.tar.gz">
afp-DFS_Framework-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-DFS_Framework-2016-12-17.tar.gz">
afp-DFS_Framework-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-DFS_Framework-2016-07-05.tar.gz">
afp-DFS_Framework-2016-07-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/DPT-SAT-Solver.html b/web/entries/DPT-SAT-Solver.html
--- a/web/entries/DPT-SAT-Solver.html
+++ b/web/entries/DPT-SAT-Solver.html
@@ -1,274 +1,274 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Fast SAT Solver for Isabelle in Standard ML - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ast
<font class="first">S</font>AT
<font class="first">S</font>olver
for
<font class="first">I</font>sabelle
in
<font class="first">S</font>tandard
<font class="first">M</font>L
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Fast SAT Solver for Isabelle in Standard ML</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Armin Heller
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-12-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This contribution contains a fast SAT solver for Isabelle written in Standard ML. By loading the theory <tt>DPT_SAT_Solver</tt>, the SAT solver installs itself (under the name ``dptsat'') and certain Isabelle tools like Refute will start using it automatically. This is a port of the DPT (Decision Procedure Toolkit) SAT Solver written in OCaml.</div></td>
+ <td class="abstract mathjax_process">This contribution contains a fast SAT solver for Isabelle written in Standard ML. By loading the theory <tt>DPT_SAT_Solver</tt>, the SAT solver installs itself (under the name ``dptsat'') and certain Isabelle tools like Refute will start using it automatically. This is a port of the DPT (Decision Procedure Toolkit) SAT Solver written in OCaml.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{DPT-SAT-Solver-AFP,
author = {Armin Heller},
title = {A Fast SAT Solver for Isabelle in Standard ML},
journal = {Archive of Formal Proofs},
month = dec,
year = 2009,
note = {\url{http://isa-afp.org/entries/DPT-SAT-Solver.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DPT-SAT-Solver/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/DPT-SAT-Solver/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DPT-SAT-Solver/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-DPT-SAT-Solver-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-DPT-SAT-Solver-2019-06-11.tar.gz">
afp-DPT-SAT-Solver-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-DPT-SAT-Solver-2018-08-16.tar.gz">
afp-DPT-SAT-Solver-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-DPT-SAT-Solver-2017-10-10.tar.gz">
afp-DPT-SAT-Solver-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-DPT-SAT-Solver-2016-12-17.tar.gz">
afp-DPT-SAT-Solver-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-DPT-SAT-Solver-2016-02-22.tar.gz">
afp-DPT-SAT-Solver-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-DPT-SAT-Solver-2015-07-27.tar.gz">
afp-DPT-SAT-Solver-2015-07-27.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-DPT-SAT-Solver-2015-05-27.tar.gz">
afp-DPT-SAT-Solver-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-DPT-SAT-Solver-2014-08-28.tar.gz">
afp-DPT-SAT-Solver-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-DPT-SAT-Solver-2013-12-11.tar.gz">
afp-DPT-SAT-Solver-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-DPT-SAT-Solver-2013-11-17.tar.gz">
afp-DPT-SAT-Solver-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-DPT-SAT-Solver-2013-02-16.tar.gz">
afp-DPT-SAT-Solver-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-DPT-SAT-Solver-2012-05-24.tar.gz">
afp-DPT-SAT-Solver-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-DPT-SAT-Solver-2011-10-11.tar.gz">
afp-DPT-SAT-Solver-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-DPT-SAT-Solver-2011-02-11.tar.gz">
afp-DPT-SAT-Solver-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-DPT-SAT-Solver-2010-06-30.tar.gz">
afp-DPT-SAT-Solver-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-DPT-SAT-Solver-2009-12-12.tar.gz">
afp-DPT-SAT-Solver-2009-12-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/DataRefinementIBP.html b/web/entries/DataRefinementIBP.html
--- a/web/entries/DataRefinementIBP.html
+++ b/web/entries/DataRefinementIBP.html
@@ -1,278 +1,278 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Semantics and Data Refinement of Invariant Based Programs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>emantics
and
<font class="first">D</font>ata
<font class="first">R</font>efinement
of
<font class="first">I</font>nvariant
<font class="first">B</font>ased
<font class="first">P</font>rograms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Semantics and Data Refinement of Invariant Based Programs</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Viorel Preoteasa (viorel /dot/ preoteasa /at/ aalto /dot/ fi) and
<a href="http://users.abo.fi/Ralph-Johan.Back/">Ralph-Johan Back</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-05-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The invariant based programming is a technique of constructing correct programs by first identifying the basic situations (pre- and post-conditions and invariants) that can occur during the execution of the program, and then defining the transitions and proving that they preserve the invariants. Data refinement is a technique of building correct programs working on concrete datatypes as refinements of more abstract programs. In the theories presented here we formalize the predicate transformer semantics for invariant based programs and their data refinement.</div></td>
+ <td class="abstract mathjax_process">The invariant based programming is a technique of constructing correct programs by first identifying the basic situations (pre- and post-conditions and invariants) that can occur during the execution of the program, and then defining the transitions and proving that they preserve the invariants. Data refinement is a technique of building correct programs working on concrete datatypes as refinements of more abstract programs. In the theories presented here we formalize the predicate transformer semantics for invariant based programs and their data refinement.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2012-01-05]: Moved some general complete lattice properties to the AFP entry Lattice Properties.
Changed the definition of the data refinement relation to be more general and updated all corresponding theorems.
Added new syntax for demonic and angelic update statements.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{DataRefinementIBP-AFP,
author = {Viorel Preoteasa and Ralph-Johan Back},
title = {Semantics and Data Refinement of Invariant Based Programs},
journal = {Archive of Formal Proofs},
month = may,
year = 2010,
note = {\url{http://isa-afp.org/entries/DataRefinementIBP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="LatticeProperties.html">LatticeProperties</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="GraphMarkingIBP.html">GraphMarkingIBP</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DataRefinementIBP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/DataRefinementIBP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DataRefinementIBP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-DataRefinementIBP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-DataRefinementIBP-2019-06-11.tar.gz">
afp-DataRefinementIBP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-DataRefinementIBP-2018-08-16.tar.gz">
afp-DataRefinementIBP-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-DataRefinementIBP-2017-10-10.tar.gz">
afp-DataRefinementIBP-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-DataRefinementIBP-2016-12-17.tar.gz">
afp-DataRefinementIBP-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-DataRefinementIBP-2016-02-22.tar.gz">
afp-DataRefinementIBP-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-DataRefinementIBP-2015-05-27.tar.gz">
afp-DataRefinementIBP-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-DataRefinementIBP-2014-08-28.tar.gz">
afp-DataRefinementIBP-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-DataRefinementIBP-2013-12-11.tar.gz">
afp-DataRefinementIBP-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-DataRefinementIBP-2013-11-17.tar.gz">
afp-DataRefinementIBP-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-DataRefinementIBP-2013-02-16.tar.gz">
afp-DataRefinementIBP-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-DataRefinementIBP-2012-05-24.tar.gz">
afp-DataRefinementIBP-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-DataRefinementIBP-2011-10-11.tar.gz">
afp-DataRefinementIBP-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-DataRefinementIBP-2011-02-11.tar.gz">
afp-DataRefinementIBP-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-DataRefinementIBP-2010-06-30.tar.gz">
afp-DataRefinementIBP-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-DataRefinementIBP-2010-05-28.tar.gz">
afp-DataRefinementIBP-2010-05-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Datatype_Order_Generator.html b/web/entries/Datatype_Order_Generator.html
--- a/web/entries/Datatype_Order_Generator.html
+++ b/web/entries/Datatype_Order_Generator.html
@@ -1,263 +1,263 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Generating linear orders for datatypes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>enerating
linear
orders
for
datatypes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Generating linear orders for datatypes</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-08-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We provide a framework for registering automatic methods to derive
class instances of datatypes, as is possible using Haskell's ``deriving Ord, Show, ...'' feature.
<p>
We further implemented such automatic methods to derive (linear) orders or hash-functions which are
required in the Isabelle Collection Framework. Moreover, for the tactic of Huffman and Krauss to show that a
datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework.
<p>
Our formalization was performed as part of the <a href="http://cl-informatik.uibk.ac.at/software/ceta">IsaFoR/CeTA</a> project.
With our new tactic we could completely remove
tedious proofs for linear orders of two datatypes.
<p>
This development is aimed at datatypes generated by the "old_datatype"
-command.</div></td>
+command.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Datatype_Order_Generator-AFP,
author = {René Thiemann},
title = {Generating linear orders for datatypes},
journal = {Archive of Formal Proofs},
month = aug,
year = 2012,
note = {\url{http://isa-afp.org/entries/Datatype_Order_Generator.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Deriving.html">Deriving</a>, <a href="Native_Word.html">Native_Word</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Higher_Order_Terms.html">Higher_Order_Terms</a>, <a href="WOOT_Strong_Eventual_Consistency.html">WOOT_Strong_Eventual_Consistency</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Datatype_Order_Generator/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Datatype_Order_Generator/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Datatype_Order_Generator/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Datatype_Order_Generator-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Datatype_Order_Generator-2019-06-11.tar.gz">
afp-Datatype_Order_Generator-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Datatype_Order_Generator-2018-08-16.tar.gz">
afp-Datatype_Order_Generator-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Datatype_Order_Generator-2017-10-10.tar.gz">
afp-Datatype_Order_Generator-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Datatype_Order_Generator-2016-12-17.tar.gz">
afp-Datatype_Order_Generator-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Datatype_Order_Generator-2016-02-22.tar.gz">
afp-Datatype_Order_Generator-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Datatype_Order_Generator-2015-05-27.tar.gz">
afp-Datatype_Order_Generator-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Datatype_Order_Generator-2014-08-28.tar.gz">
afp-Datatype_Order_Generator-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Datatype_Order_Generator-2013-12-11.tar.gz">
afp-Datatype_Order_Generator-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Datatype_Order_Generator-2013-11-17.tar.gz">
afp-Datatype_Order_Generator-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Datatype_Order_Generator-2013-03-02.tar.gz">
afp-Datatype_Order_Generator-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Datatype_Order_Generator-2013-02-16.tar.gz">
afp-Datatype_Order_Generator-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Datatype_Order_Generator-2012-08-07.tar.gz">
afp-Datatype_Order_Generator-2012-08-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Decl_Sem_Fun_PL.html b/web/entries/Decl_Sem_Fun_PL.html
--- a/web/entries/Decl_Sem_Fun_PL.html
+++ b/web/entries/Decl_Sem_Fun_PL.html
@@ -1,225 +1,225 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Declarative Semantics for Functional Languages - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>eclarative
<font class="first">S</font>emantics
for
<font class="first">F</font>unctional
<font class="first">L</font>anguages
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Declarative Semantics for Functional Languages</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://homes.soic.indiana.edu/jsiek/">Jeremy Siek</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-07-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a semantics for an applied call-by-value lambda-calculus
that is compositional, extensional, and elementary. We present four
different views of the semantics: 1) as a relational (big-step)
semantics that is not operational but instead declarative, 2) as a
denotational semantics that does not use domain theory, 3) as a
non-deterministic interpreter, and 4) as a variant of the intersection
type systems of the Torino group. We prove that the semantics is
correct by showing that it is sound and complete with respect to
operational semantics on programs and that it is sound with respect to
contextual equivalence. We have not yet investigated whether it is
fully abstract. We demonstrate that this approach to semantics is
useful with three case studies. First, we use the semantics to prove
correctness of a compiler optimization that inlines function
application. Second, we adapt the semantics to the polymorphic
lambda-calculus extended with general recursion and prove semantic
type soundness. Third, we adapt the semantics to the call-by-value
lambda-calculus with mutable references.
<br>
-The paper that accompanies these Isabelle theories is <a href="https://arxiv.org/abs/1707.03762">available on arXiv</a>.</div></td>
+The paper that accompanies these Isabelle theories is <a href="https://arxiv.org/abs/1707.03762">available on arXiv</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Decl_Sem_Fun_PL-AFP,
author = {Jeremy Siek},
title = {Declarative Semantics for Functional Languages},
journal = {Archive of Formal Proofs},
month = jul,
year = 2017,
note = {\url{http://isa-afp.org/entries/Decl_Sem_Fun_PL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Decl_Sem_Fun_PL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Decl_Sem_Fun_PL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Decl_Sem_Fun_PL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Decl_Sem_Fun_PL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Decl_Sem_Fun_PL-2019-06-11.tar.gz">
afp-Decl_Sem_Fun_PL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Decl_Sem_Fun_PL-2018-08-16.tar.gz">
afp-Decl_Sem_Fun_PL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Decl_Sem_Fun_PL-2017-10-10.tar.gz">
afp-Decl_Sem_Fun_PL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Decl_Sem_Fun_PL-2017-07-24.tar.gz">
afp-Decl_Sem_Fun_PL-2017-07-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Decreasing-Diagrams-II.html b/web/entries/Decreasing-Diagrams-II.html
--- a/web/entries/Decreasing-Diagrams-II.html
+++ b/web/entries/Decreasing-Diagrams-II.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Decreasing Diagrams II - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>ecreasing
<font class="first">D</font>iagrams
<font class="first">I</font>I
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Decreasing Diagrams II</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Bertram Felgenhauer (int-e /at/ gmx /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-08-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory formalizes the commutation version of decreasing diagrams for Church-Rosser modulo. The proof follows Felgenhauer and van Oostrom (RTA 2013). The theory also provides important specializations, in particular van Oostrom’s conversion version (TCS 2008) of decreasing diagrams.</div></td>
+ <td class="abstract mathjax_process">This theory formalizes the commutation version of decreasing diagrams for Church-Rosser modulo. The proof follows Felgenhauer and van Oostrom (RTA 2013). The theory also provides important specializations, in particular van Oostrom’s conversion version (TCS 2008) of decreasing diagrams.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Decreasing-Diagrams-II-AFP,
author = {Bertram Felgenhauer},
title = {Decreasing Diagrams II},
journal = {Archive of Formal Proofs},
month = aug,
year = 2015,
note = {\url{http://isa-afp.org/entries/Decreasing-Diagrams-II.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a>, <a href="Open_Induction.html">Open_Induction</a>, <a href="Well_Quasi_Orders.html">Well_Quasi_Orders</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Decreasing-Diagrams-II/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Decreasing-Diagrams-II/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Decreasing-Diagrams-II/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Decreasing-Diagrams-II-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Decreasing-Diagrams-II-2019-06-11.tar.gz">
afp-Decreasing-Diagrams-II-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Decreasing-Diagrams-II-2018-08-16.tar.gz">
afp-Decreasing-Diagrams-II-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Decreasing-Diagrams-II-2017-10-10.tar.gz">
afp-Decreasing-Diagrams-II-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Decreasing-Diagrams-II-2016-12-17.tar.gz">
afp-Decreasing-Diagrams-II-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Decreasing-Diagrams-II-2016-02-22.tar.gz">
afp-Decreasing-Diagrams-II-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Decreasing-Diagrams-II-2015-08-21.tar.gz">
afp-Decreasing-Diagrams-II-2015-08-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Decreasing-Diagrams.html b/web/entries/Decreasing-Diagrams.html
--- a/web/entries/Decreasing-Diagrams.html
+++ b/web/entries/Decreasing-Diagrams.html
@@ -1,232 +1,232 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Decreasing Diagrams - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>ecreasing
<font class="first">D</font>iagrams
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Decreasing Diagrams</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/users/hzankl">Harald Zankl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-11-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory contains a formalization of decreasing diagrams showing that any locally decreasing abstract rewrite system is confluent. We consider the valley (van Oostrom, TCS 1994) and the conversion version (van Oostrom, RTA 2008) and closely follow the original proofs. As an application we prove Newman's lemma.</div></td>
+ <td class="abstract mathjax_process">This theory contains a formalization of decreasing diagrams showing that any locally decreasing abstract rewrite system is confluent. We consider the valley (van Oostrom, TCS 1994) and the conversion version (van Oostrom, RTA 2008) and closely follow the original proofs. As an application we prove Newman's lemma.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Decreasing-Diagrams-AFP,
author = {Harald Zankl},
title = {Decreasing Diagrams},
journal = {Archive of Formal Proofs},
month = nov,
year = 2013,
note = {\url{http://isa-afp.org/entries/Decreasing-Diagrams.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Decreasing-Diagrams/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Decreasing-Diagrams/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Decreasing-Diagrams/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Decreasing-Diagrams-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Decreasing-Diagrams-2019-06-11.tar.gz">
afp-Decreasing-Diagrams-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Decreasing-Diagrams-2018-08-16.tar.gz">
afp-Decreasing-Diagrams-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Decreasing-Diagrams-2017-10-10.tar.gz">
afp-Decreasing-Diagrams-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Decreasing-Diagrams-2016-12-17.tar.gz">
afp-Decreasing-Diagrams-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Decreasing-Diagrams-2016-02-22.tar.gz">
afp-Decreasing-Diagrams-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Decreasing-Diagrams-2015-05-27.tar.gz">
afp-Decreasing-Diagrams-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Decreasing-Diagrams-2014-08-28.tar.gz">
afp-Decreasing-Diagrams-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Decreasing-Diagrams-2013-12-11.tar.gz">
afp-Decreasing-Diagrams-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Decreasing-Diagrams-2013-12-02.tar.gz">
afp-Decreasing-Diagrams-2013-12-02.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Decreasing-Diagrams-2013-11-18.tar.gz">
afp-Decreasing-Diagrams-2013-11-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Deep_Learning.html b/web/entries/Deep_Learning.html
--- a/web/entries/Deep_Learning.html
+++ b/web/entries/Deep_Learning.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Expressiveness of Deep Learning - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>xpressiveness
of
<font class="first">D</font>eep
<font class="first">L</font>earning
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Expressiveness of Deep Learning</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Alexander Bentkamp (bentkamp /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-11-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
-Deep learning has had a profound impact on computer science in recent years, with applications to search engines, image recognition and language processing, bioinformatics, and more. Recently, Cohen et al. provided theoretical evidence for the superiority of deep learning over shallow learning. This formalization of their work simplifies and generalizes the original proof, while working around the limitations of the Isabelle type system. To support the formalization, I developed reusable libraries of formalized mathematics, including results about the matrix rank, the Lebesgue measure, and multivariate polynomials, as well as a library for tensor analysis.</div></td>
+ <td class="abstract mathjax_process">
+Deep learning has had a profound impact on computer science in recent years, with applications to search engines, image recognition and language processing, bioinformatics, and more. Recently, Cohen et al. provided theoretical evidence for the superiority of deep learning over shallow learning. This formalization of their work simplifies and generalizes the original proof, while working around the limitations of the Isabelle type system. To support the formalization, I developed reusable libraries of formalized mathematics, including results about the matrix rank, the Lebesgue measure, and multivariate polynomials, as well as a library for tensor analysis.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Deep_Learning-AFP,
author = {Alexander Bentkamp},
title = {Expressiveness of Deep Learning},
journal = {Archive of Formal Proofs},
month = nov,
year = 2016,
note = {\url{http://isa-afp.org/entries/Deep_Learning.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a>, <a href="Polynomial_Interpolation.html">Polynomial_Interpolation</a>, <a href="Polynomials.html">Polynomials</a>, <a href="VectorSpace.html">VectorSpace</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="QHLProver.html">QHLProver</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Deep_Learning/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Deep_Learning/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Deep_Learning/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Deep_Learning-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Deep_Learning-2019-06-11.tar.gz">
afp-Deep_Learning-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Deep_Learning-2018-08-16.tar.gz">
afp-Deep_Learning-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Deep_Learning-2017-10-10.tar.gz">
afp-Deep_Learning-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Deep_Learning-2016-12-17.tar.gz">
afp-Deep_Learning-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Deep_Learning-2016-11-10.tar.gz">
afp-Deep_Learning-2016-11-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Density_Compiler.html b/web/entries/Density_Compiler.html
--- a/web/entries/Density_Compiler.html
+++ b/web/entries/Density_Compiler.html
@@ -1,248 +1,248 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Verified Compiler for Probability Density Functions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">V</font>erified
<font class="first">C</font>ompiler
for
<font class="first">P</font>robability
<font class="first">D</font>ensity
<font class="first">F</font>unctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Verified Compiler for Probability Density Functions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>,
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-10-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<a href="https://doi.org/10.1007/978-3-642-36742-7_35">Bhat et al. [TACAS 2013]</a> developed an inductive compiler that computes
density functions for probability spaces described by programs in a
probabilistic functional language. In this work, we implement such a
compiler for a modified version of this language within the theorem prover
Isabelle and give a formal proof of its soundness w.r.t. the semantics of
the source and target language. Together with Isabelle's code generation
for inductive predicates, this yields a fully verified, executable density
compiler. The proof is done in two steps: First, an abstract compiler
working with abstract functions modelled directly in the theorem prover's
logic is defined and proved sound. Then, this compiler is refined to a
concrete version that returns a target-language expression.
<p>
An article with the same title and authors is published in the proceedings
of ESOP 2015.
A detailed presentation of this work can be found in the first author's
-master's thesis.</div></td>
+master's thesis.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Density_Compiler-AFP,
author = {Manuel Eberl and Johannes Hölzl and Tobias Nipkow},
title = {A Verified Compiler for Probability Density Functions},
journal = {Archive of Formal Proofs},
month = oct,
year = 2014,
note = {\url{http://isa-afp.org/entries/Density_Compiler.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Density_Compiler/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Density_Compiler/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Density_Compiler/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Density_Compiler-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Density_Compiler-2019-06-11.tar.gz">
afp-Density_Compiler-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Density_Compiler-2018-08-16.tar.gz">
afp-Density_Compiler-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Density_Compiler-2017-10-10.tar.gz">
afp-Density_Compiler-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Density_Compiler-2016-12-17.tar.gz">
afp-Density_Compiler-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Density_Compiler-2016-02-22.tar.gz">
afp-Density_Compiler-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Density_Compiler-2015-05-27.tar.gz">
afp-Density_Compiler-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Density_Compiler-2014-12-22.tar.gz">
afp-Density_Compiler-2014-12-22.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Density_Compiler-2014-10-09.tar.gz">
afp-Density_Compiler-2014-10-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Dependent_SIFUM_Refinement.html b/web/entries/Dependent_SIFUM_Refinement.html
--- a/web/entries/Dependent_SIFUM_Refinement.html
+++ b/web/entries/Dependent_SIFUM_Refinement.html
@@ -1,235 +1,235 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Compositional Security-Preserving Refinement for Concurrent Imperative Programs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ompositional
<font class="first">S</font>ecurity-Preserving
<font class="first">R</font>efinement
for
<font class="first">C</font>oncurrent
<font class="first">I</font>mperative
<font class="first">P</font>rograms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Compositional Security-Preserving Refinement for Concurrent Imperative Programs</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://people.eng.unimelb.edu.au/tobym/">Toby Murray</a>,
Robert Sison,
Edward Pierzchalski and
Christine Rizkallah
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The paper "Compositional Verification and Refinement of Concurrent
Value-Dependent Noninterference" by Murray et. al. (CSF 2016) presents
a compositional theory of refinement for a value-dependent
noninterference property, defined in (Murray, PLAS 2015), for
concurrent programs. This development formalises that refinement
-theory, and demonstrates its application on some small examples.</div></td>
+theory, and demonstrates its application on some small examples.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2016-08-19]:
Removed unused "stop" parameters from the sifum_refinement locale.
(revision dbc482d36372)
[2016-09-02]:
TobyM extended "simple" refinement theory to be usable for all bisimulations.
(revision 547f31c25f60)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Dependent_SIFUM_Refinement-AFP,
author = {Toby Murray and Robert Sison and Edward Pierzchalski and Christine Rizkallah},
title = {Compositional Security-Preserving Refinement for Concurrent Imperative Programs},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Dependent_SIFUM_Refinement.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Dependent_SIFUM_Type_Systems.html">Dependent_SIFUM_Type_Systems</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dependent_SIFUM_Refinement/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Dependent_SIFUM_Refinement/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dependent_SIFUM_Refinement/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Dependent_SIFUM_Refinement-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Dependent_SIFUM_Refinement-2019-06-11.tar.gz">
afp-Dependent_SIFUM_Refinement-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Dependent_SIFUM_Refinement-2018-08-16.tar.gz">
afp-Dependent_SIFUM_Refinement-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Dependent_SIFUM_Refinement-2017-10-10.tar.gz">
afp-Dependent_SIFUM_Refinement-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Dependent_SIFUM_Refinement-2016-12-17.tar.gz">
afp-Dependent_SIFUM_Refinement-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Dependent_SIFUM_Refinement-2016-06-28.tar.gz">
afp-Dependent_SIFUM_Refinement-2016-06-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Dependent_SIFUM_Type_Systems.html b/web/entries/Dependent_SIFUM_Type_Systems.html
--- a/web/entries/Dependent_SIFUM_Type_Systems.html
+++ b/web/entries/Dependent_SIFUM_Type_Systems.html
@@ -1,242 +1,242 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Dependent Security Type System for Concurrent Imperative Programs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">D</font>ependent
<font class="first">S</font>ecurity
<font class="first">T</font>ype
<font class="first">S</font>ystem
for
<font class="first">C</font>oncurrent
<font class="first">I</font>mperative
<font class="first">P</font>rograms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Dependent Security Type System for Concurrent Imperative Programs</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://people.eng.unimelb.edu.au/tobym/">Toby Murray</a>,
Robert Sison,
Edward Pierzchalski and
Christine Rizkallah
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The paper "Compositional Verification and Refinement of Concurrent
Value-Dependent Noninterference" by Murray et. al. (CSF 2016) presents
a dependent security type system for compositionally verifying a
value-dependent noninterference property, defined in (Murray, PLAS
2015), for concurrent programs. This development formalises that
security definition, the type system and its soundness proof, and
demonstrates its application on some small examples. It was derived
from the SIFUM_Type_Systems AFP entry, by Sylvia Grewe, Heiko Mantel
-and Daniel Schoepe, whose structure it inherits.</div></td>
+and Daniel Schoepe, whose structure it inherits.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2016-08-19]:
Removed unused "stop" parameter and "stop_no_eval" assumption from the sifum_security locale.
(revision dbc482d36372)
[2016-09-27]:
Added security locale support for the imposition of requirements on the initial memory.
(revision cce4ceb74ddb)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Dependent_SIFUM_Type_Systems-AFP,
author = {Toby Murray and Robert Sison and Edward Pierzchalski and Christine Rizkallah},
title = {A Dependent Security Type System for Concurrent Imperative Programs},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Dependent_SIFUM_Type_Systems.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Dependent_SIFUM_Refinement.html">Dependent_SIFUM_Refinement</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dependent_SIFUM_Type_Systems/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Dependent_SIFUM_Type_Systems/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dependent_SIFUM_Type_Systems/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Dependent_SIFUM_Type_Systems-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Dependent_SIFUM_Type_Systems-2019-06-11.tar.gz">
afp-Dependent_SIFUM_Type_Systems-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Dependent_SIFUM_Type_Systems-2018-08-16.tar.gz">
afp-Dependent_SIFUM_Type_Systems-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Dependent_SIFUM_Type_Systems-2017-10-10.tar.gz">
afp-Dependent_SIFUM_Type_Systems-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Dependent_SIFUM_Type_Systems-2016-12-17.tar.gz">
afp-Dependent_SIFUM_Type_Systems-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Dependent_SIFUM_Type_Systems-2016-06-25.tar.gz">
afp-Dependent_SIFUM_Type_Systems-2016-06-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Depth-First-Search.html b/web/entries/Depth-First-Search.html
--- a/web/entries/Depth-First-Search.html
+++ b/web/entries/Depth-First-Search.html
@@ -1,283 +1,283 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Depth First Search - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>epth
<font class="first">F</font>irst
<font class="first">S</font>earch
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Depth First Search</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Toshiaki Nishihara and
Yasuhiko Minamide
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-06-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Depth-first search of a graph is formalized with recdef. It is shown that it visits all of the reachable nodes from a given list of nodes. Executable ML code of depth-first search is obtained using the code generation feature of Isabelle/HOL.</div></td>
+ <td class="abstract mathjax_process">Depth-first search of a graph is formalized with recdef. It is shown that it visits all of the reachable nodes from a given list of nodes. Executable ML code of depth-first search is obtained using the code generation feature of Isabelle/HOL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Depth-First-Search-AFP,
author = {Toshiaki Nishihara and Yasuhiko Minamide},
title = {Depth First Search},
journal = {Archive of Formal Proofs},
month = jun,
year = 2004,
note = {\url{http://isa-afp.org/entries/Depth-First-Search.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Depth-First-Search/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Depth-First-Search/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Depth-First-Search/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Depth-First-Search-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Depth-First-Search-2019-06-11.tar.gz">
afp-Depth-First-Search-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Depth-First-Search-2018-08-16.tar.gz">
afp-Depth-First-Search-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Depth-First-Search-2017-10-10.tar.gz">
afp-Depth-First-Search-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Depth-First-Search-2016-12-17.tar.gz">
afp-Depth-First-Search-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Depth-First-Search-2016-02-22.tar.gz">
afp-Depth-First-Search-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Depth-First-Search-2015-05-27.tar.gz">
afp-Depth-First-Search-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Depth-First-Search-2014-08-28.tar.gz">
afp-Depth-First-Search-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Depth-First-Search-2013-12-11.tar.gz">
afp-Depth-First-Search-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Depth-First-Search-2013-11-17.tar.gz">
afp-Depth-First-Search-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Depth-First-Search-2013-02-16.tar.gz">
afp-Depth-First-Search-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Depth-First-Search-2012-05-24.tar.gz">
afp-Depth-First-Search-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Depth-First-Search-2011-10-11.tar.gz">
afp-Depth-First-Search-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Depth-First-Search-2011-02-11.tar.gz">
afp-Depth-First-Search-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Depth-First-Search-2010-06-30.tar.gz">
afp-Depth-First-Search-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Depth-First-Search-2009-12-12.tar.gz">
afp-Depth-First-Search-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Depth-First-Search-2009-04-29.tar.gz">
afp-Depth-First-Search-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Depth-First-Search-2008-06-10.tar.gz">
afp-Depth-First-Search-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Depth-First-Search-2007-11-27.tar.gz">
afp-Depth-First-Search-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Depth-First-Search-2005-10-14.tar.gz">
afp-Depth-First-Search-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Depth-First-Search-2004-06-24.tar.gz">
afp-Depth-First-Search-2004-06-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
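
The Depth-First-Search abstract above describes a list-based search that visits every node reachable from a given start list. As a rough, stand-alone sketch in Scala (independent of the entry's recdef formalisation and its generated ML code; the adjacency-map graph representation is an assumption made only for this example):

    object DFSExample {
      // Visit every node reachable from the nodes on `stack`, accumulating `visited`.
      def dfs[A](succ: Map[A, List[A]], stack: List[A], visited: List[A]): List[A] =
        stack match {
          case Nil => visited
          case x :: rest =>
            if (visited.contains(x)) dfs(succ, rest, visited)
            else dfs(succ, succ.getOrElse(x, Nil) ::: rest, x :: visited)
        }

      def main(args: Array[String]): Unit = {
        val g = Map(1 -> List(2, 3), 2 -> List(4), 3 -> List(4), 4 -> List())
        println(dfs(g, List(1), Nil))  // List(3, 4, 2, 1): all nodes reachable from 1
      }
    }
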
diff --git a/web/entries/Derangements.html b/web/entries/Derangements.html
--- a/web/entries/Derangements.html
+++ b/web/entries/Derangements.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Derangements Formula - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>erangements
<font class="first">F</font>ormula
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Derangements Formula</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-06-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The Derangements Formula describes the number of fixpoint-free permutations
as a closed formula. This theorem is the 88th theorem in a list of the
-``<a href="http://www.cs.ru.nl/~freek/100/">Top 100 Mathematical Theorems</a>''.</div></td>
+``<a href="http://www.cs.ru.nl/~freek/100/">Top 100 Mathematical Theorems</a>''.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Derangements-AFP,
author = {Lukas Bulwahn},
title = {Derangements Formula},
journal = {Archive of Formal Proofs},
month = jun,
year = 2015,
note = {\url{http://isa-afp.org/entries/Derangements.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Derangements/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Derangements/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Derangements/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Derangements-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Derangements-2019-06-11.tar.gz">
afp-Derangements-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Derangements-2018-08-16.tar.gz">
afp-Derangements-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Derangements-2017-10-10.tar.gz">
afp-Derangements-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Derangements-2016-12-17.tar.gz">
afp-Derangements-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Derangements-2016-02-22.tar.gz">
afp-Derangements-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Derangements-2015-11-20.tar.gz">
afp-Derangements-2015-11-20.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Derangements-2015-06-28.tar.gz">
afp-Derangements-2015-06-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
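
The Derangements entry above proves the closed formula for the number of fixpoint-free permutations, !n = n! * sum_{k=0..n} (-1)^k / k!. A small stand-alone Scala check (not part of the Isabelle proof) compares the exact integer form of that formula with the standard recurrence !n = (n-1)(!(n-1) + !(n-2)):

    object DerangementsExample {
      // !n via the recurrence !n = (n - 1) * (!(n - 1) + !(n - 2)), with !0 = 1, !1 = 0.
      def derangementsRec(n: Int): BigInt = n match {
        case 0 => 1
        case 1 => 0
        case _ => (BigInt(n) - 1) * (derangementsRec(n - 1) + derangementsRec(n - 2))
      }

      // Exact integer form of the closed formula: !n = sum_{k=0..n} (-1)^k * n! / k!.
      def derangementsClosed(n: Int): BigInt = {
        def fact(m: Int): BigInt = (1 to m).foldLeft(BigInt(1))(_ * _)
        (0 to n).map(k => (if (k % 2 == 0) BigInt(1) else BigInt(-1)) * (fact(n) / fact(k))).sum
      }

      def main(args: Array[String]): Unit =
        (0 to 10).foreach(n => assert(derangementsRec(n) == derangementsClosed(n)))
    }
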
diff --git a/web/entries/Deriving.html b/web/entries/Deriving.html
--- a/web/entries/Deriving.html
+++ b/web/entries/Deriving.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Deriving class instances for datatypes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>eriving
class
instances
for
datatypes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Deriving class instances for datatypes</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-03-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>We provide a framework for registering automatic methods
to derive class instances of datatypes,
as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.</p>
<p>We further implemented such automatic methods to derive comparators, linear orders, parametrizable equality functions,
and hash-functions which are required in the
Isabelle Collection Framework and the Container Framework.
Moreover, for the tactic of Blanchette to show that a datatype is countable, we implemented a
wrapper so that this tactic becomes accessible in our framework. All of the generators are based on
the infrastructure that is provided by the BNF-based datatype package.</p>
<p>Our formalization was performed as part of the <a href="http://cl-informatik.uibk.ac.at/software/ceta">IsaFoR/CeTA</a> project.
With our new tactics we could remove
several tedious proofs for (conditional) linear orders, and conditional equality operators
-within IsaFoR and the Container Framework.</p></div></td>
+within IsaFoR and the Container Framework.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Deriving-AFP,
author = {Christian Sternagel and René Thiemann},
title = {Deriving class instances for datatypes},
journal = {Archive of Formal Proofs},
month = mar,
year = 2015,
note = {\url{http://isa-afp.org/entries/Deriving.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Collections.html">Collections</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Affine_Arithmetic.html">Affine_Arithmetic</a>, <a href="Containers.html">Containers</a>, <a href="Datatype_Order_Generator.html">Datatype_Order_Generator</a>, <a href="Formula_Derivatives.html">Formula_Derivatives</a>, <a href="Groebner_Bases.html">Groebner_Bases</a>, <a href="LTL_Master_Theorem.html">LTL_Master_Theorem</a>, <a href="MSO_Regex_Equivalence.html">MSO_Regex_Equivalence</a>, <a href="Real_Impl.html">Real_Impl</a>, <a href="Show.html">Show</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Deriving/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Deriving/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Deriving/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Deriving-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Deriving-2019-06-11.tar.gz">
afp-Deriving-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Deriving-2018-08-16.tar.gz">
afp-Deriving-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Deriving-2017-10-10.tar.gz">
afp-Deriving-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Deriving-2016-12-17.tar.gz">
afp-Deriving-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Deriving-2016-02-22.tar.gz">
afp-Deriving-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Deriving-2015-05-27.tar.gz">
afp-Deriving-2015-05-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Descartes_Sign_Rule.html b/web/entries/Descartes_Sign_Rule.html
--- a/web/entries/Descartes_Sign_Rule.html
+++ b/web/entries/Descartes_Sign_Rule.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Descartes' Rule of Signs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>escartes'
<font class="first">R</font>ule
of
<font class="first">S</font>igns
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Descartes' Rule of Signs</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
Descartes' Rule of Signs relates the number of positive real roots of a
polynomial with the number of sign changes in its coefficient sequence.
</p><p>
Our proof follows the simple inductive proof given by Rob Arthan, which was also
used by John Harrison in his HOL Light formalisation. We proved most of the
lemmas for arbitrary linearly-ordered integral domains (e.g. integers,
rationals, reals); the main result, however, requires the intermediate value
theorem and was therefore only proven for real polynomials.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Descartes_Sign_Rule-AFP,
author = {Manuel Eberl},
title = {Descartes' Rule of Signs},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Descartes_Sign_Rule.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Descartes_Sign_Rule/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Descartes_Sign_Rule/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Descartes_Sign_Rule/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Descartes_Sign_Rule-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Descartes_Sign_Rule-2019-06-11.tar.gz">
afp-Descartes_Sign_Rule-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Descartes_Sign_Rule-2018-08-16.tar.gz">
afp-Descartes_Sign_Rule-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Descartes_Sign_Rule-2017-10-10.tar.gz">
afp-Descartes_Sign_Rule-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Descartes_Sign_Rule-2016-12-17.tar.gz">
afp-Descartes_Sign_Rule-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Descartes_Sign_Rule-2016-02-22.tar.gz">
afp-Descartes_Sign_Rule-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Descartes_Sign_Rule-2016-01-05.tar.gz">
afp-Descartes_Sign_Rule-2016-01-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
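
Descartes' Rule of Signs, as formalised above, bounds the number of positive real roots of a polynomial by the number of sign changes in its coefficient sequence (the two quantities differ by an even number). A minimal stand-alone sketch of the sign-change count, unrelated to the Isabelle proof:

    object SignChangesExample {
      // Number of sign changes in a coefficient sequence, ignoring zero coefficients.
      def signChanges(coeffs: List[Int]): Int = {
        val signs = coeffs.filter(_ != 0).map(c => math.signum(c))
        if (signs.isEmpty) 0
        else signs.zip(signs.tail).count { case (a, b) => a != b }
      }

      def main(args: Array[String]): Unit = {
        // p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3): three positive roots,
        // matching the three sign changes in its coefficient sequence.
        println(signChanges(List(1, -6, 11, -6)))  // prints 3
      }
    }
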
diff --git a/web/entries/Dict_Construction.html b/web/entries/Dict_Construction.html
--- a/web/entries/Dict_Construction.html
+++ b/web/entries/Dict_Construction.html
@@ -1,206 +1,206 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Dictionary Construction - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>ictionary
<font class="first">C</font>onstruction
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Dictionary Construction</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Isabelle's code generator natively supports type classes. For
targets that do not have language support for classes and instances,
it performs the well-known dictionary translation, as described by
Haftmann and Nipkow. This translation happens outside the logic, i.e.,
there is no guarantee that it is correct, besides the pen-and-paper
proof. This work implements a certified dictionary translation that
-produces new class-free constants and derives equality theorems.</div></td>
+produces new class-free constants and derives equality theorems.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Dict_Construction-AFP,
author = {Lars Hupel},
title = {Dictionary Construction},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Dict_Construction.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a>, <a href="Lazy_Case.html">Lazy_Case</a>, <a href="Show.html">Show</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CakeML_Codegen.html">CakeML_Codegen</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dict_Construction/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Dict_Construction/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dict_Construction/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Dict_Construction-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Dict_Construction-2019-06-11.tar.gz">
afp-Dict_Construction-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Dict_Construction-2018-08-16.tar.gz">
afp-Dict_Construction-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Dict_Construction-2017-10-10.tar.gz">
afp-Dict_Construction-2017-10-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
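
The Dict_Construction abstract above refers to the well-known dictionary translation for type classes. A stand-alone Scala sketch of the general idea (not the entry's certified construction, and all names here are illustrative): a class-constrained function is replaced by a class-free constant that receives the required operations as an explicit dictionary value.

    object DictTranslationExample {
      // Class-constrained version: the Ordering "class" constraint is implicit.
      def maximum[A](xs: List[A])(implicit ord: Ordering[A]): A = xs.max

      // Dictionary-translated version: a class-free constant taking an explicit
      // record of the operations it needs.
      final case class OrdDict[A](lessEq: (A, A) => Boolean)
      def maximumDict[A](d: OrdDict[A])(xs: List[A]): A =
        xs.reduceLeft((x, y) => if (d.lessEq(x, y)) y else x)

      def main(args: Array[String]): Unit = {
        val intDict = OrdDict[Int](_ <= _)
        println(maximum(List(3, 1, 2)))              // 3, via the type class
        println(maximumDict(intDict)(List(3, 1, 2))) // 3, via the explicit dictionary
      }
    }
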
diff --git a/web/entries/Differential_Dynamic_Logic.html b/web/entries/Differential_Dynamic_Logic.html
--- a/web/entries/Differential_Dynamic_Logic.html
+++ b/web/entries/Differential_Dynamic_Logic.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Differential Dynamic Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>ifferential
<font class="first">D</font>ynamic
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Differential Dynamic Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Brandon Bohrer (bbohrer /at/ cs /dot/ cmu /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-02-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize differential dynamic logic, a logic for proving
properties of hybrid systems. The proof calculus in this formalization
is based on the uniform substitution principle. We show it is sound
with respect to our denotational semantics, which provides increased
confidence in the correctness of the KeYmaera X theorem prover based
on this calculus. As an application, we include a proof term checker
embedded in Isabelle/HOL with several example proofs. Published in:
Brandon Bohrer, Vincent Rahli, Ivana Vukotic, Marcus Völp, André
-Platzer: Formally verified differential dynamic logic. CPP 2017.</div></td>
+Platzer: Formally verified differential dynamic logic. CPP 2017.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Differential_Dynamic_Logic-AFP,
author = {Brandon Bohrer},
title = {Differential Dynamic Logic},
journal = {Archive of Formal Proofs},
month = feb,
year = 2017,
note = {\url{http://isa-afp.org/entries/Differential_Dynamic_Logic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Ordinary_Differential_Equations.html">Ordinary_Differential_Equations</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Differential_Dynamic_Logic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Differential_Dynamic_Logic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Differential_Dynamic_Logic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Differential_Dynamic_Logic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Differential_Dynamic_Logic-2019-06-11.tar.gz">
afp-Differential_Dynamic_Logic-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Differential_Dynamic_Logic-2018-08-16.tar.gz">
afp-Differential_Dynamic_Logic-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Differential_Dynamic_Logic-2017-10-10.tar.gz">
afp-Differential_Dynamic_Logic-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Differential_Dynamic_Logic-2017-02-14.tar.gz">
afp-Differential_Dynamic_Logic-2017-02-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Differential_Game_Logic.html b/web/entries/Differential_Game_Logic.html
--- a/web/entries/Differential_Game_Logic.html
+++ b/web/entries/Differential_Game_Logic.html
@@ -1,200 +1,200 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Differential Game Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>ifferential
<font class="first">G</font>ame
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Differential Game Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.cs.cmu.edu/~aplatzer/">André Platzer</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-06-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This formalization provides differential game logic (dGL), a logic for
proving properties of hybrid games. In addition to the syntax and
semantics, it formalizes a uniform substitution calculus for dGL.
Church's uniform substitutions substitute a term or formula for a
function or predicate symbol everywhere. The uniform substitutions for
dGL also substitute hybrid games for a game symbol everywhere. We
prove soundness of one-pass uniform substitutions and the axioms of
differential game logic with respect to their denotational semantics.
One-pass uniform substitutions are faster by postponing
soundness-critical admissibility checks with a linear pass homomorphic
application and regain soundness by a variable condition at the
replacements. The formalization is based on prior non-mechanized
-soundness proofs for dGL.</div></td>
+soundness proofs for dGL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Differential_Game_Logic-AFP,
author = {André Platzer},
title = {Differential Game Logic},
journal = {Archive of Formal Proofs},
month = jun,
year = 2019,
note = {\url{http://isa-afp.org/entries/Differential_Game_Logic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Differential_Game_Logic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Differential_Game_Logic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Differential_Game_Logic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Differential_Game_Logic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Differential_Game_Logic-2019-06-24.tar.gz">
afp-Differential_Game_Logic-2019-06-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Dijkstra_Shortest_Path.html b/web/entries/Dijkstra_Shortest_Path.html
--- a/web/entries/Dijkstra_Shortest_Path.html
+++ b/web/entries/Dijkstra_Shortest_Path.html
@@ -1,263 +1,263 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Dijkstra's Shortest Path Algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>ijkstra's
<font class="first">S</font>hortest
<font class="first">P</font>ath
<font class="first">A</font>lgorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Dijkstra's Shortest Path Algorithm</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Benedikt Nordhoff (b_nord01 /at/ uni-muenster /dot/ de) and
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-01-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We implement and prove correct Dijkstra's algorithm for the
+ <td class="abstract mathjax_process">We implement and prove correct Dijkstra's algorithm for the
single source shortest path problem, conceived in 1956 by E. Dijkstra.
The algorithm is implemented using the data refinement framework for monadic,
nondeterministic programs. An efficient implementation is derived using data
-structures from the Isabelle Collection Framework.</div></td>
+structures from the Isabelle Collection Framework.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Dijkstra_Shortest_Path-AFP,
author = {Benedikt Nordhoff and Peter Lammich},
title = {Dijkstra's Shortest Path Algorithm},
journal = {Archive of Formal Proofs},
month = jan,
year = 2012,
note = {\url{http://isa-afp.org/entries/Dijkstra_Shortest_Path.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Collections.html">Collections</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Formal_SSA.html">Formal_SSA</a>, <a href="Koenigsberg_Friendship.html">Koenigsberg_Friendship</a>, <a href="Refine_Imperative_HOL.html">Refine_Imperative_HOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dijkstra_Shortest_Path/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Dijkstra_Shortest_Path/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dijkstra_Shortest_Path/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Dijkstra_Shortest_Path-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Dijkstra_Shortest_Path-2019-06-11.tar.gz">
afp-Dijkstra_Shortest_Path-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Dijkstra_Shortest_Path-2018-08-16.tar.gz">
afp-Dijkstra_Shortest_Path-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Dijkstra_Shortest_Path-2017-10-10.tar.gz">
afp-Dijkstra_Shortest_Path-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Dijkstra_Shortest_Path-2016-12-17.tar.gz">
afp-Dijkstra_Shortest_Path-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Dijkstra_Shortest_Path-2016-02-22.tar.gz">
afp-Dijkstra_Shortest_Path-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Dijkstra_Shortest_Path-2015-05-27.tar.gz">
afp-Dijkstra_Shortest_Path-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Dijkstra_Shortest_Path-2014-08-28.tar.gz">
afp-Dijkstra_Shortest_Path-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Dijkstra_Shortest_Path-2013-12-11.tar.gz">
afp-Dijkstra_Shortest_Path-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Dijkstra_Shortest_Path-2013-11-17.tar.gz">
afp-Dijkstra_Shortest_Path-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Dijkstra_Shortest_Path-2013-03-08.tar.gz">
afp-Dijkstra_Shortest_Path-2013-03-08.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Dijkstra_Shortest_Path-2013-02-16.tar.gz">
afp-Dijkstra_Shortest_Path-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Dijkstra_Shortest_Path-2012-05-24.tar.gz">
afp-Dijkstra_Shortest_Path-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Dijkstra_Shortest_Path-2012-03-15.tar.gz">
afp-Dijkstra_Shortest_Path-2012-03-15.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Dijkstra_Shortest_Path-2012-02-10.tar.gz">
afp-Dijkstra_Shortest_Path-2012-02-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
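
The Dijkstra_Shortest_Path entry above derives an efficient implementation by data refinement from a monadic, nondeterministic program. As a plain, stand-alone illustration of the underlying algorithm only (no refinement framework; the adjacency-map representation is an assumption for this example), a direct single-source shortest-path computation in Scala:

    object DijkstraExample {
      type Graph = Map[String, List[(String, Int)]]

      // Repeatedly extract the closest unfinished node and relax its outgoing edges.
      def shortestPaths(g: Graph, source: String): Map[String, Int] = {
        @annotation.tailrec
        def go(frontier: Map[String, Int], done: Map[String, Int]): Map[String, Int] =
          if (frontier.isEmpty) done
          else {
            val (u, du) = frontier.minBy(_._2)
            val relaxed = g.getOrElse(u, Nil).foldLeft(frontier - u) {
              case (acc, (v, w)) if !done.contains(v) =>
                val dv = du + w
                if (dv < acc.getOrElse(v, Int.MaxValue)) acc.updated(v, dv) else acc
              case (acc, _) => acc
            }
            go(relaxed, done.updated(u, du))
          }
        go(Map(source -> 0), Map.empty)
      }

      def main(args: Array[String]): Unit = {
        val g: Graph = Map(
          "a" -> List(("b", 1), ("c", 4)),
          "b" -> List(("c", 2), ("d", 6)),
          "c" -> List(("d", 3)))
        println(shortestPaths(g, "a"))  // Map(a -> 0, b -> 1, c -> 3, d -> 6)
      }
    }
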
diff --git a/web/entries/Diophantine_Eqns_Lin_Hom.html b/web/entries/Diophantine_Eqns_Lin_Hom.html
--- a/web/entries/Diophantine_Eqns_Lin_Hom.html
+++ b/web/entries/Diophantine_Eqns_Lin_Hom.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Homogeneous Linear Diophantine Equations - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>omogeneous
<font class="first">L</font>inear
<font class="first">D</font>iophantine
<font class="first">E</font>quations
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Homogeneous Linear Diophantine Equations</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Florian Messner (florian /dot/ g /dot/ messner /at/ uibk /dot/ ac /dot/ at),
<a href="http://www.parsert.com/">Julian Parsert</a>,
Jonas Schöpf (jonas /dot/ schoepf /at/ uibk /dot/ ac /dot/ at) and
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the theory of homogeneous linear diophantine equations,
focusing on two main results: (1) an abstract characterization of
minimal complete sets of solutions, and (2) an algorithm computing
them. Both the characterization and the algorithm are based on
previous work by Huet. Our starting point is a simple but inefficient
variant of Huet's lexicographic algorithm incorporating improved
bounds due to Clausen and Fortenbacher. We proceed by proving its
soundness and completeness. Finally, we employ code equations to
obtain a reasonably efficient implementation. Thus, we provide a
-formally verified solver for homogeneous linear diophantine equations.</div></td>
+formally verified solver for homogeneous linear diophantine equations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Diophantine_Eqns_Lin_Hom-AFP,
author = {Florian Messner and Julian Parsert and Jonas Schöpf and Christian Sternagel},
title = {Homogeneous Linear Diophantine Equations},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Diophantine_Eqns_Lin_Hom.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Diophantine_Eqns_Lin_Hom/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Diophantine_Eqns_Lin_Hom/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Diophantine_Eqns_Lin_Hom/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Diophantine_Eqns_Lin_Hom-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Diophantine_Eqns_Lin_Hom-2019-06-11.tar.gz">
afp-Diophantine_Eqns_Lin_Hom-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Diophantine_Eqns_Lin_Hom-2018-08-16.tar.gz">
afp-Diophantine_Eqns_Lin_Hom-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Diophantine_Eqns_Lin_Hom-2017-10-15.tar.gz">
afp-Diophantine_Eqns_Lin_Hom-2017-10-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
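
The Diophantine_Eqns_Lin_Hom abstract above concerns minimal complete sets of solutions of equations a1*x1 + ... + am*xm = b1*y1 + ... + bn*yn over the naturals, for which Huet showed the component bounds x_i <= max(b) and y_j <= max(a). A naive stand-alone baseline in Scala (bounded enumeration plus a minimality filter; this is deliberately not Huet's lexicographic algorithm and not the entry's verified solver):

    object HldeExample {
      // Minimal non-trivial solutions of a.x = b.y within Huet's component bounds.
      def solutions(a: List[Int], b: List[Int]): List[(List[Int], List[Int])] = {
        def tuples(len: Int, bound: Int): List[List[Int]] =
          if (len == 0) List(Nil)
          else for (h <- (0 to bound).toList; t <- tuples(len - 1, bound)) yield h :: t
        def dot(u: List[Int], v: List[Int]) = u.zip(v).map { case (x, y) => x * y }.sum
        def leq(u: List[Int], v: List[Int]) = u.zip(v).forall { case (x, y) => x <= y }

        val cands = for {
          x <- tuples(a.length, b.max)
          y <- tuples(b.length, a.max)
          if (x.sum + y.sum) > 0 && dot(a, x) == dot(b, y)
        } yield (x, y)
        // Keep solutions with no other non-trivial solution componentwise below them.
        cands.filter(s => !cands.exists(t => t != s && leq(t._1, s._1) && leq(t._2, s._2)))
      }

      def main(args: Array[String]): Unit =
        // 2*x1 = 3*y1 has the single minimal solution x1 = 3, y1 = 2.
        println(solutions(List(2), List(3)))  // List((List(3),List(2)))
    }
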
diff --git a/web/entries/Dirichlet_L.html b/web/entries/Dirichlet_L.html
--- a/web/entries/Dirichlet_L.html
+++ b/web/entries/Dirichlet_L.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Dirichlet L-Functions and Dirichlet's Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>irichlet
<font class="first">L</font>-Functions
and
<font class="first">D</font>irichlet's
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Dirichlet L-Functions and Dirichlet's Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-12-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article provides a formalisation of Dirichlet characters
and Dirichlet <em>L</em>-functions including proofs of
their basic properties &ndash; most notably their analyticity,
their areas of convergence, and their non-vanishing for &Re;(s)
&ge; 1. All of this is built in a very high-level style using
Dirichlet series. The proof of the non-vanishing follows a very short
and elegant proof by Newman, which we attempt to reproduce faithfully
in a similar level of abstraction in Isabelle.</p> <p>This
also leads to a relatively short proof of Dirichlet’s Theorem, which
states that, if <em>h</em> and <em>n</em> are
coprime, there are infinitely many primes <em>p</em> with
<em>p</em> &equiv; <em>h</em> (mod
-<em>n</em>).</p></div></td>
+<em>n</em>).</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Dirichlet_L-AFP,
author = {Manuel Eberl},
title = {Dirichlet L-Functions and Dirichlet's Theorem},
journal = {Archive of Formal Proofs},
month = dec,
year = 2017,
note = {\url{http://isa-afp.org/entries/Dirichlet_L.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Bertrands_Postulate.html">Bertrands_Postulate</a>, <a href="Dirichlet_Series.html">Dirichlet_Series</a>, <a href="Landau_Symbols.html">Landau_Symbols</a>, <a href="Zeta_Function.html">Zeta_Function</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Gauss_Sums.html">Gauss_Sums</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dirichlet_L/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Dirichlet_L/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dirichlet_L/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Dirichlet_L-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Dirichlet_L-2019-06-11.tar.gz">
afp-Dirichlet_L-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Dirichlet_L-2018-08-16.tar.gz">
afp-Dirichlet_L-2018-08-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Dirichlet_Series.html b/web/entries/Dirichlet_Series.html
--- a/web/entries/Dirichlet_Series.html
+++ b/web/entries/Dirichlet_Series.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Dirichlet Series - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>irichlet
<font class="first">S</font>eries
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Dirichlet Series</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry is a formalisation of much of Chapters 2, 3, and 11 of
Apostol's &ldquo;Introduction to Analytic Number
Theory&rdquo;. This includes: <ul> <li>Definitions and
basic properties for several number-theoretic functions (Euler's
&phi;, M&ouml;bius &mu;, Liouville's &lambda;,
the divisor function &sigma;, von Mangoldt's
&Lambda;)</li> <li>Executable code for most of these
functions, the most efficient implementations using the factoring
algorithm by Thiemann <i>et al.</i></li>
<li>Dirichlet products and formal Dirichlet series</li>
<li>Analytic results connecting convergent formal Dirichlet
series to complex functions</li> <li>Euler product
expansions</li> <li>Asymptotic estimates of
number-theoretic functions including the density of squarefree
integers and the average number of divisors of a natural
number</li> </ul> These results are useful as a basis for
developing more number-theoretic results, such as the Prime Number
-Theorem.</div></td>
+Theorem.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Dirichlet_Series-AFP,
author = {Manuel Eberl},
title = {Dirichlet Series},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Dirichlet_Series.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Euler_MacLaurin.html">Euler_MacLaurin</a>, <a href="Landau_Symbols.html">Landau_Symbols</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Dirichlet_L.html">Dirichlet_L</a>, <a href="Gauss_Sums.html">Gauss_Sums</a>, <a href="Zeta_Function.html">Zeta_Function</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dirichlet_Series/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Dirichlet_Series/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dirichlet_Series/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Dirichlet_Series-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Dirichlet_Series-2019-06-11.tar.gz">
afp-Dirichlet_Series-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Dirichlet_Series-2018-08-16.tar.gz">
afp-Dirichlet_Series-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Dirichlet_Series-2017-10-16.tar.gz">
afp-Dirichlet_Series-2017-10-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/DiscretePricing.html b/web/entries/DiscretePricing.html
--- a/web/entries/DiscretePricing.html
+++ b/web/entries/DiscretePricing.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Pricing in discrete financial models - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>ricing
in
discrete
financial
models
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Pricing in discrete financial models</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://lig-membres.imag.fr/mechenim/">Mnacho Echenim</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-07-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We have formalized the computation of fair prices for derivative
products in discrete financial models. As an application, we derive a
way to compute fair prices of derivative products in the
Cox-Ross-Rubinstein model of a financial market, thus completing the
work that was presented in this <a
-href="https://hal.archives-ouvertes.fr/hal-01562944">paper</a>.</div></td>
+href="https://hal.archives-ouvertes.fr/hal-01562944">paper</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2019-05-12]:
Renamed discr_mkt predicate to stk_strict_subs and got rid of predicate A for a more natural definition of the type discrete_market;
renamed basic quantity processes for coherent notation;
renamed value_process into val_process and closing_value_process to cls_val_process;
relaxed hypothesis of lemma CRR_market_fair_price.
Added functions to price some basic options.
(revision 0b813a1a833f)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{DiscretePricing-AFP,
author = {Mnacho Echenim},
title = {Pricing in discrete financial models},
journal = {Archive of Formal Proofs},
month = jul,
year = 2018,
note = {\url{http://isa-afp.org/entries/DiscretePricing.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DiscretePricing/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/DiscretePricing/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DiscretePricing/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-DiscretePricing-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-DiscretePricing-2019-06-11.tar.gz">
afp-DiscretePricing-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-DiscretePricing-2018-08-16.tar.gz">
afp-DiscretePricing-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-DiscretePricing-2018-07-18.tar.gz">
afp-DiscretePricing-2018-07-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Discrete_Summation.html b/web/entries/Discrete_Summation.html
--- a/web/entries/Discrete_Summation.html
+++ b/web/entries/Discrete_Summation.html
@@ -1,230 +1,230 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Discrete Summation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>iscrete
<font class="first">S</font>ummation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Discrete Summation</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://isabelle.in.tum.de/~haftmann">Florian Haftmann</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
Amine Chaieb
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">These theories introduce basic concepts and proofs about discrete summation: shifts, formal summation, falling factorials and stirling numbers. As proof of concept, a simple summation conversion is provided.</div></td>
+ <td class="abstract mathjax_process">These theories introduce basic concepts and proofs about discrete summation: shifts, formal summation, falling factorials and stirling numbers. As proof of concept, a simple summation conversion is provided.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Discrete_Summation-AFP,
author = {Florian Haftmann},
title = {Discrete Summation},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/Discrete_Summation.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Card_Partitions.html">Card_Partitions</a>, <a href="Falling_Factorial_Sum.html">Falling_Factorial_Sum</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Discrete_Summation/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Discrete_Summation/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Discrete_Summation/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Discrete_Summation-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Discrete_Summation-2019-06-11.tar.gz">
afp-Discrete_Summation-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Discrete_Summation-2018-08-16.tar.gz">
afp-Discrete_Summation-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Discrete_Summation-2017-10-10.tar.gz">
afp-Discrete_Summation-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Discrete_Summation-2016-12-17.tar.gz">
afp-Discrete_Summation-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Discrete_Summation-2016-02-22.tar.gz">
afp-Discrete_Summation-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Discrete_Summation-2015-05-27.tar.gz">
afp-Discrete_Summation-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Discrete_Summation-2014-08-28.tar.gz">
afp-Discrete_Summation-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Discrete_Summation-2014-04-13.tar.gz">
afp-Discrete_Summation-2014-04-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/DiskPaxos.html b/web/entries/DiskPaxos.html
--- a/web/entries/DiskPaxos.html
+++ b/web/entries/DiskPaxos.html
@@ -1,289 +1,289 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Proving the Correctness of Disk Paxos - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>roving
the
<font class="first">C</font>orrectness
of
<font class="first">D</font>isk
<font class="first">P</font>axos
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Proving the Correctness of Disk Paxos</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.fceia.unr.edu.ar/~mauro/">Mauro Jaskelioff</a> and
<a href="http://www.loria.fr/~merz">Stephan Merz</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2005-06-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Disk Paxos is an algorithm for building arbitrary fault-tolerant distributed systems. The specification of Disk Paxos has been proved correct informally and tested using the TLC model checker, but up to now, it has never been fully formally verified. In this work we have formally verified its correctness using the Isabelle theorem prover and the HOL logic system, showing that Isabelle is a practical tool for verifying properties of TLA+ specifications.</div></td>
+ <td class="abstract mathjax_process">Disk Paxos is an algorithm for building arbitrary fault-tolerant distributed systems. The specification of Disk Paxos has been proved correct informally and tested using the TLC model checker, but up to now, it has never been fully formally verified. In this work we have formally verified its correctness using the Isabelle theorem prover and the HOL logic system, showing that Isabelle is a practical tool for verifying properties of TLA+ specifications.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{DiskPaxos-AFP,
author = {Mauro Jaskelioff and Stephan Merz},
title = {Proving the Correctness of Disk Paxos},
journal = {Archive of Formal Proofs},
month = jun,
year = 2005,
note = {\url{http://isa-afp.org/entries/DiskPaxos.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DiskPaxos/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/DiskPaxos/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DiskPaxos/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-DiskPaxos-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-DiskPaxos-2019-06-11.tar.gz">
afp-DiskPaxos-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-DiskPaxos-2018-08-16.tar.gz">
afp-DiskPaxos-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-DiskPaxos-2017-10-10.tar.gz">
afp-DiskPaxos-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-DiskPaxos-2016-12-17.tar.gz">
afp-DiskPaxos-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-DiskPaxos-2016-02-22.tar.gz">
afp-DiskPaxos-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-DiskPaxos-2015-05-27.tar.gz">
afp-DiskPaxos-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-DiskPaxos-2014-08-28.tar.gz">
afp-DiskPaxos-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-DiskPaxos-2013-12-11.tar.gz">
afp-DiskPaxos-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-DiskPaxos-2013-11-17.tar.gz">
afp-DiskPaxos-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-DiskPaxos-2013-02-16.tar.gz">
afp-DiskPaxos-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-DiskPaxos-2012-05-24.tar.gz">
afp-DiskPaxos-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-DiskPaxos-2011-10-11.tar.gz">
afp-DiskPaxos-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-DiskPaxos-2011-02-11.tar.gz">
afp-DiskPaxos-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-DiskPaxos-2010-06-30.tar.gz">
afp-DiskPaxos-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-DiskPaxos-2009-12-12.tar.gz">
afp-DiskPaxos-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-DiskPaxos-2009-04-29.tar.gz">
afp-DiskPaxos-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-DiskPaxos-2008-06-10.tar.gz">
afp-DiskPaxos-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-DiskPaxos-2007-11-27.tar.gz">
afp-DiskPaxos-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-DiskPaxos-2005-10-14.tar.gz">
afp-DiskPaxos-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-DiskPaxos-2005-06-22.tar.gz">
afp-DiskPaxos-2005-06-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/DynamicArchitectures.html b/web/entries/DynamicArchitectures.html
--- a/web/entries/DynamicArchitectures.html
+++ b/web/entries/DynamicArchitectures.html
@@ -1,230 +1,230 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Dynamic Architectures - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>ynamic
<font class="first">A</font>rchitectures
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Dynamic Architectures</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://marmsoler.com">Diego Marmsoler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-07-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The architecture of a system describes the system's overall
organization into components and connections between those components.
With the emergence of mobile computing, dynamic architectures have
become increasingly important. In such architectures, components may
appear or disappear, and connections may change over time. In the
following we mechanize a theory of dynamic architectures and verify
the soundness of a corresponding calculus. Therefore, we first
formalize the notion of configuration traces as a model for dynamic
architectures. Then, the behavior of single components is formalized
in terms of behavior traces and an operator is introduced and studied
to extract the behavior of a single component out of a given
configuration trace. Then, behavior trace assertions are introduced as
a temporal specification technique to specify behavior of components.
Reasoning about component behavior in a dynamic context is formalized
in terms of a calculus for dynamic architectures. Finally, the
soundness of the calculus is verified by introducing an alternative
interpretation for behavior trace assertions over configuration traces
and proving the rules of the calculus. Since projection may lead to
finite as well as infinite behavior traces, they are formalized in
terms of coinductive lists. Thus, our theory is based on
Lochbihler's formalization of coinductive lists. The theory may
-be applied to verify properties for dynamic architectures.</div></td>
+be applied to verify properties for dynamic architectures.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-06-07]: adding logical operators to specify configuration traces (revision 09178f08f050)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{DynamicArchitectures-AFP,
author = {Diego Marmsoler},
title = {Dynamic Architectures},
journal = {Archive of Formal Proofs},
month = jul,
year = 2017,
note = {\url{http://isa-afp.org/entries/DynamicArchitectures.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Architectural_Design_Patterns.html">Architectural_Design_Patterns</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DynamicArchitectures/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/DynamicArchitectures/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/DynamicArchitectures/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-DynamicArchitectures-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-DynamicArchitectures-2019-06-11.tar.gz">
afp-DynamicArchitectures-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-DynamicArchitectures-2018-08-16.tar.gz">
afp-DynamicArchitectures-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-DynamicArchitectures-2017-10-10.tar.gz">
afp-DynamicArchitectures-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-DynamicArchitectures-2017-07-31.tar.gz">
afp-DynamicArchitectures-2017-07-31.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Dynamic_Tables.html b/web/entries/Dynamic_Tables.html
--- a/web/entries/Dynamic_Tables.html
+++ b/web/entries/Dynamic_Tables.html
@@ -1,225 +1,225 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Parameterized Dynamic Tables - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>arameterized
<font class="first">D</font>ynamic
<font class="first">T</font>ables
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Parameterized Dynamic Tables</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-06-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This article formalizes the amortized analysis of dynamic tables
parameterized with their minimal and maximal load factors and the
expansion and contraction factors.
<P>
A full description is found in a
-<a href="http://www21.in.tum.de/~nipkow/pubs">companion paper</a>.</div></td>
+<a href="http://www21.in.tum.de/~nipkow/pubs">companion paper</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Dynamic_Tables-AFP,
author = {Tobias Nipkow},
title = {Parameterized Dynamic Tables},
journal = {Archive of Formal Proofs},
month = jun,
year = 2015,
note = {\url{http://isa-afp.org/entries/Dynamic_Tables.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Amortized_Complexity.html">Amortized_Complexity</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dynamic_Tables/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Dynamic_Tables/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Dynamic_Tables/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Dynamic_Tables-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Dynamic_Tables-2019-06-11.tar.gz">
afp-Dynamic_Tables-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Dynamic_Tables-2018-08-16.tar.gz">
afp-Dynamic_Tables-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Dynamic_Tables-2017-10-10.tar.gz">
afp-Dynamic_Tables-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Dynamic_Tables-2016-12-17.tar.gz">
afp-Dynamic_Tables-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Dynamic_Tables-2016-02-22.tar.gz">
afp-Dynamic_Tables-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Dynamic_Tables-2015-06-08.tar.gz">
afp-Dynamic_Tables-2015-06-08.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Dynamic_Tables-2015-06-07.tar.gz">
afp-Dynamic_Tables-2015-06-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/E_Transcendental.html b/web/entries/E_Transcendental.html
--- a/web/entries/E_Transcendental.html
+++ b/web/entries/E_Transcendental.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Transcendence of e - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">T</font>ranscendence
of
e
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Transcendence of e</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-01-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This work contains a proof that Euler's number e is transcendental. The
proof follows the standard approach of assuming that e is algebraic and
then using a specific integer polynomial to derive two inconsistent bounds,
leading to a contradiction.</p> <p>This kind of approach can be found in
-many different sources; this formalisation mostly follows a <a href="http://planetmath.org/proofoflindemannweierstrasstheoremandthateandpiaretranscendental">PlanetMath article</a> by Roger Lipsett.</p></div></td>
+many different sources; this formalisation mostly follows a <a href="http://planetmath.org/proofoflindemannweierstrasstheoremandthateandpiaretranscendental">PlanetMath article</a> by Roger Lipsett.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{E_Transcendental-AFP,
author = {Manuel Eberl},
title = {The Transcendence of e},
journal = {Archive of Formal Proofs},
month = jan,
year = 2017,
note = {\url{http://isa-afp.org/entries/E_Transcendental.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Pi_Transcendental.html">Pi_Transcendental</a>, <a href="Zeta_3_Irrational.html">Zeta_3_Irrational</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/E_Transcendental/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/E_Transcendental/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/E_Transcendental/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-E_Transcendental-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-E_Transcendental-2019-06-11.tar.gz">
afp-E_Transcendental-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-E_Transcendental-2018-08-16.tar.gz">
afp-E_Transcendental-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-E_Transcendental-2017-10-10.tar.gz">
afp-E_Transcendental-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-E_Transcendental-2017-01-13.tar.gz">
afp-E_Transcendental-2017-01-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Echelon_Form.html b/web/entries/Echelon_Form.html
--- a/web/entries/Echelon_Form.html
+++ b/web/entries/Echelon_Form.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Echelon Form - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>chelon
<font class="first">F</font>orm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Echelon Form</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a> and
<a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-02-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize an algorithm to compute the Echelon Form of a matrix. We have proved its existence over Bézout domains and made it executable over Euclidean domains, such as the integer ring and the univariate polynomials over a field. This allows us to compute determinants, inverses and characteristic polynomials of matrices. The work is based on the HOL-Multivariate Analysis library, and on both the Gauss-Jordan and Cayley-Hamilton AFP entries. As a by-product, some algebraic structures have been implemented (principal ideal domains, Bézout domains...). The algorithm has been refined to immutable arrays and code can be generated to functional languages as well.</div></td>
+ <td class="abstract mathjax_process">We formalize an algorithm to compute the Echelon Form of a matrix. We have proved its existence over Bézout domains and made it executable over Euclidean domains, such as the integer ring and the univariate polynomials over a field. This allows us to compute determinants, inverses and characteristic polynomials of matrices. The work is based on the HOL-Multivariate Analysis library, and on both the Gauss-Jordan and Cayley-Hamilton AFP entries. As a by-product, some algebraic structures have been implemented (principal ideal domains, Bézout domains...). The algorithm has been refined to immutable arrays and code can be generated to functional languages as well.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Echelon_Form-AFP,
author = {Jose Divasón and Jesús Aransay},
title = {Echelon Form},
journal = {Archive of Formal Proofs},
month = feb,
year = 2015,
note = {\url{http://isa-afp.org/entries/Echelon_Form.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Cayley_Hamilton.html">Cayley_Hamilton</a>, <a href="Gauss_Jordan.html">Gauss_Jordan</a>, <a href="Rank_Nullity_Theorem.html">Rank_Nullity_Theorem</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Hermite.html">Hermite</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Echelon_Form/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Echelon_Form/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Echelon_Form/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Echelon_Form-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Echelon_Form-2019-06-11.tar.gz">
afp-Echelon_Form-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Echelon_Form-2018-08-16.tar.gz">
afp-Echelon_Form-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Echelon_Form-2017-10-10.tar.gz">
afp-Echelon_Form-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Echelon_Form-2016-12-17.tar.gz">
afp-Echelon_Form-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Echelon_Form-2016-02-22.tar.gz">
afp-Echelon_Form-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Echelon_Form-2015-05-27.tar.gz">
afp-Echelon_Form-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Echelon_Form-2015-02-12.tar.gz">
afp-Echelon_Form-2015-02-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/EdmondsKarp_Maxflow.html b/web/entries/EdmondsKarp_Maxflow.html
--- a/web/entries/EdmondsKarp_Maxflow.html
+++ b/web/entries/EdmondsKarp_Maxflow.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalizing the Edmonds-Karp Algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalizing
the
<font class="first">E</font>dmonds-Karp
<font class="first">A</font>lgorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalizing the Edmonds-Karp Algorithm</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
S. Reza Sefidgar
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-08-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formalization of the Ford-Fulkerson method for computing
the maximum flow in a network. Our formal proof closely follows a
standard textbook proof, and is accessible even without being an
expert in Isabelle/HOL--- the interactive theorem prover used for the
formalization. We then use stepwise refinement to obtain the
Edmonds-Karp algorithm, and formally prove a bound on its complexity.
Further refinement yields a verified implementation, whose execution
time compares well to an unverified reference implementation in Java.
-This entry is based on our ITP-2016 paper with the same title.</div></td>
+This entry is based on our ITP-2016 paper with the same title.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{EdmondsKarp_Maxflow-AFP,
author = {Peter Lammich and S. Reza Sefidgar},
title = {Formalizing the Edmonds-Karp Algorithm},
journal = {Archive of Formal Proofs},
month = aug,
year = 2016,
note = {\url{http://isa-afp.org/entries/EdmondsKarp_Maxflow.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Flow_Networks.html">Flow_Networks</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="MFMC_Countable.html">MFMC_Countable</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/EdmondsKarp_Maxflow/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/EdmondsKarp_Maxflow/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/EdmondsKarp_Maxflow/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-EdmondsKarp_Maxflow-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-EdmondsKarp_Maxflow-2019-06-11.tar.gz">
afp-EdmondsKarp_Maxflow-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-EdmondsKarp_Maxflow-2018-08-16.tar.gz">
afp-EdmondsKarp_Maxflow-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-EdmondsKarp_Maxflow-2017-10-10.tar.gz">
afp-EdmondsKarp_Maxflow-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-EdmondsKarp_Maxflow-2016-12-17.tar.gz">
afp-EdmondsKarp_Maxflow-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-EdmondsKarp_Maxflow-2016-08-12.tar.gz">
afp-EdmondsKarp_Maxflow-2016-08-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Efficient-Mergesort.html b/web/entries/Efficient-Mergesort.html
--- a/web/entries/Efficient-Mergesort.html
+++ b/web/entries/Efficient-Mergesort.html
@@ -1,261 +1,261 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Efficient Mergesort - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>fficient
<font class="first">M</font>ergesort
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Efficient Mergesort</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-11-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We provide a formalization of the mergesort algorithm as used in GHC's Data.List module, proving correctness and stability. Furthermore, experimental data suggests that generated (Haskell-)code for this algorithm is much faster than for previous algorithms available in the Isabelle distribution.</div></td>
+  <td class="abstract mathjax_process">We provide a formalization of the mergesort algorithm as used in GHC's Data.List module, proving correctness and stability. Furthermore, experimental data suggests that the generated (Haskell) code for this algorithm is much faster than that for previous algorithms available in the Isabelle distribution.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2012-10-24]:
Added reference to journal article.<br>
[2018-09-17]:
Added theory Efficient_Mergesort that works exclusively with the mutual
induction schemas generated by the function package.<br>
[2018-09-19]:
Added theory Mergesort_Complexity that proves an upper bound on the number of
comparisons that are required by mergesort.<br>
[2018-09-19]:
Theory Efficient_Mergesort replaces theory Efficient_Sort but keeps the old
name Efficient_Sort.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Efficient-Mergesort-AFP,
author = {Christian Sternagel},
title = {Efficient Mergesort},
journal = {Archive of Formal Proofs},
month = nov,
year = 2011,
note = {\url{http://isa-afp.org/entries/Efficient-Mergesort.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Regex_Equivalence.html">Regex_Equivalence</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Efficient-Mergesort/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Efficient-Mergesort/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Efficient-Mergesort/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Efficient-Mergesort-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Efficient-Mergesort-2019-06-11.tar.gz">
afp-Efficient-Mergesort-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Efficient-Mergesort-2018-08-16.tar.gz">
afp-Efficient-Mergesort-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Efficient-Mergesort-2017-10-10.tar.gz">
afp-Efficient-Mergesort-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Efficient-Mergesort-2016-12-17.tar.gz">
afp-Efficient-Mergesort-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Efficient-Mergesort-2016-02-22.tar.gz">
afp-Efficient-Mergesort-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Efficient-Mergesort-2015-05-27.tar.gz">
afp-Efficient-Mergesort-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Efficient-Mergesort-2014-08-28.tar.gz">
afp-Efficient-Mergesort-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Efficient-Mergesort-2013-12-11.tar.gz">
afp-Efficient-Mergesort-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Efficient-Mergesort-2013-11-17.tar.gz">
afp-Efficient-Mergesort-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Efficient-Mergesort-2013-03-02.tar.gz">
afp-Efficient-Mergesort-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Efficient-Mergesort-2013-02-16.tar.gz">
afp-Efficient-Mergesort-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Efficient-Mergesort-2012-05-24.tar.gz">
afp-Efficient-Mergesort-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Efficient-Mergesort-2011-11-10.tar.gz">
afp-Efficient-Mergesort-2011-11-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Elliptic_Curves_Group_Law.html b/web/entries/Elliptic_Curves_Group_Law.html
--- a/web/entries/Elliptic_Curves_Group_Law.html
+++ b/web/entries/Elliptic_Curves_Group_Law.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Group Law for Elliptic Curves - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">G</font>roup
<font class="first">L</font>aw
for
<font class="first">E</font>lliptic
<font class="first">C</font>urves
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Group Law for Elliptic Curves</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-02-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We prove the group law for elliptic curves in Weierstrass form over
fields of characteristic greater than 2. In addition to affine
coordinates, we also formalize projective coordinates, which allow for
more efficient computations. By specializing the abstract
formalization to prime fields, we can apply the curve operations to
-parameters used in standard security protocols.</div></td>
+parameters used in standard security protocols.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Elliptic_Curves_Group_Law-AFP,
author = {Stefan Berghofer},
title = {The Group Law for Elliptic Curves},
journal = {Archive of Formal Proofs},
month = feb,
year = 2017,
note = {\url{http://isa-afp.org/entries/Elliptic_Curves_Group_Law.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Elliptic_Curves_Group_Law/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Elliptic_Curves_Group_Law/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Elliptic_Curves_Group_Law/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Elliptic_Curves_Group_Law-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Elliptic_Curves_Group_Law-2019-06-11.tar.gz">
afp-Elliptic_Curves_Group_Law-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Elliptic_Curves_Group_Law-2018-08-16.tar.gz">
afp-Elliptic_Curves_Group_Law-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Elliptic_Curves_Group_Law-2017-10-10.tar.gz">
afp-Elliptic_Curves_Group_Law-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Elliptic_Curves_Group_Law-2017-03-01.tar.gz">
afp-Elliptic_Curves_Group_Law-2017-03-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Encodability_Process_Calculi.html b/web/entries/Encodability_Process_Calculi.html
--- a/web/entries/Encodability_Process_Calculi.html
+++ b/web/entries/Encodability_Process_Calculi.html
@@ -1,239 +1,239 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Analysing and Comparing Encodability Criteria for Process Calculi - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>nalysing
and
<font class="first">C</font>omparing
<font class="first">E</font>ncodability
<font class="first">C</font>riteria
for
<font class="first">P</font>rocess
<font class="first">C</font>alculi
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Analysing and Comparing Encodability Criteria for Process Calculi</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Kirstin Peters (kirstin /dot/ peters /at/ tu-berlin /dot/ de) and
<a href="http://theory.stanford.edu/~rvg/">Rob van Glabbeek</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-08-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Encodings or the proof of their absence are the main way to
+ <td class="abstract mathjax_process">Encodings or the proof of their absence are the main way to
compare process calculi. To analyse the quality of encodings and to rule out
trivial or meaningless encodings, they are augmented with quality
criteria. There exist many different criteria and different variants
of criteria for reasoning in different settings. This leads to
incomparable results. Moreover, it is not always clear whether the criteria
used to obtain a result in a particular setting do indeed fit this
setting. We show how to formally reason about and compare encodability
criteria by mapping them onto requirements on a relation between source and
target terms that is induced by the encoding function. In particular we
analyse the common criteria full abstraction, operational correspondence,
divergence reflection, success sensitiveness, and respect of barbs; e.g. we
analyse the exact nature of the simulation relation (coupled simulation
versus bisimulation) that is induced by different variants of operational
correspondence. This way we reduce the problem of analysing or comparing
encodability criteria to the better understood problem of comparing
-relations on processes.</div></td>
+relations on processes.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Encodability_Process_Calculi-AFP,
author = {Kirstin Peters and Rob van Glabbeek},
title = {Analysing and Comparing Encodability Criteria for Process Calculi},
journal = {Archive of Formal Proofs},
month = aug,
year = 2015,
note = {\url{http://isa-afp.org/entries/Encodability_Process_Calculi.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Encodability_Process_Calculi/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Encodability_Process_Calculi/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Encodability_Process_Calculi/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Encodability_Process_Calculi-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Encodability_Process_Calculi-2019-06-11.tar.gz">
afp-Encodability_Process_Calculi-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Encodability_Process_Calculi-2018-08-16.tar.gz">
afp-Encodability_Process_Calculi-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Encodability_Process_Calculi-2017-10-10.tar.gz">
afp-Encodability_Process_Calculi-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Encodability_Process_Calculi-2016-12-17.tar.gz">
afp-Encodability_Process_Calculi-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Encodability_Process_Calculi-2016-02-22.tar.gz">
afp-Encodability_Process_Calculi-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Encodability_Process_Calculi-2015-08-11.tar.gz">
afp-Encodability_Process_Calculi-2015-08-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Epistemic_Logic.html b/web/entries/Epistemic_Logic.html
--- a/web/entries/Epistemic_Logic.html
+++ b/web/entries/Epistemic_Logic.html
@@ -1,195 +1,195 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Epistemic Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>pistemic
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Epistemic Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/ahfrom/">Asta Halkjær From</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-10-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This work is a formalization of epistemic logic with countably many
agents. It includes proofs of soundness and completeness for the axiom
system K. The completeness proof is based on the textbook
"Reasoning About Knowledge" by Fagin, Halpern, Moses and
-Vardi (MIT Press 1995).</div></td>
+Vardi (MIT Press 1995).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Epistemic_Logic-AFP,
author = {Asta Halkjær From},
title = {Epistemic Logic},
journal = {Archive of Formal Proofs},
month = oct,
year = 2018,
note = {\url{http://isa-afp.org/entries/Epistemic_Logic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Epistemic_Logic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Epistemic_Logic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Epistemic_Logic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Epistemic_Logic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Epistemic_Logic-2019-06-11.tar.gz">
afp-Epistemic_Logic-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Epistemic_Logic-2018-10-29.tar.gz">
afp-Epistemic_Logic-2018-10-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Ergodic_Theory.html b/web/entries/Ergodic_Theory.html
--- a/web/entries/Ergodic_Theory.html
+++ b/web/entries/Ergodic_Theory.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Ergodic Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>rgodic
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Ergodic Theory</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Sebastien Gouezel
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Ergodic theory is the branch of mathematics that studies the behaviour of measure preserving transformations, in finite or infinite measure. It interacts both with probability theory (mainly through measure theory) and with geometry as a lot of interesting examples are from geometric origin. We implement the first definitions and theorems of ergodic theory, including notably Poicaré recurrence theorem for finite measure preserving systems (together with the notion of conservativity in general), induced maps, Kac's theorem, Birkhoff theorem (arguably the most important theorem in ergodic theory), and variations around it such as conservativity of the corresponding skew product, or Atkinson lemma.</div></td>
+  <td class="abstract mathjax_process">Ergodic theory is the branch of mathematics that studies the behaviour of measure-preserving transformations, in finite or infinite measure. It interacts both with probability theory (mainly through measure theory) and with geometry, as many interesting examples are of geometric origin. We implement the first definitions and theorems of ergodic theory, including notably the Poincaré recurrence theorem for finite measure-preserving systems (together with the notion of conservativity in general), induced maps, Kac's theorem, Birkhoff's theorem (arguably the most important theorem in ergodic theory), and variations around it such as conservativity of the corresponding skew product, or Atkinson's lemma.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Ergodic_Theory-AFP,
author = {Sebastien Gouezel},
title = {Ergodic Theory},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Ergodic_Theory.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Gromov_Hyperbolicity.html">Gromov_Hyperbolicity</a>, <a href="Lp.html">Lp</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ergodic_Theory/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Ergodic_Theory/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ergodic_Theory/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Ergodic_Theory-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Ergodic_Theory-2019-06-11.tar.gz">
afp-Ergodic_Theory-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Ergodic_Theory-2018-08-16.tar.gz">
afp-Ergodic_Theory-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Ergodic_Theory-2017-10-10.tar.gz">
afp-Ergodic_Theory-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Ergodic_Theory-2016-12-17.tar.gz">
afp-Ergodic_Theory-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Ergodic_Theory-2016-02-22.tar.gz">
afp-Ergodic_Theory-2016-02-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Error_Function.html b/web/entries/Error_Function.html
--- a/web/entries/Error_Function.html
+++ b/web/entries/Error_Function.html
@@ -1,203 +1,203 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Error Function - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">E</font>rror
<font class="first">F</font>unction
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Error Function</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-02-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p> This entry provides the definitions and basic properties of
the complex and real error function erf and the complementary error
function erfc. Additionally, it gives their full asymptotic
-expansions. </p></div></td>
+expansions. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Error_Function-AFP,
author = {Manuel Eberl},
title = {The Error Function},
journal = {Archive of Formal Proofs},
month = feb,
year = 2018,
note = {\url{http://isa-afp.org/entries/Error_Function.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Landau_Symbols.html">Landau_Symbols</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Error_Function/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Error_Function/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Error_Function/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Error_Function-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Error_Function-2019-06-11.tar.gz">
afp-Error_Function-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Error_Function-2018-08-16.tar.gz">
afp-Error_Function-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Error_Function-2018-02-07.tar.gz">
afp-Error_Function-2018-02-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Euler_MacLaurin.html b/web/entries/Euler_MacLaurin.html
--- a/web/entries/Euler_MacLaurin.html
+++ b/web/entries/Euler_MacLaurin.html
@@ -1,219 +1,219 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Euler–MacLaurin Formula - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">E</font>uler–MacLaurin
<font class="first">F</font>ormula
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Euler–MacLaurin Formula</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-03-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>The Euler-MacLaurin formula relates the value of a
discrete sum to that of the corresponding integral in terms of the
derivatives at the borders of the summation and a remainder term.
Since the remainder term is often very small as the summation bounds
grow, this can be used to compute asymptotic expansions for
sums.</p> <p>This entry contains a proof of this formula
for functions from the reals to an arbitrary Banach space. Two
variants of the formula are given: the standard textbook version and a
variant outlined in <em>Concrete Mathematics</em> that is
more useful for deriving asymptotic estimates.</p> <p>As
example applications, we use that formula to derive the full
asymptotic expansion of the harmonic numbers and the sum of inverse
-squares.</p></div></td>
+squares.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Euler_MacLaurin-AFP,
author = {Manuel Eberl},
title = {The Euler–MacLaurin Formula},
journal = {Archive of Formal Proofs},
month = mar,
year = 2017,
note = {\url{http://isa-afp.org/entries/Euler_MacLaurin.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Bernoulli.html">Bernoulli</a>, <a href="Landau_Symbols.html">Landau_Symbols</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Dirichlet_Series.html">Dirichlet_Series</a>, <a href="Zeta_Function.html">Zeta_Function</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Euler_MacLaurin/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Euler_MacLaurin/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Euler_MacLaurin/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Euler_MacLaurin-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Euler_MacLaurin-2019-06-11.tar.gz">
afp-Euler_MacLaurin-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Euler_MacLaurin-2018-08-16.tar.gz">
afp-Euler_MacLaurin-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Euler_MacLaurin-2017-10-10.tar.gz">
afp-Euler_MacLaurin-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Euler_MacLaurin-2017-03-14.tar.gz">
afp-Euler_MacLaurin-2017-03-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Euler_Partition.html b/web/entries/Euler_Partition.html
--- a/web/entries/Euler_Partition.html
+++ b/web/entries/Euler_Partition.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Euler's Partition Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>uler's
<font class="first">P</font>artition
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Euler's Partition Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-11-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Euler's Partition Theorem states that the number of partitions with only
distinct parts is equal to the number of partitions with only odd parts.
The combinatorial proof follows John Harrison's HOL Light formalization.
-This theorem is the 45th theorem of the Top 100 Theorems list.</div></td>
+This theorem is the 45th theorem of the Top 100 Theorems list.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Euler_Partition-AFP,
author = {Lukas Bulwahn},
title = {Euler's Partition Theorem},
journal = {Archive of Formal Proofs},
month = nov,
year = 2015,
note = {\url{http://isa-afp.org/entries/Euler_Partition.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Card_Number_Partitions.html">Card_Number_Partitions</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Euler_Partition/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Euler_Partition/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Euler_Partition/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Euler_Partition-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Euler_Partition-2019-06-11.tar.gz">
afp-Euler_Partition-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Euler_Partition-2018-08-16.tar.gz">
afp-Euler_Partition-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Euler_Partition-2017-10-10.tar.gz">
afp-Euler_Partition-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Euler_Partition-2016-12-17.tar.gz">
afp-Euler_Partition-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Euler_Partition-2016-02-22.tar.gz">
afp-Euler_Partition-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Euler_Partition-2015-11-20.tar.gz">
afp-Euler_Partition-2015-11-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Example-Submission.html b/web/entries/Example-Submission.html
--- a/web/entries/Example-Submission.html
+++ b/web/entries/Example-Submission.html
@@ -1,191 +1,191 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Example Submission - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>xample
<font class="first">S</font>ubmission
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Example Submission</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.cse.unsw.edu.au/~kleing/">Gerwin Klein</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-02-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This is an example submission to the Archive of Formal Proofs. It shows
submission requirements and explains the structure of a simple typical
submission.</p>
<p>Note that you can use <em>HTML tags</em> and LaTeX formulae like
$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$ in the abstract. Display formulae like
$$ \int_0^1 x^{-x}\,\text{d}x = \sum_{n=1}^\infty n^{-n}$$
are also possible. Please read the
-<a href="submitting.html">submission guidelines</a> before using this.</p></div></td>
+<a href="../submitting.html">submission guidelines</a> before using this.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">no-index:</td>
<td class="abstract">true</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Example-Submission-AFP,
author = {Gerwin Klein},
title = {Example Submission},
journal = {Archive of Formal Proofs},
month = feb,
year = 2004,
note = {\url{http://isa-afp.org/entries/Example-Submission.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Example-Submission/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Example-Submission/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Example-Submission/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Example-Submission-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
None
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FFT.html b/web/entries/FFT.html
--- a/web/entries/FFT.html
+++ b/web/entries/FFT.html
@@ -1,277 +1,277 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Fast Fourier Transform - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ast
<font class="first">F</font>ourier
<font class="first">T</font>ransform
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Fast Fourier Transform</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~ballarin/">Clemens Ballarin</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2005-10-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalise a functional implementation of the FFT algorithm over the complex numbers, and its inverse. Both are shown equivalent to the usual definitions of these operations through Vandermonde matrices. They are also shown to be inverse to each other, more precisely, that composition of the inverse and the transformation yield the identity up to a scalar.</div></td>
+  <td class="abstract mathjax_process">We formalise a functional implementation of the FFT algorithm over the complex numbers, and its inverse. Both are shown equivalent to the usual definitions of these operations through Vandermonde matrices. They are also shown to be inverse to each other; more precisely, the composition of the inverse and the transformation yields the identity up to a scalar.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FFT-AFP,
author = {Clemens Ballarin},
title = {Fast Fourier Transform},
journal = {Archive of Formal Proofs},
month = oct,
year = 2005,
note = {\url{http://isa-afp.org/entries/FFT.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FFT/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FFT/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FFT/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FFT-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FFT-2019-06-11.tar.gz">
afp-FFT-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FFT-2018-08-16.tar.gz">
afp-FFT-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FFT-2017-10-10.tar.gz">
afp-FFT-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FFT-2016-12-17.tar.gz">
afp-FFT-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FFT-2016-02-22.tar.gz">
afp-FFT-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-FFT-2015-05-27.tar.gz">
afp-FFT-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-FFT-2014-08-28.tar.gz">
afp-FFT-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-FFT-2013-12-11.tar.gz">
afp-FFT-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-FFT-2013-11-17.tar.gz">
afp-FFT-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FFT-2013-02-16.tar.gz">
afp-FFT-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-FFT-2012-05-24.tar.gz">
afp-FFT-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-FFT-2011-10-11.tar.gz">
afp-FFT-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-FFT-2011-02-11.tar.gz">
afp-FFT-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-FFT-2010-06-30.tar.gz">
afp-FFT-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-FFT-2009-12-12.tar.gz">
afp-FFT-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-FFT-2009-04-29.tar.gz">
afp-FFT-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-FFT-2008-06-10.tar.gz">
afp-FFT-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-FFT-2007-11-27.tar.gz">
afp-FFT-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-FFT-2005-10-14.tar.gz">
afp-FFT-2005-10-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FLP.html b/web/entries/FLP.html
--- a/web/entries/FLP.html
+++ b/web/entries/FLP.html
@@ -1,233 +1,233 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Constructive Proof for FLP - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">C</font>onstructive
<font class="first">P</font>roof
for
<font class="first">F</font>LP
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Constructive Proof for FLP</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Benjamin Bisping (benjamin /dot/ bisping /at/ campus /dot/ tu-berlin /dot/ de),
Paul-David Brodmann (p /dot/ brodmann /at/ tu-berlin /dot/ de),
Tim Jungnickel (tim /dot/ jungnickel /at/ tu-berlin /dot/ de),
Christina Rickmann (c /dot/ rickmann /at/ tu-berlin /dot/ de),
Henning Seidler (henning /dot/ seidler /at/ mailbox /dot/ tu-berlin /dot/ de),
Anke Stüber (anke /dot/ stueber /at/ campus /dot/ tu-berlin /dot/ de),
Arno Wilhelm-Weidner (arno /dot/ wilhelm-weidner /at/ tu-berlin /dot/ de),
Kirstin Peters (kirstin /dot/ peters /at/ tu-berlin /dot/ de) and
<a href="https://www.mtv.tu-berlin.de/nestmann/">Uwe Nestmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The impossibility of distributed consensus with one faulty process is
a result with important consequences for real-world distributed
systems, e.g., commits in replicated databases. Since proofs are not
immune to faults and even plausible proofs with a profound formalism
can conclude wrong results, we validate the fundamental result named
FLP after Fischer, Lynch and Paterson.
We present a formalization of distributed systems
and the aforementioned consensus problem. Our proof is based on Hagen
Völzer's paper "A constructive proof for FLP". In addition to the
enhanced confidence in the validity of Völzer's proof, we fill in
the missing gaps to show the correctness in Isabelle/HOL. We clarify
the proof details and even prove fairness of the infinite execution
that contradicts consensus. Our Isabelle formalization can also be
-reused for further proofs of properties of distributed systems.</div></td>
+reused for further proofs of properties of distributed systems.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FLP-AFP,
author = {Benjamin Bisping and Paul-David Brodmann and Tim Jungnickel and Christina Rickmann and Henning Seidler and Anke Stüber and Arno Wilhelm-Weidner and Kirstin Peters and Uwe Nestmann},
title = {A Constructive Proof for FLP},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/FLP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FLP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FLP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FLP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FLP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FLP-2019-06-11.tar.gz">
afp-FLP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FLP-2018-08-16.tar.gz">
afp-FLP-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FLP-2017-10-10.tar.gz">
afp-FLP-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FLP-2016-12-17.tar.gz">
afp-FLP-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FLP-2016-05-18.tar.gz">
afp-FLP-2016-05-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FOL-Fitting.html b/web/entries/FOL-Fitting.html
--- a/web/entries/FOL-Fitting.html
+++ b/web/entries/FOL-Fitting.html
@@ -1,295 +1,295 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>First-Order Logic According to Fitting - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>irst-Order
<font class="first">L</font>ogic
<font class="first">A</font>ccording
to
<font class="first">F</font>itting
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">First-Order Logic According to Fitting</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/ahfrom/">Asta Halkjær From</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2007-08-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present a formalization of parts of Melvin Fitting's book "First-Order Logic and Automated Theorem Proving". The formalization covers the syntax of first-order logic, its semantics, the model existence theorem, a natural deduction proof calculus together with a proof of correctness and completeness, as well as the Löwenheim-Skolem theorem.</div></td>
+ <td class="abstract mathjax_process">We present a formalization of parts of Melvin Fitting's book "First-Order Logic and Automated Theorem Proving". The formalization covers the syntax of first-order logic, its semantics, the model existence theorem, a natural deduction proof calculus together with a proof of correctness and completeness, as well as the Löwenheim-Skolem theorem.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-07-21]: Proved completeness theorem for open formulas. Proofs are now written in the declarative style. Enumeration of pairs and datatypes is automated using the Countable theory.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FOL-Fitting-AFP,
author = {Stefan Berghofer},
title = {First-Order Logic According to Fitting},
journal = {Archive of Formal Proofs},
month = aug,
year = 2007,
note = {\url{http://isa-afp.org/entries/FOL-Fitting.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="FOL_Seq_Calc1.html">FOL_Seq_Calc1</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FOL-Fitting/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FOL-Fitting/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FOL-Fitting/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FOL-Fitting-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FOL-Fitting-2019-06-11.tar.gz">
afp-FOL-Fitting-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FOL-Fitting-2018-08-16.tar.gz">
afp-FOL-Fitting-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FOL-Fitting-2017-10-10.tar.gz">
afp-FOL-Fitting-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FOL-Fitting-2016-12-17.tar.gz">
afp-FOL-Fitting-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FOL-Fitting-2016-02-22.tar.gz">
afp-FOL-Fitting-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-FOL-Fitting-2015-05-27.tar.gz">
afp-FOL-Fitting-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-FOL-Fitting-2014-08-28.tar.gz">
afp-FOL-Fitting-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-FOL-Fitting-2013-12-11.tar.gz">
afp-FOL-Fitting-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-FOL-Fitting-2013-11-17.tar.gz">
afp-FOL-Fitting-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FOL-Fitting-2013-03-02.tar.gz">
afp-FOL-Fitting-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FOL-Fitting-2013-02-16.tar.gz">
afp-FOL-Fitting-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-FOL-Fitting-2012-05-24.tar.gz">
afp-FOL-Fitting-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-FOL-Fitting-2011-10-11.tar.gz">
afp-FOL-Fitting-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-FOL-Fitting-2011-02-11.tar.gz">
afp-FOL-Fitting-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-FOL-Fitting-2010-06-30.tar.gz">
afp-FOL-Fitting-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-FOL-Fitting-2009-12-12.tar.gz">
afp-FOL-Fitting-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-FOL-Fitting-2009-04-29.tar.gz">
afp-FOL-Fitting-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-FOL-Fitting-2008-06-10.tar.gz">
afp-FOL-Fitting-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-FOL-Fitting-2007-11-27.tar.gz">
afp-FOL-Fitting-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FOL_Harrison.html b/web/entries/FOL_Harrison.html
--- a/web/entries/FOL_Harrison.html
+++ b/web/entries/FOL_Harrison.html
@@ -1,226 +1,226 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>First-Order Logic According to Harrison - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>irst-Order
<font class="first">L</font>ogic
<font class="first">A</font>ccording
to
<font class="first">H</font>arrison
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">First-Order Logic According to Harrison</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/aleje/">Alexander Birch Jensen</a>,
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a> and
<a href="https://people.compute.dtu.dk/jovi/">Jørgen Villadsen</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-01-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>We present a certified declarative first-order prover with equality
based on John Harrison's Handbook of Practical Logic and
Automated Reasoning, Cambridge University Press, 2009. ML code
reflection is used such that the entire prover can be executed within
Isabelle as a very simple interactive proof assistant. As examples we
consider Pelletier's problems 1-46.</p>
<p>Reference: Programming and Verifying a Declarative First-Order
Prover in Isabelle/HOL. Alexander Birch Jensen, John Bruntse Larsen,
Anders Schlichtkrull & Jørgen Villadsen. AI Communications 31:281-299
2018. <a href="https://content.iospress.com/articles/ai-communications/aic764">
https://content.iospress.com/articles/ai-communications/aic764</a></p>
<p>See also: Students' Proof Assistant (SPA).
<a href="https://github.com/logic-tools/spa">
-https://github.com/logic-tools/spa</a></p></div></td>
+https://github.com/logic-tools/spa</a></p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-07-21]: Proof of Pelletier's problem 34 (Andrews's Challenge) thanks to Asta Halkjær From.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FOL_Harrison-AFP,
author = {Alexander Birch Jensen and Anders Schlichtkrull and Jørgen Villadsen},
title = {First-Order Logic According to Harrison},
journal = {Archive of Formal Proofs},
month = jan,
year = 2017,
note = {\url{http://isa-afp.org/entries/FOL_Harrison.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FOL_Harrison/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FOL_Harrison/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FOL_Harrison/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FOL_Harrison-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FOL_Harrison-2019-06-11.tar.gz">
afp-FOL_Harrison-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FOL_Harrison-2018-08-16.tar.gz">
afp-FOL_Harrison-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FOL_Harrison-2017-10-10.tar.gz">
afp-FOL_Harrison-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FOL_Harrison-2017-01-04.tar.gz">
afp-FOL_Harrison-2017-01-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FOL_Seq_Calc1.html b/web/entries/FOL_Seq_Calc1.html
--- a/web/entries/FOL_Seq_Calc1.html
+++ b/web/entries/FOL_Seq_Calc1.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Sequent Calculus for First-Order Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">S</font>equent
<font class="first">C</font>alculus
for
<font class="first">F</font>irst-Order
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Sequent Calculus for First-Order Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/ahfrom/">Asta Halkjær From</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributors:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/aleje/">Alexander Birch Jensen</a>,
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a> and
<a href="https://people.compute.dtu.dk/jovi/">Jørgen Villadsen</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-07-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This work formalizes soundness and completeness of a one-sided sequent
calculus for first-order logic. The completeness is shown via a
translation from a complete semantic tableau calculus, the proof of
which is based on the First-Order Logic According to Fitting theory.
The calculi and proof techniques are taken from Ben-Ari's
-Mathematical Logic for Computer Science.</div></td>
+Mathematical Logic for Computer Science.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FOL_Seq_Calc1-AFP,
author = {Asta Halkjær From},
title = {A Sequent Calculus for First-Order Logic},
journal = {Archive of Formal Proofs},
month = jul,
year = 2019,
note = {\url{http://isa-afp.org/entries/FOL_Seq_Calc1.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="FOL-Fitting.html">FOL-Fitting</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FOL_Seq_Calc1/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FOL_Seq_Calc1/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FOL_Seq_Calc1/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FOL_Seq_Calc1-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FOL_Seq_Calc1-2019-07-18.tar.gz">
afp-FOL_Seq_Calc1-2019-07-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Factored_Transition_System_Bounding.html b/web/entries/Factored_Transition_System_Bounding.html
--- a/web/entries/Factored_Transition_System_Bounding.html
+++ b/web/entries/Factored_Transition_System_Bounding.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Upper Bounding Diameters of State Spaces of Factored Transition Systems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">U</font>pper
<font class="first">B</font>ounding
<font class="first">D</font>iameters
of
<font class="first">S</font>tate
<font class="first">S</font>paces
of
<font class="first">F</font>actored
<font class="first">T</font>ransition
<font class="first">S</font>ystems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Upper Bounding Diameters of State Spaces of Factored Transition Systems</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Friedrich Kurz and
<a href="http://home.in.tum.de/~mansour/">Mohammad Abdulaziz</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-10-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A completeness threshold is required to guarantee the completeness of
planning as satisfiability, and bounded model checking of safety
properties. One valid completeness threshold is the diameter of the
underlying transition system. The diameter is the maximum element in
the set of lengths of all shortest paths between pairs of states. The
diameter is not calculated exactly in our setting, where the
transition system is succinctly described using a (propositionally)
factored representation. Rather, an upper bound on the diameter is
calculated compositionally, by bounding the diameters of small
abstract subsystems, and then composing those. We port a HOL4
formalisation of a compositional algorithm for computing a relatively
tight upper bound on the system diameter. This compositional algorithm
exploits acyclicity in the state space to achieve compositionality,
and it was introduced by Abdulaziz et al. The formalisation that we
port is described as part of another paper by Abdulaziz et al. As
part of this porting we developed a library on transition systems,
-which shall be of use in future related mechanisation efforts.</div></td>
+which shall be of use in future related mechanisation efforts.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Factored_Transition_System_Bounding-AFP,
author = {Friedrich Kurz and Mohammad Abdulaziz},
title = {Upper Bounding Diameters of State Spaces of Factored Transition Systems},
journal = {Archive of Formal Proofs},
month = oct,
year = 2018,
note = {\url{http://isa-afp.org/entries/Factored_Transition_System_Bounding.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Factored_Transition_System_Bounding/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Factored_Transition_System_Bounding/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Factored_Transition_System_Bounding/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Factored_Transition_System_Bounding-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Factored_Transition_System_Bounding-2019-06-11.tar.gz">
afp-Factored_Transition_System_Bounding-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Factored_Transition_System_Bounding-2018-10-16.tar.gz">
afp-Factored_Transition_System_Bounding-2018-10-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Falling_Factorial_Sum.html b/web/entries/Falling_Factorial_Sum.html
--- a/web/entries/Falling_Factorial_Sum.html
+++ b/web/entries/Falling_Factorial_Sum.html
@@ -1,216 +1,216 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Falling Factorial of a Sum - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">F</font>alling
<font class="first">F</font>actorial
of
a
<font class="first">S</font>um
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Falling Factorial of a Sum</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-12-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry shows that the falling factorial of a sum can be computed
with an expression using binomial coefficients and the falling
factorial of its summands. The entry provides three different proofs:
a combinatorial proof, an induction proof and an algebraic proof using
the Vandermonde identity. The three formalizations try to follow
their informal presentations from a Mathematics Stack Exchange page as
closely as possible. The induction and algebraic formalizations end up
very close to their informal presentations, whereas the
combinatorial proof first requires the introduction of list
interleavings, and significantly more detail than its informal
-presentation.</div></td>
+presentation.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Falling_Factorial_Sum-AFP,
author = {Lukas Bulwahn},
title = {The Falling Factorial of a Sum},
journal = {Archive of Formal Proofs},
month = dec,
year = 2017,
note = {\url{http://isa-afp.org/entries/Falling_Factorial_Sum.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Card_Partitions.html">Card_Partitions</a>, <a href="Discrete_Summation.html">Discrete_Summation</a> </td></tr>
</tbody>
</table>
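The identity described in the abstract above can be stated compactly; the following LaTeX sketch gives one standard formulation, assuming the conventional falling-factorial notation x^(n) = x(x-1)...(x-n+1) (the notation and variable names are illustrative and not taken from the entry's theories).

% Falling factorial of a sum via binomial coefficients (Vandermonde-style identity).
\[
  (x + y)^{\underline{n}} \;=\; \sum_{k=0}^{n} \binom{n}{k}\, x^{\underline{k}}\, y^{\underline{n-k}}
\]
% Small check for n = 2: (x+y)(x+y-1) = x(x-1) + 2xy + y(y-1).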
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Falling_Factorial_Sum/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Falling_Factorial_Sum/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Falling_Factorial_Sum/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Falling_Factorial_Sum-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Falling_Factorial_Sum-2019-06-11.tar.gz">
afp-Falling_Factorial_Sum-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Falling_Factorial_Sum-2018-08-16.tar.gz">
afp-Falling_Factorial_Sum-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Falling_Factorial_Sum-2017-12-30.tar.gz">
afp-Falling_Factorial_Sum-2017-12-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Farkas.html b/web/entries/Farkas.html
--- a/web/entries/Farkas.html
+++ b/web/entries/Farkas.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Farkas' Lemma and Motzkin's Transposition Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>arkas'
<font class="first">L</font>emma
and
<font class="first">M</font>otzkin's
<font class="first">T</font>ransposition
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Farkas' Lemma and Motzkin's Transposition Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/users/bottesch/">Ralph Bottesch</a>,
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Max W. Haslbeck</a> and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-01-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize a proof of Motzkin's transposition theorem and
Farkas' lemma in Isabelle/HOL. Our proof is based on the
formalization of the simplex algorithm which, given a set of linear
constraints, either returns a satisfying assignment to the problem or
detects unsatisfiability. By reusing facts about the simplex algorithm
we show that a set of linear constraints is unsatisfiable if and only
if there is a linear combination of the constraints which evaluates to
-a trivially unsatisfiable inequality.</div></td>
+a trivially unsatisfiable inequality.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Farkas-AFP,
author = {Ralph Bottesch and Max W. Haslbeck and René Thiemann},
title = {Farkas' Lemma and Motzkin's Transposition Theorem},
journal = {Archive of Formal Proofs},
month = jan,
year = 2019,
note = {\url{http://isa-afp.org/entries/Farkas.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a>, <a href="Simplex.html">Simplex</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Linear_Programming.html">Linear_Programming</a> </td></tr>
</tbody>
</table>
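As background for the abstract above, the LaTeX sketch below states one common matrix form of Farkas' lemma; the entry's formalization is phrased over sets of linear constraints on top of the Simplex development, so the symbols A, b, x, y here are illustrative assumptions rather than the entry's own notation.

% One standard form of Farkas' lemma: a system Ax <= b is unsatisfiable
% iff some nonnegative combination of its rows is trivially unsatisfiable.
\[
  \bigl(\exists x.\; A x \le b\bigr)
  \;\Longleftrightarrow\;
  \neg\,\bigl(\exists y \ge 0.\; y^{\mathsf{T}} A = 0 \;\wedge\; y^{\mathsf{T}} b < 0\bigr)
\]
% If a solution x and such a y existed simultaneously, then
% 0 = (y^T A) x <= y^T b < 0 would be a contradiction.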
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Farkas/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Farkas/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Farkas/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Farkas-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Farkas-2019-06-11.tar.gz">
afp-Farkas-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Farkas-2019-01-21.tar.gz">
afp-Farkas-2019-01-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FeatherweightJava.html b/web/entries/FeatherweightJava.html
--- a/web/entries/FeatherweightJava.html
+++ b/web/entries/FeatherweightJava.html
@@ -1,291 +1,291 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Theory of Featherweight Java in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">T</font>heory
of
<font class="first">F</font>eatherweight
<font class="first">J</font>ava
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Theory of Featherweight Java in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.cs.cornell.edu/~jnfoster/">J. Nathan Foster</a> and
<a href="http://research.microsoft.com/en-us/people/dimitris/">Dimitrios Vytiniotis</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2006-03-31</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize the type system, small-step operational semantics, and type soundness proof for Featherweight Java, a simple object calculus, in Isabelle/HOL.</div></td>
+ <td class="abstract mathjax_process">We formalize the type system, small-step operational semantics, and type soundness proof for Featherweight Java, a simple object calculus, in Isabelle/HOL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FeatherweightJava-AFP,
author = {J. Nathan Foster and Dimitrios Vytiniotis},
title = {A Theory of Featherweight Java in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = mar,
year = 2006,
note = {\url{http://isa-afp.org/entries/FeatherweightJava.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FeatherweightJava/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FeatherweightJava/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FeatherweightJava/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FeatherweightJava-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FeatherweightJava-2019-06-11.tar.gz">
afp-FeatherweightJava-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FeatherweightJava-2018-08-16.tar.gz">
afp-FeatherweightJava-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FeatherweightJava-2017-10-10.tar.gz">
afp-FeatherweightJava-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FeatherweightJava-2016-12-17.tar.gz">
afp-FeatherweightJava-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FeatherweightJava-2016-02-22.tar.gz">
afp-FeatherweightJava-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-FeatherweightJava-2015-05-27.tar.gz">
afp-FeatherweightJava-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-FeatherweightJava-2014-08-28.tar.gz">
afp-FeatherweightJava-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-FeatherweightJava-2013-12-11.tar.gz">
afp-FeatherweightJava-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-FeatherweightJava-2013-11-17.tar.gz">
afp-FeatherweightJava-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FeatherweightJava-2013-02-16.tar.gz">
afp-FeatherweightJava-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-FeatherweightJava-2012-05-24.tar.gz">
afp-FeatherweightJava-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-FeatherweightJava-2011-10-11.tar.gz">
afp-FeatherweightJava-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-FeatherweightJava-2011-02-11.tar.gz">
afp-FeatherweightJava-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-FeatherweightJava-2010-06-30.tar.gz">
afp-FeatherweightJava-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-FeatherweightJava-2009-12-12.tar.gz">
afp-FeatherweightJava-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-FeatherweightJava-2009-04-29.tar.gz">
afp-FeatherweightJava-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-FeatherweightJava-2008-06-10.tar.gz">
afp-FeatherweightJava-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-FeatherweightJava-2007-11-27.tar.gz">
afp-FeatherweightJava-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-FeatherweightJava-2006-04-06.tar.gz">
afp-FeatherweightJava-2006-04-06.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-FeatherweightJava-2006-04-05.tar.gz">
afp-FeatherweightJava-2006-04-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Featherweight_OCL.html b/web/entries/Featherweight_OCL.html
--- a/web/entries/Featherweight_OCL.html
+++ b/web/entries/Featherweight_OCL.html
@@ -1,273 +1,273 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5 - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>eatherweight
<font class="first">O</font>CL:
<font class="first">A</font>
<font class="first">P</font>roposal
for
a
<font class="first">M</font>achine-Checked
<font class="first">F</font>ormal
<font class="first">S</font>emantics
for
<font class="first">O</font>CL
<font class="first">2</font>.5
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.brucker.ch/">Achim D. Brucker</a>,
<a href="https://www.lri.fr/~ftuong/">Frédéric Tuong</a> and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-01-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The Unified Modeling Language (UML) is one of the few
+ <td class="abstract mathjax_process">The Unified Modeling Language (UML) is one of the few
modeling languages that is widely used in industry. While
UML is mostly known as a diagrammatic modeling language
(e.g., visualizing class models), it is complemented by a
textual language, called Object Constraint Language
(OCL). The current version of OCL is based on a four-valued
logic that turns UML into a formal language. Any type
comprises the elements "invalid" and "null" which are
propagated as strict and non-strict, respectively.
Unfortunately, the former semi-formal semantics of this
specification language, captured in the "Annex A" of the
OCL standard, leads to different interpretations of corner
cases. We formalize the core of OCL: denotational
definitions, a logical calculus and operational rules that
allow for the execution of OCL expressions by a mixture of
term rewriting and code compilation. Our formalization
reveals several inconsistencies and contradictions in the
current version of the OCL standard. Overall, this document
is intended to provide the basis for a machine-checked text
"Annex A" of the OCL standard targeting at tool
-implementors.</div></td>
+implementors.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-10-13]:
<a href="https://bitbucket.org/isa-afp/afp-devel/commits/ea3b38fc54d68535bcfafd40357b6ff8f1092057">afp-devel@ea3b38fc54d6</a> and
<a href="https://projects.brucker.ch/hol-testgen/log/trunk?rev=12148">hol-testgen@12148</a><br>
&nbsp;&nbsp;&nbsp;Update of Featherweight OCL including a change in the abstract.<br>
[2014-01-16]:
<a href="https://bitbucket.org/isa-afp/afp-devel/commits/9091ce05cb20d4ad3dc1961c18f1846d85e87f8e">afp-devel@9091ce05cb20</a> and
<a href="https://projects.brucker.ch/hol-testgen/log/trunk?rev=10241">hol-testgen@10241</a><br>
&nbsp;&nbsp;&nbsp;New Entry: Featherweight OCL</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Featherweight_OCL-AFP,
author = {Achim D. Brucker and Frédéric Tuong and Burkhart Wolff},
title = {Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5},
journal = {Archive of Formal Proofs},
month = jan,
year = 2014,
note = {\url{http://isa-afp.org/entries/Featherweight_OCL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Featherweight_OCL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Featherweight_OCL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Featherweight_OCL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Featherweight_OCL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Featherweight_OCL-2019-06-11.tar.gz">
afp-Featherweight_OCL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Featherweight_OCL-2018-08-16.tar.gz">
afp-Featherweight_OCL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Featherweight_OCL-2017-10-10.tar.gz">
afp-Featherweight_OCL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Featherweight_OCL-2016-12-17.tar.gz">
afp-Featherweight_OCL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Featherweight_OCL-2016-02-22.tar.gz">
afp-Featherweight_OCL-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Featherweight_OCL-2015-05-27.tar.gz">
afp-Featherweight_OCL-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Featherweight_OCL-2014-08-28.tar.gz">
afp-Featherweight_OCL-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Featherweight_OCL-2014-01-16.tar.gz">
afp-Featherweight_OCL-2014-01-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Fermat3_4.html b/web/entries/Fermat3_4.html
--- a/web/entries/Fermat3_4.html
+++ b/web/entries/Fermat3_4.html
@@ -1,294 +1,294 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ermat's
<font class="first">L</font>ast
<font class="first">T</font>heorem
for
<font class="first">E</font>xponents
<font class="first">3</font>
and
<font class="first">4</font>
and
the
<font class="first">P</font>arametrisation
of
<font class="first">P</font>ythagorean
<font class="first">T</font>riples
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Roelof Oosterhuis
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2007-08-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This document presents the mechanised proofs of<ul><li>Fermat's Last Theorem for exponents 3 and 4 and</li><li>the parametrisation of Pythagorean Triples.</li></ul></div></td>
+ <td class="abstract mathjax_process">This document presents the mechanised proofs of<ul><li>Fermat's Last Theorem for exponents 3 and 4 and</li><li>the parametrisation of Pythagorean Triples.</li></ul></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Fermat3_4-AFP,
author = {Roelof Oosterhuis},
title = {Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples},
journal = {Archive of Formal Proofs},
month = aug,
year = 2007,
note = {\url{http://isa-afp.org/entries/Fermat3_4.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
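For orientation, the two results named in the abstract above can be stated as follows; the variable names are illustrative and not taken from the entry's theory files.

% Fermat's Last Theorem for exponents 3 and 4:
\[
  x^{3} + y^{3} = z^{3} \quad\text{and}\quad x^{4} + y^{4} = z^{4}
  \quad\text{have no solutions in positive integers } x, y, z.
\]
% Parametrisation of Pythagorean triples: every primitive triple a^2 + b^2 = c^2 with b even is
\[
  a = m^{2} - n^{2}, \qquad b = 2mn, \qquad c = m^{2} + n^{2}
\]
% for coprime integers m > n > 0 of opposite parity.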
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Fermat3_4/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Fermat3_4/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Fermat3_4/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Fermat3_4-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Fermat3_4-2019-06-11.tar.gz">
afp-Fermat3_4-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Fermat3_4-2018-08-16.tar.gz">
afp-Fermat3_4-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Fermat3_4-2017-10-10.tar.gz">
afp-Fermat3_4-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Fermat3_4-2016-12-17.tar.gz">
afp-Fermat3_4-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Fermat3_4-2016-02-22.tar.gz">
afp-Fermat3_4-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Fermat3_4-2015-05-27.tar.gz">
afp-Fermat3_4-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Fermat3_4-2014-08-28.tar.gz">
afp-Fermat3_4-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Fermat3_4-2013-12-11.tar.gz">
afp-Fermat3_4-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Fermat3_4-2013-11-17.tar.gz">
afp-Fermat3_4-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Fermat3_4-2013-02-16.tar.gz">
afp-Fermat3_4-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Fermat3_4-2012-05-24.tar.gz">
afp-Fermat3_4-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Fermat3_4-2011-10-11.tar.gz">
afp-Fermat3_4-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Fermat3_4-2011-02-11.tar.gz">
afp-Fermat3_4-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Fermat3_4-2010-06-30.tar.gz">
afp-Fermat3_4-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Fermat3_4-2009-12-12.tar.gz">
afp-Fermat3_4-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Fermat3_4-2009-04-29.tar.gz">
afp-Fermat3_4-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Fermat3_4-2008-06-10.tar.gz">
afp-Fermat3_4-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Fermat3_4-2007-11-27.tar.gz">
afp-Fermat3_4-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FileRefinement.html b/web/entries/FileRefinement.html
--- a/web/entries/FileRefinement.html
+++ b/web/entries/FileRefinement.html
@@ -1,281 +1,281 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>File Refinement - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ile
<font class="first">R</font>efinement
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">File Refinement</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.mit.edu/~kkz/">Karen Zee</a> and
<a href="http://lara.epfl.ch/~kuncak/">Viktor Kuncak</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-12-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">These theories illustrates the verification of basic file operations (file creation, file read and file write) in the Isabelle theorem prover. We describe a file at two levels of abstraction: an abstract file represented as a resizable array, and a concrete file represented using data blocks.</div></td>
+ <td class="abstract mathjax_process">These theories illustrates the verification of basic file operations (file creation, file read and file write) in the Isabelle theorem prover. We describe a file at two levels of abstraction: an abstract file represented as a resizable array, and a concrete file represented using data blocks.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FileRefinement-AFP,
author = {Karen Zee and Viktor Kuncak},
title = {File Refinement},
journal = {Archive of Formal Proofs},
month = dec,
year = 2004,
note = {\url{http://isa-afp.org/entries/FileRefinement.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FileRefinement/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FileRefinement/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FileRefinement/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FileRefinement-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FileRefinement-2019-06-11.tar.gz">
afp-FileRefinement-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FileRefinement-2018-08-16.tar.gz">
afp-FileRefinement-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FileRefinement-2017-10-10.tar.gz">
afp-FileRefinement-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FileRefinement-2016-12-17.tar.gz">
afp-FileRefinement-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FileRefinement-2016-02-22.tar.gz">
afp-FileRefinement-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-FileRefinement-2015-05-27.tar.gz">
afp-FileRefinement-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-FileRefinement-2014-08-28.tar.gz">
afp-FileRefinement-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-FileRefinement-2013-12-11.tar.gz">
afp-FileRefinement-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-FileRefinement-2013-11-17.tar.gz">
afp-FileRefinement-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FileRefinement-2013-02-16.tar.gz">
afp-FileRefinement-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-FileRefinement-2012-05-24.tar.gz">
afp-FileRefinement-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-FileRefinement-2011-10-11.tar.gz">
afp-FileRefinement-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-FileRefinement-2011-02-11.tar.gz">
afp-FileRefinement-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-FileRefinement-2010-06-30.tar.gz">
afp-FileRefinement-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-FileRefinement-2009-12-12.tar.gz">
afp-FileRefinement-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-FileRefinement-2009-04-29.tar.gz">
afp-FileRefinement-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-FileRefinement-2008-06-10.tar.gz">
afp-FileRefinement-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-FileRefinement-2007-11-27.tar.gz">
afp-FileRefinement-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-FileRefinement-2005-10-14.tar.gz">
afp-FileRefinement-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-FileRefinement-2004-12-15.tar.gz">
afp-FileRefinement-2004-12-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FinFun.html b/web/entries/FinFun.html
--- a/web/entries/FinFun.html
+++ b/web/entries/FinFun.html
@@ -1,287 +1,287 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Code Generation for Functions as Data - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ode
<font class="first">G</font>eneration
for
<font class="first">F</font>unctions
as
<font class="first">D</font>ata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Code Generation for Functions as Data</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-05-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">FinFuns are total functions that are constant except for a finite set of points, i.e. a generalisation of finite maps. They are formalised as a new type in Isabelle/HOL such that the code generator can handle equality tests and quantification on FinFuns. On the code output level, FinFuns are explicitly represented by constant functions and pointwise updates, similarly to associative lists. Inside the logic, they behave like ordinary functions with extensionality. Via the update/constant pattern, a recursion combinator and an induction rule for FinFuns allow for defining and reasoning about operators on FinFun that are also executable.</div></td>
+ <td class="abstract mathjax_process">FinFuns are total functions that are constant except for a finite set of points, i.e. a generalisation of finite maps. They are formalised as a new type in Isabelle/HOL such that the code generator can handle equality tests and quantification on FinFuns. On the code output level, FinFuns are explicitly represented by constant functions and pointwise updates, similarly to associative lists. Inside the logic, they behave like ordinary functions with extensionality. Via the update/constant pattern, a recursion combinator and an induction rule for FinFuns allow for defining and reasoning about operators on FinFun that are also executable.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2010-08-13]:
new concept domain of a FinFun as a FinFun
(revision 34b3517cbc09)<br>
[2010-11-04]:
new conversion function from FinFun to list of elements in the domain
(revision 0c167102e6ed)<br>
[2012-03-07]:
replace sets as FinFuns by predicates as FinFuns because the set type constructor has been reintroduced
(revision b7aa87989f3a)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FinFun-AFP,
author = {Andreas Lochbihler},
title = {Code Generation for Functions as Data},
journal = {Archive of Formal Proofs},
month = may,
year = 2009,
note = {\url{http://isa-afp.org/entries/FinFun.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="JinjaThreads.html">JinjaThreads</a>, <a href="Launchbury.html">Launchbury</a>, <a href="Nominal2.html">Nominal2</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FinFun/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FinFun/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FinFun/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FinFun-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FinFun-2019-06-11.tar.gz">
afp-FinFun-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FinFun-2018-08-16.tar.gz">
afp-FinFun-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FinFun-2017-10-10.tar.gz">
afp-FinFun-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FinFun-2016-12-17.tar.gz">
afp-FinFun-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FinFun-2016-02-22.tar.gz">
afp-FinFun-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-FinFun-2015-05-27.tar.gz">
afp-FinFun-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-FinFun-2014-08-28.tar.gz">
afp-FinFun-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-FinFun-2013-12-11.tar.gz">
afp-FinFun-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-FinFun-2013-11-17.tar.gz">
afp-FinFun-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FinFun-2013-03-02.tar.gz">
afp-FinFun-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FinFun-2013-02-16.tar.gz">
afp-FinFun-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-FinFun-2012-05-24.tar.gz">
afp-FinFun-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-FinFun-2011-10-11.tar.gz">
afp-FinFun-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-FinFun-2011-02-11.tar.gz">
afp-FinFun-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-FinFun-2010-06-30.tar.gz">
afp-FinFun-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-FinFun-2009-12-12.tar.gz">
afp-FinFun-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-FinFun-2009-05-25.tar.gz">
afp-FinFun-2009-05-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
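Aside on the FinFun entry above: its abstract describes total functions that are constant except at finitely many points, represented on the code level by a constant function plus pointwise updates, much like association lists. The minimal Python sketch below only illustrates that representation; it is not part of this patch or of the Isabelle/HOL entry, and the class and method names are invented for the example.

    class FinFun:
        """Total function that is constant except at finitely many points,
        stored as a default value plus explicit pointwise updates."""

        def __init__(self, default, updates=None):
            self.default = default
            # Only points whose value differs from the default are stored.
            self.updates = {k: v for k, v in (updates or {}).items() if v != default}

        def __call__(self, x):
            return self.updates.get(x, self.default)

        def update(self, x, y):
            new = dict(self.updates)
            if y == self.default:
                new.pop(x, None)  # keep the representation canonical
            else:
                new[x] = y
            return FinFun(self.default, new)

        def __eq__(self, other):
            # Decidable equality: over an infinite domain, two such functions are
            # equal exactly when their defaults and canonical update maps agree.
            return self.default == other.default and self.updates == other.updates

For example, FinFun(0).update(3, 7) behaves like the function that is 0 everywhere except at 3, where it returns 7.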
diff --git a/web/entries/Finger-Trees.html b/web/entries/Finger-Trees.html
--- a/web/entries/Finger-Trees.html
+++ b/web/entries/Finger-Trees.html
@@ -1,269 +1,269 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Finger Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>inger
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Finger Trees</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Benedikt Nordhoff (b_nord01 /at/ uni-muenster /dot/ de),
Stefan Körner (s_koer03 /at/ uni-muenster /dot/ de) and
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-10-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We implement and prove correct 2-3 finger trees.
Finger trees are a general purpose data structure, that can be used to
efficiently implement other data structures, such as priority queues.
Intuitively, a finger tree is an annotated sequence, where the annotations are
elements of a monoid. Apart from operations to access the ends of the sequence,
the main operation is to split the sequence at the point where a
<em>monotone predicate</em> over the sum of the left part of the sequence
becomes true for the first time.
The implementation follows the paper of Hinze and Paterson.
-The code generator can be used to get efficient, verified code.</div></td>
+The code generator can be used to get efficient, verified code.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Finger-Trees-AFP,
author = {Benedikt Nordhoff and Stefan Körner and Peter Lammich},
title = {Finger Trees},
journal = {Archive of Formal Proofs},
month = oct,
year = 2010,
note = {\url{http://isa-afp.org/entries/Finger-Trees.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Collections.html">Collections</a>, <a href="Containers.html">Containers</a>, <a href="JinjaThreads.html">JinjaThreads</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Finger-Trees/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Finger-Trees/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Finger-Trees/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Finger-Trees-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Finger-Trees-2019-06-11.tar.gz">
afp-Finger-Trees-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Finger-Trees-2018-08-16.tar.gz">
afp-Finger-Trees-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Finger-Trees-2017-10-10.tar.gz">
afp-Finger-Trees-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Finger-Trees-2016-12-17.tar.gz">
afp-Finger-Trees-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Finger-Trees-2016-02-22.tar.gz">
afp-Finger-Trees-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Finger-Trees-2015-05-27.tar.gz">
afp-Finger-Trees-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Finger-Trees-2014-08-28.tar.gz">
afp-Finger-Trees-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Finger-Trees-2013-12-11.tar.gz">
afp-Finger-Trees-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Finger-Trees-2013-11-17.tar.gz">
afp-Finger-Trees-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Finger-Trees-2013-03-02.tar.gz">
afp-Finger-Trees-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Finger-Trees-2013-02-16.tar.gz">
afp-Finger-Trees-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Finger-Trees-2012-05-24.tar.gz">
afp-Finger-Trees-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Finger-Trees-2011-10-11.tar.gz">
afp-Finger-Trees-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Finger-Trees-2011-02-11.tar.gz">
afp-Finger-Trees-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Finger-Trees-2010-10-28.tar.gz">
afp-Finger-Trees-2010-10-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
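The Finger-Trees abstract above centres on one operation: splitting an annotated sequence at the first point where a monotone predicate over the accumulated monoid annotation of the prefix becomes true. The Python sketch below shows only that split idea on a plain list, in linear time; real 2-3 finger trees cache subtree annotations and split in logarithmic time. It is an illustration under those simplifying assumptions, not the verified implementation from the entry, and the function and parameter names are invented here.

    def split_by_predicate(xs, measure, combine, identity, pred):
        """Split xs at the first position where the predicate on the combined
        measure of the prefix (including the current element) becomes true.
        Returns (left, right) with right starting at that position; if the
        predicate never becomes true, right is empty."""
        acc = identity
        for i, x in enumerate(xs):
            acc = combine(acc, measure(x))
            if pred(acc):
                return xs[:i], xs[i:]
        return xs, []

For instance, measure(x) = 1, combine = operator.add, identity = 0 and pred(n) = n > k splits off the first k elements (positional access); taking the measure to be a priority with combine = max recovers the priority-queue use mentioned in the abstract.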
diff --git a/web/entries/Finite_Automata_HF.html b/web/entries/Finite_Automata_HF.html
--- a/web/entries/Finite_Automata_HF.html
+++ b/web/entries/Finite_Automata_HF.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Finite Automata in Hereditarily Finite Set Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>inite
<font class="first">A</font>utomata
in
<font class="first">H</font>ereditarily
<font class="first">F</font>inite
<font class="first">S</font>et
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Finite Automata in Hereditarily Finite Set Theory</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-02-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Finite Automata, both deterministic and non-deterministic, for regular languages.
+ <td class="abstract mathjax_process">Finite Automata, both deterministic and non-deterministic, for regular languages.
The Myhill-Nerode Theorem. Closure under intersection, concatenation, etc.
Regular expressions define regular languages. Closure under reversal;
the powerset construction mapping NFAs to DFAs. Left and right languages; minimal DFAs.
-Brzozowski's minimization algorithm. Uniqueness up to isomorphism of minimal DFAs.</div></td>
+Brzozowski's minimization algorithm. Uniqueness up to isomorphism of minimal DFAs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Finite_Automata_HF-AFP,
author = {Lawrence C. Paulson},
title = {Finite Automata in Hereditarily Finite Set Theory},
journal = {Archive of Formal Proofs},
month = feb,
year = 2015,
note = {\url{http://isa-afp.org/entries/Finite_Automata_HF.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="HereditarilyFinite.html">HereditarilyFinite</a>, <a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Finite_Automata_HF/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Finite_Automata_HF/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Finite_Automata_HF/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Finite_Automata_HF-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Finite_Automata_HF-2019-06-11.tar.gz">
afp-Finite_Automata_HF-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Finite_Automata_HF-2018-08-16.tar.gz">
afp-Finite_Automata_HF-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Finite_Automata_HF-2017-10-10.tar.gz">
afp-Finite_Automata_HF-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Finite_Automata_HF-2016-12-17.tar.gz">
afp-Finite_Automata_HF-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Finite_Automata_HF-2016-02-22.tar.gz">
afp-Finite_Automata_HF-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Finite_Automata_HF-2015-05-27.tar.gz">
afp-Finite_Automata_HF-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Finite_Automata_HF-2015-02-05.tar.gz">
afp-Finite_Automata_HF-2015-02-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/First_Order_Terms.html b/web/entries/First_Order_Terms.html
--- a/web/entries/First_Order_Terms.html
+++ b/web/entries/First_Order_Terms.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>First-Order Terms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>irst-Order
<font class="first">T</font>erms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">First-Order Terms</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-02-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize basic results on first-order terms, including matching and a
first-order unification algorithm, as well as well-foundedness of the
subsumption order. This entry is part of the <i>Isabelle
Formalization of Rewriting</i> <a
href="http://cl-informatik.uibk.ac.at/isafor">IsaFoR</a>,
where first-order terms are omni-present: the unification algorithm is
used to certify several confluence and termination techniques, like
critical-pair computation and dependency graph approximations; and the
-subsumption order is a crucial ingredient for completion.</div></td>
+subsumption order is a crucial ingredient for completion.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{First_Order_Terms-AFP,
author = {Christian Sternagel and René Thiemann},
title = {First-Order Terms},
journal = {Archive of Formal Proofs},
month = feb,
year = 2018,
note = {\url{http://isa-afp.org/entries/First_Order_Terms.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Functional_Ordered_Resolution_Prover.html">Functional_Ordered_Resolution_Prover</a>, <a href="Resolution_FOL.html">Resolution_FOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/First_Order_Terms/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/First_Order_Terms/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/First_Order_Terms/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-First_Order_Terms-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-First_Order_Terms-2019-06-11.tar.gz">
afp-First_Order_Terms-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-First_Order_Terms-2018-08-16.tar.gz">
afp-First_Order_Terms-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-First_Order_Terms-2018-02-07.tar.gz">
afp-First_Order_Terms-2018-02-07.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-First_Order_Terms-2018-02-06.tar.gz">
afp-First_Order_Terms-2018-02-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/First_Welfare_Theorem.html b/web/entries/First_Welfare_Theorem.html
--- a/web/entries/First_Welfare_Theorem.html
+++ b/web/entries/First_Welfare_Theorem.html
@@ -1,233 +1,233 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Microeconomics and the First Welfare Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>icroeconomics
and
the
<font class="first">F</font>irst
<font class="first">W</font>elfare
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Microeconomics and the First Welfare Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.parsert.com/">Julian Parsert</a> and
<a href="http://cl-informatik.uibk.ac.at/cek/">Cezary Kaliszyk</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-09-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Economic activity has always been a fundamental part of society. Due
to modern day politics, economic theory has gained even more influence
on our lives. Thus we want models and theories to be as precise as
possible. This can be achieved using certification with the help of
formal proof technology. Hence we will use Isabelle/HOL to construct
two economic models, that of the pure exchange economy and a
version of the Arrow-Debreu Model. We will prove that the
<i>First Theorem of Welfare Economics</i> holds within
both. The theorem is the mathematical formulation of Adam Smith's
famous <i>invisible hand</i> and states that a group of
self-interested and rational actors will eventually achieve an
-efficient allocation of goods and services.</div></td>
+efficient allocation of goods and services.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-06-17]: Added some lemmas and a theory file, also introduced Microeconomics folder.
<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{First_Welfare_Theorem-AFP,
author = {Julian Parsert and Cezary Kaliszyk},
title = {Microeconomics and the First Welfare Theorem},
journal = {Archive of Formal Proofs},
month = sep,
year = 2017,
note = {\url{http://isa-afp.org/entries/First_Welfare_Theorem.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Neumann_Morgenstern_Utility.html">Neumann_Morgenstern_Utility</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/First_Welfare_Theorem/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/First_Welfare_Theorem/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/First_Welfare_Theorem/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-First_Welfare_Theorem-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-First_Welfare_Theorem-2019-06-11.tar.gz">
afp-First_Welfare_Theorem-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-First_Welfare_Theorem-2018-08-16.tar.gz">
afp-First_Welfare_Theorem-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-First_Welfare_Theorem-2017-10-10.tar.gz">
afp-First_Welfare_Theorem-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-First_Welfare_Theorem-2017-09-05.tar.gz">
afp-First_Welfare_Theorem-2017-09-05.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-First_Welfare_Theorem-2017-09-04.tar.gz">
afp-First_Welfare_Theorem-2017-09-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Fishburn_Impossibility.html b/web/entries/Fishburn_Impossibility.html
--- a/web/entries/Fishburn_Impossibility.html
+++ b/web/entries/Fishburn_Impossibility.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">I</font>ncompatibility
of
<font class="first">F</font>ishburn-Strategyproofness
and
<font class="first">P</font>areto-Efficiency
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://dss.in.tum.de/staff/brandt.html">Felix Brandt</a>,
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>,
<a href="http://dss.in.tum.de/staff/christian-saile.html">Christian Saile</a> and
<a href="http://dss.in.tum.de/staff/christian-stricker.html">Christian Stricker</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-03-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This formalisation contains the proof that there is no
anonymous Social Choice Function for at least three agents and
alternatives that fulfils both Pareto-Efficiency and
Fishburn-Strategyproofness. It was derived from a proof of <a
href="http://dss.in.tum.de/files/brandt-research/stratset.pdf">Brandt
<em>et al.</em></a>, which relies on an unverified
translation of a fixed finite instance of the original problem to SAT.
This Isabelle proof contains a machine-checked version of both the
statement for exactly three agents and alternatives and the lifting to
-the general case.</p></div></td>
+the general case.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Fishburn_Impossibility-AFP,
author = {Felix Brandt and Manuel Eberl and Christian Saile and Christian Stricker},
title = {The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency},
journal = {Archive of Formal Proofs},
month = mar,
year = 2018,
note = {\url{http://isa-afp.org/entries/Fishburn_Impossibility.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Randomised_Social_Choice.html">Randomised_Social_Choice</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Fishburn_Impossibility/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Fishburn_Impossibility/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Fishburn_Impossibility/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Fishburn_Impossibility-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Fishburn_Impossibility-2019-06-11.tar.gz">
afp-Fishburn_Impossibility-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Fishburn_Impossibility-2018-08-16.tar.gz">
afp-Fishburn_Impossibility-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Fishburn_Impossibility-2018-06-10.tar.gz">
afp-Fishburn_Impossibility-2018-06-10.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Fishburn_Impossibility-2018-03-23.tar.gz">
afp-Fishburn_Impossibility-2018-03-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Fisher_Yates.html b/web/entries/Fisher_Yates.html
--- a/web/entries/Fisher_Yates.html
+++ b/web/entries/Fisher_Yates.html
@@ -1,205 +1,205 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Fisher–Yates shuffle - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>isher–Yates
shuffle
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Fisher–Yates shuffle</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-09-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This work defines and proves the correctness of the Fisher–Yates
algorithm for shuffling – i.e. producing a random permutation – of a
list. The algorithm proceeds by traversing the list and in
each step swapping the current element with a random element from the
-remaining list.</p></div></td>
+remaining list.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Fisher_Yates-AFP,
author = {Manuel Eberl},
title = {Fisher–Yates shuffle},
journal = {Archive of Formal Proofs},
month = sep,
year = 2016,
note = {\url{http://isa-afp.org/entries/Fisher_Yates.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Fisher_Yates/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Fisher_Yates/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Fisher_Yates/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Fisher_Yates-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Fisher_Yates-2019-06-11.tar.gz">
afp-Fisher_Yates-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Fisher_Yates-2018-08-16.tar.gz">
afp-Fisher_Yates-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Fisher_Yates-2017-10-10.tar.gz">
afp-Fisher_Yates-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Fisher_Yates-2016-12-17.tar.gz">
afp-Fisher_Yates-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
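The Fisher–Yates abstract above states the algorithm operationally: traverse the list and, at each step, swap the current element with a random element of the remaining list. A short Python sketch of that loop follows; it is illustrative only, not the Isabelle/HOL definition whose correctness (uniform distribution of the result) the entry proves.

    import random

    def fisher_yates_shuffle(xs):
        """In-place shuffle: at position i, swap xs[i] with a uniformly random
        element of the remaining suffix xs[i:]."""
        for i in range(len(xs) - 1):
            j = random.randint(i, len(xs) - 1)  # random index into the remaining list
            xs[i], xs[j] = xs[j], xs[i]
        return xs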
diff --git a/web/entries/Flow_Networks.html b/web/entries/Flow_Networks.html
--- a/web/entries/Flow_Networks.html
+++ b/web/entries/Flow_Networks.html
@@ -1,222 +1,222 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Flow Networks and the Min-Cut-Max-Flow Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>low
<font class="first">N</font>etworks
and
the
<font class="first">M</font>in-Cut-Max-Flow
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Flow Networks and the Min-Cut-Max-Flow Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
S. Reza Sefidgar
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-06-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formalization of flow networks and the Min-Cut-Max-Flow
theorem. Our formal proof closely follows a standard textbook proof,
and is accessible even without being an expert in Isabelle/HOL, the
-interactive theorem prover used for the formalization.</div></td>
+interactive theorem prover used for the formalization.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Flow_Networks-AFP,
author = {Peter Lammich and S. Reza Sefidgar},
title = {Flow Networks and the Min-Cut-Max-Flow Theorem},
journal = {Archive of Formal Proofs},
month = jun,
year = 2017,
note = {\url{http://isa-afp.org/entries/Flow_Networks.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CAVA_Automata.html">CAVA_Automata</a>, <a href="DFS_Framework.html">DFS_Framework</a>, <a href="Program-Conflict-Analysis.html">Program-Conflict-Analysis</a>, <a href="Refine_Imperative_HOL.html">Refine_Imperative_HOL</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="EdmondsKarp_Maxflow.html">EdmondsKarp_Maxflow</a>, <a href="Prpu_Maxflow.html">Prpu_Maxflow</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Flow_Networks/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Flow_Networks/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Flow_Networks/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Flow_Networks-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Flow_Networks-2020-01-14.tar.gz">
afp-Flow_Networks-2020-01-14.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Flow_Networks-2019-06-11.tar.gz">
afp-Flow_Networks-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Flow_Networks-2018-08-16.tar.gz">
afp-Flow_Networks-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Flow_Networks-2017-10-10.tar.gz">
afp-Flow_Networks-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Flow_Networks-2017-06-02.tar.gz">
afp-Flow_Networks-2017-06-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Floyd_Warshall.html b/web/entries/Floyd_Warshall.html
--- a/web/entries/Floyd_Warshall.html
+++ b/web/entries/Floyd_Warshall.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Floyd-Warshall Algorithm for Shortest Paths - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">F</font>loyd-Warshall
<font class="first">A</font>lgorithm
for
<font class="first">S</font>hortest
<font class="first">P</font>aths
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Floyd-Warshall Algorithm for Shortest Paths</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a> and
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The Floyd-Warshall algorithm [Flo62, Roy59, War62] is a classic
dynamic programming algorithm to compute the length of all shortest
paths between any two vertices in a graph (i.e. to solve the all-pairs
shortest path problem, or APSP for short). Given a representation of
the graph as a matrix of weights M, it computes another matrix M'
which represents a graph with the same path lengths and contains the
length of the shortest path between any two vertices i and j. This is
only possible if the graph does not contain any negative cycles.
However, if the graph does contain negative cycles, the Floyd-Warshall algorithm will detect the
situation by calculating a negative diagonal entry. This AFP entry
includes a formalization of the algorithm and of these key properties.
The algorithm is refined to an efficient imperative version using the
-Imperative Refinement Framework.</div></td>
+Imperative Refinement Framework.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Floyd_Warshall-AFP,
author = {Simon Wimmer and Peter Lammich},
title = {The Floyd-Warshall Algorithm for Shortest Paths},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Floyd_Warshall.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Refine_Imperative_HOL.html">Refine_Imperative_HOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Floyd_Warshall/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Floyd_Warshall/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Floyd_Warshall/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Floyd_Warshall-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Floyd_Warshall-2019-06-11.tar.gz">
afp-Floyd_Warshall-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Floyd_Warshall-2018-08-16.tar.gz">
afp-Floyd_Warshall-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Floyd_Warshall-2017-10-10.tar.gz">
afp-Floyd_Warshall-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Floyd_Warshall-2017-05-09.tar.gz">
afp-Floyd_Warshall-2017-05-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Flyspeck-Tame.html b/web/entries/Flyspeck-Tame.html
--- a/web/entries/Flyspeck-Tame.html
+++ b/web/entries/Flyspeck-Tame.html
@@ -1,299 +1,299 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Flyspeck I: Tame Graphs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>lyspeck
<font class="first">I</font>:
<font class="first">T</font>ame
<font class="first">G</font>raphs
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Flyspeck I: Tame Graphs</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Gertrud Bauer and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2006-05-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
These theories present the verified enumeration of <i>tame</i> plane graphs
as defined by Thomas C. Hales in his proof of the Kepler Conjecture in his
book <i>Dense Sphere Packings. A Blueprint for Formal Proofs.</i> [CUP 2012].
The values of the constants in the definition of tameness are identical to
those in the <a href="https://code.google.com/p/flyspeck/">Flyspeck project</a>.
The <a href="http://www21.in.tum.de/~nipkow/pubs/Flyspeck/">IJCAR 2006 paper by Nipkow, Bauer and Schultz</a> refers to the original version of Hales' proof,
-while the <a href="http://www21.in.tum.de/~nipkow/pubs/itp11.html">ITP 2011 paper by Nipkow</a> refers to the Blueprint version of the proof.</div></td>
+while the <a href="http://www21.in.tum.de/~nipkow/pubs/itp11.html">ITP 2011 paper by Nipkow</a> refers to the Blueprint version of the proof.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2010-11-02]: modified theories to reflect the modified definition of tameness in Hales' revised proof.<br>
[2014-07-03]: modified constants in def of tameness and Archive according to the final state of the Flyspeck proof.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Flyspeck-Tame-AFP,
author = {Gertrud Bauer and Tobias Nipkow},
title = {Flyspeck I: Tame Graphs},
journal = {Archive of Formal Proofs},
month = may,
year = 2006,
note = {\url{http://isa-afp.org/entries/Flyspeck-Tame.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Trie.html">Trie</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Flyspeck-Tame/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Flyspeck-Tame/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Flyspeck-Tame/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Flyspeck-Tame-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Flyspeck-Tame-2019-06-11.tar.gz">
afp-Flyspeck-Tame-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Flyspeck-Tame-2018-08-17.tar.gz">
afp-Flyspeck-Tame-2018-08-17.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Flyspeck-Tame-2017-10-10.tar.gz">
afp-Flyspeck-Tame-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Flyspeck-Tame-2016-12-17.tar.gz">
afp-Flyspeck-Tame-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Flyspeck-Tame-2016-02-22.tar.gz">
afp-Flyspeck-Tame-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Flyspeck-Tame-2015-05-27.tar.gz">
afp-Flyspeck-Tame-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Flyspeck-Tame-2014-08-28.tar.gz">
afp-Flyspeck-Tame-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Flyspeck-Tame-2013-12-11.tar.gz">
afp-Flyspeck-Tame-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Flyspeck-Tame-2013-11-17.tar.gz">
afp-Flyspeck-Tame-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Flyspeck-Tame-2013-03-02.tar.gz">
afp-Flyspeck-Tame-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Flyspeck-Tame-2013-02-16.tar.gz">
afp-Flyspeck-Tame-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Flyspeck-Tame-2012-05-25.tar.gz">
afp-Flyspeck-Tame-2012-05-25.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Flyspeck-Tame-2011-10-11.tar.gz">
afp-Flyspeck-Tame-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Flyspeck-Tame-2011-02-11.tar.gz">
afp-Flyspeck-Tame-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Flyspeck-Tame-2010-06-30.tar.gz">
afp-Flyspeck-Tame-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Flyspeck-Tame-2009-12-12.tar.gz">
afp-Flyspeck-Tame-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Flyspeck-Tame-2009-04-29.tar.gz">
afp-Flyspeck-Tame-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Flyspeck-Tame-2008-06-10.tar.gz">
afp-Flyspeck-Tame-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Flyspeck-Tame-2008-01-04.tar.gz">
afp-Flyspeck-Tame-2008-01-04.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Flyspeck-Tame-2007-11-27.tar.gz">
afp-Flyspeck-Tame-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FocusStreamsCaseStudies.html b/web/entries/FocusStreamsCaseStudies.html
--- a/web/entries/FocusStreamsCaseStudies.html
+++ b/web/entries/FocusStreamsCaseStudies.html
@@ -1,245 +1,245 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stream Processing Components: Isabelle/HOL Formalisation and Case Studies - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tream
<font class="first">P</font>rocessing
<font class="first">C</font>omponents:
<font class="first">I</font>sabelle/HOL
<font class="first">F</font>ormalisation
and
<font class="first">C</font>ase
<font class="first">S</font>tudies
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stream Processing Components: Isabelle/HOL Formalisation and Case Studies</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Maria Spichkova (maria /dot/ spichkova /at/ rmit /dot/ edu /dot/ au)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-11-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This set of theories presents an Isabelle/HOL formalisation of stream processing components introduced
+ <td class="abstract mathjax_process">This set of theories presents an Isabelle/HOL formalisation of stream processing components introduced
in Focus,
a framework for formal specification and development of interactive systems.
This is an extended and updated version of the formalisation, which was
elaborated within the methodology "Focus on Isabelle".
In addition, we applied the formalisation to three case studies
that cover different application areas: process control (Steam Boiler System),
data transmission (FlexRay communication protocol),
-memory and processing components (Automotive-Gateway System).</div></td>
+memory and processing components (Automotive-Gateway System).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FocusStreamsCaseStudies-AFP,
author = {Maria Spichkova},
title = {Stream Processing Components: Isabelle/HOL Formalisation and Case Studies},
journal = {Archive of Formal Proofs},
month = nov,
year = 2013,
note = {\url{http://isa-afp.org/entries/FocusStreamsCaseStudies.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FocusStreamsCaseStudies/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FocusStreamsCaseStudies/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FocusStreamsCaseStudies/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FocusStreamsCaseStudies-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FocusStreamsCaseStudies-2019-06-11.tar.gz">
afp-FocusStreamsCaseStudies-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FocusStreamsCaseStudies-2018-08-16.tar.gz">
afp-FocusStreamsCaseStudies-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FocusStreamsCaseStudies-2017-10-10.tar.gz">
afp-FocusStreamsCaseStudies-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FocusStreamsCaseStudies-2016-12-17.tar.gz">
afp-FocusStreamsCaseStudies-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FocusStreamsCaseStudies-2016-02-22.tar.gz">
afp-FocusStreamsCaseStudies-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-FocusStreamsCaseStudies-2015-05-27.tar.gz">
afp-FocusStreamsCaseStudies-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-FocusStreamsCaseStudies-2014-08-28.tar.gz">
afp-FocusStreamsCaseStudies-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-FocusStreamsCaseStudies-2013-12-11.tar.gz">
afp-FocusStreamsCaseStudies-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-FocusStreamsCaseStudies-2013-11-18.tar.gz">
afp-FocusStreamsCaseStudies-2013-11-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Formal_SSA.html b/web/entries/Formal_SSA.html
--- a/web/entries/Formal_SSA.html
+++ b/web/entries/Formal_SSA.html
@@ -1,250 +1,250 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verified Construction of Static Single Assignment Form - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erified
<font class="first">C</font>onstruction
of
<font class="first">S</font>tatic
<font class="first">S</font>ingle
<font class="first">A</font>ssignment
<font class="first">F</font>orm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verified Construction of Static Single Assignment Form</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Sebastian Ullrich (sebasti /at/ nullri /dot/ ch) and
<a href="http://pp.ipd.kit.edu/person.php?id=88">Denis Lohner</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-02-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
We define a functional variant of the static single assignment (SSA)
form construction algorithm described by <a
href="https://doi.org/10.1007/978-3-642-37051-9_6">Braun et al.</a>,
which combines simplicity and efficiency. The definition is based on a
general, abstract control flow graph representation using Isabelle locales.
</p>
<p>
We prove that the algorithm's output is semantically equivalent to the
input according to a small-step semantics, and that it is in minimal SSA
form for the common special case of reducible inputs. We then show the
satisfiability of the locale assumptions by giving instantiations for a
simple While language.
</p>
<p>
Furthermore, we use a generic instantiation based on typedefs in order
to extract OCaml code and replace the unverified SSA construction
algorithm of the <a href="https://doi.org/10.1145/2579080">CompCertSSA
project</a> with it.
</p>
<p>
A more detailed description of the verified SSA construction can be found
in the paper <a href="https://doi.org/10.1145/2892208.2892211">Verified
Construction of Static Single Assignment Form</a>, CC 2016.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Formal_SSA-AFP,
author = {Sebastian Ullrich and Denis Lohner},
title = {Verified Construction of Static Single Assignment Form},
journal = {Archive of Formal Proofs},
month = feb,
year = 2016,
note = {\url{http://isa-afp.org/entries/Formal_SSA.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CAVA_Automata.html">CAVA_Automata</a>, <a href="Collections.html">Collections</a>, <a href="Dijkstra_Shortest_Path.html">Dijkstra_Shortest_Path</a>, <a href="Slicing.html">Slicing</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Minimal_SSA.html">Minimal_SSA</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Formal_SSA/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Formal_SSA/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Formal_SSA/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Formal_SSA-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Formal_SSA-2019-06-11.tar.gz">
afp-Formal_SSA-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Formal_SSA-2018-08-16.tar.gz">
afp-Formal_SSA-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Formal_SSA-2017-10-10.tar.gz">
afp-Formal_SSA-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Formal_SSA-2016-12-17.tar.gz">
afp-Formal_SSA-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Formal_SSA-2016-02-22.tar.gz">
afp-Formal_SSA-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Formal_SSA-2016-02-08.tar.gz">
afp-Formal_SSA-2016-02-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Formula_Derivatives.html b/web/entries/Formula_Derivatives.html
--- a/web/entries/Formula_Derivatives.html
+++ b/web/entries/Formula_Derivatives.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Derivatives of Logical Formulas - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>erivatives
of
<font class="first">L</font>ogical
<font class="first">F</font>ormulas
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Derivatives of Logical Formulas</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-05-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize new decision procedures for WS1S, M2L(Str), and Presburger
Arithmetic. Formulas of these logics denote regular languages. Unlike
traditional decision procedures, we do <em>not</em> translate formulas into automata
(nor into regular expressions), at least not explicitly. Instead we devise
notions of derivatives (inspired by Brzozowski derivatives for regular
expressions) that operate on formulas directly and compute a syntactic
bisimulation using these derivatives. The treatment of Boolean connectives and
quantifiers is uniform for all mentioned logics and is abstracted into a
locale. This locale is then instantiated by different atomic formulas and their
derivatives (which may differ even for the same logic under different encodings
of interpretations as formal words).
<p>
The WS1S instance is described in the draft paper <a
href="https://people.inf.ethz.ch/trayteld/papers/csl15-ws1s_derivatives/index.html">A
-Coalgebraic Decision Procedure for WS1S</a> by the author.</div></td>
+Coalgebraic Decision Procedure for WS1S</a> by the author.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Formula_Derivatives-AFP,
author = {Dmitriy Traytel},
title = {Derivatives of Logical Formulas},
journal = {Archive of Formal Proofs},
month = may,
year = 2015,
note = {\url{http://isa-afp.org/entries/Formula_Derivatives.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive_Languages.html">Coinductive_Languages</a>, <a href="Deriving.html">Deriving</a>, <a href="List-Index.html">List-Index</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Formula_Derivatives/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Formula_Derivatives/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Formula_Derivatives/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Formula_Derivatives-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Formula_Derivatives-2019-06-11.tar.gz">
afp-Formula_Derivatives-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Formula_Derivatives-2018-08-16.tar.gz">
afp-Formula_Derivatives-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Formula_Derivatives-2017-10-10.tar.gz">
afp-Formula_Derivatives-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Formula_Derivatives-2016-12-17.tar.gz">
afp-Formula_Derivatives-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Formula_Derivatives-2016-02-22.tar.gz">
afp-Formula_Derivatives-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Formula_Derivatives-2015-05-28.tar.gz">
afp-Formula_Derivatives-2015-05-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Fourier.html b/web/entries/Fourier.html
--- a/web/entries/Fourier.html
+++ b/web/entries/Fourier.html
@@ -1,192 +1,192 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Fourier Series - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ourier
<font class="first">S</font>eries
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Fourier Series</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-09-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This development formalises the square integrable functions over the
reals and the basics of Fourier series. It culminates with a proof
that every well-behaved periodic function can be approximated by a
Fourier series. The material is ported from HOL Light:
-https://github.com/jrh13/hol-light/blob/master/100/fourier.ml</div></td>
+https://github.com/jrh13/hol-light/blob/master/100/fourier.ml</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Fourier-AFP,
author = {Lawrence C Paulson},
title = {Fourier Series},
journal = {Archive of Formal Proofs},
month = sep,
year = 2019,
note = {\url{http://isa-afp.org/entries/Fourier.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Lp.html">Lp</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Fourier/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Fourier/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Fourier/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Fourier-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Fourier-2019-09-11.tar.gz">
afp-Fourier-2019-09-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Free-Boolean-Algebra.html b/web/entries/Free-Boolean-Algebra.html
--- a/web/entries/Free-Boolean-Algebra.html
+++ b/web/entries/Free-Boolean-Algebra.html
@@ -1,262 +1,262 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Free Boolean Algebra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ree
<font class="first">B</font>oolean
<font class="first">A</font>lgebra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Free Boolean Algebra</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Brian Huffman
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-03-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory defines a type constructor representing the free Boolean algebra over a set of generators. Values of type (α)<i>formula</i> represent propositional formulas with uninterpreted variables from type α, ordered by implication. In addition to all the standard Boolean algebra operations, the library also provides a function for building homomorphisms to any other Boolean algebra type.</div></td>
+ <td class="abstract mathjax_process">This theory defines a type constructor representing the free Boolean algebra over a set of generators. Values of type (α)<i>formula</i> represent propositional formulas with uninterpreted variables from type α, ordered by implication. In addition to all the standard Boolean algebra operations, the library also provides a function for building homomorphisms to any other Boolean algebra type.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Free-Boolean-Algebra-AFP,
author = {Brian Huffman},
title = {Free Boolean Algebra},
journal = {Archive of Formal Proofs},
month = mar,
year = 2010,
note = {\url{http://isa-afp.org/entries/Free-Boolean-Algebra.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Free-Boolean-Algebra/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Free-Boolean-Algebra/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Free-Boolean-Algebra/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Free-Boolean-Algebra-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Free-Boolean-Algebra-2019-06-11.tar.gz">
afp-Free-Boolean-Algebra-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Free-Boolean-Algebra-2018-08-16.tar.gz">
afp-Free-Boolean-Algebra-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Free-Boolean-Algebra-2017-10-10.tar.gz">
afp-Free-Boolean-Algebra-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Free-Boolean-Algebra-2016-12-17.tar.gz">
afp-Free-Boolean-Algebra-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Free-Boolean-Algebra-2016-02-22.tar.gz">
afp-Free-Boolean-Algebra-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Free-Boolean-Algebra-2015-05-27.tar.gz">
afp-Free-Boolean-Algebra-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Free-Boolean-Algebra-2014-08-28.tar.gz">
afp-Free-Boolean-Algebra-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Free-Boolean-Algebra-2013-12-11.tar.gz">
afp-Free-Boolean-Algebra-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Free-Boolean-Algebra-2013-11-17.tar.gz">
afp-Free-Boolean-Algebra-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Free-Boolean-Algebra-2013-03-02.tar.gz">
afp-Free-Boolean-Algebra-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Free-Boolean-Algebra-2013-02-16.tar.gz">
afp-Free-Boolean-Algebra-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Free-Boolean-Algebra-2012-05-24.tar.gz">
afp-Free-Boolean-Algebra-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Free-Boolean-Algebra-2011-10-11.tar.gz">
afp-Free-Boolean-Algebra-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Free-Boolean-Algebra-2011-02-11.tar.gz">
afp-Free-Boolean-Algebra-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Free-Boolean-Algebra-2010-06-30.tar.gz">
afp-Free-Boolean-Algebra-2010-06-30.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Free-Boolean-Algebra-2010-03-29.tar.gz">
afp-Free-Boolean-Algebra-2010-03-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Free-Groups.html b/web/entries/Free-Groups.html
--- a/web/entries/Free-Groups.html
+++ b/web/entries/Free-Groups.html
@@ -1,262 +1,262 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Free Groups - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ree
<font class="first">G</font>roups
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Free Groups</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Joachim Breitner (joachim /at/ cis /dot/ upenn /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-06-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Free Groups are, in a sense, the most generic kind of group. They
are defined over a set of generators with no additional relations in between
them. They play an important role in the definition of group presentations
and in other fields. This theory provides the definition of Free Group as
the set of fully canceled words in the generators. The universal property is
-proven, as well as some isomorphism results about Free Groups.</div></td>
+proven, as well as some isomorphism results about Free Groups.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2011-12-11]: Added the Ping Pong Lemma.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Free-Groups-AFP,
author = {Joachim Breitner},
title = {Free Groups},
journal = {Archive of Formal Proofs},
month = jun,
year = 2010,
note = {\url{http://isa-afp.org/entries/Free-Groups.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Applicative_Lifting.html">Applicative_Lifting</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Free-Groups/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Free-Groups/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Free-Groups/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Free-Groups-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Free-Groups-2019-06-11.tar.gz">
afp-Free-Groups-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Free-Groups-2018-08-16.tar.gz">
afp-Free-Groups-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Free-Groups-2017-10-10.tar.gz">
afp-Free-Groups-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Free-Groups-2016-12-17.tar.gz">
afp-Free-Groups-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Free-Groups-2016-02-22.tar.gz">
afp-Free-Groups-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Free-Groups-2015-05-27.tar.gz">
afp-Free-Groups-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Free-Groups-2014-08-28.tar.gz">
afp-Free-Groups-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Free-Groups-2013-12-11.tar.gz">
afp-Free-Groups-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Free-Groups-2013-11-17.tar.gz">
afp-Free-Groups-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Free-Groups-2013-02-16.tar.gz">
afp-Free-Groups-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Free-Groups-2012-05-24.tar.gz">
afp-Free-Groups-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Free-Groups-2011-10-11.tar.gz">
afp-Free-Groups-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Free-Groups-2011-02-11.tar.gz">
afp-Free-Groups-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Free-Groups-2010-07-01.tar.gz">
afp-Free-Groups-2010-07-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FunWithFunctions.html b/web/entries/FunWithFunctions.html
--- a/web/entries/FunWithFunctions.html
+++ b/web/entries/FunWithFunctions.html
@@ -1,262 +1,262 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Fun With Functions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>un
<font class="first">W</font>ith
<font class="first">F</font>unctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Fun With Functions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-08-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This is a collection of cute puzzles of the form ``Show that if a function satisfies the following constraints, it must be ...'' Please add further examples to this collection!</div></td>
+ <td class="abstract mathjax_process">This is a collection of cute puzzles of the form ``Show that if a function satisfies the following constraints, it must be ...'' Please add further examples to this collection!</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FunWithFunctions-AFP,
author = {Tobias Nipkow},
title = {Fun With Functions},
journal = {Archive of Formal Proofs},
month = aug,
year = 2008,
note = {\url{http://isa-afp.org/entries/FunWithFunctions.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FunWithFunctions/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FunWithFunctions/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FunWithFunctions/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FunWithFunctions-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FunWithFunctions-2019-06-11.tar.gz">
afp-FunWithFunctions-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FunWithFunctions-2018-08-16.tar.gz">
afp-FunWithFunctions-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FunWithFunctions-2017-10-10.tar.gz">
afp-FunWithFunctions-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FunWithFunctions-2016-12-17.tar.gz">
afp-FunWithFunctions-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FunWithFunctions-2016-02-22.tar.gz">
afp-FunWithFunctions-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-FunWithFunctions-2015-05-27.tar.gz">
afp-FunWithFunctions-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-FunWithFunctions-2014-08-28.tar.gz">
afp-FunWithFunctions-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-FunWithFunctions-2013-12-11.tar.gz">
afp-FunWithFunctions-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-FunWithFunctions-2013-11-17.tar.gz">
afp-FunWithFunctions-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FunWithFunctions-2013-02-16.tar.gz">
afp-FunWithFunctions-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-FunWithFunctions-2012-05-24.tar.gz">
afp-FunWithFunctions-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-FunWithFunctions-2011-10-11.tar.gz">
afp-FunWithFunctions-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-FunWithFunctions-2011-02-11.tar.gz">
afp-FunWithFunctions-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-FunWithFunctions-2010-07-01.tar.gz">
afp-FunWithFunctions-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-FunWithFunctions-2009-12-12.tar.gz">
afp-FunWithFunctions-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-FunWithFunctions-2009-04-29.tar.gz">
afp-FunWithFunctions-2009-04-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/FunWithTilings.html b/web/entries/FunWithTilings.html
--- a/web/entries/FunWithTilings.html
+++ b/web/entries/FunWithTilings.html
@@ -1,263 +1,263 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Fun With Tilings - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>un
<font class="first">W</font>ith
<font class="first">T</font>ilings
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Fun With Tilings</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a> and
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-11-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Tilings are defined inductively. It is shown that one form of mutilated chess board cannot be tiled with dominoes, while another one can be tiled with L-shaped tiles. Please add further fun examples of this kind!</div></td>
+ <td class="abstract mathjax_process">Tilings are defined inductively. It is shown that one form of mutilated chess board cannot be tiled with dominoes, while another one can be tiled with L-shaped tiles. Please add further fun examples of this kind!</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{FunWithTilings-AFP,
author = {Tobias Nipkow and Lawrence C. Paulson},
title = {Fun With Tilings},
journal = {Archive of Formal Proofs},
month = nov,
year = 2008,
note = {\url{http://isa-afp.org/entries/FunWithTilings.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FunWithTilings/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/FunWithTilings/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/FunWithTilings/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-FunWithTilings-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-FunWithTilings-2019-06-11.tar.gz">
afp-FunWithTilings-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-FunWithTilings-2018-08-16.tar.gz">
afp-FunWithTilings-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-FunWithTilings-2017-10-10.tar.gz">
afp-FunWithTilings-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-FunWithTilings-2016-12-17.tar.gz">
afp-FunWithTilings-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-FunWithTilings-2016-02-22.tar.gz">
afp-FunWithTilings-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-FunWithTilings-2015-05-27.tar.gz">
afp-FunWithTilings-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-FunWithTilings-2014-08-28.tar.gz">
afp-FunWithTilings-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-FunWithTilings-2013-12-11.tar.gz">
afp-FunWithTilings-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-FunWithTilings-2013-11-17.tar.gz">
afp-FunWithTilings-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-FunWithTilings-2013-02-16.tar.gz">
afp-FunWithTilings-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-FunWithTilings-2012-05-24.tar.gz">
afp-FunWithTilings-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-FunWithTilings-2011-10-11.tar.gz">
afp-FunWithTilings-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-FunWithTilings-2011-02-11.tar.gz">
afp-FunWithTilings-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-FunWithTilings-2010-07-01.tar.gz">
afp-FunWithTilings-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-FunWithTilings-2009-12-12.tar.gz">
afp-FunWithTilings-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-FunWithTilings-2009-04-29.tar.gz">
afp-FunWithTilings-2009-04-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Functional-Automata.html b/web/entries/Functional-Automata.html
--- a/web/entries/Functional-Automata.html
+++ b/web/entries/Functional-Automata.html
@@ -1,297 +1,297 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Functional Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>unctional
<font class="first">A</font>utomata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Functional Automata</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-03-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory defines deterministic and nondeterministic automata in a functional representation: the transition function/relation and the finality predicate are just functions. Hence the state space may be infinite. It is shown how to convert regular expressions into such automata. A scanner (generator) is implemented with the help of functional automata: the scanner chops the input up into longest recognized substrings. Finally we also show how to convert a certain subclass of functional automata (essentially the finite deterministic ones) into regular sets.</div></td>
+ <td class="abstract mathjax_process">This theory defines deterministic and nondeterministic automata in a functional representation: the transition function/relation and the finality predicate are just functions. Hence the state space may be infinite. It is shown how to convert regular expressions into such automata. A scanner (generator) is implemented with the help of functional automata: the scanner chops the input up into longest recognized substrings. Finally we also show how to convert a certain subclass of functional automata (essentially the finite deterministic ones) into regular sets.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Functional-Automata-AFP,
author = {Tobias Nipkow},
title = {Functional Automata},
journal = {Archive of Formal Proofs},
month = mar,
year = 2004,
note = {\url{http://isa-afp.org/entries/Functional-Automata.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Functional-Automata/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Functional-Automata/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Functional-Automata/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Functional-Automata-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Functional-Automata-2019-06-11.tar.gz">
afp-Functional-Automata-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Functional-Automata-2018-08-16.tar.gz">
afp-Functional-Automata-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Functional-Automata-2017-10-10.tar.gz">
afp-Functional-Automata-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Functional-Automata-2016-12-17.tar.gz">
afp-Functional-Automata-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Functional-Automata-2016-02-22.tar.gz">
afp-Functional-Automata-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Functional-Automata-2015-05-27.tar.gz">
afp-Functional-Automata-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Functional-Automata-2014-08-28.tar.gz">
afp-Functional-Automata-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Functional-Automata-2013-12-11.tar.gz">
afp-Functional-Automata-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Functional-Automata-2013-11-17.tar.gz">
afp-Functional-Automata-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Functional-Automata-2013-03-02.tar.gz">
afp-Functional-Automata-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Functional-Automata-2013-02-16.tar.gz">
afp-Functional-Automata-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Functional-Automata-2012-05-24.tar.gz">
afp-Functional-Automata-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Functional-Automata-2011-10-11.tar.gz">
afp-Functional-Automata-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Functional-Automata-2011-02-11.tar.gz">
afp-Functional-Automata-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Functional-Automata-2010-07-01.tar.gz">
afp-Functional-Automata-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Functional-Automata-2009-12-12.tar.gz">
afp-Functional-Automata-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Functional-Automata-2009-04-29.tar.gz">
afp-Functional-Automata-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Functional-Automata-2008-06-10.tar.gz">
afp-Functional-Automata-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Functional-Automata-2007-11-27.tar.gz">
afp-Functional-Automata-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Functional-Automata-2005-10-14.tar.gz">
afp-Functional-Automata-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Functional-Automata-2004-05-21.tar.gz">
afp-Functional-Automata-2004-05-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Functional-Automata-2004-04-20.tar.gz">
afp-Functional-Automata-2004-04-20.tar.gz
</a>
</li>
<li>Isabelle 2003:
<a href="../release/afp-Functional-Automata-2004-03-30.tar.gz">
afp-Functional-Automata-2004-03-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Functional_Ordered_Resolution_Prover.html b/web/entries/Functional_Ordered_Resolution_Prover.html
--- a/web/entries/Functional_Ordered_Resolution_Prover.html
+++ b/web/entries/Functional_Ordered_Resolution_Prover.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">V</font>erified
<font class="first">F</font>unctional
<font class="first">I</font>mplementation
of
<font class="first">B</font>achmair
and
<font class="first">G</font>anzinger's
<font class="first">O</font>rdered
<font class="first">R</font>esolution
<font class="first">P</font>rover
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a>,
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl) and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-11-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This Isabelle/HOL formalization refines the abstract ordered
resolution prover presented in Section 4.3 of Bachmair and
Ganzinger's "Resolution Theorem Proving" chapter in the
<i>Handbook of Automated Reasoning</i>. The result is a
-functional implementation of a first-order prover.</div></td>
+functional implementation of a first-order prover.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Functional_Ordered_Resolution_Prover-AFP,
author = {Anders Schlichtkrull and Jasmin Christian Blanchette and Dmitriy Traytel},
title = {A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover},
journal = {Archive of Formal Proofs},
month = nov,
year = 2018,
note = {\url{http://isa-afp.org/entries/Functional_Ordered_Resolution_Prover.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
- <td class="data"><a href="First_Order_Terms.html">First_Order_Terms</a>, <a href="Nested_Multisets_Ordinals.html">Nested_Multisets_Ordinals</a>, <a href="Open_Induction.html">Open_Induction</a>, <a href="Ordered_Resolution_Prover.html">Ordered_Resolution_Prover</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a> </td></tr>
+ <td class="data"><a href="First_Order_Terms.html">First_Order_Terms</a>, <a href="Lambda_Free_RPOs.html">Lambda_Free_RPOs</a>, <a href="Nested_Multisets_Ordinals.html">Nested_Multisets_Ordinals</a>, <a href="Open_Induction.html">Open_Induction</a>, <a href="Ordered_Resolution_Prover.html">Ordered_Resolution_Prover</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Functional_Ordered_Resolution_Prover/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Functional_Ordered_Resolution_Prover/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Functional_Ordered_Resolution_Prover/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Functional_Ordered_Resolution_Prover-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Functional_Ordered_Resolution_Prover-2019-06-11.tar.gz">
afp-Functional_Ordered_Resolution_Prover-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Functional_Ordered_Resolution_Prover-2018-11-29.tar.gz">
afp-Functional_Ordered_Resolution_Prover-2018-11-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Furstenberg_Topology.html b/web/entries/Furstenberg_Topology.html
--- a/web/entries/Furstenberg_Topology.html
+++ b/web/entries/Furstenberg_Topology.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Furstenberg's topology and his proof of the infinitude of primes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>urstenberg's
topology
and
his
proof
of
the
infinitude
of
primes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Furstenberg's topology and his proof of the infinitude of primes</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-03-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article gives a formal version of Furstenberg's
topological proof of the infinitude of primes. He defines a topology
on the integers based on arithmetic progressions (or, equivalently,
residue classes). Using some fairly obvious properties of this
topology, the infinitude of primes is then easily obtained.</p>
<p>Apart from this, this topology is also fairly ‘nice’ in
general: it is second countable, metrizable, and perfect. All of these
(well-known) facts are formally proven, including an explicit metric
-for the topology given by Zulfeqarr.</p></div></td>
+for the topology given by Zulfeqarr.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Furstenberg_Topology-AFP,
author = {Manuel Eberl},
title = {Furstenberg's topology and his proof of the infinitude of primes},
journal = {Archive of Formal Proofs},
month = mar,
year = 2020,
note = {\url{http://isa-afp.org/entries/Furstenberg_Topology.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Furstenberg_Topology/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Furstenberg_Topology/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Furstenberg_Topology/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Furstenberg_Topology-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Furstenberg_Topology-2020-03-27.tar.gz">
afp-Furstenberg_Topology-2020-03-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/GPU_Kernel_PL.html b/web/entries/GPU_Kernel_PL.html
--- a/web/entries/GPU_Kernel_PL.html
+++ b/web/entries/GPU_Kernel_PL.html
@@ -1,240 +1,240 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Syntax and semantics of a GPU kernel programming language - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>yntax
and
semantics
of
a
<font class="first">G</font>PU
kernel
programming
language
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Syntax and semantics of a GPU kernel programming language</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
John Wickerson
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This document accompanies the article "The Design and
Implementation of a Verification Technique for GPU Kernels"
by Adam Betts, Nathan Chong, Alastair F. Donaldson, Jeroen
Ketema, Shaz Qadeer, Paul Thomson and John Wickerson. It
formalises all of the definitions provided in Sections 3
-and 4 of the article.</div></td>
+and 4 of the article.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{GPU_Kernel_PL-AFP,
author = {John Wickerson},
title = {Syntax and semantics of a GPU kernel programming language},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/GPU_Kernel_PL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GPU_Kernel_PL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/GPU_Kernel_PL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GPU_Kernel_PL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-GPU_Kernel_PL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-GPU_Kernel_PL-2019-06-11.tar.gz">
afp-GPU_Kernel_PL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-GPU_Kernel_PL-2018-08-16.tar.gz">
afp-GPU_Kernel_PL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-GPU_Kernel_PL-2017-10-10.tar.gz">
afp-GPU_Kernel_PL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-GPU_Kernel_PL-2016-12-17.tar.gz">
afp-GPU_Kernel_PL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-GPU_Kernel_PL-2016-02-22.tar.gz">
afp-GPU_Kernel_PL-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-GPU_Kernel_PL-2015-05-27.tar.gz">
afp-GPU_Kernel_PL-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-GPU_Kernel_PL-2014-08-28.tar.gz">
afp-GPU_Kernel_PL-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-GPU_Kernel_PL-2014-04-06.tar.gz">
afp-GPU_Kernel_PL-2014-04-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Gabow_SCC.html b/web/entries/Gabow_SCC.html
--- a/web/entries/Gabow_SCC.html
+++ b/web/entries/Gabow_SCC.html
@@ -1,246 +1,246 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erified
<font class="first">E</font>fficient
<font class="first">I</font>mplementation
of
<font class="first">G</font>abow's
<font class="first">S</font>trongly
<font class="first">C</font>onnected
<font class="first">C</font>omponents
<font class="first">A</font>lgorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-05-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present an Isabelle/HOL formalization of Gabow's algorithm for
finding the strongly connected components of a directed graph.
Using data refinement techniques, we extract efficient code that
performs comparable to a reference implementation in Java.
Our style of formalization allows for re-using large parts of the proofs
when defining variants of the algorithm. We demonstrate this by
verifying an algorithm for the emptiness check of generalized Büchi
-automata, re-using most of the existing proofs.</div></td>
+automata, re-using most of the existing proofs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Gabow_SCC-AFP,
author = {Peter Lammich},
title = {Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm},
journal = {Archive of Formal Proofs},
month = may,
year = 2014,
note = {\url{http://isa-afp.org/entries/Gabow_SCC.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CAVA_Automata.html">CAVA_Automata</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Transition_Systems_and_Automata.html">Transition_Systems_and_Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gabow_SCC/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Gabow_SCC/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gabow_SCC/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Gabow_SCC-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Gabow_SCC-2019-06-11.tar.gz">
afp-Gabow_SCC-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Gabow_SCC-2018-08-16.tar.gz">
afp-Gabow_SCC-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Gabow_SCC-2017-10-10.tar.gz">
afp-Gabow_SCC-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Gabow_SCC-2016-12-17.tar.gz">
afp-Gabow_SCC-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Gabow_SCC-2016-02-22.tar.gz">
afp-Gabow_SCC-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Gabow_SCC-2015-05-27.tar.gz">
afp-Gabow_SCC-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Gabow_SCC-2014-08-28.tar.gz">
afp-Gabow_SCC-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Gabow_SCC-2014-05-29.tar.gz">
afp-Gabow_SCC-2014-05-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Game_Based_Crypto.html b/web/entries/Game_Based_Crypto.html
--- a/web/entries/Game_Based_Crypto.html
+++ b/web/entries/Game_Based_Crypto.html
@@ -1,232 +1,232 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Game-based cryptography in HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>ame-based
cryptography
in
<font class="first">H</font>OL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Game-based cryptography in HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>,
S. Reza Sefidgar and
Bhargav Bhatt (bhargav /dot/ bhatt /at/ inf /dot/ ethz /dot/ ch)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>In this AFP entry, we show how to specify game-based cryptographic
security notions and formally prove secure several cryptographic
constructions from the literature using the CryptHOL framework. Among
others, we formalise the notions of a random oracle, a pseudo-random
function, an unpredictable function, and of encryption schemes that are
indistinguishable under chosen plaintext and/or ciphertext attacks. We
prove the random-permutation/random-function switching lemma, security
of the Elgamal and hashed Elgamal public-key encryption scheme and
correctness and security of several constructions with pseudo-random
functions.
</p><p>Our proofs follow the game-hopping style advocated by
Shoup and Bellare and Rogaway, from which most of the examples have
been taken. We generalise some of their results such that they can be
reused in other proofs. Thanks to CryptHOL's integration with
Isabelle's parametricity infrastructure, many simple hops are easily
-justified using the theory of representation independence.</p></div></td>
+justified using the theory of representation independence.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-09-28]:
added the CryptHOL tutorial for game-based cryptography
(revision 489a395764ae)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Game_Based_Crypto-AFP,
author = {Andreas Lochbihler and S. Reza Sefidgar and Bhargav Bhatt},
title = {Game-based cryptography in HOL},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Game_Based_Crypto.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CryptHOL.html">CryptHOL</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Multi_Party_Computation.html">Multi_Party_Computation</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Game_Based_Crypto/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Game_Based_Crypto/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Game_Based_Crypto/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Game_Based_Crypto-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Game_Based_Crypto-2019-06-11.tar.gz">
afp-Game_Based_Crypto-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Game_Based_Crypto-2018-08-16.tar.gz">
afp-Game_Based_Crypto-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Game_Based_Crypto-2017-10-10.tar.gz">
afp-Game_Based_Crypto-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Game_Based_Crypto-2017-05-11.tar.gz">
afp-Game_Based_Crypto-2017-05-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Gauss-Jordan-Elim-Fun.html b/web/entries/Gauss-Jordan-Elim-Fun.html
--- a/web/entries/Gauss-Jordan-Elim-Fun.html
+++ b/web/entries/Gauss-Jordan-Elim-Fun.html
@@ -1,262 +1,262 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Gauss-Jordan Elimination for Matrices Represented as Functions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>auss-Jordan
<font class="first">E</font>limination
for
<font class="first">M</font>atrices
<font class="first">R</font>epresented
as
<font class="first">F</font>unctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Gauss-Jordan Elimination for Matrices Represented as Functions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-08-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory provides a compact formulation of Gauss-Jordan elimination for matrices represented as functions. Its distinctive feature is succinctness. It is not meant for large computations.</div></td>
+ <td class="abstract mathjax_process">This theory provides a compact formulation of Gauss-Jordan elimination for matrices represented as functions. Its distinctive feature is succinctness. It is not meant for large computations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Gauss-Jordan-Elim-Fun-AFP,
author = {Tobias Nipkow},
title = {Gauss-Jordan Elimination for Matrices Represented as Functions},
journal = {Archive of Formal Proofs},
month = aug,
year = 2011,
note = {\url{http://isa-afp.org/entries/Gauss-Jordan-Elim-Fun.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Markov_Models.html">Markov_Models</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gauss-Jordan-Elim-Fun/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Gauss-Jordan-Elim-Fun/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gauss-Jordan-Elim-Fun/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Gauss-Jordan-Elim-Fun-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2020-01-14.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2020-01-14.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2019-06-11.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2018-08-16.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2017-10-10.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2016-12-17.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2016-02-22.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2015-05-27.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2014-08-28.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2013-12-11.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2013-11-17.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2013-02-16.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2012-05-24.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2011-10-11.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Gauss-Jordan-Elim-Fun-2011-08-19.tar.gz">
afp-Gauss-Jordan-Elim-Fun-2011-08-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Gauss_Jordan.html b/web/entries/Gauss_Jordan.html
--- a/web/entries/Gauss_Jordan.html
+++ b/web/entries/Gauss_Jordan.html
@@ -1,226 +1,226 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Gauss-Jordan Algorithm and Its Applications - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>auss-Jordan
<font class="first">A</font>lgorithm
and
<font class="first">I</font>ts
<font class="first">A</font>pplications
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Gauss-Jordan Algorithm and Its Applications</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a> and
<a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-09-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The Gauss-Jordan algorithm states that any matrix over a field can be transformed by means of elementary row operations to a matrix in reduced row echelon form. The formalization is based on the Rank Nullity Theorem entry of the AFP and on the HOL-Multivariate-Analysis session of Isabelle, where matrices are represented as functions over finite types. We have set up the code generator to make this representation executable. In order to improve the performance, a refinement to immutable arrays has been carried out. We have formalized some of the applications of the Gauss-Jordan algorithm. Thanks to this development, the following facts can be computed over matrices whose elements belong to a field: Ranks, Determinants, Inverses, Bases and dimensions and Solutions of systems of linear equations. Code can be exported to SML and Haskell.</div></td>
+ <td class="abstract mathjax_process">The Gauss-Jordan algorithm states that any matrix over a field can be transformed by means of elementary row operations to a matrix in reduced row echelon form. The formalization is based on the Rank Nullity Theorem entry of the AFP and on the HOL-Multivariate-Analysis session of Isabelle, where matrices are represented as functions over finite types. We have set up the code generator to make this representation executable. In order to improve the performance, a refinement to immutable arrays has been carried out. We have formalized some of the applications of the Gauss-Jordan algorithm. Thanks to this development, the following facts can be computed over matrices whose elements belong to a field: Ranks, Determinants, Inverses, Bases and dimensions and Solutions of systems of linear equations. Code can be exported to SML and Haskell.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Gauss_Jordan-AFP,
author = {Jose Divasón and Jesús Aransay},
title = {Gauss-Jordan Algorithm and Its Applications},
journal = {Archive of Formal Proofs},
month = sep,
year = 2014,
note = {\url{http://isa-afp.org/entries/Gauss_Jordan.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Rank_Nullity_Theorem.html">Rank_Nullity_Theorem</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Echelon_Form.html">Echelon_Form</a>, <a href="QR_Decomposition.html">QR_Decomposition</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gauss_Jordan/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Gauss_Jordan/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gauss_Jordan/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Gauss_Jordan-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Gauss_Jordan-2019-06-11.tar.gz">
afp-Gauss_Jordan-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Gauss_Jordan-2018-08-16.tar.gz">
afp-Gauss_Jordan-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Gauss_Jordan-2017-10-10.tar.gz">
afp-Gauss_Jordan-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Gauss_Jordan-2016-12-17.tar.gz">
afp-Gauss_Jordan-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Gauss_Jordan-2016-02-22.tar.gz">
afp-Gauss_Jordan-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Gauss_Jordan-2015-05-27.tar.gz">
afp-Gauss_Jordan-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Gauss_Jordan-2014-09-03.tar.gz">
afp-Gauss_Jordan-2014-09-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Gauss_Sums.html b/web/entries/Gauss_Sums.html
--- a/web/entries/Gauss_Sums.html
+++ b/web/entries/Gauss_Sums.html
@@ -1,206 +1,206 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Gauss Sums and the Pólya–Vinogradov Inequality - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>auss
<font class="first">S</font>ums
and
the
<font class="first">P</font>ólya–Vinogradov
<font class="first">I</font>nequality
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Gauss Sums and the Pólya–Vinogradov Inequality</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://people.epfl.ch/rodrigo.raya">Rodrigo Raya</a> and
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-12-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article provides a full formalisation of Chapter 8 of
Apostol's <em><a
href="https://www.springer.com/de/book/9780387901633">Introduction
to Analytic Number Theory</a></em>. Subjects that are
covered are:</p> <ul> <li>periodic arithmetic
functions and their finite Fourier series</li>
<li>(generalised) Ramanujan sums</li> <li>Gauss sums
and separable characters</li> <li>induced moduli and
primitive characters</li> <li>the
-Pólya&mdash;Vinogradov inequality</li> </ul></div></td>
+Pólya&mdash;Vinogradov inequality</li> </ul></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Gauss_Sums-AFP,
author = {Rodrigo Raya and Manuel Eberl},
title = {Gauss Sums and the Pólya–Vinogradov Inequality},
journal = {Archive of Formal Proofs},
month = dec,
year = 2019,
note = {\url{http://isa-afp.org/entries/Gauss_Sums.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Dirichlet_L.html">Dirichlet_L</a>, <a href="Dirichlet_Series.html">Dirichlet_Series</a>, <a href="Polynomial_Interpolation.html">Polynomial_Interpolation</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gauss_Sums/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Gauss_Sums/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gauss_Sums/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Gauss_Sums-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Gauss_Sums-2020-01-10.tar.gz">
afp-Gauss_Sums-2020-01-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/GenClock.html b/web/entries/GenClock.html
--- a/web/entries/GenClock.html
+++ b/web/entries/GenClock.html
@@ -1,292 +1,292 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of a Generalized Protocol for Clock Synchronization - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
a
<font class="first">G</font>eneralized
<font class="first">P</font>rotocol
for
<font class="first">C</font>lock
<font class="first">S</font>ynchronization
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of a Generalized Protocol for Clock Synchronization</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Alwen Tiu (ATiu /at/ ntu /dot/ edu /dot/ sg)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2005-06-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize the generalized Byzantine fault-tolerant clock synchronization protocol of Schneider. This protocol abstracts from particular algorithms or implementations for clock synchronization. This abstraction includes several assumptions on the behaviors of physical clocks and on general properties of concrete algorithms/implementations. Based on these assumptions the correctness of the protocol is proved by Schneider. His proof was later verified by Shankar using the theorem prover EHDM (precursor to PVS). Our formalization in Isabelle/HOL is based on Shankar's formalization.</div></td>
+ <td class="abstract mathjax_process">We formalize the generalized Byzantine fault-tolerant clock synchronization protocol of Schneider. This protocol abstracts from particular algorithms or implementations for clock synchronization. This abstraction includes several assumptions on the behaviors of physical clocks and on general properties of concrete algorithms/implementations. Based on these assumptions the correctness of the protocol is proved by Schneider. His proof was later verified by Shankar using the theorem prover EHDM (precursor to PVS). Our formalization in Isabelle/HOL is based on Shankar's formalization.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{GenClock-AFP,
author = {Alwen Tiu},
title = {Formalization of a Generalized Protocol for Clock Synchronization},
journal = {Archive of Formal Proofs},
month = jun,
year = 2005,
note = {\url{http://isa-afp.org/entries/GenClock.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GenClock/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/GenClock/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GenClock/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-GenClock-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-GenClock-2019-06-11.tar.gz">
afp-GenClock-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-GenClock-2018-08-16.tar.gz">
afp-GenClock-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-GenClock-2017-10-10.tar.gz">
afp-GenClock-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-GenClock-2016-12-17.tar.gz">
afp-GenClock-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-GenClock-2016-02-22.tar.gz">
afp-GenClock-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-GenClock-2015-05-27.tar.gz">
afp-GenClock-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-GenClock-2014-08-28.tar.gz">
afp-GenClock-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-GenClock-2013-12-11.tar.gz">
afp-GenClock-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-GenClock-2013-11-17.tar.gz">
afp-GenClock-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-GenClock-2013-02-16.tar.gz">
afp-GenClock-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-GenClock-2012-05-24.tar.gz">
afp-GenClock-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-GenClock-2011-10-11.tar.gz">
afp-GenClock-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-GenClock-2011-02-11.tar.gz">
afp-GenClock-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-GenClock-2010-07-01.tar.gz">
afp-GenClock-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-GenClock-2009-12-12.tar.gz">
afp-GenClock-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-GenClock-2009-04-29.tar.gz">
afp-GenClock-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-GenClock-2008-06-10.tar.gz">
afp-GenClock-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-GenClock-2007-11-27.tar.gz">
afp-GenClock-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-GenClock-2005-10-14.tar.gz">
afp-GenClock-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-GenClock-2005-06-24.tar.gz">
afp-GenClock-2005-06-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/General-Triangle.html b/web/entries/General-Triangle.html
--- a/web/entries/General-Triangle.html
+++ b/web/entries/General-Triangle.html
@@ -1,251 +1,251 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The General Triangle Is Unique - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">G</font>eneral
<font class="first">T</font>riangle
<font class="first">I</font>s
<font class="first">U</font>nique
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The General Triangle Is Unique</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Joachim Breitner (joachim /at/ cis /dot/ upenn /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-04-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Some acute-angled triangles are special, e.g. right-angled or isoscele triangles. Some are not of this kind, but, without measuring angles, look as if they were. In that sense, there is exactly one general triangle. This well-known fact is proven here formally.</div></td>
+ <td class="abstract mathjax_process">Some acute-angled triangles are special, e.g. right-angled or isoscele triangles. Some are not of this kind, but, without measuring angles, look as if they were. In that sense, there is exactly one general triangle. This well-known fact is proven here formally.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{General-Triangle-AFP,
author = {Joachim Breitner},
title = {The General Triangle Is Unique},
journal = {Archive of Formal Proofs},
month = apr,
year = 2011,
note = {\url{http://isa-afp.org/entries/General-Triangle.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/General-Triangle/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/General-Triangle/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/General-Triangle/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-General-Triangle-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-General-Triangle-2019-06-11.tar.gz">
afp-General-Triangle-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-General-Triangle-2018-08-16.tar.gz">
afp-General-Triangle-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-General-Triangle-2017-10-10.tar.gz">
afp-General-Triangle-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-General-Triangle-2016-12-17.tar.gz">
afp-General-Triangle-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-General-Triangle-2016-02-22.tar.gz">
afp-General-Triangle-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-General-Triangle-2015-05-27.tar.gz">
afp-General-Triangle-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-General-Triangle-2014-08-28.tar.gz">
afp-General-Triangle-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-General-Triangle-2013-12-11.tar.gz">
afp-General-Triangle-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-General-Triangle-2013-11-17.tar.gz">
afp-General-Triangle-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-General-Triangle-2013-02-16.tar.gz">
afp-General-Triangle-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-General-Triangle-2012-05-24.tar.gz">
afp-General-Triangle-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-General-Triangle-2011-10-11.tar.gz">
afp-General-Triangle-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-General-Triangle-2011-04-01.tar.gz">
afp-General-Triangle-2011-04-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Generalized_Counting_Sort.html b/web/entries/Generalized_Counting_Sort.html
--- a/web/entries/Generalized_Counting_Sort.html
+++ b/web/entries/Generalized_Counting_Sort.html
@@ -1,222 +1,222 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>n
<font class="first">E</font>fficient
<font class="first">G</font>eneralization
of
<font class="first">C</font>ounting
<font class="first">S</font>ort
for
<font class="first">L</font>arge,
possibly
<font class="first">I</font>nfinite
<font class="first">K</font>ey
<font class="first">R</font>anges
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-12-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Counting sort is a well-known algorithm that sorts objects of any kind
mapped to integer keys, or else to keys in one-to-one correspondence
with some subset of the integers (e.g. alphabet letters). However, it
is suitable for direct use, viz. not just as a subroutine of another
sorting algorithm (e.g. radix sort), only if the key range is not
significantly larger than the number of the objects to be sorted.
This paper describes a tail-recursive generalization of counting sort
making use of a bounded number of counters, suitable for direct use in
case of a large, or even infinite key range of any kind, subject to
the only constraint of being a subset of an arbitrary linear order.
After performing a pen-and-paper analysis of how such an algorithm has to
be designed to maximize its efficiency, this paper formalizes the
resulting generalized counting sort (GCsort) algorithm and then
formally proves its correctness properties, namely that (a) the
number of counters is maximized while never exceeding the fixed upper
bound, (b) objects are conserved, (c) objects get sorted, and (d) the
-algorithm is stable.</div></td>
+algorithm is stable.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Generalized_Counting_Sort-AFP,
author = {Pasquale Noce},
title = {An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges},
journal = {Archive of Formal Proofs},
month = dec,
year = 2019,
note = {\url{http://isa-afp.org/entries/Generalized_Counting_Sort.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Generalized_Counting_Sort/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Generalized_Counting_Sort/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Generalized_Counting_Sort/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Generalized_Counting_Sort-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Generalized_Counting_Sort-2019-12-09.tar.gz">
afp-Generalized_Counting_Sort-2019-12-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
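To give a feel for the algorithm described in the abstract above, here is a minimal sketch of a bounded-counter, stable bucket pass in the spirit of GCsort. It is written in Scala rather than Isabelle/HOL, is not the formalized algorithm, and the names (gcsort, counters, key) are illustrative only; the key type merely needs a linear order, so the key range may be arbitrarily large.

    // Sketch only, not the AFP formalisation: split the distinct keys into at most
    // `counters` buckets, distribute the objects stably (one counter per bucket),
    // and recurse on buckets that still contain more than one distinct key.
    object GCSortSketch {
      def gcsort[A, K](xs: List[A], counters: Int)(key: A => K)
                      (implicit ord: Ordering[K]): List[A] = {
        require(counters >= 2, "at least two counters are needed")
        val keys = xs.map(key).distinct.sorted
        if (keys.lengthCompare(1) <= 0) xs                 // zero or one distinct key: done
        else {
          val step   = (keys.size + counters - 1) / counters          // keys per bucket
          val splits = keys.grouped(step).map(_.head).toVector.tail   // bucket boundaries
          def bucket(k: K): Int = splits.count(s => ord.gteq(k, s))
          val buckets = Vector.fill(splits.size + 1)(List.newBuilder[A])
          xs.foreach(x => buckets(bucket(key(x))) += x)               // stable distribution
          buckets.toList.flatMap(b => gcsort(b.result(), counters)(key))
        }
      }
    }

The recursion terminates because every bucket receives strictly fewer distinct keys than its parent call, and stability follows because the distribution step preserves the input order within each bucket.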
diff --git a/web/entries/Generic_Deriving.html b/web/entries/Generic_Deriving.html
--- a/web/entries/Generic_Deriving.html
+++ b/web/entries/Generic_Deriving.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Deriving generic class instances for datatypes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>eriving
generic
class
instances
for
datatypes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Deriving generic class instances for datatypes</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Jonas Rädle (jonas /dot/ raedle /at/ tum /dot/ de) and
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-11-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>We provide a framework for automatically deriving instances for
generic type classes. Our approach is inspired by Haskell's
<i>generic-deriving</i> package and Scala's
<i>shapeless</i> library. In addition to generating the
code for type class functions, we also attempt to automatically prove
type class laws for these instances. As of now, however, some manual
proofs are still required for recursive datatypes.</p>
<p>Note: There are already articles in the AFP that provide
-automatic instantiation for a number of classes. Concretely, <a href="https://www.isa-afp.org/entries/Deriving.html">Deriving</a> allows the automatic instantiation of comparators, linear orders, equality, and hashing. <a href="https://www.isa-afp.org/entries/Show.html">Show</a> instantiates a Haskell-style <i>show</i> class.</p><p>Our approach works for arbitrary classes (with some Isabelle/HOL overhead for each class), but for a smaller set of datatypes.</p></div></td>
+automatic instantiation for a number of classes. Concretely, <a href="https://www.isa-afp.org/entries/Deriving.html">Deriving</a> allows the automatic instantiation of comparators, linear orders, equality, and hashing. <a href="https://www.isa-afp.org/entries/Show.html">Show</a> instantiates a Haskell-style <i>show</i> class.</p><p>Our approach works for arbitrary classes (with some Isabelle/HOL overhead for each class), but for a smaller set of datatypes.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Generic_Deriving-AFP,
author = {Jonas Rädle and Lars Hupel},
title = {Deriving generic class instances for datatypes},
journal = {Archive of Formal Proofs},
month = nov,
year = 2018,
note = {\url{http://isa-afp.org/entries/Generic_Deriving.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Generic_Deriving/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Generic_Deriving/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Generic_Deriving/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Generic_Deriving-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Generic_Deriving-2019-06-11.tar.gz">
afp-Generic_Deriving-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Generic_Deriving-2018-11-21.tar.gz">
afp-Generic_Deriving-2018-11-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
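To illustrate the idea behind the framework (a sketch only, in plain Scala without shapeless, and not the entry's Isabelle/HOL mechanism): a datatype is mapped to a sum-of-products representation, a single instance is written against that representation, and every datatype with such a mapping inherits the instance. The names Rep, genericEq and toRep below are purely illustrative.

    // Sketch only: a sum-of-products representation and one generic instance (equality).
    object GenericDerivingSketch {
      sealed trait Rep
      case class Atom[A](value: A)           extends Rep
      case class Prod(left: Rep, right: Rep) extends Rep
      case class Sum(tag: Int, payload: Rep) extends Rep

      // The one instance written against the representation.
      def genericEq(x: Rep, y: Rep): Boolean = (x, y) match {
        case (Atom(a), Atom(b))           => a == b
        case (Prod(l1, r1), Prod(l2, r2)) => genericEq(l1, l2) && genericEq(r1, r2)
        case (Sum(t1, p1), Sum(t2, p2))   => t1 == t2 && genericEq(p1, p2)
        case _                            => false
      }

      // A user datatype and its embedding; the AFP entry automates this step inside
      // Isabelle/HOL and additionally attempts to prove the type class laws.
      sealed trait Tree
      case class Leaf(n: Int)           extends Tree
      case class Node(l: Tree, r: Tree) extends Tree

      def toRep(t: Tree): Rep = t match {
        case Leaf(n)    => Sum(0, Atom(n))
        case Node(l, r) => Sum(1, Prod(toRep(l), toRep(r)))
      }
      def treeEq(a: Tree, b: Tree): Boolean = genericEq(toRep(a), toRep(b))
    }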
diff --git a/web/entries/Generic_Join.html b/web/entries/Generic_Join.html
--- a/web/entries/Generic_Join.html
+++ b/web/entries/Generic_Join.html
@@ -1,202 +1,202 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Multiway-Join Algorithms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">M</font>ultiway-Join
<font class="first">A</font>lgorithms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Multiway-Join Algorithms</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Thibault Dardinier
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-09-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Worst-case optimal multiway-join algorithms are a recent seminal
achievement of the database community. These algorithms compute the
natural join of multiple relational databases and improve in the worst
case over traditional query plan optimizations of nested binary joins.
In 2014, <a
href="https://doi.org/10.1145/2590989.2590991">Ngo, Ré,
and Rudra</a> gave a unified presentation of different multi-way
join algorithms. We formalized and proved correct their "Generic
-Join" algorithm and extended it to support negative joins.</div></td>
+Join" algorithm and extended it to support negative joins.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Generic_Join-AFP,
author = {Thibault Dardinier},
title = {Formalization of Multiway-Join Algorithms},
journal = {Archive of Formal Proofs},
month = sep,
year = 2019,
note = {\url{http://isa-afp.org/entries/Generic_Join.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="MFOTL_Monitor.html">MFOTL_Monitor</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="MFODL_Monitor_Optimized.html">MFODL_Monitor_Optimized</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Generic_Join/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Generic_Join/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Generic_Join/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Generic_Join-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Generic_Join-2019-09-18.tar.gz">
afp-Generic_Join-2019-09-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
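For context, "worst-case optimal" refers to the AGM bound (as usually stated; the exact form used in the entry is in its theory files): for a natural join $Q$ over relations $R_1, \dots, R_m$ and any fractional edge cover $(x_1, \dots, x_m)$ of the query's hypergraph,

\[ |Q| \;\le\; \prod_{i=1}^{m} |R_i|^{x_i}, \]

and a worst-case optimal join algorithm runs in time proportional to this bound (up to logarithmic factors), a guarantee that plans built from nested binary joins cannot give.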
diff --git a/web/entries/GewirthPGCProof.html b/web/entries/GewirthPGCProof.html
--- a/web/entries/GewirthPGCProof.html
+++ b/web/entries/GewirthPGCProof.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalisation
and
<font class="first">E</font>valuation
of
<font class="first">A</font>lan
<font class="first">G</font>ewirth's
<font class="first">P</font>roof
for
the
<font class="first">P</font>rinciple
of
<font class="first">G</font>eneric
<font class="first">C</font>onsistency
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
David Fuenmayor (davfuenmayor /at/ gmail /dot/ com) and
<a href="http://christoph-benzmueller.de">Christoph Benzmüller</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-10-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
An ambitious ethical theory ---Alan Gewirth's "Principle of
Generic Consistency"--- is encoded and analysed in Isabelle/HOL.
Gewirth's theory has stirred much attention in philosophy and
ethics and has been proposed as a potential means to bound the impact
-of artificial general intelligence.</div></td>
+of artificial general intelligence.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2019-04-09]:
added proof for a stronger variant of the PGC and exemplary inferences
(revision 88182cb0a2f6)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{GewirthPGCProof-AFP,
author = {David Fuenmayor and Christoph Benzmüller},
title = {Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = oct,
year = 2018,
note = {\url{http://isa-afp.org/entries/GewirthPGCProof.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GewirthPGCProof/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/GewirthPGCProof/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GewirthPGCProof/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-GewirthPGCProof-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-GewirthPGCProof-2019-06-11.tar.gz">
afp-GewirthPGCProof-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-GewirthPGCProof-2018-10-31.tar.gz">
afp-GewirthPGCProof-2018-10-31.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Girth_Chromatic.html b/web/entries/Girth_Chromatic.html
--- a/web/entries/Girth_Chromatic.html
+++ b/web/entries/Girth_Chromatic.html
@@ -1,254 +1,254 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Probabilistic Proof of the Girth-Chromatic Number Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">P</font>robabilistic
<font class="first">P</font>roof
of
the
<font class="first">G</font>irth-Chromatic
<font class="first">N</font>umber
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Probabilistic Proof of the Girth-Chromatic Number Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-02-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This works presents a formalization of the Girth-Chromatic number theorem in graph theory, stating that graphs with arbitrarily large girth and chromatic number exist. The proof uses the theory of Random Graphs to prove the existence with probabilistic arguments.</div></td>
+ <td class="abstract mathjax_process">This works presents a formalization of the Girth-Chromatic number theorem in graph theory, stating that graphs with arbitrarily large girth and chromatic number exist. The proof uses the theory of Random Graphs to prove the existence with probabilistic arguments.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Girth_Chromatic-AFP,
author = {Lars Noschinski},
title = {A Probabilistic Proof of the Girth-Chromatic Number Theorem},
journal = {Archive of Formal Proofs},
month = feb,
year = 2012,
note = {\url{http://isa-afp.org/entries/Girth_Chromatic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Random_Graph_Subgraph_Threshold.html">Random_Graph_Subgraph_Threshold</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Girth_Chromatic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Girth_Chromatic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Girth_Chromatic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Girth_Chromatic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Girth_Chromatic-2019-06-11.tar.gz">
afp-Girth_Chromatic-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Girth_Chromatic-2018-08-16.tar.gz">
afp-Girth_Chromatic-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Girth_Chromatic-2017-10-10.tar.gz">
afp-Girth_Chromatic-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Girth_Chromatic-2016-12-17.tar.gz">
afp-Girth_Chromatic-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Girth_Chromatic-2016-02-22.tar.gz">
afp-Girth_Chromatic-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Girth_Chromatic-2015-05-27.tar.gz">
afp-Girth_Chromatic-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Girth_Chromatic-2014-08-28.tar.gz">
afp-Girth_Chromatic-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Girth_Chromatic-2013-12-11.tar.gz">
afp-Girth_Chromatic-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Girth_Chromatic-2013-11-17.tar.gz">
afp-Girth_Chromatic-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Girth_Chromatic-2013-02-16.tar.gz">
afp-Girth_Chromatic-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Girth_Chromatic-2012-05-24.tar.gz">
afp-Girth_Chromatic-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Girth_Chromatic-2012-02-06.tar.gz">
afp-Girth_Chromatic-2012-02-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
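For reference, the theorem formalized here is, in its usual statement (due to Erdős, via the probabilistic method): for every $k$ there exists a finite graph $G$ with

\[ \mathrm{girth}(G) > k \quad\text{and}\quad \chi(G) > k, \]

i.e. a graph can have no short cycles and still need arbitrarily many colours.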
diff --git a/web/entries/GoedelGod.html b/web/entries/GoedelGod.html
--- a/web/entries/GoedelGod.html
+++ b/web/entries/GoedelGod.html
@@ -1,238 +1,238 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Gödel's God in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>ödel's
<font class="first">G</font>od
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Gödel's God in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://christoph-benzmueller.de">Christoph Benzmüller</a> and
<a href="http://www.logic.at/staff/bruno/">Bruno Woltzenlogel Paleo</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-11-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Dana Scott's version of Gödel's proof of God's existence is formalized in quantified
+ <td class="abstract mathjax_process">Dana Scott's version of Gödel's proof of God's existence is formalized in quantified
modal logic KB (QML KB).
QML KB is modeled as a fragment of classical higher-order logic (HOL);
-thus, the formalization is essentially a formalization in HOL.</div></td>
+thus, the formalization is essentially a formalization in HOL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{GoedelGod-AFP,
author = {Christoph Benzmüller and Bruno Woltzenlogel Paleo},
title = {Gödel's God in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = nov,
year = 2013,
note = {\url{http://isa-afp.org/entries/GoedelGod.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GoedelGod/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/GoedelGod/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GoedelGod/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-GoedelGod-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-GoedelGod-2019-06-11.tar.gz">
afp-GoedelGod-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-GoedelGod-2018-08-16.tar.gz">
afp-GoedelGod-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-GoedelGod-2017-10-10.tar.gz">
afp-GoedelGod-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-GoedelGod-2016-12-17.tar.gz">
afp-GoedelGod-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-GoedelGod-2016-02-22.tar.gz">
afp-GoedelGod-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-GoedelGod-2015-05-27.tar.gz">
afp-GoedelGod-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-GoedelGod-2014-08-28.tar.gz">
afp-GoedelGod-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-GoedelGod-2013-12-11.tar.gz">
afp-GoedelGod-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-GoedelGod-2013-11-19.tar.gz">
afp-GoedelGod-2013-11-19.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-GoedelGod-2013-11-18.tar.gz">
afp-GoedelGod-2013-11-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Goodstein_Lambda.html b/web/entries/Goodstein_Lambda.html
--- a/web/entries/Goodstein_Lambda.html
+++ b/web/entries/Goodstein_Lambda.html
@@ -1,200 +1,200 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Implementing the Goodstein Function in &lambda;-Calculus - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>mplementing
the
<font class="first">G</font>oodstein
<font class="first">F</font>unction
in
<font class="first">&</font>lambda;-Calculus
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Implementing the Goodstein Function in &lambda;-Calculus</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Bertram Felgenhauer (int-e /at/ gmx /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-02-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In this formalization, we develop an implementation of the Goodstein
function G in plain &lambda;-calculus, linked to a concise, self-contained
specification. The implementation works on a Church-encoded
representation of countable ordinals. The initial conversion to
hereditary base 2 is not covered, but the material is sufficient to
compute the particular value G(16), and easily extends to other fixed
-arguments.</div></td>
+arguments.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Goodstein_Lambda-AFP,
author = {Bertram Felgenhauer},
title = {Implementing the Goodstein Function in &lambda;-Calculus},
journal = {Archive of Formal Proofs},
month = feb,
year = 2020,
note = {\url{http://isa-afp.org/entries/Goodstein_Lambda.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Goodstein_Lambda/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Goodstein_Lambda/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Goodstein_Lambda/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Goodstein_Lambda-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Goodstein_Lambda-2020-02-24.tar.gz">
afp-Goodstein_Lambda-2020-02-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
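For orientation, the standard definitions (phrased here in their common textbook form, not quoted from the entry) are: writing a number in hereditary base $b$ and letting $\mathrm{bump}_b$ replace every occurrence of the base $b$ by $b+1$, the Goodstein sequence of $n$ is

\[ n_0 = n, \qquad n_{k+1} = \mathrm{bump}_{k+2}(n_k) - 1, \]

and the Goodstein function is $G(n) = \min\{k \mid n_k = 0\}$ (indexing conventions differ slightly between presentations). Goodstein's theorem states that this minimum always exists; the entry implements $G$ in plain λ-calculus, with material sufficient to compute the particular value $G(16)$.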
diff --git a/web/entries/GraphMarkingIBP.html b/web/entries/GraphMarkingIBP.html
--- a/web/entries/GraphMarkingIBP.html
+++ b/web/entries/GraphMarkingIBP.html
@@ -1,283 +1,283 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erification
of
the
<font class="first">D</font>eutsch-Schorr-Waite
<font class="first">G</font>raph
<font class="first">M</font>arking
<font class="first">A</font>lgorithm
using
<font class="first">D</font>ata
<font class="first">R</font>efinement
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Viorel Preoteasa (viorel /dot/ preoteasa /at/ aalto /dot/ fi) and
<a href="http://users.abo.fi/Ralph-Johan.Back/">Ralph-Johan Back</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-05-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The verification of the Deutsch-Schorr-Waite graph marking algorithm is used as a benchmark in many formalizations of pointer programs. The main purpose of this mechanization is to show how data refinement of invariant based programs can be used in verifying practical algorithms. The verification starts with an abstract algorithm working on a graph given by a relation <i>next</i> on nodes. Gradually the abstract program is refined into Deutsch-Schorr-Waite graph marking algorithm where only one bit per graph node of additional memory is used for marking.</div></td>
+ <td class="abstract mathjax_process">The verification of the Deutsch-Schorr-Waite graph marking algorithm is used as a benchmark in many formalizations of pointer programs. The main purpose of this mechanization is to show how data refinement of invariant based programs can be used in verifying practical algorithms. The verification starts with an abstract algorithm working on a graph given by a relation <i>next</i> on nodes. Gradually the abstract program is refined into Deutsch-Schorr-Waite graph marking algorithm where only one bit per graph node of additional memory is used for marking.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2012-01-05]: Updated for the new definition of data refinement and the new syntax for demonic and angelic update statements</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{GraphMarkingIBP-AFP,
author = {Viorel Preoteasa and Ralph-Johan Back},
title = {Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement},
journal = {Archive of Formal Proofs},
month = may,
year = 2010,
note = {\url{http://isa-afp.org/entries/GraphMarkingIBP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="DataRefinementIBP.html">DataRefinementIBP</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GraphMarkingIBP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/GraphMarkingIBP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/GraphMarkingIBP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-GraphMarkingIBP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-GraphMarkingIBP-2019-06-11.tar.gz">
afp-GraphMarkingIBP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-GraphMarkingIBP-2018-08-16.tar.gz">
afp-GraphMarkingIBP-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-GraphMarkingIBP-2017-10-10.tar.gz">
afp-GraphMarkingIBP-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-GraphMarkingIBP-2016-12-17.tar.gz">
afp-GraphMarkingIBP-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-GraphMarkingIBP-2016-02-22.tar.gz">
afp-GraphMarkingIBP-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-GraphMarkingIBP-2015-05-27.tar.gz">
afp-GraphMarkingIBP-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-GraphMarkingIBP-2014-08-28.tar.gz">
afp-GraphMarkingIBP-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-GraphMarkingIBP-2013-12-11.tar.gz">
afp-GraphMarkingIBP-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-GraphMarkingIBP-2013-11-17.tar.gz">
afp-GraphMarkingIBP-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-GraphMarkingIBP-2013-02-16.tar.gz">
afp-GraphMarkingIBP-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-GraphMarkingIBP-2012-05-24.tar.gz">
afp-GraphMarkingIBP-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-GraphMarkingIBP-2012-03-15.tar.gz">
afp-GraphMarkingIBP-2012-03-15.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-GraphMarkingIBP-2011-10-11.tar.gz">
afp-GraphMarkingIBP-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-GraphMarkingIBP-2011-02-11.tar.gz">
afp-GraphMarkingIBP-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-GraphMarkingIBP-2010-07-01.tar.gz">
afp-GraphMarkingIBP-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-GraphMarkingIBP-2010-05-28.tar.gz">
afp-GraphMarkingIBP-2010-05-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
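As a rough sketch of the starting point of the refinement chain described in the abstract above (plain Scala, not the formalized invariant-based program): the abstract algorithm simply marks every node reachable from the root via the successor relation next; the subsequent refinement steps turn this into the pointer-reversing Deutsch-Schorr-Waite algorithm described in the abstract.

    // Sketch only: abstract reachability marking over a successor relation `next`.
    object AbstractMarking {
      def mark[N](root: N, next: N => Set[N]): Set[N] = {
        @annotation.tailrec
        def loop(marked: Set[N], pending: List[N]): Set[N] = pending match {
          case Nil                    => marked
          case n :: rest if marked(n) => loop(marked, rest)
          case n :: rest              => loop(marked + n, next(n).toList ::: rest)
        }
        loop(Set.empty, List(root))
      }
    }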
diff --git a/web/entries/Graph_Saturation.html b/web/entries/Graph_Saturation.html
--- a/web/entries/Graph_Saturation.html
+++ b/web/entries/Graph_Saturation.html
@@ -1,198 +1,198 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Graph Saturation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>raph
<font class="first">S</font>aturation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Graph Saturation</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Sebastiaan J. C. Joosten
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-11-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This is an Isabelle/HOL formalisation of graph saturation, closely
following a <a href="https://doi.org/10.1016/j.jlamp.2018.06.005">paper by the author</a> on graph saturation.
Nine out of ten lemmas of the original paper are proven in this
formalisation. The formalisation additionally includes two theorems
that show the main premise of the paper: that consistency and
entailment are decided through graph saturation. This formalisation
does not give executable code, and it did not implement any of the
-optimisations suggested in the paper.</div></td>
+optimisations suggested in the paper.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Graph_Saturation-AFP,
author = {Sebastiaan J. C. Joosten},
title = {Graph Saturation},
journal = {Archive of Formal Proofs},
month = nov,
year = 2018,
note = {\url{http://isa-afp.org/entries/Graph_Saturation.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Graph_Saturation/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Graph_Saturation/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Graph_Saturation/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Graph_Saturation-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Graph_Saturation-2019-06-11.tar.gz">
afp-Graph_Saturation-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Graph_Saturation-2018-11-28.tar.gz">
afp-Graph_Saturation-2018-11-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Graph_Theory.html b/web/entries/Graph_Theory.html
--- a/web/entries/Graph_Theory.html
+++ b/web/entries/Graph_Theory.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Graph Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>raph
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Graph Theory</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-04-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This development provides a formalization of directed graphs, supporting (labelled) multi-edges and infinite graphs. A polymorphic edge type allows edges to be treated as pairs of vertices, if multi-edges are not required. Formalized properties are i.a. walks (and related concepts), connectedness and subgraphs and basic properties of isomorphisms.
+ <td class="abstract mathjax_process">This development provides a formalization of directed graphs, supporting (labelled) multi-edges and infinite graphs. A polymorphic edge type allows edges to be treated as pairs of vertices, if multi-edges are not required. Formalized properties are i.a. walks (and related concepts), connectedness and subgraphs and basic properties of isomorphisms.
<p>
-This formalization is used to prove characterizations of Euler Trails, Shortest Paths and Kuratowski subgraphs.</div></td>
+This formalization is used to prove characterizations of Euler Trails, Shortest Paths and Kuratowski subgraphs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Graph_Theory-AFP,
author = {Lars Noschinski},
title = {Graph Theory},
journal = {Archive of Formal Proofs},
month = apr,
year = 2013,
note = {\url{http://isa-afp.org/entries/Graph_Theory.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Parity_Game.html">Parity_Game</a>, <a href="Planarity_Certificates.html">Planarity_Certificates</a>, <a href="ShortestPath.html">ShortestPath</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Graph_Theory/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Graph_Theory/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Graph_Theory/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Graph_Theory-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Graph_Theory-2019-06-11.tar.gz">
afp-Graph_Theory-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Graph_Theory-2018-08-16.tar.gz">
afp-Graph_Theory-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Graph_Theory-2017-10-10.tar.gz">
afp-Graph_Theory-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Graph_Theory-2016-12-17.tar.gz">
afp-Graph_Theory-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Graph_Theory-2016-02-22.tar.gz">
afp-Graph_Theory-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Graph_Theory-2015-05-27.tar.gz">
afp-Graph_Theory-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Graph_Theory-2014-08-28.tar.gz">
afp-Graph_Theory-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Graph_Theory-2013-12-11.tar.gz">
afp-Graph_Theory-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Graph_Theory-2013-11-17.tar.gz">
afp-Graph_Theory-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Graph_Theory-2013-05-02.tar.gz">
afp-Graph_Theory-2013-05-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Green.html b/web/entries/Green.html
--- a/web/entries/Green.html
+++ b/web/entries/Green.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An Isabelle/HOL formalisation of Green's Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>n
<font class="first">I</font>sabelle/HOL
formalisation
of
<font class="first">G</font>reen's
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">An Isabelle/HOL formalisation of Green's Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~mansour/">Mohammad Abdulaziz</a> and
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-01-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalise a statement of Green’s theorem—the first formalisation to
our knowledge—in Isabelle/HOL. The theorem statement that we formalise
is enough for most applications, especially in physics and
engineering. Our formalisation is made possible by a novel proof that
avoids the ubiquitous line integral cancellation argument. This
eliminates the need to formalise orientations and region boundaries
explicitly with respect to the outwards-pointing normal vector.
Instead we appeal to a homological argument about equivalences between
-paths.</div></td>
+paths.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Green-AFP,
author = {Mohammad Abdulaziz and Lawrence C. Paulson},
title = {An Isabelle/HOL formalisation of Green's Theorem},
journal = {Archive of Formal Proofs},
month = jan,
year = 2018,
note = {\url{http://isa-afp.org/entries/Green.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Green/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Green/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Green/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Green-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Green-2019-06-11.tar.gz">
afp-Green-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Green-2018-08-16.tar.gz">
afp-Green-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Green-2018-01-12.tar.gz">
afp-Green-2018-01-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Groebner_Bases.html b/web/entries/Groebner_Bases.html
--- a/web/entries/Groebner_Bases.html
+++ b/web/entries/Groebner_Bases.html
@@ -1,226 +1,226 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Gröbner Bases Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>röbner
<font class="first">B</font>ases
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Gröbner Bases Theory</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a> and
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This formalization is concerned with the theory of Gröbner bases in
(commutative) multivariate polynomial rings over fields, originally
developed by Buchberger in his 1965 PhD thesis. Apart from the
statement and proof of the main theorem of the theory, the
formalization also implements Buchberger's algorithm for actually
computing Gröbner bases as a tail-recursive function, thus allowing to
effectively decide ideal membership in finitely generated polynomial
ideals. Furthermore, all functions can be executed on a concrete
-representation of multivariate polynomials as association lists.</div></td>
+representation of multivariate polynomials as association lists.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2019-04-18]: Specialized Gröbner bases to less abstract representation of polynomials, where
power-products are represented as polynomial mappings.<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Groebner_Bases-AFP,
author = {Fabian Immler and Alexander Maletzky},
title = {Gröbner Bases Theory},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/Groebner_Bases.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Deriving.html">Deriving</a>, <a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a>, <a href="Polynomials.html">Polynomials</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Groebner_Macaulay.html">Groebner_Macaulay</a>, <a href="Nullstellensatz.html">Nullstellensatz</a>, <a href="Signature_Groebner.html">Signature_Groebner</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Groebner_Bases/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Groebner_Bases/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Groebner_Bases/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Groebner_Bases-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Groebner_Bases-2019-06-11.tar.gz">
afp-Groebner_Bases-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Groebner_Bases-2018-08-16.tar.gz">
afp-Groebner_Bases-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Groebner_Bases-2017-10-10.tar.gz">
afp-Groebner_Bases-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Groebner_Bases-2016-12-17.tar.gz">
afp-Groebner_Bases-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Groebner_Bases-2016-05-02.tar.gz">
afp-Groebner_Bases-2016-05-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Groebner_Macaulay.html b/web/entries/Groebner_Macaulay.html
--- a/web/entries/Groebner_Macaulay.html
+++ b/web/entries/Groebner_Macaulay.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>röbner
<font class="first">B</font>ases,
<font class="first">M</font>acaulay
<font class="first">M</font>atrices
and
<font class="first">D</font>ubé's
<font class="first">D</font>egree
<font class="first">B</font>ounds
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-06-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry formalizes the connection between Gröbner bases and
Macaulay matrices (sometimes also referred to as `generalized
Sylvester matrices'). In particular, it contains a method for
computing Gröbner bases, which proceeds by first constructing some
Macaulay matrix of the initial set of polynomials, then row-reducing
this matrix, and finally converting the result back into a set of
polynomials. The output is shown to be a Gröbner basis if the Macaulay
matrix constructed in the first step is sufficiently large. In order
to obtain concrete upper bounds on the size of the matrix (and hence
turn the method into an effectively executable algorithm), Dubé's
degree bounds on Gröbner bases are utilized; consequently, they are
-also part of the formalization.</div></td>
+also part of the formalization.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Groebner_Macaulay-AFP,
author = {Alexander Maletzky},
title = {Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds},
journal = {Archive of Formal Proofs},
month = jun,
year = 2019,
note = {\url{http://isa-afp.org/entries/Groebner_Macaulay.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Groebner_Bases.html">Groebner_Bases</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Groebner_Macaulay/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Groebner_Macaulay/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Groebner_Macaulay/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Groebner_Macaulay-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Groebner_Macaulay-2019-06-17.tar.gz">
afp-Groebner_Macaulay-2019-06-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Gromov_Hyperbolicity.html b/web/entries/Gromov_Hyperbolicity.html
--- a/web/entries/Gromov_Hyperbolicity.html
+++ b/web/entries/Gromov_Hyperbolicity.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Gromov Hyperbolicity - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>romov
<font class="first">H</font>yperbolicity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Gromov Hyperbolicity</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Sebastien Gouezel
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-01-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A geodesic metric space is Gromov hyperbolic if all its geodesic
triangles are thin, i.e., every side is contained in a fixed
thickening of the two other sides. While this definition looks
innocuous, it has proved extremely important and versatile in modern
geometry since its introduction by Gromov. We formalize the basic
classical properties of Gromov hyperbolic spaces, notably the Morse
lemma asserting that quasigeodesics are close to geodesics, the
invariance of hyperbolicity under quasi-isometries, we define and
study the Gromov boundary and its associated distance, and prove that
a quasi-isometry between Gromov hyperbolic spaces extends to a
homeomorphism of the boundaries. We also prove a less classical
theorem, by Bonk and Schramm, asserting that a Gromov hyperbolic space
embeds isometrically in a geodesic Gromov-hyperbolic space. As the
original proof uses a transfinite sequence of Cauchy completions, this
is an interesting formalization exercise. Along the way, we introduce
basic material on isometries, quasi-isometries, Lipschitz maps,
geodesic spaces, the Hausdorff distance, the Cauchy completion of a
-metric space, and the exponential on extended real numbers.</div></td>
+metric space, and the exponential on extended real numbers.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Gromov_Hyperbolicity-AFP,
author = {Sebastien Gouezel},
title = {Gromov Hyperbolicity},
journal = {Archive of Formal Proofs},
month = jan,
year = 2018,
note = {\url{http://isa-afp.org/entries/Gromov_Hyperbolicity.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Ergodic_Theory.html">Ergodic_Theory</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gromov_Hyperbolicity/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Gromov_Hyperbolicity/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Gromov_Hyperbolicity/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Gromov_Hyperbolicity-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Gromov_Hyperbolicity-2019-06-11.tar.gz">
afp-Gromov_Hyperbolicity-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Gromov_Hyperbolicity-2018-08-16.tar.gz">
afp-Gromov_Hyperbolicity-2018-08-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Group-Ring-Module.html b/web/entries/Group-Ring-Module.html
--- a/web/entries/Group-Ring-Module.html
+++ b/web/entries/Group-Ring-Module.html
@@ -1,303 +1,303 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Groups, Rings and Modules - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>roups,
<font class="first">R</font>ings
and
<font class="first">M</font>odules
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Groups, Rings and Modules</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Hidetsune Kobayashi,
L. Chen and
H. Murao
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-05-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The theory of groups, rings and modules is developed to a great depth. Group theory results include Zassenhaus's theorem and the Jordan-Hoelder theorem. The ring theory development includes ideals, quotient rings and the Chinese remainder theorem. The module development includes the Nakayama lemma, exact sequences and Tensor products.</div></td>
+ <td class="abstract mathjax_process">The theory of groups, rings and modules is developed to a great depth. Group theory results include Zassenhaus's theorem and the Jordan-Hoelder theorem. The ring theory development includes ideals, quotient rings and the Chinese remainder theorem. The module development includes the Nakayama lemma, exact sequences and Tensor products.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Group-Ring-Module-AFP,
author = {Hidetsune Kobayashi and L. Chen and H. Murao},
title = {Groups, Rings and Modules},
journal = {Archive of Formal Proofs},
month = may,
year = 2004,
note = {\url{http://isa-afp.org/entries/Group-Ring-Module.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Valuation.html">Valuation</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Group-Ring-Module/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Group-Ring-Module/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Group-Ring-Module/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Group-Ring-Module-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Group-Ring-Module-2019-06-11.tar.gz">
afp-Group-Ring-Module-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Group-Ring-Module-2018-08-16.tar.gz">
afp-Group-Ring-Module-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Group-Ring-Module-2017-10-10.tar.gz">
afp-Group-Ring-Module-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Group-Ring-Module-2016-12-17.tar.gz">
afp-Group-Ring-Module-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Group-Ring-Module-2016-02-22.tar.gz">
afp-Group-Ring-Module-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Group-Ring-Module-2015-05-27.tar.gz">
afp-Group-Ring-Module-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Group-Ring-Module-2014-08-28.tar.gz">
afp-Group-Ring-Module-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Group-Ring-Module-2013-12-11.tar.gz">
afp-Group-Ring-Module-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Group-Ring-Module-2013-11-17.tar.gz">
afp-Group-Ring-Module-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Group-Ring-Module-2013-03-02.tar.gz">
afp-Group-Ring-Module-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Group-Ring-Module-2013-02-16.tar.gz">
afp-Group-Ring-Module-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Group-Ring-Module-2012-05-24.tar.gz">
afp-Group-Ring-Module-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Group-Ring-Module-2011-10-11.tar.gz">
afp-Group-Ring-Module-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Group-Ring-Module-2011-02-11.tar.gz">
afp-Group-Ring-Module-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Group-Ring-Module-2010-07-01.tar.gz">
afp-Group-Ring-Module-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Group-Ring-Module-2009-12-12.tar.gz">
afp-Group-Ring-Module-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Group-Ring-Module-2009-04-30.tar.gz">
afp-Group-Ring-Module-2009-04-30.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Group-Ring-Module-2009-04-29.tar.gz">
afp-Group-Ring-Module-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Group-Ring-Module-2008-06-10.tar.gz">
afp-Group-Ring-Module-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Group-Ring-Module-2007-11-27.tar.gz">
afp-Group-Ring-Module-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Group-Ring-Module-2005-10-14.tar.gz">
afp-Group-Ring-Module-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Group-Ring-Module-2004-05-20.tar.gz">
afp-Group-Ring-Module-2004-05-20.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Group-Ring-Module-2004-05-19.tar.gz">
afp-Group-Ring-Module-2004-05-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/HOL-CSP.html b/web/entries/HOL-CSP.html
--- a/web/entries/HOL-CSP.html
+++ b/web/entries/HOL-CSP.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>HOL-CSP Version 2.0 - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>OL-CSP
<font class="first">V</font>ersion
<font class="first">2</font>.0
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">HOL-CSP Version 2.0</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Safouan Taha (safouan /dot/ taha /at/ lri /dot/ fr),
Lina Ye (lina /dot/ ye /at/ lri /dot/ fr) and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-04-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This is a complete formalization of the work of Hoare and Roscoe on
the denotational semantics of the Failure/Divergence Model of CSP. It
follows essentially the presentation of CSP in Roscoe’s Book ”Theory
and Practice of Concurrency” [8] and the semantic details in a joint
Paper of Roscoe and Brooks ”An improved failures model for
communicating processes". The present work is based on a prior
formalization attempt, called HOL-CSP 1.0, done in 1997 by H. Tej and
B. Wolff with the Isabelle proof technology available at that time.
This work revealed minor, but omnipresent foundational errors in key
concepts like the process invariant. The present version HOL-CSP
profits from substantially improved libraries (notably HOLCF),
improved automated proof techniques, and structured proof techniques
-in Isar and is substantially shorter but more complete.</div></td>
+in Isar and is substantially shorter but more complete.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{HOL-CSP-AFP,
author = {Safouan Taha and Lina Ye and Burkhart Wolff},
title = {HOL-CSP Version 2.0},
journal = {Archive of Formal Proofs},
month = apr,
year = 2019,
note = {\url{http://isa-afp.org/entries/HOL-CSP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HOL-CSP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/HOL-CSP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HOL-CSP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-HOL-CSP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-HOL-CSP-2019-06-11.tar.gz">
afp-HOL-CSP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-HOL-CSP-2019-04-29.tar.gz">
afp-HOL-CSP-2019-04-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/HOLCF-Prelude.html b/web/entries/HOLCF-Prelude.html
--- a/web/entries/HOLCF-Prelude.html
+++ b/web/entries/HOLCF-Prelude.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>HOLCF-Prelude - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>OLCF-Prelude
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">HOLCF-Prelude</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Joachim Breitner (joachim /at/ cis /dot/ upenn /dot/ edu),
Brian Huffman,
Neil Mitchell and
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-07-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The Isabelle/HOLCF-Prelude is a formalization of a large part of
Haskell's standard prelude in Isabelle/HOLCF. We use it to prove
the correctness of the Eratosthenes' Sieve, in its
self-referential implementation commonly used to showcase
Haskell's laziness; prove correctness of GHC's
"fold/build" rule and related rewrite rules; and certify a
-number of hints suggested by HLint.</div></td>
+number of hints suggested by HLint.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{HOLCF-Prelude-AFP,
author = {Joachim Breitner and Brian Huffman and Neil Mitchell and Christian Sternagel},
title = {HOLCF-Prelude},
journal = {Archive of Formal Proofs},
month = jul,
year = 2017,
note = {\url{http://isa-afp.org/entries/HOLCF-Prelude.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HOLCF-Prelude/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/HOLCF-Prelude/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HOLCF-Prelude/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-HOLCF-Prelude-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-HOLCF-Prelude-2019-06-11.tar.gz">
afp-HOLCF-Prelude-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-HOLCF-Prelude-2018-08-16.tar.gz">
afp-HOLCF-Prelude-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-HOLCF-Prelude-2017-10-10.tar.gz">
afp-HOLCF-Prelude-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-HOLCF-Prelude-2017-07-15.tar.gz">
afp-HOLCF-Prelude-2017-07-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/HRB-Slicing.html b/web/entries/HRB-Slicing.html
--- a/web/entries/HRB-Slicing.html
+++ b/web/entries/HRB-Slicing.html
@@ -1,278 +1,278 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>acking
up
<font class="first">S</font>licing:
<font class="first">V</font>erifying
the
<font class="first">I</font>nterprocedural
<font class="first">T</font>wo-Phase
<font class="first">H</font>orwitz-Reps-Binkley
<font class="first">S</font>licer
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-11-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">After verifying <a href="Slicing.html">dynamic and static interprocedural slicing</a>, we present a modular framework for static interprocedural slicing. To this end, we formalized the standard two-phase slicer from Horwitz, Reps and Binkley (see their TOPLAS 12(1) 1990 paper) together with summary edges as presented by Reps et al. (see FSE 1994). The framework is again modular in the programming language by using an abstract CFG, defined via structural and well-formedness properties. Using a weak simulation between the original and sliced graph, we were able to prove the correctness of static interprocedural slicing. We also instantiate our framework with a simple While language with procedures. This shows that the chosen abstractions are indeed valid.</div></td>
+ <td class="abstract mathjax_process">After verifying <a href="Slicing.html">dynamic and static interprocedural slicing</a>, we present a modular framework for static interprocedural slicing. To this end, we formalized the standard two-phase slicer from Horwitz, Reps and Binkley (see their TOPLAS 12(1) 1990 paper) together with summary edges as presented by Reps et al. (see FSE 1994). The framework is again modular in the programming language by using an abstract CFG, defined via structural and well-formedness properties. Using a weak simulation between the original and sliced graph, we were able to prove the correctness of static interprocedural slicing. We also instantiate our framework with a simple While language with procedures. This shows that the chosen abstractions are indeed valid.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{HRB-Slicing-AFP,
author = {Daniel Wasserrab},
title = {Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer},
journal = {Archive of Formal Proofs},
month = nov,
year = 2009,
note = {\url{http://isa-afp.org/entries/HRB-Slicing.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Jinja.html">Jinja</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="InformationFlowSlicing_Inter.html">InformationFlowSlicing_Inter</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HRB-Slicing/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/HRB-Slicing/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HRB-Slicing/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-HRB-Slicing-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-HRB-Slicing-2019-06-11.tar.gz">
afp-HRB-Slicing-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-HRB-Slicing-2018-08-16.tar.gz">
afp-HRB-Slicing-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-HRB-Slicing-2017-10-10.tar.gz">
afp-HRB-Slicing-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-HRB-Slicing-2016-12-17.tar.gz">
afp-HRB-Slicing-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-HRB-Slicing-2016-02-22.tar.gz">
afp-HRB-Slicing-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-HRB-Slicing-2015-05-27.tar.gz">
afp-HRB-Slicing-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-HRB-Slicing-2014-08-28.tar.gz">
afp-HRB-Slicing-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-HRB-Slicing-2013-12-11.tar.gz">
afp-HRB-Slicing-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-HRB-Slicing-2013-11-17.tar.gz">
afp-HRB-Slicing-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-HRB-Slicing-2013-02-16.tar.gz">
afp-HRB-Slicing-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-HRB-Slicing-2012-05-24.tar.gz">
afp-HRB-Slicing-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-HRB-Slicing-2011-10-11.tar.gz">
afp-HRB-Slicing-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-HRB-Slicing-2011-02-11.tar.gz">
afp-HRB-Slicing-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-HRB-Slicing-2010-07-01.tar.gz">
afp-HRB-Slicing-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-HRB-Slicing-2009-12-12.tar.gz">
afp-HRB-Slicing-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-HRB-Slicing-2009-11-19.tar.gz">
afp-HRB-Slicing-2009-11-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Heard_Of.html b/web/entries/Heard_Of.html
--- a/web/entries/Heard_Of.html
+++ b/web/entries/Heard_Of.html
@@ -1,282 +1,282 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erifying
<font class="first">F</font>ault-Tolerant
<font class="first">D</font>istributed
<font class="first">A</font>lgorithms
in
the
<font class="first">H</font>eard-Of
<font class="first">M</font>odel
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Henri Debrat (henri /dot/ debrat /at/ loria /dot/ fr) and
<a href="http://www.loria.fr/~merz">Stephan Merz</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-07-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Distributed computing is inherently based on replication, promising
increased tolerance to failures of individual computing nodes or
communication channels. Realizing this promise, however, involves
quite subtle algorithmic mechanisms, and requires precise statements
about the kinds and numbers of faults that an algorithm tolerates (such
as process crashes, communication faults or corrupted values). The
landmark theorem due to Fischer, Lynch, and Paterson shows that it is
impossible to achieve Consensus among N asynchronously communicating
nodes in the presence of even a single permanent failure. Existing
solutions must rely on assumptions of "partial synchrony".
<p>
Indeed, there have been numerous misunderstandings about what exactly a given
algorithm is supposed to realize in what kinds of environments. Moreover, the
abundance of subtly different computational models complicates comparisons
between different algorithms. Charron-Bost and Schiper introduced the Heard-Of
model for representing algorithms and failure assumptions in a uniform
framework, simplifying comparisons between algorithms.
<p>
In this contribution, we represent the Heard-Of model in Isabelle/HOL. We define
two semantics of runs of algorithms with different units of atomicity and relate
these through a reduction theorem that allows us to verify algorithms in the
coarse-grained semantics (where proofs are easier) and infer their correctness
for the fine-grained one (which corresponds to actual executions). We
instantiate the framework by verifying six Consensus algorithms that differ in
-the underlying algorithmic mechanisms and the kinds of faults they tolerate.</div></td>
+the underlying algorithmic mechanisms and the kinds of faults they tolerate.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Heard_Of-AFP,
author = {Henri Debrat and Stephan Merz},
title = {Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model},
journal = {Archive of Formal Proofs},
month = jul,
year = 2012,
note = {\url{http://isa-afp.org/entries/Heard_Of.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Stuttering_Equivalence.html">Stuttering_Equivalence</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Consensus_Refined.html">Consensus_Refined</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Heard_Of/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Heard_Of/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Heard_Of/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Heard_Of-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Heard_Of-2019-06-11.tar.gz">
afp-Heard_Of-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Heard_Of-2018-08-16.tar.gz">
afp-Heard_Of-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Heard_Of-2017-10-10.tar.gz">
afp-Heard_Of-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Heard_Of-2016-12-17.tar.gz">
afp-Heard_Of-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Heard_Of-2016-02-22.tar.gz">
afp-Heard_Of-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Heard_Of-2015-05-27.tar.gz">
afp-Heard_Of-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Heard_Of-2014-08-28.tar.gz">
afp-Heard_Of-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Heard_Of-2013-12-11.tar.gz">
afp-Heard_Of-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Heard_Of-2013-11-17.tar.gz">
afp-Heard_Of-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Heard_Of-2013-03-02.tar.gz">
afp-Heard_Of-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Heard_Of-2013-02-16.tar.gz">
afp-Heard_Of-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Heard_Of-2012-07-30.tar.gz">
afp-Heard_Of-2012-07-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Hello_World.html b/web/entries/Hello_World.html
--- a/web/entries/Hello_World.html
+++ b/web/entries/Hello_World.html
@@ -1,192 +1,192 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hello World - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>ello
<font class="first">W</font>orld
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Hello World</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a> and
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-03-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In this article, we present a formalization of the well-known
"Hello, World!" code, including a formal framework for
reasoning about IO. Our model is inspired by the handling of IO in
Haskell. We start by formalizing the 🌍 and embrace the IO monad
afterwards. Then we present a sample main :: IO (), followed by its
-proof of correctness.</div></td>
+proof of correctness.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Hello_World-AFP,
author = {Cornelius Diekmann and Lars Hupel},
title = {Hello World},
journal = {Archive of Formal Proofs},
month = mar,
year = 2020,
note = {\url{http://isa-afp.org/entries/Hello_World.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hello_World/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Hello_World/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hello_World/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Hello_World-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Hello_World-2020-03-23.tar.gz">
afp-Hello_World-2020-03-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/HereditarilyFinite.html b/web/entries/HereditarilyFinite.html
--- a/web/entries/HereditarilyFinite.html
+++ b/web/entries/HereditarilyFinite.html
@@ -1,243 +1,243 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Hereditarily Finite Sets - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">H</font>ereditarily
<font class="first">F</font>inite
<font class="first">S</font>ets
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Hereditarily Finite Sets</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-11-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The theory of hereditarily finite sets is formalised, following
+ <td class="abstract mathjax_process">The theory of hereditarily finite sets is formalised, following
the <a href="http://journals.impan.gov.pl/dm/Inf/422-0-1.html">development</a> of Swierczkowski.
An HF set is a finite collection of other HF sets; they enjoy an induction principle
and satisfy all the axioms of ZF set theory apart from the axiom of infinity, which is negated.
All constructions that are possible in ZF set theory (Cartesian products, disjoint sums, natural numbers,
functions) without using infinite sets are possible here.
The definition of addition for the HF sets follows Kirby.
This development forms the foundation for the Isabelle proof of Gödel's incompleteness theorems,
-which has been <a href="Incompleteness.html">formalised separately</a>.</div></td>
+which has been <a href="Incompleteness.html">formalised separately</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-02-23]: Added the theory "Finitary" defining the class of types that can be embedded in hf, including int, char, option, list, etc.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{HereditarilyFinite-AFP,
author = {Lawrence C. Paulson},
title = {The Hereditarily Finite Sets},
journal = {Archive of Formal Proofs},
month = nov,
year = 2013,
note = {\url{http://isa-afp.org/entries/HereditarilyFinite.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Finite_Automata_HF.html">Finite_Automata_HF</a>, <a href="Incompleteness.html">Incompleteness</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HereditarilyFinite/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/HereditarilyFinite/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HereditarilyFinite/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-HereditarilyFinite-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-HereditarilyFinite-2019-06-11.tar.gz">
afp-HereditarilyFinite-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-HereditarilyFinite-2018-08-16.tar.gz">
afp-HereditarilyFinite-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-HereditarilyFinite-2017-10-10.tar.gz">
afp-HereditarilyFinite-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-HereditarilyFinite-2016-12-17.tar.gz">
afp-HereditarilyFinite-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-HereditarilyFinite-2016-02-22.tar.gz">
afp-HereditarilyFinite-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-HereditarilyFinite-2015-05-27.tar.gz">
afp-HereditarilyFinite-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-HereditarilyFinite-2014-08-28.tar.gz">
afp-HereditarilyFinite-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-HereditarilyFinite-2013-12-11.tar.gz">
afp-HereditarilyFinite-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-HereditarilyFinite-2013-11-17.tar.gz">
afp-HereditarilyFinite-2013-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Hermite.html b/web/entries/Hermite.html
--- a/web/entries/Hermite.html
+++ b/web/entries/Hermite.html
@@ -1,215 +1,215 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hermite Normal Form - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>ermite
<font class="first">N</font>ormal
<font class="first">F</font>orm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Hermite Normal Form</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a> and
<a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-07-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Hermite Normal Form is a canonical matrix analogue of Reduced Echelon Form, but involving matrices over more general rings. In this work we formalise an algorithm to compute the Hermite Normal Form of a matrix by means of elementary row operations, taking advantage of the Echelon Form AFP entry. We have proven the correctness of such an algorithm and refined it to immutable arrays. Furthermore, we have also formalised the uniqueness of the Hermite Normal Form of a matrix. Code can be exported and some examples of execution involving integer matrices and polynomial matrices are presented as well.</div></td>
+ <td class="abstract mathjax_process">Hermite Normal Form is a canonical matrix analogue of Reduced Echelon Form, but involving matrices over more general rings. In this work we formalise an algorithm to compute the Hermite Normal Form of a matrix by means of elementary row operations, taking advantage of the Echelon Form AFP entry. We have proven the correctness of such an algorithm and refined it to immutable arrays. Furthermore, we have also formalised the uniqueness of the Hermite Normal Form of a matrix. Code can be exported and some examples of execution involving integer matrices and polynomial matrices are presented as well.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Hermite-AFP,
author = {Jose Divasón and Jesús Aransay},
title = {Hermite Normal Form},
journal = {Archive of Formal Proofs},
month = jul,
year = 2015,
note = {\url{http://isa-afp.org/entries/Hermite.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Echelon_Form.html">Echelon_Form</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hermite/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Hermite/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hermite/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Hermite-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Hermite-2019-06-11.tar.gz">
afp-Hermite-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Hermite-2018-08-16.tar.gz">
afp-Hermite-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Hermite-2017-10-10.tar.gz">
afp-Hermite-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Hermite-2016-12-17.tar.gz">
afp-Hermite-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Hermite-2016-02-22.tar.gz">
afp-Hermite-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Hermite-2015-07-07.tar.gz">
afp-Hermite-2015-07-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Hidden_Markov_Models.html b/web/entries/Hidden_Markov_Models.html
--- a/web/entries/Hidden_Markov_Models.html
+++ b/web/entries/Hidden_Markov_Models.html
@@ -1,209 +1,209 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hidden Markov Models - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>idden
<font class="first">M</font>arkov
<font class="first">M</font>odels
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Hidden Markov Models</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-05-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry contains a formalization of hidden Markov models [3] based
on Johannes Hölzl's formalization of discrete time Markov chains
[1]. The basic definitions are provided and the correctness of two
main (dynamic programming) algorithms for hidden Markov models is
proved: the forward algorithm for computing the likelihood of an
observed sequence, and the Viterbi algorithm for decoding the most
probable hidden state sequence. The Viterbi algorithm is made
executable including memoization. Hidden Markov models have various
applications in natural language processing. For an introduction see
-Jurafsky and Martin [2].</div></td>
+Jurafsky and Martin [2].</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Hidden_Markov_Models-AFP,
author = {Simon Wimmer},
title = {Hidden Markov Models},
journal = {Archive of Formal Proofs},
month = may,
year = 2018,
note = {\url{http://isa-afp.org/entries/Hidden_Markov_Models.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Markov_Models.html">Markov_Models</a>, <a href="Monad_Memo_DP.html">Monad_Memo_DP</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hidden_Markov_Models/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Hidden_Markov_Models/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hidden_Markov_Models/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Hidden_Markov_Models-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Hidden_Markov_Models-2019-06-11.tar.gz">
afp-Hidden_Markov_Models-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Hidden_Markov_Models-2018-08-16.tar.gz">
afp-Hidden_Markov_Models-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Hidden_Markov_Models-2018-05-25.tar.gz">
afp-Hidden_Markov_Models-2018-05-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Higher_Order_Terms.html b/web/entries/Higher_Order_Terms.html
--- a/web/entries/Higher_Order_Terms.html
+++ b/web/entries/Higher_Order_Terms.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An Algebra for Higher-Order Terms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>n
<font class="first">A</font>lgebra
for
<font class="first">H</font>igher-Order
<font class="first">T</font>erms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">An Algebra for Higher-Order Terms</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
Yu Zhang
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-01-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In this formalization, I introduce a higher-order term algebra,
generalizing the notions of free variables, matching, and
substitution. The need arose from the work on a <a
href="http://dx.doi.org/10.1007/978-3-319-89884-1_35">verified
compiler from Isabelle to CakeML</a>. Terms can be thought of as
consisting of a generic (free variables, constants, application) and
a specific part. As example applications, this entry provides
instantiations for de-Bruijn terms, terms with named variables, and
<a
href="https://www.isa-afp.org/entries/Lambda_Free_RPOs.html">Blanchette’s
&lambda;-free higher-order terms</a>. Furthermore, I
implement translation functions between de-Bruijn terms and named
-terms and prove their correctness.</div></td>
+terms and prove their correctness.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Higher_Order_Terms-AFP,
author = {Lars Hupel},
title = {An Algebra for Higher-Order Terms},
journal = {Archive of Formal Proofs},
month = jan,
year = 2019,
note = {\url{http://isa-afp.org/entries/Higher_Order_Terms.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Datatype_Order_Generator.html">Datatype_Order_Generator</a>, <a href="Lambda_Free_RPOs.html">Lambda_Free_RPOs</a>, <a href="List-Index.html">List-Index</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CakeML_Codegen.html">CakeML_Codegen</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Higher_Order_Terms/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Higher_Order_Terms/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Higher_Order_Terms/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Higher_Order_Terms-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Higher_Order_Terms-2019-06-11.tar.gz">
afp-Higher_Order_Terms-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Higher_Order_Terms-2019-01-15.tar.gz">
afp-Higher_Order_Terms-2019-01-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Hoare_Time.html b/web/entries/Hoare_Time.html
--- a/web/entries/Hoare_Time.html
+++ b/web/entries/Hoare_Time.html
@@ -1,215 +1,215 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hoare Logics for Time Bounds - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>oare
<font class="first">L</font>ogics
for
<font class="first">T</font>ime
<font class="first">B</font>ounds
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Hoare Logics for Time Bounds</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.in.tum.de/~haslbema">Maximilian P. L. Haslbeck</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-02-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We study three different Hoare logics for reasoning about time bounds
of imperative programs and formalize them in Isabelle/HOL: a classical
Hoare-like logic due to Nielson, a logic with potentials due to
Carbonneaux <i>et al.</i> and a <i>separation
logic</i> following work by Atkey, Charguéraud and Pottier.
These logics are formally shown to be sound and complete. Verification
condition generators are developed and are shown sound and complete
too. We also consider variants of the systems where we abstract from
multiplicative constants in the running time bounds, thus supporting a
big-O style of reasoning. Finally we compare the expressive power of
-the three systems.</div></td>
+the three systems.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Hoare_Time-AFP,
author = {Maximilian P. L. Haslbeck and Tobias Nipkow},
title = {Hoare Logics for Time Bounds},
journal = {Archive of Formal Proofs},
month = feb,
year = 2018,
note = {\url{http://isa-afp.org/entries/Hoare_Time.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Separation_Algebra.html">Separation_Algebra</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hoare_Time/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Hoare_Time/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hoare_Time/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Hoare_Time-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Hoare_Time-2019-06-11.tar.gz">
afp-Hoare_Time-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Hoare_Time-2018-08-16.tar.gz">
afp-Hoare_Time-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Hoare_Time-2018-02-26.tar.gz">
afp-Hoare_Time-2018-02-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/HotelKeyCards.html b/web/entries/HotelKeyCards.html
--- a/web/entries/HotelKeyCards.html
+++ b/web/entries/HotelKeyCards.html
@@ -1,274 +1,274 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hotel Key Card System - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>otel
<font class="first">K</font>ey
<font class="first">C</font>ard
<font class="first">S</font>ystem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Hotel Key Card System</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2006-09-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Two models of an electronic hotel key card system are contrasted: a state based and a trace based one. Both are defined, verified, and proved equivalent in the theorem prover Isabelle/HOL. It is shown that if a guest follows a certain safety policy regarding her key cards, she can be sure that nobody but her can enter her room.</div></td>
+ <td class="abstract mathjax_process">Two models of an electronic hotel key card system are contrasted: a state based and a trace based one. Both are defined, verified, and proved equivalent in the theorem prover Isabelle/HOL. It is shown that if a guest follows a certain safety policy regarding her key cards, she can be sure that nobody but her can enter her room.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{HotelKeyCards-AFP,
author = {Tobias Nipkow},
title = {Hotel Key Card System},
journal = {Archive of Formal Proofs},
month = sep,
year = 2006,
note = {\url{http://isa-afp.org/entries/HotelKeyCards.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HotelKeyCards/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/HotelKeyCards/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HotelKeyCards/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-HotelKeyCards-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-HotelKeyCards-2019-06-11.tar.gz">
afp-HotelKeyCards-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-HotelKeyCards-2018-08-16.tar.gz">
afp-HotelKeyCards-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-HotelKeyCards-2017-10-10.tar.gz">
afp-HotelKeyCards-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-HotelKeyCards-2016-12-17.tar.gz">
afp-HotelKeyCards-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-HotelKeyCards-2016-02-22.tar.gz">
afp-HotelKeyCards-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-HotelKeyCards-2015-05-27.tar.gz">
afp-HotelKeyCards-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-HotelKeyCards-2014-08-28.tar.gz">
afp-HotelKeyCards-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-HotelKeyCards-2013-12-11.tar.gz">
afp-HotelKeyCards-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-HotelKeyCards-2013-11-17.tar.gz">
afp-HotelKeyCards-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-HotelKeyCards-2013-02-16.tar.gz">
afp-HotelKeyCards-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-HotelKeyCards-2012-05-24.tar.gz">
afp-HotelKeyCards-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-HotelKeyCards-2011-10-11.tar.gz">
afp-HotelKeyCards-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-HotelKeyCards-2011-02-11.tar.gz">
afp-HotelKeyCards-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-HotelKeyCards-2010-07-01.tar.gz">
afp-HotelKeyCards-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-HotelKeyCards-2009-12-12.tar.gz">
afp-HotelKeyCards-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-HotelKeyCards-2009-04-29.tar.gz">
afp-HotelKeyCards-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-HotelKeyCards-2008-06-10.tar.gz">
afp-HotelKeyCards-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-HotelKeyCards-2007-11-27.tar.gz">
afp-HotelKeyCards-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Huffman.html b/web/entries/Huffman.html
--- a/web/entries/Huffman.html
+++ b/web/entries/Huffman.html
@@ -1,285 +1,285 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Textbook Proof of Huffman's Algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">T</font>extbook
<font class="first">P</font>roof
of
<font class="first">H</font>uffman's
<font class="first">A</font>lgorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Textbook Proof of Huffman's Algorithm</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-10-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Huffman's algorithm is a procedure for constructing a binary tree with minimum weighted path length. This report presents a formal proof of the correctness of Huffman's algorithm written using Isabelle/HOL. Our proof closely follows the sketches found in standard algorithms textbooks, uncovering a few snags in the process. Another distinguishing feature of our formalization is the use of custom induction rules to help Isabelle's automatic tactics, leading to very short proofs for most of the lemmas.</div></td>
+ <td class="abstract mathjax_process">Huffman's algorithm is a procedure for constructing a binary tree with minimum weighted path length. This report presents a formal proof of the correctness of Huffman's algorithm written using Isabelle/HOL. Our proof closely follows the sketches found in standard algorithms textbooks, uncovering a few snags in the process. Another distinguishing feature of our formalization is the use of custom induction rules to help Isabelle's automatic tactics, leading to very short proofs for most of the lemmas.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Huffman-AFP,
author = {Jasmin Christian Blanchette},
title = {The Textbook Proof of Huffman's Algorithm},
journal = {Archive of Formal Proofs},
month = oct,
year = 2008,
note = {\url{http://isa-afp.org/entries/Huffman.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CakeML_Codegen.html">CakeML_Codegen</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Huffman/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Huffman/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Huffman/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Huffman-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Huffman-2019-06-11.tar.gz">
afp-Huffman-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Huffman-2018-08-16.tar.gz">
afp-Huffman-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Huffman-2017-10-10.tar.gz">
afp-Huffman-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Huffman-2016-12-17.tar.gz">
afp-Huffman-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Huffman-2016-02-22.tar.gz">
afp-Huffman-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Huffman-2015-05-27.tar.gz">
afp-Huffman-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Huffman-2014-08-28.tar.gz">
afp-Huffman-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Huffman-2013-12-11.tar.gz">
afp-Huffman-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Huffman-2013-11-17.tar.gz">
afp-Huffman-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Huffman-2013-03-02.tar.gz">
afp-Huffman-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Huffman-2013-02-16.tar.gz">
afp-Huffman-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Huffman-2012-05-24.tar.gz">
afp-Huffman-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Huffman-2011-10-11.tar.gz">
afp-Huffman-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Huffman-2011-02-11.tar.gz">
afp-Huffman-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Huffman-2010-07-01.tar.gz">
afp-Huffman-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Huffman-2009-12-12.tar.gz">
afp-Huffman-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Huffman-2009-04-29.tar.gz">
afp-Huffman-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Huffman-2008-10-21.tar.gz">
afp-Huffman-2008-10-21.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Huffman-2008-10-15.tar.gz">
afp-Huffman-2008-10-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Hybrid_Logic.html b/web/entries/Hybrid_Logic.html
--- a/web/entries/Hybrid_Logic.html
+++ b/web/entries/Hybrid_Logic.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalizing a Seligman-Style Tableau System for Hybrid Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalizing
a
<font class="first">S</font>eligman-Style
<font class="first">T</font>ableau
<font class="first">S</font>ystem
for
<font class="first">H</font>ybrid
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalizing a Seligman-Style Tableau System for Hybrid Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/ahfrom/">Asta Halkjær From</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-12-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This work is a formalization of soundness and completeness proofs
for a Seligman-style tableau system for hybrid logic. The completeness
result is obtained via a synthetic approach using maximally
consistent sets of tableau blocks. The formalization differs from
the cited work in a few ways. First, to avoid the need to backtrack in
the construction of a tableau, the formalized system has no unnamed
initial segment, and therefore no Name rule. Second, I show that the
full Bridge rule is admissible in the system. Third, I start from rules
restricted to only extend the branch with new formulas, including only
witnessing diamonds that are not already witnessed, and show that
the unrestricted rules are admissible. Similarly, I start from simpler
versions of the @-rules and show the general ones admissible. Finally,
the GoTo rule is restricted using a notion of coins such that each
application consumes a coin and coins are earned through applications of
the remaining rules. I show that if a branch can be closed then it can
be closed starting from a single coin. These restrictions are imposed
-to rule out some means of nontermination.</div></td>
+to rule out some means of nontermination.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Hybrid_Logic-AFP,
author = {Asta Halkjær From},
title = {Formalizing a Seligman-Style Tableau System for Hybrid Logic},
journal = {Archive of Formal Proofs},
month = dec,
year = 2019,
note = {\url{http://isa-afp.org/entries/Hybrid_Logic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hybrid_Logic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Hybrid_Logic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hybrid_Logic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Hybrid_Logic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Hybrid_Logic-2020-01-07.tar.gz">
afp-Hybrid_Logic-2020-01-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
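The @-rules and the "witnessing diamonds" mentioned in the Hybrid_Logic abstract above refer to the standard hybrid-logic constructs; as a textbook sketch (not the entry's Isabelle definitions), for a model M = (W, R, V), an assignment g of nominals to worlds and a world w:

    M, g, w \models i                 \iff  g(i) = w
    M, g, w \models @_i \varphi       \iff  M, g, g(i) \models \varphi
    M, g, w \models \Diamond \varphi  \iff  \exists v.\ w R v \wedge M, g, v \models \varphi

Witnessing a diamond on a tableau branch means introducing a fresh nominal for the successor world v demanded by the last clause, which is why the restricted rules only witness diamonds that are not witnessed already.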
diff --git a/web/entries/Hybrid_Multi_Lane_Spatial_Logic.html b/web/entries/Hybrid_Multi_Lane_Spatial_Logic.html
--- a/web/entries/Hybrid_Multi_Lane_Spatial_Logic.html
+++ b/web/entries/Hybrid_Multi_Lane_Spatial_Logic.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hybrid Multi-Lane Spatial Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>ybrid
<font class="first">M</font>ulti-Lane
<font class="first">S</font>patial
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Hybrid Multi-Lane Spatial Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Sven Linker (s /dot/ linker /at/ liverpool /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-11-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a semantic embedding of a spatio-temporal multi-modal
logic, specifically defined to reason about motorway traffic, into
Isabelle/HOL. The semantic model is an abstraction of a motorway,
emphasising local spatial properties, and parameterised by the types
of sensors deployed in the vehicles. We use the logic to define
controller constraints to ensure safety, i.e., the absence of
collisions on the motorway. After proving safety with a restrictive
definition of sensors, we relax these assumptions and show how to
-amend the controller constraints to still guarantee safety.</div></td>
+amend the controller constraints to still guarantee safety.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Hybrid_Multi_Lane_Spatial_Logic-AFP,
author = {Sven Linker},
title = {Hybrid Multi-Lane Spatial Logic},
journal = {Archive of Formal Proofs},
month = nov,
year = 2017,
note = {\url{http://isa-afp.org/entries/Hybrid_Multi_Lane_Spatial_Logic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hybrid_Multi_Lane_Spatial_Logic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Hybrid_Multi_Lane_Spatial_Logic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hybrid_Multi_Lane_Spatial_Logic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Hybrid_Multi_Lane_Spatial_Logic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Hybrid_Multi_Lane_Spatial_Logic-2019-06-11.tar.gz">
afp-Hybrid_Multi_Lane_Spatial_Logic-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Hybrid_Multi_Lane_Spatial_Logic-2018-08-16.tar.gz">
afp-Hybrid_Multi_Lane_Spatial_Logic-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Hybrid_Multi_Lane_Spatial_Logic-2017-11-09.tar.gz">
afp-Hybrid_Multi_Lane_Spatial_Logic-2017-11-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Hybrid_Systems_VCs.html b/web/entries/Hybrid_Systems_VCs.html
--- a/web/entries/Hybrid_Systems_VCs.html
+++ b/web/entries/Hybrid_Systems_VCs.html
@@ -1,201 +1,201 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verification Components for Hybrid Systems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erification
<font class="first">C</font>omponents
for
<font class="first">H</font>ybrid
<font class="first">S</font>ystems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verification Components for Hybrid Systems</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Jonathan Julian Huerta y Munive
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-09-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
These components formalise a semantic framework for the deductive
verification of hybrid systems. They support reasoning about
continuous evolutions of hybrid programs in the style of differential
dynamic logic. Vector fields or flows model these evolutions, and
their verification is done with invariants for the former or orbits
for the latter. Laws of modal Kleene algebra or categorical predicate
transformers implement the verification condition generation. Examples
-show the approach at work.</div></td>
+show the approach at work.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Hybrid_Systems_VCs-AFP,
author = {Jonathan Julian Huerta y Munive},
title = {Verification Components for Hybrid Systems},
journal = {Archive of Formal Proofs},
month = sep,
year = 2019,
note = {\url{http://isa-afp.org/entries/Hybrid_Systems_VCs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="KAD.html">KAD</a>, <a href="Ordinary_Differential_Equations.html">Ordinary_Differential_Equations</a>, <a href="Transformer_Semantics.html">Transformer_Semantics</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hybrid_Systems_VCs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Hybrid_Systems_VCs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Hybrid_Systems_VCs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Hybrid_Systems_VCs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Hybrid_Systems_VCs-2019-09-10.tar.gz">
afp-Hybrid_Systems_VCs-2019-09-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
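The two styles of reasoning mentioned in the Hybrid_Systems_VCs abstract above, invariants for vector fields and orbits for flows, can be summarised in differential-dynamic-logic style; this is a sketch of the standard notions, not the entry's Isabelle statements. Writing \varphi_t(s) for the flow of a vector field f started in state s, an orbit-based specification of a continuous evolution is

    \{P\}\ x' = f(x)\ \{Q\}  \iff  \forall s \in P.\ \forall t \ge 0.\ Q(\varphi_t(s)),

and the invariant-based rule avoids solving the system explicitly:

    P \subseteq I,  I preserved by x' = f(x),  I \subseteq Q  \Longrightarrow  \{P\}\ x' = f(x)\ \{Q\}.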
diff --git a/web/entries/HyperCTL.html b/web/entries/HyperCTL.html
--- a/web/entries/HyperCTL.html
+++ b/web/entries/HyperCTL.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A shallow embedding of HyperCTL* - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
shallow
embedding
of
<font class="first">H</font>yperCTL*
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A shallow embedding of HyperCTL*</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.react.uni-saarland.de/people/rabe.html">Markus N. Rabe</a>,
Peter Lammich and
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize HyperCTL*, a temporal logic for expressing security properties. We
+ <td class="abstract mathjax_process">We formalize HyperCTL*, a temporal logic for expressing security properties. We
first define a shallow embedding of HyperCTL*, within which we prove inductive and coinductive
rules for the operators. Then we show that a HyperCTL* formula captures Goguen-Meseguer
noninterference, a landmark information flow property. We also define a deep embedding and
connect it to the shallow embedding by a denotational semantics, for which we prove sanity w.r.t.
dependence on the free variables. Finally, we show that under some finiteness assumptions about
-the model, noninterference is given by a (finitary) syntactic formula.</div></td>
+the model, noninterference is given by a (finitary) syntactic formula.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{HyperCTL-AFP,
author = {Markus N. Rabe and Peter Lammich and Andrei Popescu},
title = {A shallow embedding of HyperCTL*},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/HyperCTL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HyperCTL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/HyperCTL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/HyperCTL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-HyperCTL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-HyperCTL-2019-06-11.tar.gz">
afp-HyperCTL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-HyperCTL-2018-08-16.tar.gz">
afp-HyperCTL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-HyperCTL-2017-10-10.tar.gz">
afp-HyperCTL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-HyperCTL-2016-12-17.tar.gz">
afp-HyperCTL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-HyperCTL-2016-02-22.tar.gz">
afp-HyperCTL-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-HyperCTL-2015-05-27.tar.gz">
afp-HyperCTL-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-HyperCTL-2014-08-28.tar.gz">
afp-HyperCTL-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-HyperCTL-2014-04-16.tar.gz">
afp-HyperCTL-2014-04-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/IEEE_Floating_Point.html b/web/entries/IEEE_Floating_Point.html
--- a/web/entries/IEEE_Floating_Point.html
+++ b/web/entries/IEEE_Floating_Point.html
@@ -1,263 +1,263 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Formal Model of IEEE Floating Point Arithmetic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ormal
<font class="first">M</font>odel
of
<font class="first">I</font>EEE
<font class="first">F</font>loating
<font class="first">P</font>oint
<font class="first">A</font>rithmetic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Formal Model of IEEE Floating Point Arithmetic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lei Yu (ly271 /at/ cam /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">
Contributors:
</td>
<td class="data">
Fabian Hellauer (hellauer /at/ in /dot/ tum /dot/ de) and
<a href="http://www21.in.tum.de/~immler">Fabian Immler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-07-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This development provides a formal model of IEEE-754 floating-point arithmetic. This formalization, including formal specification of the standard and proofs of important properties of floating-point arithmetic, forms the foundation for verifying programs with floating-point computation. There is also a code generation setup for floats so that we can execute programs using this formalization in functional programming languages.</div></td>
+ <td class="abstract mathjax_process">This development provides a formal model of IEEE-754 floating-point arithmetic. This formalization, including formal specification of the standard and proofs of important properties of floating-point arithmetic, forms the foundation for verifying programs with floating-point computation. There is also a code generation setup for floats so that we can execute programs using this formalization in functional programming languages.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2017-09-25]: Added conversions from and to software floating point numbers
(by Fabian Hellauer and Fabian Immler).<br>
[2018-02-05]: 'Modernized' representation following the formalization in HOL4:
former "float_format" and predicate "is_valid" is now encoded in a type "('e, 'f) float" where
'e and 'f encode the size of exponent and fraction.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{IEEE_Floating_Point-AFP,
author = {Lei Yu},
title = {A Formal Model of IEEE Floating Point Arithmetic},
journal = {Archive of Formal Proofs},
month = jul,
year = 2013,
note = {\url{http://isa-afp.org/entries/IEEE_Floating_Point.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Word_Lib.html">Word_Lib</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CakeML.html">CakeML</a>, <a href="MFODL_Monitor_Optimized.html">MFODL_Monitor_Optimized</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IEEE_Floating_Point/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/IEEE_Floating_Point/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IEEE_Floating_Point/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-IEEE_Floating_Point-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-IEEE_Floating_Point-2019-06-11.tar.gz">
afp-IEEE_Floating_Point-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-IEEE_Floating_Point-2018-08-16.tar.gz">
afp-IEEE_Floating_Point-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-IEEE_Floating_Point-2017-10-10.tar.gz">
afp-IEEE_Floating_Point-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-IEEE_Floating_Point-2016-12-17.tar.gz">
afp-IEEE_Floating_Point-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-IEEE_Floating_Point-2016-02-22.tar.gz">
afp-IEEE_Floating_Point-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-IEEE_Floating_Point-2015-05-27.tar.gz">
afp-IEEE_Floating_Point-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-IEEE_Floating_Point-2014-08-28.tar.gz">
afp-IEEE_Floating_Point-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-IEEE_Floating_Point-2013-12-11.tar.gz">
afp-IEEE_Floating_Point-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-IEEE_Floating_Point-2013-11-17.tar.gz">
afp-IEEE_Floating_Point-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-IEEE_Floating_Point-2013-07-28.tar.gz">
afp-IEEE_Floating_Point-2013-07-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
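The change history above describes the representation as a type "('e, 'f) float", where 'e and 'f give the exponent and fraction widths. The following Python sketch shows what that parametrisation means when decoding a bit pattern; it is ordinary IEEE-754 decoding written for illustration, and the decode function is hypothetical, not part of the entry's generated code:

    def decode(bits, e, f):
        """Decode an IEEE-754 bit pattern with e exponent bits and f fraction bits."""
        sign = (bits >> (e + f)) & 1
        exp = (bits >> f) & ((1 << e) - 1)
        frac = bits & ((1 << f) - 1)
        bias = (1 << (e - 1)) - 1
        if exp == (1 << e) - 1:                      # infinities and NaNs
            return float('nan') if frac else (-1.0) ** sign * float('inf')
        if exp == 0:                                 # zero and subnormal numbers
            return (-1.0) ** sign * frac * 2.0 ** (1 - bias - f)
        return (-1.0) ** sign * (1 + frac / 2.0 ** f) * 2.0 ** (exp - bias)

    # Single precision corresponds to ('e, 'f) = (8, 23); 0x3FC00000 encodes 1.5.
    assert decode(0x3FC00000, 8, 23) == 1.5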
diff --git a/web/entries/IMAP-CRDT.html b/web/entries/IMAP-CRDT.html
--- a/web/entries/IMAP-CRDT.html
+++ b/web/entries/IMAP-CRDT.html
@@ -1,215 +1,215 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The IMAP CmRDT - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">I</font>MAP
<font class="first">C</font>mRDT
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The IMAP CmRDT</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Tim Jungnickel (tim /dot/ jungnickel /at/ tu-berlin /dot/ de),
Lennart Oldenburg and
Matthias Loibl
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-11-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We provide our Isabelle/HOL formalization of a Conflict-free
Replicated Datatype for Internet Message Access Protocol commands.
We show that Strong Eventual Consistency (SEC) is guaranteed
by proving the commutativity of concurrent operations. We base our
formalization on the recently proposed "framework for
establishing Strong Eventual Consistency for Conflict-free Replicated
Datatypes" (AFP.CRDT) from Gomes et al. Hence, we provide an
additional example of how this framework can be used
-to design and prove CRDTs.</div></td>
+to design and prove CRDTs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{IMAP-CRDT-AFP,
author = {Tim Jungnickel and Lennart Oldenburg and Matthias Loibl},
title = {The IMAP CmRDT},
journal = {Archive of Formal Proofs},
month = nov,
year = 2017,
note = {\url{http://isa-afp.org/entries/IMAP-CRDT.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CRDT.html">CRDT</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IMAP-CRDT/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/IMAP-CRDT/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IMAP-CRDT/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-IMAP-CRDT-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-IMAP-CRDT-2020-01-14.tar.gz">
afp-IMAP-CRDT-2020-01-14.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-IMAP-CRDT-2019-06-11.tar.gz">
afp-IMAP-CRDT-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-IMAP-CRDT-2018-08-16.tar.gz">
afp-IMAP-CRDT-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-IMAP-CRDT-2017-11-10.tar.gz">
afp-IMAP-CRDT-2017-11-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
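The commutativity behind Strong Eventual Consistency in the IMAP-CRDT abstract above can be illustrated with a generic observed-remove set; the Python below is a hypothetical toy CmRDT in the spirit of the cited framework, not the entry's IMAP datatype. Concurrent add and remove commute because a remove deletes only the uniquely tagged additions it has observed:

    class ORSet:
        """Observed-remove set: adds carry unique tags, removes delete only observed tags."""
        def __init__(self, tags=None):
            self.tags = {k: set(v) for k, v in (tags or {}).items()}   # element -> tags

        def add(self, elem, tag):                # effect of a delivered 'add' operation
            self.tags.setdefault(elem, set()).add(tag)

        def remove(self, elem, observed):        # effect of a delivered 'remove' operation
            self.tags[elem] = self.tags.get(elem, set()) - observed

        def value(self):
            return {e for e, ts in self.tags.items() if ts}

    # Both replicas start from the same state: 'x' was added with tag ('r1', 1).
    start = {'x': {('r1', 1)}}
    a, b = ORSet(start), ORSet(start)
    # Concurrent operations: remove 'x' (observing only ('r1', 1)) and re-add 'x'
    # with a fresh tag ('r2', 1), delivered in opposite orders at the two replicas.
    a.remove('x', {('r1', 1)}); a.add('x', ('r2', 1))
    b.add('x', ('r2', 1));      b.remove('x', {('r1', 1)})
    assert a.tags == b.tags and a.value() == b.value() == {'x'}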
diff --git a/web/entries/IMO2019.html b/web/entries/IMO2019.html
--- a/web/entries/IMO2019.html
+++ b/web/entries/IMO2019.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Selected Problems from the International Mathematical Olympiad 2019 - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>elected
<font class="first">P</font>roblems
from
the
<font class="first">I</font>nternational
<font class="first">M</font>athematical
<font class="first">O</font>lympiad
<font class="first">2</font>019
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Selected Problems from the International Mathematical Olympiad 2019</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-08-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry contains formalisations of the answers to three of
the six problems of the International Mathematical Olympiad 2019,
namely Q1, Q4, and Q5.</p> <p>The reason why these
problems were chosen is that they are particularly amenable to
formalisation: they can be solved with minimal use of libraries. The
remaining three concern geometry and graph theory, which, in the
author's opinion, are more difficult to formalise resp. require a
-more complex library.</p></div></td>
+more complex library.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{IMO2019-AFP,
author = {Manuel Eberl},
title = {Selected Problems from the International Mathematical Olympiad 2019},
journal = {Archive of Formal Proofs},
month = aug,
year = 2019,
note = {\url{http://isa-afp.org/entries/IMO2019.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Prime_Distribution_Elementary.html">Prime_Distribution_Elementary</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IMO2019/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/IMO2019/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IMO2019/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-IMO2019-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-IMO2019-2019-08-06.tar.gz">
afp-IMO2019-2019-08-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/IMP2.html b/web/entries/IMP2.html
--- a/web/entries/IMP2.html
+++ b/web/entries/IMP2.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>IMP2 – Simple Program Verification in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>MP2
<font class="first">–</font>
<font class="first">S</font>imple
<font class="first">P</font>rogram
<font class="first">V</font>erification
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">IMP2 – Simple Program Verification in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-01-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
IMP2 is a simple imperative language together with Isabelle tooling to
create a program verification environment in Isabelle/HOL. The tools
include a C-like syntax, a verification condition generator, and
Isabelle commands for the specification of programs. The framework is
modular, i.e., it allows easy reuse of already proved programs within
larger programs. This entry comes with a quickstart guide and a large
collection of examples, spanning basic algorithms with simple proofs
to more advanced algorithms and proof techniques like data refinement.
Some highlights from the examples are: <ul> <li>Bisection
Square Root, </li> <li>Extended Euclid, </li>
<li>Exponentiation by Squaring, </li> <li>Binary
Search, </li> <li>Insertion Sort, </li>
<li>Quicksort, </li> <li>Depth First Search.
</li> </ul> The abstract syntax and semantics are very
simple and well-documented. They are suitable to be used in a course,
as an extension to the IMP language that comes with the Isabelle
distribution. While this entry is limited to a simple imperative
-language, the ideas could be extended to more sophisticated languages.</div></td>
+language, the ideas could be extended to more sophisticated languages.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{IMP2-AFP,
author = {Peter Lammich and Simon Wimmer},
title = {IMP2 – Simple Program Verification in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = jan,
year = 2019,
note = {\url{http://isa-afp.org/entries/IMP2.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="IMP2_Binary_Heap.html">IMP2_Binary_Heap</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IMP2/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/IMP2/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IMP2/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-IMP2-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-IMP2-2019-06-11.tar.gz">
afp-IMP2-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-IMP2-2019-01-15.tar.gz">
afp-IMP2-2019-01-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
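One of the examples listed in the IMP2 abstract above, exponentiation by squaring, is small enough to sketch together with the loop invariant a verification condition generator would have to discharge; the Python below illustrates the algorithm and its invariant and is not IMP2 syntax:

    def power(x: int, n: int) -> int:
        """Exponentiation by squaring: computes x ** n for n >= 0."""
        assert n >= 0
        result, base, k = 1, x, n
        while k > 0:
            # Loop invariant: result * base ** k == x ** n, with k >= 0 and strictly decreasing.
            assert result * base ** k == x ** n
            if k % 2 == 1:
                result *= base
            base *= base
            k //= 2
        return result

    assert power(3, 10) == 3 ** 10 and power(2, 0) == 1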
diff --git a/web/entries/IMP2_Binary_Heap.html b/web/entries/IMP2_Binary_Heap.html
--- a/web/entries/IMP2_Binary_Heap.html
+++ b/web/entries/IMP2_Binary_Heap.html
@@ -1,201 +1,201 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Binary Heaps for IMP2 - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>inary
<font class="first">H</font>eaps
for
<font class="first">I</font>MP2
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Binary Heaps for IMP2</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Simon Griebel
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-06-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In this submission, array-based binary minimum heaps are formalized.
The correctness of the following heap operations is proved: insert,
get-min, delete-min and make-heap. These are then used to verify an
in-place heapsort. The formalization is based on IMP2, an imperative
program verification framework implemented in Isabelle/HOL. The
verified heap functions are iterative versions of the partly recursive
functions found in "Algorithms and Data Structures – The Basic
Toolbox" by K. Mehlhorn and P. Sanders and "Introduction to
Algorithms" by T. H. Cormen, C. E. Leiserson, R. L. Rivest and C.
-Stein.</div></td>
+Stein.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{IMP2_Binary_Heap-AFP,
author = {Simon Griebel},
title = {Binary Heaps for IMP2},
journal = {Archive of Formal Proofs},
month = jun,
year = 2019,
note = {\url{http://isa-afp.org/entries/IMP2_Binary_Heap.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="IMP2.html">IMP2</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IMP2_Binary_Heap/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/IMP2_Binary_Heap/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IMP2_Binary_Heap/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-IMP2_Binary_Heap-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-IMP2_Binary_Heap-2019-06-13.tar.gz">
afp-IMP2_Binary_Heap-2019-06-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
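The operations verified in the IMP2_Binary_Heap entry above are the standard array-based ones; the Python below sketches iterative sift-down, make-heap and delete-min for illustration only (the entry's in-place heapsort builds on these operations, but this is not its IMP2 code):

    def sift_down(a, i, n):
        """Restore the min-heap property below index i in a[0:n] (children at 2*i+1, 2*i+2)."""
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and a[left] < a[smallest]:
                smallest = left
            if right < n and a[right] < a[smallest]:
                smallest = right
            if smallest == i:
                return
            a[i], a[smallest] = a[smallest], a[i]
            i = smallest

    def make_heap(a):
        """Build a min-heap in place, bottom-up."""
        for i in range(len(a) // 2 - 1, -1, -1):
            sift_down(a, i, len(a))

    def delete_min(a, n):
        """Remove and return the minimum of the heap a[0:n], together with the new size."""
        minimum, a[0] = a[0], a[n - 1]
        sift_down(a, 0, n - 1)
        return minimum, n - 1

    xs = [5, 3, 8, 1, 9, 2]
    make_heap(xs)
    m, size = delete_min(xs, len(xs))
    assert m == 1 and sorted(xs[:size]) == [2, 3, 5, 8, 9]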
diff --git a/web/entries/IP_Addresses.html b/web/entries/IP_Addresses.html
--- a/web/entries/IP_Addresses.html
+++ b/web/entries/IP_Addresses.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>IP Addresses - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>P
<font class="first">A</font>ddresses
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">IP Addresses</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>,
<a href="http://liftm.de">Julius Michaelis</a> and
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry contains a definition of IP addresses and a library to work
with them. Generic IP addresses are modeled as machine words of
arbitrary length. Derived from this generic definition, IPv4 addresses
are 32-bit machine words and IPv6 addresses are 128-bit words.
Additionally, IPv4 addresses can be represented in dot-decimal
notation and IPv6 addresses in (compressed) colon-separated notation.
We support toString functions and parsers for both notations. Sets of
IP addresses can be represented with a netmask (e.g.
192.168.0.0/255.255.0.0) or in CIDR notation (e.g. 192.168.0.0/16). To
provide executable code for set operations on IP address ranges, the
library includes a datatype to work on arbitrary intervals of machine
-words.</div></td>
+words.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{IP_Addresses-AFP,
author = {Cornelius Diekmann and Julius Michaelis and Lars Hupel},
title = {IP Addresses},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/IP_Addresses.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a>, <a href="Word_Lib.html">Word_Lib</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Simple_Firewall.html">Simple_Firewall</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IP_Addresses/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/IP_Addresses/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/IP_Addresses/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-IP_Addresses-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-IP_Addresses-2019-06-11.tar.gz">
afp-IP_Addresses-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-IP_Addresses-2018-08-16.tar.gz">
afp-IP_Addresses-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-IP_Addresses-2017-10-10.tar.gz">
afp-IP_Addresses-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-IP_Addresses-2016-12-17.tar.gz">
afp-IP_Addresses-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-IP_Addresses-2016-06-28.tar.gz">
afp-IP_Addresses-2016-06-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
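Modelling addresses as machine words, as in the IP_Addresses abstract above, makes notation conversion and CIDR membership plain bit arithmetic; the Python below is an illustration of that idea only, not the entry's Isabelle word library:

    def ipv4_of_dotdecimal(s: str) -> int:
        """Parse dot-decimal notation into a 32-bit machine word."""
        a, b, c, d = (int(p) for p in s.split('.'))
        assert all(0 <= x <= 255 for x in (a, b, c, d))
        return (a << 24) | (b << 16) | (c << 8) | d

    def in_cidr(addr: str, net: str, prefix_len: int) -> bool:
        """Membership of addr in the CIDR range net/prefix_len."""
        mask = ((1 << prefix_len) - 1) << (32 - prefix_len) if prefix_len else 0
        return ipv4_of_dotdecimal(addr) & mask == ipv4_of_dotdecimal(net) & mask

    # 192.168.0.0/16 denotes the same address set as netmask 255.255.0.0.
    assert in_cidr('192.168.42.7', '192.168.0.0', 16)
    assert not in_cidr('10.0.0.1', '192.168.0.0', 16)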
diff --git a/web/entries/Imperative_Insertion_Sort.html b/web/entries/Imperative_Insertion_Sort.html
--- a/web/entries/Imperative_Insertion_Sort.html
+++ b/web/entries/Imperative_Insertion_Sort.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Imperative Insertion Sort - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>mperative
<font class="first">I</font>nsertion
<font class="first">S</font>ort
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Imperative Insertion Sort</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-09-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The insertion sort algorithm of Cormen et al. (Introduction to Algorithms) is expressed in Imperative HOL and proved to be correct and terminating. For this purpose we also provide a theory about imperative loop constructs with accompanying induction/invariant rules for proving partial and total correctness. Furthermore, the formalized algorithm is fit for code generation.</div></td>
+ <td class="abstract mathjax_process">The insertion sort algorithm of Cormen et al. (Introduction to Algorithms) is expressed in Imperative HOL and proved to be correct and terminating. For this purpose we also provide a theory about imperative loop constructs with accompanying induction/invariant rules for proving partial and total correctness. Furthermore, the formalized algorithm is fit for code generation.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Imperative_Insertion_Sort-AFP,
author = {Christian Sternagel},
title = {Imperative Insertion Sort},
journal = {Archive of Formal Proofs},
month = sep,
year = 2014,
note = {\url{http://isa-afp.org/entries/Imperative_Insertion_Sort.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Imperative_Insertion_Sort/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Imperative_Insertion_Sort/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Imperative_Insertion_Sort/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Imperative_Insertion_Sort-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Imperative_Insertion_Sort-2019-06-11.tar.gz">
afp-Imperative_Insertion_Sort-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Imperative_Insertion_Sort-2018-08-16.tar.gz">
afp-Imperative_Insertion_Sort-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Imperative_Insertion_Sort-2017-10-10.tar.gz">
afp-Imperative_Insertion_Sort-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Imperative_Insertion_Sort-2016-12-17.tar.gz">
afp-Imperative_Insertion_Sort-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Imperative_Insertion_Sort-2016-02-22.tar.gz">
afp-Imperative_Insertion_Sort-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Imperative_Insertion_Sort-2015-05-27.tar.gz">
afp-Imperative_Insertion_Sort-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Imperative_Insertion_Sort-2014-09-25.tar.gz">
afp-Imperative_Insertion_Sort-2014-09-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
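The algorithm in the Imperative_Insertion_Sort entry above is the classic imperative insertion sort of Cormen et al.; the Python below sketches it with informal invariant comments to show the shape of the partial and total correctness argument (it is not the entry's Imperative HOL code):

    def insertion_sort(a):
        """Sort the list a in place and return it."""
        for j in range(1, len(a)):
            # Outer invariant: a[0:j] contains the original first j elements, sorted.
            key, i = a[j], j - 1
            while i >= 0 and a[i] > key:
                # Every element shifted so far was greater than key.
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        return a

    assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]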
diff --git a/web/entries/Impossible_Geometry.html b/web/entries/Impossible_Geometry.html
--- a/web/entries/Impossible_Geometry.html
+++ b/web/entries/Impossible_Geometry.html
@@ -1,259 +1,259 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Proving the Impossibility of Trisecting an Angle and Doubling the Cube - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>roving
the
<font class="first">I</font>mpossibility
of
<font class="first">T</font>risecting
an
<font class="first">A</font>ngle
and
<font class="first">D</font>oubling
the
<font class="first">C</font>ube
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Proving the Impossibility of Trisecting an Angle and Doubling the Cube</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Ralph Romanos (ralph /dot/ romanos /at/ student /dot/ ecp /dot/ fr) and
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-08-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Squaring the circle, doubling the cube and trisecting an angle, using a compass and straightedge alone, are classic unsolved problems first posed by the ancient Greeks. All three problems were proved to be impossible in the 19th century. The following document presents the proof of the impossibility of solving the latter two problems using Isabelle/HOL, following a proof by Carrega. The proof uses elementary methods: no Galois theory or field extensions. The set of points constructible using a compass and straightedge is defined inductively. Radical expressions, which involve only square roots and arithmetic of rational numbers, are defined, and we find that all constructive points have radical coordinates. Finally, doubling the cube and trisecting certain angles requires solving certain cubic equations that can be proved to have no rational roots. The Isabelle proofs require a great many detailed calculations.</div></td>
+ <td class="abstract mathjax_process">Squaring the circle, doubling the cube and trisecting an angle, using a compass and straightedge alone, are classic unsolved problems first posed by the ancient Greeks. All three problems were proved to be impossible in the 19th century. The following document presents the proof of the impossibility of solving the latter two problems using Isabelle/HOL, following a proof by Carrega. The proof uses elementary methods: no Galois theory or field extensions. The set of points constructible using a compass and straightedge is defined inductively. Radical expressions, which involve only square roots and arithmetic of rational numbers, are defined, and we find that all constructive points have radical coordinates. Finally, doubling the cube and trisecting certain angles requires solving certain cubic equations that can be proved to have no rational roots. The Isabelle proofs require a great many detailed calculations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Impossible_Geometry-AFP,
author = {Ralph Romanos and Lawrence C. Paulson},
title = {Proving the Impossibility of Trisecting an Angle and Doubling the Cube},
journal = {Archive of Formal Proofs},
month = aug,
year = 2012,
note = {\url{http://isa-afp.org/entries/Impossible_Geometry.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Impossible_Geometry/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Impossible_Geometry/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Impossible_Geometry/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Impossible_Geometry-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Impossible_Geometry-2019-06-11.tar.gz">
afp-Impossible_Geometry-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Impossible_Geometry-2018-08-16.tar.gz">
afp-Impossible_Geometry-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Impossible_Geometry-2017-10-10.tar.gz">
afp-Impossible_Geometry-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Impossible_Geometry-2016-12-17.tar.gz">
afp-Impossible_Geometry-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Impossible_Geometry-2016-02-22.tar.gz">
afp-Impossible_Geometry-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Impossible_Geometry-2015-05-27.tar.gz">
afp-Impossible_Geometry-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Impossible_Geometry-2014-08-28.tar.gz">
afp-Impossible_Geometry-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Impossible_Geometry-2013-12-11.tar.gz">
afp-Impossible_Geometry-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Impossible_Geometry-2013-11-17.tar.gz">
afp-Impossible_Geometry-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Impossible_Geometry-2013-02-16.tar.gz">
afp-Impossible_Geometry-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Impossible_Geometry-2012-08-07.tar.gz">
afp-Impossible_Geometry-2012-08-07.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Impossible_Geometry-2012-08-06.tar.gz">
afp-Impossible_Geometry-2012-08-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Incompleteness.html b/web/entries/Incompleteness.html
--- a/web/entries/Incompleteness.html
+++ b/web/entries/Incompleteness.html
@@ -1,239 +1,239 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Gödel's Incompleteness Theorems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">G</font>ödel's
<font class="first">I</font>ncompleteness
<font class="first">T</font>heorems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Gödel's Incompleteness Theorems</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-11-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Gödel's two incompleteness theorems are formalised, following a careful <a href="http://journals.impan.gov.pl/dm/Inf/422-0-1.html">presentation</a> by Swierczkowski, in the theory of <a href="HereditarilyFinite.html">hereditarily finite sets</a>. This represents the first ever machine-assisted proof of the second incompleteness theorem. Compared with traditional formalisations using Peano arithmetic (see e.g. Boolos), coding is simpler, with no need to formalise the notion
+ <td class="abstract mathjax_process">Gödel's two incompleteness theorems are formalised, following a careful <a href="http://journals.impan.gov.pl/dm/Inf/422-0-1.html">presentation</a> by Swierczkowski, in the theory of <a href="HereditarilyFinite.html">hereditarily finite sets</a>. This represents the first ever machine-assisted proof of the second incompleteness theorem. Compared with traditional formalisations using Peano arithmetic (see e.g. Boolos), coding is simpler, with no need to formalise the notion
of multiplication (let alone that of a prime number)
in the formalised calculus upon which the theorem is based.
-However, other technical problems had to be solved in order to complete the argument.</div></td>
+However, other technical problems had to be solved in order to complete the argument.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Incompleteness-AFP,
author = {Lawrence C. Paulson},
title = {Gödel's Incompleteness Theorems},
journal = {Archive of Formal Proofs},
month = nov,
year = 2013,
note = {\url{http://isa-afp.org/entries/Incompleteness.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="HereditarilyFinite.html">HereditarilyFinite</a>, <a href="Nominal2.html">Nominal2</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Surprise_Paradox.html">Surprise_Paradox</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Incompleteness/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Incompleteness/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Incompleteness/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Incompleteness-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Incompleteness-2019-06-11.tar.gz">
afp-Incompleteness-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Incompleteness-2018-08-16.tar.gz">
afp-Incompleteness-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Incompleteness-2017-10-10.tar.gz">
afp-Incompleteness-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Incompleteness-2016-12-17.tar.gz">
afp-Incompleteness-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Incompleteness-2016-02-22.tar.gz">
afp-Incompleteness-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Incompleteness-2015-05-27.tar.gz">
afp-Incompleteness-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Incompleteness-2014-08-28.tar.gz">
afp-Incompleteness-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Incompleteness-2013-12-11.tar.gz">
afp-Incompleteness-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Incompleteness-2013-12-02.tar.gz">
afp-Incompleteness-2013-12-02.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Incompleteness-2013-11-17.tar.gz">
afp-Incompleteness-2013-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Incredible_Proof_Machine.html b/web/entries/Incredible_Proof_Machine.html
--- a/web/entries/Incredible_Proof_Machine.html
+++ b/web/entries/Incredible_Proof_Machine.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The meta theory of the Incredible Proof Machine - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
meta
theory
of
the
<font class="first">I</font>ncredible
<font class="first">P</font>roof
<font class="first">M</font>achine
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The meta theory of the Incredible Proof Machine</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Joachim Breitner (joachim /at/ cis /dot/ upenn /dot/ edu) and
<a href="http://pp.ipd.kit.edu/person.php?id=88">Denis Lohner</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The <a href="http://incredible.pm">Incredible Proof Machine</a> is an
interactive visual theorem prover which represents proofs as port
graphs. We model this proof representation in Isabelle, and prove that
-it is just as powerful as natural deduction.</div></td>
+it is just as powerful as natural deduction.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Incredible_Proof_Machine-AFP,
author = {Joachim Breitner and Denis Lohner},
title = {The meta theory of the Incredible Proof Machine},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/Incredible_Proof_Machine.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract_Completeness.html">Abstract_Completeness</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Incredible_Proof_Machine/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Incredible_Proof_Machine/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Incredible_Proof_Machine/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Incredible_Proof_Machine-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Incredible_Proof_Machine-2019-06-11.tar.gz">
afp-Incredible_Proof_Machine-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Incredible_Proof_Machine-2018-08-16.tar.gz">
afp-Incredible_Proof_Machine-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Incredible_Proof_Machine-2017-10-10.tar.gz">
afp-Incredible_Proof_Machine-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Incredible_Proof_Machine-2016-12-17.tar.gz">
afp-Incredible_Proof_Machine-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Incredible_Proof_Machine-2016-05-20.tar.gz">
afp-Incredible_Proof_Machine-2016-05-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Inductive_Confidentiality.html b/web/entries/Inductive_Confidentiality.html
--- a/web/entries/Inductive_Confidentiality.html
+++ b/web/entries/Inductive_Confidentiality.html
@@ -1,244 +1,244 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Inductive Study of Confidentiality - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nductive
<font class="first">S</font>tudy
of
<font class="first">C</font>onfidentiality
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Inductive Study of Confidentiality</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.dmi.unict.it/~giamp/">Giampaolo Bella</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-05-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This document contains the full theory files accompanying article <i>Inductive Study of Confidentiality --- for Everyone</i> in <i>Formal Aspects of Computing</i>. They aim at an illustrative and didactic presentation of the Inductive Method of protocol analysis, focusing on the treatment of one of the main goals of security protocols: confidentiality against a threat model. The treatment of confidentiality, which in fact forms a key aspect of all protocol analysis tools, has been found cryptic by many learners of the Inductive Method, hence the motivation for this work. The theory files in this document guide the reader step by step towards design and proof of significant confidentiality theorems. These are developed against two threat models, the standard Dolev-Yao and a more audacious one, the General Attacker, which turns out to be particularly useful also for teaching purposes.</div></td>
+ <td class="abstract mathjax_process">This document contains the full theory files accompanying article <i>Inductive Study of Confidentiality --- for Everyone</i> in <i>Formal Aspects of Computing</i>. They aim at an illustrative and didactic presentation of the Inductive Method of protocol analysis, focusing on the treatment of one of the main goals of security protocols: confidentiality against a threat model. The treatment of confidentiality, which in fact forms a key aspect of all protocol analysis tools, has been found cryptic by many learners of the Inductive Method, hence the motivation for this work. The theory files in this document guide the reader step by step towards design and proof of significant confidentiality theorems. These are developed against two threat models, the standard Dolev-Yao and a more audacious one, the General Attacker, which turns out to be particularly useful also for teaching purposes.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Inductive_Confidentiality-AFP,
author = {Giampaolo Bella},
title = {Inductive Study of Confidentiality},
journal = {Archive of Formal Proofs},
month = may,
year = 2012,
note = {\url{http://isa-afp.org/entries/Inductive_Confidentiality.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Inductive_Confidentiality/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Inductive_Confidentiality/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Inductive_Confidentiality/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Inductive_Confidentiality-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Inductive_Confidentiality-2019-06-11.tar.gz">
afp-Inductive_Confidentiality-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Inductive_Confidentiality-2018-08-16.tar.gz">
afp-Inductive_Confidentiality-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Inductive_Confidentiality-2017-10-10.tar.gz">
afp-Inductive_Confidentiality-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Inductive_Confidentiality-2016-12-17.tar.gz">
afp-Inductive_Confidentiality-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Inductive_Confidentiality-2016-02-22.tar.gz">
afp-Inductive_Confidentiality-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Inductive_Confidentiality-2015-05-27.tar.gz">
afp-Inductive_Confidentiality-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Inductive_Confidentiality-2014-08-28.tar.gz">
afp-Inductive_Confidentiality-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Inductive_Confidentiality-2013-12-11.tar.gz">
afp-Inductive_Confidentiality-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Inductive_Confidentiality-2013-11-17.tar.gz">
afp-Inductive_Confidentiality-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Inductive_Confidentiality-2013-02-16.tar.gz">
afp-Inductive_Confidentiality-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Inductive_Confidentiality-2012-05-24.tar.gz">
afp-Inductive_Confidentiality-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Inductive_Confidentiality-2012-05-02.tar.gz">
afp-Inductive_Confidentiality-2012-05-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/InfPathElimination.html b/web/entries/InfPathElimination.html
--- a/web/entries/InfPathElimination.html
+++ b/web/entries/InfPathElimination.html
@@ -1,245 +1,245 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nfeasible
<font class="first">P</font>aths
<font class="first">E</font>limination
by
<font class="first">S</font>ymbolic
<font class="first">E</font>xecution
<font class="first">T</font>echniques:
<font class="first">P</font>roof
of
<font class="first">C</font>orrectness
and
<font class="first">P</font>reservation
of
<font class="first">P</font>aths
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Romain Aissat,
Frederic Voisin and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-08-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
TRACER is a tool for verifying safety properties of sequential C
programs. TRACER attempts to build a finite symbolic execution
graph which over-approximates the set of all concrete reachable states
and the set of feasible paths. We present an abstract framework for
TRACER and similar CEGAR-like systems. The framework provides 1) a
graph- transformation based method for reducing the feasible paths in
control-flow graphs, 2) a model for symbolic execution, subsumption,
predicate abstraction and invariant generation. In this framework we
formally prove two key properties: correct construction of the
symbolic states and preservation of feasible paths. The framework
focuses on core operations, leaving it to concrete prototypes to “fit in”
heuristics for combining them. The accompanying paper (published in
ITP 2016) can be found at
-https://www.lri.fr/~wolff/papers/conf/2016-itp-InfPathsNSE.pdf.</div></td>
+https://www.lri.fr/~wolff/papers/conf/2016-itp-InfPathsNSE.pdf.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{InfPathElimination-AFP,
author = {Romain Aissat and Frederic Voisin and Burkhart Wolff},
title = {Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths},
journal = {Archive of Formal Proofs},
month = aug,
year = 2016,
note = {\url{http://isa-afp.org/entries/InfPathElimination.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/InfPathElimination/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/InfPathElimination/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/InfPathElimination/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-InfPathElimination-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-InfPathElimination-2019-06-11.tar.gz">
afp-InfPathElimination-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-InfPathElimination-2018-08-16.tar.gz">
afp-InfPathElimination-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-InfPathElimination-2017-10-10.tar.gz">
afp-InfPathElimination-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-InfPathElimination-2016-12-17.tar.gz">
afp-InfPathElimination-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-InfPathElimination-2016-08-18.tar.gz">
afp-InfPathElimination-2016-08-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/InformationFlowSlicing.html b/web/entries/InformationFlowSlicing.html
--- a/web/entries/InformationFlowSlicing.html
+++ b/web/entries/InformationFlowSlicing.html
@@ -1,287 +1,287 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Information Flow Noninterference via Slicing - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nformation
<font class="first">F</font>low
<font class="first">N</font>oninterference
via
<font class="first">S</font>licing
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Information Flow Noninterference via Slicing</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-03-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
In this contribution, we show how correctness proofs for <a
href="Slicing.html">intra-</a> and <a
href="HRB-Slicing.html">interprocedural slicing</a> can be used to prove
that slicing is able to guarantee information flow noninterference.
Moreover, we also illustrate how to lift the control flow graphs of the
respective frameworks such that they fulfil the additional assumptions
needed in the noninterference proofs. A detailed description of the
intraprocedural proof and its interplay with the slicing framework can be
found in the PLAS'09 paper by Wasserrab et al.
</p>
<p>
This entry contains the part for intra-procedural slicing. See entry
<a href="InformationFlowSlicing_Inter.html">InformationFlowSlicing_Inter</a>
for the inter-procedural part.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2016-06-10]: The original entry <a
href="InformationFlowSlicing.html">InformationFlowSlicing</a> contained both
the <a href="InformationFlowSlicing_Inter.html">inter-</a> and <a
href="InformationFlowSlicing.html">intra-procedural</a> case was split into
two for easier maintenance.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{InformationFlowSlicing-AFP,
author = {Daniel Wasserrab},
title = {Information Flow Noninterference via Slicing},
journal = {Archive of Formal Proofs},
month = mar,
year = 2010,
note = {\url{http://isa-afp.org/entries/InformationFlowSlicing.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Slicing.html">Slicing</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/InformationFlowSlicing/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/InformationFlowSlicing/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/InformationFlowSlicing/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-InformationFlowSlicing-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-InformationFlowSlicing-2019-06-11.tar.gz">
afp-InformationFlowSlicing-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-InformationFlowSlicing-2018-08-16.tar.gz">
afp-InformationFlowSlicing-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-InformationFlowSlicing-2017-10-10.tar.gz">
afp-InformationFlowSlicing-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-InformationFlowSlicing-2016-12-17.tar.gz">
afp-InformationFlowSlicing-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-InformationFlowSlicing-2016-02-22.tar.gz">
afp-InformationFlowSlicing-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-InformationFlowSlicing-2015-05-27.tar.gz">
afp-InformationFlowSlicing-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-InformationFlowSlicing-2014-08-28.tar.gz">
afp-InformationFlowSlicing-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-InformationFlowSlicing-2013-12-11.tar.gz">
afp-InformationFlowSlicing-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-InformationFlowSlicing-2013-11-17.tar.gz">
afp-InformationFlowSlicing-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-InformationFlowSlicing-2013-02-16.tar.gz">
afp-InformationFlowSlicing-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-InformationFlowSlicing-2012-05-24.tar.gz">
afp-InformationFlowSlicing-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-InformationFlowSlicing-2011-10-11.tar.gz">
afp-InformationFlowSlicing-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-InformationFlowSlicing-2011-02-11.tar.gz">
afp-InformationFlowSlicing-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-InformationFlowSlicing-2010-07-01.tar.gz">
afp-InformationFlowSlicing-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-InformationFlowSlicing-2010-03-23.tar.gz">
afp-InformationFlowSlicing-2010-03-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/InformationFlowSlicing_Inter.html b/web/entries/InformationFlowSlicing_Inter.html
--- a/web/entries/InformationFlowSlicing_Inter.html
+++ b/web/entries/InformationFlowSlicing_Inter.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Inter-Procedural Information Flow Noninterference via Slicing - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nter-Procedural
<font class="first">I</font>nformation
<font class="first">F</font>low
<font class="first">N</font>oninterference
via
<font class="first">S</font>licing
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Inter-Procedural Information Flow Noninterference via Slicing</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-03-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
In this contribution, we show how correctness proofs for <a
href="Slicing.html">intra-</a> and <a
href="HRB-Slicing.html">interprocedural slicing</a> can be used to prove
that slicing is able to guarantee information flow noninterference.
Moreover, we also illustrate how to lift the control flow graphs of the
respective frameworks such that they fulfil the additional assumptions
needed in the noninterference proofs. A detailed description of the
intraprocedural proof and its interplay with the slicing framework can be
found in the PLAS'09 paper by Wasserrab et al.
</p>
<p>
This entry contains the part for inter-procedural slicing. See entry
<a href="InformationFlowSlicing.html">InformationFlowSlicing</a>
for the intra-procedural part.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2016-06-10]: The original entry <a
href="InformationFlowSlicing.html">InformationFlowSlicing</a> contained both
the <a href="InformationFlowSlicing_Inter.html">inter-</a> and <a
href="InformationFlowSlicing.html">intra-procedural</a> case was split into
two for easier maintenance.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{InformationFlowSlicing_Inter-AFP,
author = {Daniel Wasserrab},
title = {Inter-Procedural Information Flow Noninterference via Slicing},
journal = {Archive of Formal Proofs},
month = mar,
year = 2010,
note = {\url{http://isa-afp.org/entries/InformationFlowSlicing_Inter.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="HRB-Slicing.html">HRB-Slicing</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/InformationFlowSlicing_Inter/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/InformationFlowSlicing_Inter/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/InformationFlowSlicing_Inter/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-InformationFlowSlicing_Inter-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-InformationFlowSlicing_Inter-2019-06-11.tar.gz">
afp-InformationFlowSlicing_Inter-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-InformationFlowSlicing_Inter-2018-08-16.tar.gz">
afp-InformationFlowSlicing_Inter-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-InformationFlowSlicing_Inter-2017-10-10.tar.gz">
afp-InformationFlowSlicing_Inter-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-InformationFlowSlicing_Inter-2016-12-17.tar.gz">
afp-InformationFlowSlicing_Inter-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Integration.html b/web/entries/Integration.html
--- a/web/entries/Integration.html
+++ b/web/entries/Integration.html
@@ -1,295 +1,295 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Integration theory and random variables - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>ntegration
theory
and
random
variables
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Integration theory and random variables</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www-lti.informatik.rwth-aachen.de/~richter/">Stefan Richter</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-11-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Lebesgue-style integration plays a major role in advanced probability. We formalize concepts of elementary measure theory, real-valued random variables as Borel-measurable functions, and a stepwise inductive definition of the integral itself. All proofs are carried out in human readable style using the Isar language.</div></td>
+ <td class="abstract mathjax_process">Lebesgue-style integration plays a major role in advanced probability. We formalize concepts of elementary measure theory, real-valued random variables as Borel-measurable functions, and a stepwise inductive definition of the integral itself. All proofs are carried out in human readable style using the Isar language.</td>
</tr>
<tr>
<td class="datahead" valign="top">Note:</td>
<td class="abstract">This article is of historical interest only. Lebesgue-style integration and probability theory are now available as part of the Isabelle/HOL distribution (directory Probability).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Integration-AFP,
author = {Stefan Richter},
title = {Integration theory and random variables},
journal = {Archive of Formal Proofs},
month = nov,
year = 2004,
note = {\url{http://isa-afp.org/entries/Integration.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Integration/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Integration/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Integration/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Integration-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Integration-2019-06-11.tar.gz">
afp-Integration-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Integration-2018-08-16.tar.gz">
afp-Integration-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Integration-2017-10-10.tar.gz">
afp-Integration-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Integration-2016-12-17.tar.gz">
afp-Integration-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Integration-2016-02-22.tar.gz">
afp-Integration-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Integration-2015-05-27.tar.gz">
afp-Integration-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Integration-2014-08-28.tar.gz">
afp-Integration-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Integration-2013-12-11.tar.gz">
afp-Integration-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Integration-2013-11-17.tar.gz">
afp-Integration-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Integration-2013-02-16.tar.gz">
afp-Integration-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Integration-2012-05-24.tar.gz">
afp-Integration-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Integration-2011-10-11.tar.gz">
afp-Integration-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Integration-2011-02-11.tar.gz">
afp-Integration-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Integration-2010-07-01.tar.gz">
afp-Integration-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Integration-2009-12-12.tar.gz">
afp-Integration-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Integration-2009-04-29.tar.gz">
afp-Integration-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Integration-2008-06-10.tar.gz">
afp-Integration-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Integration-2007-11-27.tar.gz">
afp-Integration-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Integration-2005-10-14.tar.gz">
afp-Integration-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Integration-2004-11-23.tar.gz">
afp-Integration-2004-11-23.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Integration-2004-11-22.tar.gz">
afp-Integration-2004-11-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Interval_Arithmetic_Word32.html b/web/entries/Interval_Arithmetic_Word32.html
--- a/web/entries/Interval_Arithmetic_Word32.html
+++ b/web/entries/Interval_Arithmetic_Word32.html
@@ -1,204 +1,204 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Interval Arithmetic on 32-bit Words - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nterval
<font class="first">A</font>rithmetic
on
<font class="first">3</font>2-bit
<font class="first">W</font>ords
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Interval Arithmetic on 32-bit Words</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Brandon Bohrer (bbohrer /at/ cs /dot/ cmu /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-11-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Interval_Arithmetic implements conservative interval arithmetic
computations, then uses this interval arithmetic to implement a simple
programming language where all terms have 32-bit signed word values,
with explicit infinities for terms outside the representable bounds.
Our target use case is interpreters for languages that must have a
well-understood low-level behavior. We include a formalization of
bounded-length strings which are used for the identifiers of our
language. Bounded-length identifiers are useful in some applications,
for example the <a href="https://www.isa-afp.org/entries/Differential_Dynamic_Logic.html">Differential_Dynamic_Logic</a> article,
where a Euclidean space indexed by identifiers demands that identifiers
-are finitely many.</div></td>
+are finitely many.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Interval_Arithmetic_Word32-AFP,
author = {Brandon Bohrer},
title = {Interval Arithmetic on 32-bit Words},
journal = {Archive of Formal Proofs},
month = nov,
year = 2019,
note = {\url{http://isa-afp.org/entries/Interval_Arithmetic_Word32.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Word_Lib.html">Word_Lib</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Interval_Arithmetic_Word32/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Interval_Arithmetic_Word32/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Interval_Arithmetic_Word32/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Interval_Arithmetic_Word32-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Interval_Arithmetic_Word32-2019-11-28.tar.gz">
afp-Interval_Arithmetic_Word32-2019-11-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Iptables_Semantics.html b/web/entries/Iptables_Semantics.html
--- a/web/entries/Iptables_Semantics.html
+++ b/web/entries/Iptables_Semantics.html
@@ -1,225 +1,225 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Iptables Semantics - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>ptables
<font class="first">S</font>emantics
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Iptables Semantics</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a> and
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-09-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a big step semantics of the filtering behavior of the
Linux/netfilter iptables firewall. We provide algorithms to simplify
complex iptables rulesets to a simple firewall model (cf. AFP entry <a
href="https://www.isa-afp.org/entries/Simple_Firewall.html">Simple_Firewall</a>)
and to verify spoofing protection of a ruleset.
Internally, we embed our semantics into ternary logic, ultimately
supporting every iptables match condition by abstracting over
unknowns. Using this AFP entry and all entries it depends on, we
created an easy-to-use, stand-alone Haskell tool called <a
href="http://iptables.isabelle.systems">fffuu</a>. The tool does not
require any input &mdash;except for the <tt>iptables-save</tt> dump of
the analyzed firewall&mdash; and presents interesting results about
the user's ruleset. Real-world firewall errors have been uncovered, and
the correctness of rulesets has been proved, with the help of
-our tool.</div></td>
+our tool.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Iptables_Semantics-AFP,
author = {Cornelius Diekmann and Lars Hupel},
title = {Iptables Semantics},
journal = {Archive of Formal Proofs},
month = sep,
year = 2016,
note = {\url{http://isa-afp.org/entries/Iptables_Semantics.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Native_Word.html">Native_Word</a>, <a href="Routing.html">Routing</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="LOFT.html">LOFT</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Iptables_Semantics/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Iptables_Semantics/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Iptables_Semantics/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Iptables_Semantics-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Iptables_Semantics-2019-06-11.tar.gz">
afp-Iptables_Semantics-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Iptables_Semantics-2018-08-16.tar.gz">
afp-Iptables_Semantics-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Iptables_Semantics-2017-10-10.tar.gz">
afp-Iptables_Semantics-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Iptables_Semantics-2016-12-17.tar.gz">
afp-Iptables_Semantics-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Iptables_Semantics-2016-09-09.tar.gz">
afp-Iptables_Semantics-2016-09-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Irrationality_J_Hancl.html b/web/entries/Irrationality_J_Hancl.html
--- a/web/entries/Irrationality_J_Hancl.html
+++ b/web/entries/Irrationality_J_Hancl.html
@@ -1,206 +1,206 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Irrational Rapidly Convergent Series - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>rrational
<font class="first">R</font>apidly
<font class="first">C</font>onvergent
<font class="first">S</font>eries
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Irrational Rapidly Convergent Series</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~ak2110/">Angeliki Koutsoukou-Argyraki</a> and
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-05-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize with Isabelle/HOL a proof of a theorem by J. Hancl asserting the
irrationality of the sum of a series consisting of rational numbers, built up
by sequences that fulfill certain properties. Even though the criterion is a
number-theoretic result, the proof makes use only of analytical arguments. We
also formalize a corollary of the theorem for a specific series fulfilling the
-assumptions of the theorem.</div></td>
+assumptions of the theorem.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Irrationality_J_Hancl-AFP,
author = {Angeliki Koutsoukou-Argyraki and Wenda Li},
title = {Irrational Rapidly Convergent Series},
journal = {Archive of Formal Proofs},
month = may,
year = 2018,
note = {\url{http://isa-afp.org/entries/Irrationality_J_Hancl.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Irrationality_J_Hancl/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Irrationality_J_Hancl/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Irrationality_J_Hancl/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Irrationality_J_Hancl-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Irrationality_J_Hancl-2019-06-11.tar.gz">
afp-Irrationality_J_Hancl-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Irrationality_J_Hancl-2018-08-16.tar.gz">
afp-Irrationality_J_Hancl-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Irrationality_J_Hancl-2018-05-26.tar.gz">
afp-Irrationality_J_Hancl-2018-05-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Isabelle_C.html b/web/entries/Isabelle_C.html
--- a/web/entries/Isabelle_C.html
+++ b/web/entries/Isabelle_C.html
@@ -1,205 +1,205 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Isabelle/C - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>sabelle/C
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Isabelle/C</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.lri.fr/~ftuong/">Frédéric Tuong</a> and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-10-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a framework for C code in C11 syntax deeply integrated into
the Isabelle/PIDE development environment. Our framework provides an
abstract interface for verification back-ends to be plugged in
independently. Thus, various techniques such as deductive program
verification or white-box testing can be applied to the same source,
which is part of an integrated PIDE document model. Semantic back-ends
are free to choose the supported C fragment and its semantics. In
particular, they can differ on the chosen memory model or the
specification mechanism for framing conditions. Our framework supports
semantic annotations of C sources in the form of comments. Annotations
serve to locally control back-end settings, and can express the term
focus to which an annotation refers. Both the logical and the
syntactic context are available when semantic annotations are
evaluated. As a consequence, a formula in an annotation can refer to
both HOL and C variables. Our approach demonstrates the degree of
maturity and expressive power the Isabelle/PIDE sub-system has
achieved in recent years. Our integration technique employs Lex and
Yacc style grammars to ensure efficient deterministic parsing. This
is the core module of Isabelle/C; the AFP packages for Clean and
Clean_wrapper as well as AutoCorres and AutoCorres_wrapper (available
-via git) are applications of this front-end.</div></td>
+via git) are applications of this front-end.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Isabelle_C-AFP,
author = {Frédéric Tuong and Burkhart Wolff},
title = {Isabelle/C},
journal = {Archive of Formal Proofs},
month = oct,
year = 2019,
note = {\url{http://isa-afp.org/entries/Isabelle_C.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Isabelle_C/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Isabelle_C/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Isabelle_C/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Isabelle_C-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Isabelle_C-2019-12-19.tar.gz">
afp-Isabelle_C-2019-12-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Isabelle_Meta_Model.html b/web/entries/Isabelle_Meta_Model.html
--- a/web/entries/Isabelle_Meta_Model.html
+++ b/web/entries/Isabelle_Meta_Model.html
@@ -1,255 +1,255 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Meta-Model for the Isabelle API - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">M</font>eta-Model
for
the
<font class="first">I</font>sabelle
<font class="first">A</font>PI
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Meta-Model for the Isabelle API</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.lri.fr/~ftuong/">Frédéric Tuong</a> and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-09-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We represent a theory <i>of</i> (a fragment of) Isabelle/HOL <i>in</i>
Isabelle/HOL. The purpose of this exercise is to write packages for
domain-specific specifications such as class models, B-machines, ...,
and, generally speaking, any domain-specific language whose
abstract syntax can be defined by a HOL "datatype". On this basis, the
Isabelle code-generator can then be used to generate code for global
context transformations as well as tactic code.
<p>
Consequently the package is geared towards
parsing, printing and code-generation to the Isabelle API.
It is at the moment not sufficiently rich for doing meta theory on
Isabelle itself. Extensions in this direction are possible though.
<p>
Moreover, the chosen fragment is fairly rudimentary. However, it should be
easily adapted to one's needs if a package is written on top of it.
The supported API contains types, terms, transformation of
global context like definitions and data-type declarations as well
as infrastructure for Isar-setups.
<p>
This theory is drawn from the
<a href="http://isa-afp.org/entries/Featherweight_OCL.html">Featherweight OCL</a>
project where
it is used to construct a package for object-oriented data-type theories
generated from UML class diagrams. Featherweight OCL, for example, allows for
both the direct execution of compiled tactic code by the Isabelle API
as well as the generation of ".thy"-files for debugging purposes.
<p>
Experience gained from this project shows that the compiled code is sufficiently
efficient for practical purposes while being based on a formal <i>model</i>
on which properties of the package can be proven such as termination of certain
-transformations, correctness, etc.</div></td>
+transformations, correctness, etc.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Isabelle_Meta_Model-AFP,
author = {Frédéric Tuong and Burkhart Wolff},
title = {A Meta-Model for the Isabelle API},
journal = {Archive of Formal Proofs},
month = sep,
year = 2015,
note = {\url{http://isa-afp.org/entries/Isabelle_Meta_Model.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Isabelle_Meta_Model/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Isabelle_Meta_Model/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Isabelle_Meta_Model/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Isabelle_Meta_Model-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Isabelle_Meta_Model-2019-06-11.tar.gz">
afp-Isabelle_Meta_Model-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Isabelle_Meta_Model-2018-08-16.tar.gz">
afp-Isabelle_Meta_Model-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Isabelle_Meta_Model-2017-10-10.tar.gz">
afp-Isabelle_Meta_Model-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Isabelle_Meta_Model-2016-12-17.tar.gz">
afp-Isabelle_Meta_Model-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Isabelle_Meta_Model-2016-02-22.tar.gz">
afp-Isabelle_Meta_Model-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Isabelle_Meta_Model-2015-09-28.tar.gz">
afp-Isabelle_Meta_Model-2015-09-28.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Isabelle_Meta_Model-2015-09-25.tar.gz">
afp-Isabelle_Meta_Model-2015-09-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Jacobson_Basic_Algebra.html b/web/entries/Jacobson_Basic_Algebra.html
--- a/web/entries/Jacobson_Basic_Algebra.html
+++ b/web/entries/Jacobson_Basic_Algebra.html
@@ -1,199 +1,199 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Case Study in Basic Algebra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">C</font>ase
<font class="first">S</font>tudy
in
<font class="first">B</font>asic
<font class="first">A</font>lgebra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Case Study in Basic Algebra</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~ballarin/">Clemens Ballarin</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-08-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The focus of this case study is re-use in abstract algebra. It
contains locale-based formalisations of selected parts of set, group
and ring theory from Jacobson's <i>Basic Algebra</i>
leading to the respective fundamental homomorphism theorems. The
study is not intended as a library base for abstract algebra. It
-rather explores an approach towards abstract algebra in Isabelle.</div></td>
+rather explores an approach towards abstract algebra in Isabelle.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Jacobson_Basic_Algebra-AFP,
author = {Clemens Ballarin},
title = {A Case Study in Basic Algebra},
journal = {Archive of Formal Proofs},
month = aug,
year = 2019,
note = {\url{http://isa-afp.org/entries/Jacobson_Basic_Algebra.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Jacobson_Basic_Algebra/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Jacobson_Basic_Algebra/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Jacobson_Basic_Algebra/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Jacobson_Basic_Algebra-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Jacobson_Basic_Algebra-2019-09-01.tar.gz">
afp-Jacobson_Basic_Algebra-2019-09-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Jinja.html b/web/entries/Jinja.html
--- a/web/entries/Jinja.html
+++ b/web/entries/Jinja.html
@@ -1,289 +1,289 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Jinja is not Java - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">J</font>inja
is
not
<font class="first">J</font>ava
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Jinja is not Java</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.cse.unsw.edu.au/~kleing/">Gerwin Klein</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2005-06-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We introduce Jinja, a Java-like programming language with a formal semantics designed to exhibit core features of the Java language architecture. Jinja is a compromise between realism of the language and tractability and clarity of the formal semantics. The following aspects are formalised: a big and a small step operational semantics for Jinja and a proof of their equivalence; a type system and a definite initialisation analysis; a type safety proof of the small step semantics; a virtual machine (JVM), its operational semantics and its type system; a type safety proof for the JVM; a bytecode verifier, i.e. data flow analyser for the JVM; a correctness proof of the bytecode verifier w.r.t. the type system; a compiler and a proof that it preserves semantics and well-typedness. The emphasis of this work is not on particular language features but on providing a unified model of the source language, the virtual machine and the compiler. The whole development has been carried out in the theorem prover Isabelle/HOL.</div></td>
+ <td class="abstract mathjax_process">We introduce Jinja, a Java-like programming language with a formal semantics designed to exhibit core features of the Java language architecture. Jinja is a compromise between realism of the language and tractability and clarity of the formal semantics. The following aspects are formalised: a big and a small step operational semantics for Jinja and a proof of their equivalence; a type system and a definite initialisation analysis; a type safety proof of the small step semantics; a virtual machine (JVM), its operational semantics and its type system; a type safety proof for the JVM; a bytecode verifier, i.e. data flow analyser for the JVM; a correctness proof of the bytecode verifier w.r.t. the type system; a compiler and a proof that it preserves semantics and well-typedness. The emphasis of this work is not on particular language features but on providing a unified model of the source language, the virtual machine and the compiler. The whole development has been carried out in the theorem prover Isabelle/HOL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Jinja-AFP,
author = {Gerwin Klein and Tobias Nipkow},
title = {Jinja is not Java},
journal = {Archive of Formal Proofs},
month = jun,
year = 2005,
note = {\url{http://isa-afp.org/entries/Jinja.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="List-Index.html">List-Index</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="HRB-Slicing.html">HRB-Slicing</a>, <a href="Slicing.html">Slicing</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Jinja/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Jinja/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Jinja/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Jinja-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Jinja-2019-06-11.tar.gz">
afp-Jinja-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Jinja-2018-08-16.tar.gz">
afp-Jinja-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Jinja-2017-10-10.tar.gz">
afp-Jinja-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Jinja-2016-12-17.tar.gz">
afp-Jinja-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Jinja-2016-02-22.tar.gz">
afp-Jinja-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Jinja-2015-05-27.tar.gz">
afp-Jinja-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Jinja-2014-08-28.tar.gz">
afp-Jinja-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Jinja-2013-12-11.tar.gz">
afp-Jinja-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Jinja-2013-11-17.tar.gz">
afp-Jinja-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Jinja-2013-02-16.tar.gz">
afp-Jinja-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Jinja-2012-05-24.tar.gz">
afp-Jinja-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Jinja-2011-10-11.tar.gz">
afp-Jinja-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Jinja-2011-02-11.tar.gz">
afp-Jinja-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Jinja-2010-07-01.tar.gz">
afp-Jinja-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Jinja-2009-12-12.tar.gz">
afp-Jinja-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Jinja-2009-04-29.tar.gz">
afp-Jinja-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Jinja-2008-06-10.tar.gz">
afp-Jinja-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Jinja-2007-11-27.tar.gz">
afp-Jinja-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Jinja-2006-08-08.tar.gz">
afp-Jinja-2006-08-08.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Jinja-2005-10-14.tar.gz">
afp-Jinja-2005-10-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/JinjaThreads.html b/web/entries/JinjaThreads.html
--- a/web/entries/JinjaThreads.html
+++ b/web/entries/JinjaThreads.html
@@ -1,335 +1,335 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Jinja with Threads - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">J</font>inja
with
<font class="first">T</font>hreads
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Jinja with Threads</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2007-12-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We extend the Jinja source code semantics by Klein and Nipkow with Java-style arrays and threads. Concurrency is captured in a generic framework semantics for adding concurrency through interleaving to a sequential semantics, which features dynamic thread creation, inter-thread communication via shared memory, lock synchronisation and joins. Also, threads can suspend themselves and be notified by others. We instantiate the framework with the adapted versions of both Jinja source and byte code and show type safety for the multithreaded case. Equally, the compiler from source to byte code is extended, for which we prove weak bisimilarity between the source code small step semantics and the defensive Jinja virtual machine. On top of this, we formalise the JMM and show the DRF guarantee and consistency. For description of the different parts, see Lochbihler's papers at FOOL 2008, ESOP 2010, ITP 2011, and ESOP 2012.</div></td>
+ <td class="abstract mathjax_process">We extend the Jinja source code semantics by Klein and Nipkow with Java-style arrays and threads. Concurrency is captured in a generic framework semantics for adding concurrency through interleaving to a sequential semantics, which features dynamic thread creation, inter-thread communication via shared memory, lock synchronisation and joins. Also, threads can suspend themselves and be notified by others. We instantiate the framework with the adapted versions of both Jinja source and byte code and show type safety for the multithreaded case. Equally, the compiler from source to byte code is extended, for which we prove weak bisimilarity between the source code small step semantics and the defensive Jinja virtual machine. On top of this, we formalise the JMM and show the DRF guarantee and consistency. For description of the different parts, see Lochbihler's papers at FOOL 2008, ESOP 2010, ITP 2011, and ESOP 2012.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2008-04-23]:
added bytecode formalisation with arrays and threads, added thread joins
(revision f74a8be156a7)<br>
[2009-04-27]:
added verified compiler from source code to bytecode;
encapsulate native methods in separate semantics
(revision e4f26541e58a)<br>
[2009-11-30]:
extended compiler correctness proof to infinite and deadlocking computations
(revision e50282397435)<br>
[2010-06-08]:
added thread interruption;
new abstract memory model with sequential consistency as implementation
(revision 0cb9e8dbd78d)<br>
[2010-06-28]:
new thread interruption model
(revision c0440d0a1177)<br>
[2010-10-15]:
preliminary version of the Java memory model for source code
(revision 02fee0ef3ca2)<br>
[2010-12-16]:
improved version of the Java memory model, also for bytecode
executable scheduler for source code semantics
(revision 1f41c1842f5a)<br>
[2011-02-02]:
simplified code generator setup
new random scheduler
(revision 3059dafd013f)<br>
[2011-07-21]:
new interruption model,
generalized JMM proof of DRF guarantee,
allow class Object to declare methods and fields,
simplified subtyping relation,
corrected division and modulo implementation
(revision 46e4181ed142)<br>
[2012-02-16]:
added example programs
(revision bf0b06c8913d)<br>
[2012-11-21]:
type safety proof for the Java memory model,
allow spurious wake-ups
(revision 76063d860ae0)<br>
[2013-05-16]:
support for non-deterministic memory allocators
(revision cc3344a49ced)<br>
[2017-10-20]:
add an atomic compare-and-swap operation for volatile fields
(revision a6189b1d6b30)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{JinjaThreads-AFP,
author = {Andreas Lochbihler},
title = {Jinja with Threads},
journal = {Archive of Formal Proofs},
month = dec,
year = 2007,
note = {\url{http://isa-afp.org/entries/JinjaThreads.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a>, <a href="Binomial-Heaps.html">Binomial-Heaps</a>, <a href="Coinductive.html">Coinductive</a>, <a href="Collections.html">Collections</a>, <a href="FinFun.html">FinFun</a>, <a href="Finger-Trees.html">Finger-Trees</a>, <a href="Native_Word.html">Native_Word</a>, <a href="Refine_Monadic.html">Refine_Monadic</a>, <a href="Trie.html">Trie</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/JinjaThreads/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/JinjaThreads/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/JinjaThreads/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-JinjaThreads-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-JinjaThreads-2019-06-11.tar.gz">
afp-JinjaThreads-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-JinjaThreads-2018-08-17.tar.gz">
afp-JinjaThreads-2018-08-17.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-JinjaThreads-2017-10-10.tar.gz">
afp-JinjaThreads-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-JinjaThreads-2016-12-17.tar.gz">
afp-JinjaThreads-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-JinjaThreads-2016-02-22.tar.gz">
afp-JinjaThreads-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-JinjaThreads-2015-05-27.tar.gz">
afp-JinjaThreads-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-JinjaThreads-2014-08-28.tar.gz">
afp-JinjaThreads-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-JinjaThreads-2013-12-11.tar.gz">
afp-JinjaThreads-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-JinjaThreads-2013-11-17.tar.gz">
afp-JinjaThreads-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-JinjaThreads-2013-02-16.tar.gz">
afp-JinjaThreads-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-JinjaThreads-2012-05-26.tar.gz">
afp-JinjaThreads-2012-05-26.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-JinjaThreads-2011-10-12.tar.gz">
afp-JinjaThreads-2011-10-12.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-JinjaThreads-2011-10-11.tar.gz">
afp-JinjaThreads-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-JinjaThreads-2011-02-11.tar.gz">
afp-JinjaThreads-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-JinjaThreads-2010-07-02.tar.gz">
afp-JinjaThreads-2010-07-02.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-JinjaThreads-2009-12-12.tar.gz">
afp-JinjaThreads-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-JinjaThreads-2009-04-30.tar.gz">
afp-JinjaThreads-2009-04-30.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-JinjaThreads-2009-04-29.tar.gz">
afp-JinjaThreads-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-JinjaThreads-2008-06-10.tar.gz">
afp-JinjaThreads-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-JinjaThreads-2007-12-03.tar.gz">
afp-JinjaThreads-2007-12-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/JiveDataStoreModel.html b/web/entries/JiveDataStoreModel.html
--- a/web/entries/JiveDataStoreModel.html
+++ b/web/entries/JiveDataStoreModel.html
@@ -1,282 +1,282 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Jive Data and Store Model - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">J</font>ive
<font class="first">D</font>ata
and
<font class="first">S</font>tore
<font class="first">M</font>odel
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Jive Data and Store Model</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Nicole Rauch (rauch /at/ informatik /dot/ uni-kl /dot/ de) and
Norbert Schirmer (norbert /dot/ schirmer /at/ web /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2005-06-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This document presents the formalization of an object-oriented data and store model in Isabelle/HOL. This model is being used in the Java Interactive Verification Environment, Jive.</div></td>
+ <td class="abstract mathjax_process">This document presents the formalization of an object-oriented data and store model in Isabelle/HOL. This model is being used in the Java Interactive Verification Environment, Jive.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{JiveDataStoreModel-AFP,
author = {Nicole Rauch and Norbert Schirmer},
title = {Jive Data and Store Model},
journal = {Archive of Formal Proofs},
month = jun,
year = 2005,
note = {\url{http://isa-afp.org/entries/JiveDataStoreModel.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/JiveDataStoreModel/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/JiveDataStoreModel/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/JiveDataStoreModel/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-JiveDataStoreModel-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-JiveDataStoreModel-2019-06-11.tar.gz">
afp-JiveDataStoreModel-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-JiveDataStoreModel-2018-08-16.tar.gz">
afp-JiveDataStoreModel-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-JiveDataStoreModel-2017-10-10.tar.gz">
afp-JiveDataStoreModel-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-JiveDataStoreModel-2016-12-17.tar.gz">
afp-JiveDataStoreModel-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-JiveDataStoreModel-2016-02-22.tar.gz">
afp-JiveDataStoreModel-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-JiveDataStoreModel-2015-05-27.tar.gz">
afp-JiveDataStoreModel-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-JiveDataStoreModel-2014-08-28.tar.gz">
afp-JiveDataStoreModel-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-JiveDataStoreModel-2013-12-11.tar.gz">
afp-JiveDataStoreModel-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-JiveDataStoreModel-2013-11-17.tar.gz">
afp-JiveDataStoreModel-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-JiveDataStoreModel-2013-02-16.tar.gz">
afp-JiveDataStoreModel-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-JiveDataStoreModel-2012-05-24.tar.gz">
afp-JiveDataStoreModel-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-JiveDataStoreModel-2011-10-11.tar.gz">
afp-JiveDataStoreModel-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-JiveDataStoreModel-2011-02-11.tar.gz">
afp-JiveDataStoreModel-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-JiveDataStoreModel-2010-07-01.tar.gz">
afp-JiveDataStoreModel-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-JiveDataStoreModel-2009-12-12.tar.gz">
afp-JiveDataStoreModel-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-JiveDataStoreModel-2009-04-29.tar.gz">
afp-JiveDataStoreModel-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-JiveDataStoreModel-2008-06-10.tar.gz">
afp-JiveDataStoreModel-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-JiveDataStoreModel-2007-11-27.tar.gz">
afp-JiveDataStoreModel-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-JiveDataStoreModel-2005-10-14.tar.gz">
afp-JiveDataStoreModel-2005-10-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Jordan_Hoelder.html b/web/entries/Jordan_Hoelder.html
--- a/web/entries/Jordan_Hoelder.html
+++ b/web/entries/Jordan_Hoelder.html
@@ -1,219 +1,219 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Jordan-Hölder Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">J</font>ordan-Hölder
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Jordan-Hölder Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Jakob von Raumer (psxjv4 /at/ nottingham /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-09-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This submission contains theories that lead to a formalization of the proof of the Jordan-Hölder theorem about composition series of finite groups. The theories formalize the notions of isomorphism classes of groups, simple groups, normal series, composition series, maximal normal subgroups. Furthermore, they provide proofs of the second isomorphism theorem for groups, the characterization theorem for maximal normal subgroups as well as many useful lemmas about normal subgroups and factor groups. The proof is inspired by course notes of Stuart Rankin.</div></td>
+ <td class="abstract mathjax_process">This submission contains theories that lead to a formalization of the proof of the Jordan-Hölder theorem about composition series of finite groups. The theories formalize the notions of isomorphism classes of groups, simple groups, normal series, composition series, maximal normal subgroups. Furthermore, they provide proofs of the second isomorphism theorem for groups, the characterization theorem for maximal normal subgroups as well as many useful lemmas about normal subgroups and factor groups. The proof is inspired by course notes of Stuart Rankin.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Jordan_Hoelder-AFP,
author = {Jakob von Raumer},
title = {The Jordan-Hölder Theorem},
journal = {Archive of Formal Proofs},
month = sep,
year = 2014,
note = {\url{http://isa-afp.org/entries/Jordan_Hoelder.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Secondary_Sylow.html">Secondary_Sylow</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Jordan_Hoelder/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Jordan_Hoelder/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Jordan_Hoelder/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Jordan_Hoelder-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Jordan_Hoelder-2019-06-11.tar.gz">
afp-Jordan_Hoelder-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Jordan_Hoelder-2018-08-16.tar.gz">
afp-Jordan_Hoelder-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Jordan_Hoelder-2017-10-10.tar.gz">
afp-Jordan_Hoelder-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Jordan_Hoelder-2016-12-17.tar.gz">
afp-Jordan_Hoelder-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Jordan_Hoelder-2016-02-22.tar.gz">
afp-Jordan_Hoelder-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Jordan_Hoelder-2015-05-27.tar.gz">
afp-Jordan_Hoelder-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Jordan_Hoelder-2014-09-11.tar.gz">
afp-Jordan_Hoelder-2014-09-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Jordan_Normal_Form.html b/web/entries/Jordan_Normal_Form.html
--- a/web/entries/Jordan_Normal_Form.html
+++ b/web/entries/Jordan_Normal_Form.html
@@ -1,249 +1,249 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Matrices, Jordan Normal Forms, and Spectral Radius Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>atrices,
<font class="first">J</font>ordan
<font class="first">N</font>ormal
<font class="first">F</font>orms,
and
<font class="first">S</font>pectral
<font class="first">R</font>adius
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Matrices, Jordan Normal Forms, and Spectral Radius Theory</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a> and
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
Alexander Bentkamp (bentkamp /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-08-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
Matrix interpretations are useful as measure functions in termination proving. In order to use these interpretations also for complexity analysis, the growth rate of matrix powers has to be examined. Here, we formalized a central result of spectral radius theory, namely that the growth rate is polynomially bounded if and only if the spectral radius of a matrix is at most one.
</p><p>
To formally prove this result we first studied the growth rates of matrices in Jordan normal form, and proved that every complex matrix has a Jordan normal form using a constructive proof via Schur decomposition.
</p><p>
The whole development is based on a new abstract type for matrices, which is also executable by a suitable setup of the code generator. It completely subsumes our former AFP-entry on executable matrices, and its main advantage is its close connection to the HMA-representation which allowed us to easily adapt existing proofs on determinants.
</p><p>
All the results have been applied to improve CeTA, our certifier to validate termination and complexity proof certificates.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2016-01-07]: Added Schur-decomposition, Gram-Schmidt orthogonalization, uniqueness of Jordan normal forms<br/>
[2018-04-17]: Integrated lemmas from deep-learning AFP-entry of Alexander Bentkamp</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Jordan_Normal_Form-AFP,
author = {René Thiemann and Akihisa Yamada},
title = {Matrices, Jordan Normal Forms, and Spectral Radius Theory},
journal = {Archive of Formal Proofs},
month = aug,
year = 2015,
note = {\url{http://isa-afp.org/entries/Jordan_Normal_Form.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Polynomial_Factorization.html">Polynomial_Factorization</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Deep_Learning.html">Deep_Learning</a>, <a href="Farkas.html">Farkas</a>, <a href="Groebner_Bases.html">Groebner_Bases</a>, <a href="Linear_Programming.html">Linear_Programming</a>, <a href="Perron_Frobenius.html">Perron_Frobenius</a>, <a href="QHLProver.html">QHLProver</a>, <a href="Stochastic_Matrices.html">Stochastic_Matrices</a>, <a href="Subresultants.html">Subresultants</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Jordan_Normal_Form/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Jordan_Normal_Form/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Jordan_Normal_Form/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Jordan_Normal_Form-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Jordan_Normal_Form-2019-06-11.tar.gz">
afp-Jordan_Normal_Form-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Jordan_Normal_Form-2018-08-16.tar.gz">
afp-Jordan_Normal_Form-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Jordan_Normal_Form-2017-10-10.tar.gz">
afp-Jordan_Normal_Form-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Jordan_Normal_Form-2016-12-17.tar.gz">
afp-Jordan_Normal_Form-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Jordan_Normal_Form-2016-02-22.tar.gz">
afp-Jordan_Normal_Form-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Jordan_Normal_Form-2015-08-23.tar.gz">
afp-Jordan_Normal_Form-2015-08-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/KAD.html b/web/entries/KAD.html
--- a/web/entries/KAD.html
+++ b/web/entries/KAD.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Kleene Algebras with Domain - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">K</font>leene
<font class="first">A</font>lgebras
with
<font class="first">D</font>omain
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Kleene Algebras with Domain</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Victor B. F. Gomes (vb358 /at/ cl /dot/ cam /dot/ ac /dot/ uk),
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>,
<a href="http://www.hoefner-online.de/">Peter Höfner</a>,
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a> and
Tjark Weber (tjark /dot/ weber /at/ it /dot/ uu /dot/ se)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-04-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Kleene algebras with domain are Kleene algebras endowed with an
operation that maps each element of the algebra to its domain of
definition (or its complement) in abstract fashion. They form a simple
algebraic basis for Hoare logics, dynamic logics or predicate
transformer semantics. We formalise a modular hierarchy of algebras
with domain and antidomain (domain complement) operations in
Isabelle/HOL that ranges from domain and antidomain semigroups to
modal Kleene algebras and divergence Kleene algebras. We link these
algebras with models of binary relations and program traces. We
include some examples from modal logics, termination and program
-analysis.</div></td>
+analysis.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{KAD-AFP,
author = {Victor B. F. Gomes and Walter Guttmann and Peter Höfner and Georg Struth and Tjark Weber},
title = {Kleene Algebras with Domain},
journal = {Archive of Formal Proofs},
month = apr,
year = 2016,
note = {\url{http://isa-afp.org/entries/KAD.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Kleene_Algebra.html">Kleene_Algebra</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Algebraic_VCs.html">Algebraic_VCs</a>, <a href="Hybrid_Systems_VCs.html">Hybrid_Systems_VCs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/KAD/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/KAD/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/KAD/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-KAD-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-KAD-2019-06-11.tar.gz">
afp-KAD-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-KAD-2018-08-16.tar.gz">
afp-KAD-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-KAD-2017-10-10.tar.gz">
afp-KAD-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-KAD-2016-12-17.tar.gz">
afp-KAD-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-KAD-2016-04-12.tar.gz">
afp-KAD-2016-04-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/KAT_and_DRA.html b/web/entries/KAT_and_DRA.html
--- a/web/entries/KAT_and_DRA.html
+++ b/web/entries/KAT_and_DRA.html
@@ -1,247 +1,247 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Kleene Algebra with Tests and Demonic Refinement Algebras - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">K</font>leene
<font class="first">A</font>lgebra
with
<font class="first">T</font>ests
and
<font class="first">D</font>emonic
<font class="first">R</font>efinement
<font class="first">A</font>lgebras
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Kleene Algebra with Tests and Demonic Refinement Algebras</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Alasdair Armstrong,
Victor B. F. Gomes (vb358 /at/ cl /dot/ cam /dot/ ac /dot/ uk) and
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-01-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalise Kleene algebra with tests (KAT) and demonic refinement
algebra (DRA) in Isabelle/HOL. KAT is relevant for program verification
and correctness proofs in the partial correctness setting, while DRA
targets similar applications in the context of total correctness. Our
formalisation contains the two most important models of these algebras:
binary relations in the case of KAT and predicate transformers in the
case of DRA. In addition, we derive the inference rules for Hoare logic
in KAT and its relational model and present a simple formally verified
-program verification tool prototype based on the algebraic approach.</div></td>
+program verification tool prototype based on the algebraic approach.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{KAT_and_DRA-AFP,
author = {Alasdair Armstrong and Victor B. F. Gomes and Georg Struth},
title = {Kleene Algebra with Tests and Demonic Refinement Algebras},
journal = {Archive of Formal Proofs},
month = jan,
year = 2014,
note = {\url{http://isa-afp.org/entries/KAT_and_DRA.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Kleene_Algebra.html">Kleene_Algebra</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Algebraic_VCs.html">Algebraic_VCs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/KAT_and_DRA/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/KAT_and_DRA/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/KAT_and_DRA/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-KAT_and_DRA-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-KAT_and_DRA-2019-06-11.tar.gz">
afp-KAT_and_DRA-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-KAT_and_DRA-2018-08-16.tar.gz">
afp-KAT_and_DRA-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-KAT_and_DRA-2017-10-10.tar.gz">
afp-KAT_and_DRA-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-KAT_and_DRA-2016-12-17.tar.gz">
afp-KAT_and_DRA-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-KAT_and_DRA-2016-02-22.tar.gz">
afp-KAT_and_DRA-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-KAT_and_DRA-2015-05-27.tar.gz">
afp-KAT_and_DRA-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-KAT_and_DRA-2014-08-28.tar.gz">
afp-KAT_and_DRA-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-KAT_and_DRA-2014-01-29.tar.gz">
afp-KAT_and_DRA-2014-01-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/KBPs.html b/web/entries/KBPs.html
--- a/web/entries/KBPs.html
+++ b/web/entries/KBPs.html
@@ -1,258 +1,258 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Knowledge-based programs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">K</font>nowledge-based
programs
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Knowledge-based programs</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-05-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour. Here we present a general scheme for compiling KBPs to executable automata with a proof of correctness in Isabelle/HOL. We develop the algorithm top-down, using Isabelle's locale mechanism to structure these proofs, and show that two classic examples can be synthesised using Isabelle's code generator.</div></td>
+ <td class="abstract mathjax_process">Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour. Here we present a general scheme for compiling KBPs to executable automata with a proof of correctness in Isabelle/HOL. We develop the algorithm top-down, using Isabelle's locale mechanism to structure these proofs, and show that two classic examples can be synthesised using Isabelle's code generator.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2012-03-06]: Add some more views and revive the code generation.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{KBPs-AFP,
author = {Peter Gammie},
title = {Knowledge-based programs},
journal = {Archive of Formal Proofs},
month = may,
year = 2011,
note = {\url{http://isa-afp.org/entries/KBPs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Transitive-Closure.html">Transitive-Closure</a>, <a href="Trie.html">Trie</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="LTL_to_DRA.html">LTL_to_DRA</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/KBPs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/KBPs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/KBPs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-KBPs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-KBPs-2019-06-11.tar.gz">
afp-KBPs-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-KBPs-2018-08-16.tar.gz">
afp-KBPs-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-KBPs-2017-10-10.tar.gz">
afp-KBPs-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-KBPs-2016-12-17.tar.gz">
afp-KBPs-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-KBPs-2016-02-22.tar.gz">
afp-KBPs-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-KBPs-2015-05-27.tar.gz">
afp-KBPs-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-KBPs-2014-08-28.tar.gz">
afp-KBPs-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-KBPs-2013-12-11.tar.gz">
afp-KBPs-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-KBPs-2013-11-17.tar.gz">
afp-KBPs-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-KBPs-2013-03-02.tar.gz">
afp-KBPs-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-KBPs-2013-02-16.tar.gz">
afp-KBPs-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-KBPs-2012-05-24.tar.gz">
afp-KBPs-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-KBPs-2011-10-11.tar.gz">
afp-KBPs-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-KBPs-2011-05-19.tar.gz">
afp-KBPs-2011-05-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/KD_Tree.html b/web/entries/KD_Tree.html
--- a/web/entries/KD_Tree.html
+++ b/web/entries/KD_Tree.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Multidimensional Binary Search Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>ultidimensional
<font class="first">B</font>inary
<font class="first">S</font>earch
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Multidimensional Binary Search Trees</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Martin Rau (martin /dot/ rau /at/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-05-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides a formalization of multidimensional binary trees,
also known as k-d trees. It includes a balanced build algorithm as
well as the nearest neighbor algorithm and the range search algorithm.
It is based on the papers <a
href="https://dl.acm.org/citation.cfm?doid=361002.361007">Multidimensional
binary search trees used for associative searching</a> and <a
href="https://dl.acm.org/citation.cfm?doid=355744.355745">
An Algorithm for Finding Best Matches in Logarithmic Expected
-Time</a>.</div></td>
+Time</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
        <td class="abstract">[2020-04-15]: Change representation of k-dimensional points from 'list' to
HOL-Analysis.Finite_Cartesian_Product 'vec'. Update proofs
to incorporate HOL-Analysis 'dist' and 'cbox' primitives.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{KD_Tree-AFP,
author = {Martin Rau},
title = {Multidimensional Binary Search Trees},
journal = {Archive of Formal Proofs},
month = may,
year = 2019,
note = {\url{http://isa-afp.org/entries/KD_Tree.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Median_Of_Medians_Selection.html">Median_Of_Medians_Selection</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/KD_Tree/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/KD_Tree/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/KD_Tree/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-KD_Tree-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-KD_Tree-2019-06-11.tar.gz">
afp-KD_Tree-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-KD_Tree-2019-06-04.tar.gz">
afp-KD_Tree-2019-06-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
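(Editorial aside, not part of the generated page above: the KD_Tree abstract describes a balanced build and a nearest-neighbour search for k-d trees. Below is a minimal, unverified Scala sketch of those two operations, intended only as an informal illustration of the algorithmic idea; the names KdTreeSketch, build and nearest are ad-hoc choices, and the AFP entry itself works with HOL-Analysis vectors and comes with machine-checked proofs.)

    // Unverified illustration of the k-d tree operations mentioned in the KD_Tree abstract.
    // Points are plain Array[Double] of a fixed dimension k; the input to build must be non-empty.
    object KdTreeSketch {
      sealed trait Tree
      final case class Leaf(p: Array[Double]) extends Tree
      final case class Node(axis: Int, split: Double, left: Tree, right: Tree) extends Tree

      // Balanced build: cycle through the axes and split at the median of the current axis.
      def build(ps: Vector[Array[Double]], k: Int, depth: Int = 0): Tree =
        if (ps.size == 1) Leaf(ps.head)
        else {
          val axis   = depth % k
          val sorted = ps.sortBy(_(axis))
          val (l, r) = sorted.splitAt(sorted.size / 2)
          Node(axis, l.last(axis), build(l, k, depth + 1), build(r, k, depth + 1))
        }

      private def dist2(a: Array[Double], b: Array[Double]): Double =
        a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

      // Nearest-neighbour search with the usual pruning: descend towards the query first,
      // and visit the other subtree only if the splitting plane is closer than the best point so far.
      def nearest(t: Tree, q: Array[Double]): Array[Double] = t match {
        case Leaf(p) => p
        case Node(axis, split, l, r) =>
          val (near, far) = if (q(axis) <= split) (l, r) else (r, l)
          var best = nearest(near, q)
          val planeDist = q(axis) - split
          if (planeDist * planeDist < dist2(best, q)) {
            val cand = nearest(far, q)
            if (dist2(cand, q) < dist2(best, q)) best = cand
          }
          best
      }
    }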
diff --git a/web/entries/Key_Agreement_Strong_Adversaries.html b/web/entries/Key_Agreement_Strong_Adversaries.html
--- a/web/entries/Key_Agreement_Strong_Adversaries.html
+++ b/web/entries/Key_Agreement_Strong_Adversaries.html
@@ -1,222 +1,222 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Refining Authenticated Key Agreement with Strong Adversaries - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>efining
<font class="first">A</font>uthenticated
<font class="first">K</font>ey
<font class="first">A</font>greement
with
<font class="first">S</font>trong
<font class="first">A</font>dversaries
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Refining Authenticated Key Agreement with Strong Adversaries</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Joseph Lallemand (joseph /dot/ lallemand /at/ loria /dot/ fr) and
Christoph Sprenger (sprenger /at/ inf /dot/ ethz /dot/ ch)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-01-31</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We develop a family of key agreement protocols that are correct by
construction. Our work substantially extends prior work on developing
security protocols by refinement. First, we strengthen the adversary
by allowing him to compromise different resources of protocol
participants, such as their long-term keys or their session keys. This
enables the systematic development of protocols that ensure strong
properties such as perfect forward secrecy. Second, we broaden the
class of protocols supported to include those with non-atomic keys and
equationally defined cryptographic operators. We use these extensions
to develop key agreement protocols including signed Diffie-Hellman and
-the core of IKEv1 and SKEME.</div></td>
+the core of IKEv1 and SKEME.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Key_Agreement_Strong_Adversaries-AFP,
author = {Joseph Lallemand and Christoph Sprenger},
title = {Refining Authenticated Key Agreement with Strong Adversaries},
journal = {Archive of Formal Proofs},
month = jan,
year = 2017,
note = {\url{http://isa-afp.org/entries/Key_Agreement_Strong_Adversaries.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Key_Agreement_Strong_Adversaries/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Key_Agreement_Strong_Adversaries/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Key_Agreement_Strong_Adversaries/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Key_Agreement_Strong_Adversaries-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Key_Agreement_Strong_Adversaries-2019-06-11.tar.gz">
afp-Key_Agreement_Strong_Adversaries-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Key_Agreement_Strong_Adversaries-2018-08-16.tar.gz">
afp-Key_Agreement_Strong_Adversaries-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Key_Agreement_Strong_Adversaries-2017-10-10.tar.gz">
afp-Key_Agreement_Strong_Adversaries-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Key_Agreement_Strong_Adversaries-2017-02-03.tar.gz">
afp-Key_Agreement_Strong_Adversaries-2017-02-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Kleene_Algebra.html b/web/entries/Kleene_Algebra.html
--- a/web/entries/Kleene_Algebra.html
+++ b/web/entries/Kleene_Algebra.html
@@ -1,266 +1,266 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Kleene Algebra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">K</font>leene
<font class="first">A</font>lgebra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Kleene Algebra</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Alasdair Armstrong,
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a> and
Tjark Weber (tjark /dot/ weber /at/ it /dot/ uu /dot/ se)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-01-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
These files contain a formalisation of variants of Kleene algebras and
their most important models as axiomatic type classes in Isabelle/HOL.
Kleene algebras are foundational structures in computing with
applications ranging from automata and language theory to computational
modeling, program construction and verification.
<p>
We start with formalising dioids, which are additively idempotent
semirings, and expand them by axiomatisations of the Kleene star for
finite iteration and an omega operation for infinite iteration. We
show that powersets over a given monoid, (regular) languages, sets of
paths in a graph, sets of computation traces, binary relations and
formal power series form Kleene algebras, and consider further models
based on lattices, max-plus semirings and min-plus semirings. We also
demonstrate that dioids are closed under the formation of matrices
(proofs for Kleene algebras remain to be completed).
<p>
On the one hand we have aimed at a reference formalisation of variants
of Kleene algebras that covers a wide range of variants and the core
theorems in a structured and modular way and provides readable proofs
at text book level. On the other hand, we intend to use this algebraic
hierarchy and its models as a generic algebraic middle-layer from which
-programming applications can quickly be explored, implemented and verified.</div></td>
+programming applications can quickly be explored, implemented and verified.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Kleene_Algebra-AFP,
author = {Alasdair Armstrong and Georg Struth and Tjark Weber},
title = {Kleene Algebra},
journal = {Archive of Formal Proofs},
month = jan,
year = 2013,
note = {\url{http://isa-afp.org/entries/Kleene_Algebra.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="KAD.html">KAD</a>, <a href="KAT_and_DRA.html">KAT_and_DRA</a>, <a href="Multirelations.html">Multirelations</a>, <a href="Quantales.html">Quantales</a>, <a href="Regular_Algebras.html">Regular_Algebras</a>, <a href="Relation_Algebra.html">Relation_Algebra</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Kleene_Algebra/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Kleene_Algebra/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Kleene_Algebra/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Kleene_Algebra-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Kleene_Algebra-2019-06-11.tar.gz">
afp-Kleene_Algebra-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Kleene_Algebra-2018-08-16.tar.gz">
afp-Kleene_Algebra-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Kleene_Algebra-2017-10-10.tar.gz">
afp-Kleene_Algebra-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Kleene_Algebra-2016-12-17.tar.gz">
afp-Kleene_Algebra-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Kleene_Algebra-2016-02-22.tar.gz">
afp-Kleene_Algebra-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Kleene_Algebra-2015-05-27.tar.gz">
afp-Kleene_Algebra-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Kleene_Algebra-2014-08-28.tar.gz">
afp-Kleene_Algebra-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Kleene_Algebra-2013-12-11.tar.gz">
afp-Kleene_Algebra-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Kleene_Algebra-2013-11-17.tar.gz">
afp-Kleene_Algebra-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Kleene_Algebra-2013-03-02.tar.gz">
afp-Kleene_Algebra-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Kleene_Algebra-2013-02-16.tar.gz">
afp-Kleene_Algebra-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Kleene_Algebra-2013-01-16.tar.gz">
afp-Kleene_Algebra-2013-01-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Knot_Theory.html b/web/entries/Knot_Theory.html
--- a/web/entries/Knot_Theory.html
+++ b/web/entries/Knot_Theory.html
@@ -1,219 +1,219 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Knot Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">K</font>not
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Knot Theory</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
T.V.H. Prathamesh (prathamesh /at/ imsc /dot/ res /dot/ in)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-01-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This work contains a formalization of some topics in knot theory.
The concepts that were formalized include definitions of tangles, links,
framed links and link/tangle equivalence. The formalization is based on a
formulation of links in terms of tangles. We further construct and prove the
invariance of the Bracket polynomial. The Bracket polynomial is an invariant of
framed links closely related to the Jones polynomial. This is perhaps the first
-attempt to formalize any aspect of knot theory in an interactive proof assistant.</div></td>
+attempt to formalize any aspect of knot theory in an interactive proof assistant.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Knot_Theory-AFP,
author = {T.V.H. Prathamesh},
title = {Knot Theory},
journal = {Archive of Formal Proofs},
month = jan,
year = 2016,
note = {\url{http://isa-afp.org/entries/Knot_Theory.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Matrix_Tensor.html">Matrix_Tensor</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Knot_Theory/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Knot_Theory/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Knot_Theory/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Knot_Theory-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Knot_Theory-2019-06-11.tar.gz">
afp-Knot_Theory-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Knot_Theory-2018-08-16.tar.gz">
afp-Knot_Theory-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Knot_Theory-2017-10-10.tar.gz">
afp-Knot_Theory-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Knot_Theory-2016-12-17.tar.gz">
afp-Knot_Theory-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Knot_Theory-2016-02-22.tar.gz">
afp-Knot_Theory-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Knot_Theory-2016-01-20.tar.gz">
afp-Knot_Theory-2016-01-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Knuth_Morris_Pratt.html b/web/entries/Knuth_Morris_Pratt.html
--- a/web/entries/Knuth_Morris_Pratt.html
+++ b/web/entries/Knuth_Morris_Pratt.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The string search algorithm by Knuth, Morris and Pratt - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
string
search
algorithm
by
<font class="first">K</font>nuth,
<font class="first">M</font>orris
and
<font class="first">P</font>ratt
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The string search algorithm by Knuth, Morris and Pratt</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Fabian Hellauer (hellauer /at/ in /dot/ tum /dot/ de) and
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-12-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The Knuth-Morris-Pratt algorithm is often used to show that the
problem of finding a string <i>s</i> in a text
<i>t</i> can be solved deterministically in
<i>O(|s| + |t|)</i> time. We use the Isabelle
Refinement Framework to formulate and verify the algorithm. Via
refinement, we apply some optimisations and finally use the
<em>Sepref</em> tool to obtain executable code in
-<em>Imperative/HOL</em>.</div></td>
+<em>Imperative/HOL</em>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Knuth_Morris_Pratt-AFP,
author = {Fabian Hellauer and Peter Lammich},
title = {The string search algorithm by Knuth, Morris and Pratt},
journal = {Archive of Formal Proofs},
month = dec,
year = 2017,
note = {\url{http://isa-afp.org/entries/Knuth_Morris_Pratt.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Knuth_Morris_Pratt/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Knuth_Morris_Pratt/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Knuth_Morris_Pratt/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Knuth_Morris_Pratt-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Knuth_Morris_Pratt-2019-06-11.tar.gz">
afp-Knuth_Morris_Pratt-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Knuth_Morris_Pratt-2018-08-16.tar.gz">
afp-Knuth_Morris_Pratt-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Knuth_Morris_Pratt-2017-12-18.tar.gz">
afp-Knuth_Morris_Pratt-2017-12-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
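(Editorial aside, not part of the generated page above: the Knuth_Morris_Pratt abstract summarises the classic linear-time search. As an informal companion, here is a small, unverified Scala sketch of the same idea: precompute a failure table for the pattern s, then scan the text t once, giving O(|s| + |t|) time overall. The names KMP, failureTable and search are illustrative only; the AFP entry is the verified Isabelle/HOL formalisation refined down to Imperative/HOL code.)

    // Unverified illustration of the Knuth-Morris-Pratt search mentioned in the abstract above.
    object KMP {
      // failureTable(s)(i) = length of the longest proper prefix of s that is also a suffix of s.take(i + 1)
      private def failureTable(s: String): Array[Int] = {
        val f = Array.fill(s.length)(0)
        var k = 0
        for (i <- 1 until s.length) {
          while (k > 0 && s(k) != s(i)) k = f(k - 1)
          if (s(k) == s(i)) k += 1
          f(i) = k
        }
        f
      }

      // Index of the first occurrence of s in t, or -1 if s does not occur in t.
      def search(s: String, t: String): Int =
        if (s.isEmpty) 0
        else {
          val f = failureTable(s)
          var k = 0
          var i = 0
          var found = -1
          while (i < t.length && found < 0) {
            while (k > 0 && s(k) != t(i)) k = f(k - 1)
            if (s(k) == t(i)) k += 1
            if (k == s.length) found = i - s.length + 1
            i += 1
          }
          found
        }
    }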
diff --git a/web/entries/Koenigsberg_Friendship.html b/web/entries/Koenigsberg_Friendship.html
--- a/web/entries/Koenigsberg_Friendship.html
+++ b/web/entries/Koenigsberg_Friendship.html
@@ -1,244 +1,244 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Königsberg Bridge Problem and the Friendship Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">K</font>önigsberg
<font class="first">B</font>ridge
<font class="first">P</font>roblem
and
the
<font class="first">F</font>riendship
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Königsberg Bridge Problem and the Friendship Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-07-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This development provides a formalization of undirected graphs and simple graphs, which are based on Benedikt Nordhoff and Peter Lammich's simple formalization of labelled directed graphs in the archive. Then, with our formalization of graphs, we show both necessary and sufficient conditions for Eulerian trails and circuits as well as the fact that the Königsberg Bridge Problem does not have a solution. In addition, we show the Friendship Theorem in simple graphs.</div></td>
+ <td class="abstract mathjax_process">This development provides a formalization of undirected graphs and simple graphs, which are based on Benedikt Nordhoff and Peter Lammich's simple formalization of labelled directed graphs in the archive. Then, with our formalization of graphs, we show both necessary and sufficient conditions for Eulerian trails and circuits as well as the fact that the Königsberg Bridge Problem does not have a solution. In addition, we show the Friendship Theorem in simple graphs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Koenigsberg_Friendship-AFP,
author = {Wenda Li},
title = {The Königsberg Bridge Problem and the Friendship Theorem},
journal = {Archive of Formal Proofs},
month = jul,
year = 2013,
note = {\url{http://isa-afp.org/entries/Koenigsberg_Friendship.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Dijkstra_Shortest_Path.html">Dijkstra_Shortest_Path</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Koenigsberg_Friendship/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Koenigsberg_Friendship/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Koenigsberg_Friendship/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Koenigsberg_Friendship-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Koenigsberg_Friendship-2019-06-11.tar.gz">
afp-Koenigsberg_Friendship-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Koenigsberg_Friendship-2018-08-16.tar.gz">
afp-Koenigsberg_Friendship-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Koenigsberg_Friendship-2017-10-10.tar.gz">
afp-Koenigsberg_Friendship-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Koenigsberg_Friendship-2016-12-17.tar.gz">
afp-Koenigsberg_Friendship-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Koenigsberg_Friendship-2016-02-22.tar.gz">
afp-Koenigsberg_Friendship-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Koenigsberg_Friendship-2015-05-27.tar.gz">
afp-Koenigsberg_Friendship-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Koenigsberg_Friendship-2014-08-28.tar.gz">
afp-Koenigsberg_Friendship-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Koenigsberg_Friendship-2013-12-11.tar.gz">
afp-Koenigsberg_Friendship-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Koenigsberg_Friendship-2013-11-17.tar.gz">
afp-Koenigsberg_Friendship-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Koenigsberg_Friendship-2013-07-26.tar.gz">
afp-Koenigsberg_Friendship-2013-07-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Kruskal.html b/web/entries/Kruskal.html
--- a/web/entries/Kruskal.html
+++ b/web/entries/Kruskal.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Kruskal's Algorithm for Minimum Spanning Forest - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">K</font>ruskal's
<font class="first">A</font>lgorithm
for
<font class="first">M</font>inimum
<font class="first">S</font>panning
<font class="first">F</font>orest
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Kruskal's Algorithm for Minimum Spanning Forest</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://in.tum.de/~haslbema/">Maximilian P.L. Haslbeck</a>,
Peter Lammich and
Julian Biendarra
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-02-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This Isabelle/HOL formalization defines a greedy algorithm for finding
a minimum weight basis on a weighted matroid and proves its
correctness. This algorithm is an abstract version of Kruskal's
algorithm. We interpret the abstract algorithm for the cycle matroid
(i.e. forests in a graph) and refine it to imperative executable code
using an efficient union-find data structure. Our formalization can
be instantiated for different graph representations. We provide
-instantiations for undirected graphs and symmetric directed graphs.</div></td>
+instantiations for undirected graphs and symmetric directed graphs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Kruskal-AFP,
author = {Maximilian P.L. Haslbeck and Peter Lammich and Julian Biendarra},
title = {Kruskal's Algorithm for Minimum Spanning Forest},
journal = {Archive of Formal Proofs},
month = feb,
year = 2019,
note = {\url{http://isa-afp.org/entries/Kruskal.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Collections.html">Collections</a>, <a href="Matroids.html">Matroids</a>, <a href="Refine_Imperative_HOL.html">Refine_Imperative_HOL</a>, <a href="Refine_Monadic.html">Refine_Monadic</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Kruskal/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Kruskal/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Kruskal/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Kruskal-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Kruskal-2019-06-11.tar.gz">
afp-Kruskal-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Kruskal-2019-02-19.tar.gz">
afp-Kruskal-2019-02-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Kuratowski_Closure_Complement.html b/web/entries/Kuratowski_Closure_Complement.html
--- a/web/entries/Kuratowski_Closure_Complement.html
+++ b/web/entries/Kuratowski_Closure_Complement.html
@@ -1,206 +1,206 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Kuratowski Closure-Complement Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">K</font>uratowski
<font class="first">C</font>losure-Complement
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Kuratowski Closure-Complement Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a> and
Gianpaolo Gioiosa
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We discuss a topological curiosity discovered by Kuratowski (1922):
the fact that the number of distinct operators on a topological space
generated by compositions of closure and complement never exceeds 14,
and is exactly 14 in the case of R. In addition, we prove a theorem
due to Chagrov (1982) that classifies topological spaces according to
-the number of such operators they support.</div></td>
+the number of such operators they support.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Kuratowski_Closure_Complement-AFP,
author = {Peter Gammie and Gianpaolo Gioiosa},
title = {The Kuratowski Closure-Complement Theorem},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Kuratowski_Closure_Complement.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Kuratowski_Closure_Complement/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Kuratowski_Closure_Complement/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Kuratowski_Closure_Complement/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Kuratowski_Closure_Complement-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Kuratowski_Closure_Complement-2019-06-11.tar.gz">
afp-Kuratowski_Closure_Complement-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Kuratowski_Closure_Complement-2018-08-16.tar.gz">
afp-Kuratowski_Closure_Complement-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Kuratowski_Closure_Complement-2017-10-27.tar.gz">
afp-Kuratowski_Closure_Complement-2017-10-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LLL_Basis_Reduction.html b/web/entries/LLL_Basis_Reduction.html
--- a/web/entries/LLL_Basis_Reduction.html
+++ b/web/entries/LLL_Basis_Reduction.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A verified LLL algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
verified
<font class="first">L</font>LL
algorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A verified LLL algorithm</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/users/bottesch/">Ralph Bottesch</a>,
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>,
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Maximilian Haslbeck</a>,
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a> and
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-02-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as
LLL algorithm, is an algorithm to find a basis with short, nearly
orthogonal vectors of an integer lattice. Thereby, it can also be seen
as an approximation to solve the shortest vector problem (SVP), which
is an NP-hard problem, where the approximation quality solely depends
on the dimension of the lattice, but not the lattice itself. The
algorithm also possesses many applications in diverse fields of
computer science, from cryptanalysis to number theory, but it is
especially well-known since it was used to implement the first
polynomial-time algorithm to factor polynomials. In this work we
present the first mechanized soundness proof of the LLL algorithm to
compute short vectors in lattices. The formalization follows a
-textbook by von zur Gathen and Gerhard.</div></td>
+textbook by von zur Gathen and Gerhard.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-04-16]: Integrated formal complexity bounds (Haslbeck, Thiemann)
[2018-05-25]: Integrated much faster LLL implementation based on integer arithmetic (Bottesch, Haslbeck, Thiemann)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LLL_Basis_Reduction-AFP,
author = {Ralph Bottesch and Jose Divasón and Maximilian Haslbeck and Sebastiaan Joosten and René Thiemann and Akihisa Yamada},
title = {A verified LLL algorithm},
journal = {Archive of Formal Proofs},
month = feb,
year = 2018,
note = {\url{http://isa-afp.org/entries/LLL_Basis_Reduction.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Algebraic_Numbers.html">Algebraic_Numbers</a>, <a href="Berlekamp_Zassenhaus.html">Berlekamp_Zassenhaus</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Linear_Inequalities.html">Linear_Inequalities</a>, <a href="LLL_Factorization.html">LLL_Factorization</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LLL_Basis_Reduction/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LLL_Basis_Reduction/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LLL_Basis_Reduction/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LLL_Basis_Reduction-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LLL_Basis_Reduction-2019-06-11.tar.gz">
afp-LLL_Basis_Reduction-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LLL_Basis_Reduction-2018-09-07.tar.gz">
afp-LLL_Basis_Reduction-2018-09-07.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LLL_Basis_Reduction-2018-08-16.tar.gz">
afp-LLL_Basis_Reduction-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LLL_Basis_Reduction-2018-02-03.tar.gz">
afp-LLL_Basis_Reduction-2018-02-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LLL_Factorization.html b/web/entries/LLL_Factorization.html
--- a/web/entries/LLL_Factorization.html
+++ b/web/entries/LLL_Factorization.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A verified factorization algorithm for integer polynomials with polynomial complexity - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
verified
factorization
algorithm
for
integer
polynomials
with
polynomial
complexity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A verified factorization algorithm for integer polynomials with polynomial complexity</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>,
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a> and
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-02-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Short vectors in lattices and factors of integer polynomials are
related. Each factor of an integer polynomial belongs to a certain
lattice. When factoring polynomials, the condition that we are looking
for an irreducible polynomial means that we must look for a small
element in a lattice, which can be done by a basis reduction
algorithm. In this development we formalize this connection and
thereby one main application of the LLL basis reduction algorithm: an
algorithm to factor square-free integer polynomials which runs in
polynomial time. The work is based on our previous
Berlekamp–Zassenhaus development, where the exponential reconstruction
phase has been replaced by the polynomial-time basis reduction
algorithm. Thanks to this formalization we found a serious flaw in a
-textbook.</div></td>
+textbook.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LLL_Factorization-AFP,
author = {Jose Divasón and Sebastiaan Joosten and René Thiemann and Akihisa Yamada},
title = {A verified factorization algorithm for integer polynomials with polynomial complexity},
journal = {Archive of Formal Proofs},
month = feb,
year = 2018,
note = {\url{http://isa-afp.org/entries/LLL_Factorization.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="LLL_Basis_Reduction.html">LLL_Basis_Reduction</a>, <a href="Perron_Frobenius.html">Perron_Frobenius</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LLL_Factorization/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LLL_Factorization/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LLL_Factorization/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LLL_Factorization-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LLL_Factorization-2019-06-11.tar.gz">
afp-LLL_Factorization-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LLL_Factorization-2018-08-16.tar.gz">
afp-LLL_Factorization-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LLL_Factorization-2018-02-07.tar.gz">
afp-LLL_Factorization-2018-02-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LOFT.html b/web/entries/LOFT.html
--- a/web/entries/LOFT.html
+++ b/web/entries/LOFT.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>LOFT — Verified Migration of Linux Firewalls to SDN - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>OFT
<font class="first">—</font>
<font class="first">V</font>erified
<font class="first">M</font>igration
of
<font class="first">L</font>inux
<font class="first">F</font>irewalls
to
<font class="first">S</font>DN
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">LOFT — Verified Migration of Linux Firewalls to SDN</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://liftm.de">Julius Michaelis</a> and
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-10-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present LOFT — Linux firewall OpenFlow Translator, a system that
transforms the main routing table and FORWARD chain of iptables of a
Linux-based firewall into a set of static OpenFlow rules. Our
implementation is verified against a model of a simplified Linux-based
router and we can directly show how much of the original functionality
-is preserved.</div></td>
+is preserved.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LOFT-AFP,
author = {Julius Michaelis and Cornelius Diekmann},
title = {LOFT — Verified Migration of Linux Firewalls to SDN},
journal = {Archive of Formal Proofs},
month = oct,
year = 2016,
note = {\url{http://isa-afp.org/entries/LOFT.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Iptables_Semantics.html">Iptables_Semantics</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LOFT/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LOFT/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LOFT/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LOFT-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LOFT-2019-06-11.tar.gz">
afp-LOFT-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LOFT-2018-08-16.tar.gz">
afp-LOFT-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LOFT-2017-10-10.tar.gz">
afp-LOFT-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LOFT-2016-12-17.tar.gz">
afp-LOFT-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-LOFT-2016-10-21.tar.gz">
afp-LOFT-2016-10-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LTL.html b/web/entries/LTL.html
--- a/web/entries/LTL.html
+++ b/web/entries/LTL.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Linear Temporal Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>inear
<font class="first">T</font>emporal
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Linear Temporal Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Salomon Sickert (s /dot/ sickert /at/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
Benedikt Seidl (benedikt /dot/ seidl /at/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-03-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This theory provides a formalisation of linear temporal logic (LTL)
and unifies previous formalisations within the AFP. This entry
establishes syntax and semantics for this logic and decouples it from
existing entries, yielding a common environment for theories reasoning
about LTL. Furthermore a parser written in SML and an executable
-simplifier are provided.</div></td>
+simplifier are provided.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2019-03-12]:
Support for additional operators, implementation of common equivalence relations,
definition of syntactic fragments of LTL and the minimal disjunctive normal form. <br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LTL-AFP,
author = {Salomon Sickert},
title = {Linear Temporal Logic},
journal = {Archive of Formal Proofs},
month = mar,
year = 2016,
note = {\url{http://isa-afp.org/entries/LTL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Boolean_Expression_Checkers.html">Boolean_Expression_Checkers</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="LTL_Master_Theorem.html">LTL_Master_Theorem</a>, <a href="LTL_to_DRA.html">LTL_to_DRA</a>, <a href="LTL_to_GBA.html">LTL_to_GBA</a>, <a href="Promela.html">Promela</a>, <a href="Stuttering_Equivalence.html">Stuttering_Equivalence</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LTL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LTL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LTL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LTL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LTL-2019-06-11.tar.gz">
afp-LTL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LTL-2018-08-16.tar.gz">
afp-LTL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LTL-2017-10-10.tar.gz">
afp-LTL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LTL-2016-12-17.tar.gz">
afp-LTL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-LTL-2016-03-02.tar.gz">
afp-LTL-2016-03-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LTL_Master_Theorem.html b/web/entries/LTL_Master_Theorem.html
--- a/web/entries/LTL_Master_Theorem.html
+++ b/web/entries/LTL_Master_Theorem.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Compositional and Unified Translation of LTL into ω-Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">C</font>ompositional
and
<font class="first">U</font>nified
<font class="first">T</font>ranslation
of
<font class="first">L</font>TL
into
ω-Automata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Compositional and Unified Translation of LTL into ω-Automata</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Benedikt Seidl (benedikt /dot/ seidl /at/ tum /dot/ de) and
Salomon Sickert (s /dot/ sickert /at/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-04-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formalisation of the unified translation approach of
linear temporal logic (LTL) into ω-automata from [1]. This approach
decomposes LTL formulas into ``simple'' languages and allows
a clear separation of concerns: first, we formalise the purely logical
result yielding this decomposition; second, we instantiate this
generic theory to obtain a construction for deterministic
(state-based) Rabin automata (DRA). We extract from this particular
instantiation an executable tool translating LTL to DRAs. To the best
of our knowledge this is the first verified translation from LTL to
DRAs that is proven to be double exponential in the worst case which
asymptotically matches the known lower bound.
<p>
[1] Javier Esparza, Jan Kretínský, Salomon Sickert. One Theorem to Rule Them All:
-A Unified Translation of LTL into ω-Automata. LICS 2018</div></td>
+A Unified Translation of LTL into ω-Automata. LICS 2018</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LTL_Master_Theorem-AFP,
author = {Benedikt Seidl and Salomon Sickert},
title = {A Compositional and Unified Translation of LTL into ω-Automata},
journal = {Archive of Formal Proofs},
month = apr,
year = 2019,
note = {\url{http://isa-afp.org/entries/LTL_Master_Theorem.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Deriving.html">Deriving</a>, <a href="LTL.html">LTL</a>, <a href="Transition_Systems_and_Automata.html">Transition_Systems_and_Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LTL_Master_Theorem/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LTL_Master_Theorem/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LTL_Master_Theorem/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LTL_Master_Theorem-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LTL_Master_Theorem-2019-06-11.tar.gz">
afp-LTL_Master_Theorem-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LTL_Master_Theorem-2019-04-17.tar.gz">
afp-LTL_Master_Theorem-2019-04-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LTL_to_DRA.html b/web/entries/LTL_to_DRA.html
--- a/web/entries/LTL_to_DRA.html
+++ b/web/entries/LTL_to_DRA.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>onverting
<font class="first">L</font>inear
<font class="first">T</font>emporal
<font class="first">L</font>ogic
to
<font class="first">D</font>eterministic
<font class="first">(</font>Generalized)
<font class="first">R</font>abin
<font class="first">A</font>utomata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Salomon Sickert (s /dot/ sickert /at/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-09-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Recently, Javier Esparza and Jan Kretinsky proposed a new method directly translating linear temporal logic (LTL) formulas to deterministic (generalized) Rabin automata. Compared to the existing approaches of constructing a non-deterministic Buechi-automaton in the first step and then applying a determinization procedure (e.g. some variant of Safra's construction) in a second step, this new approach preserves a relation between the formula and the states of the resulting automaton. While the old approach produced a monolithic structure, the new method is compositional. Furthermore, in some cases the resulting automata are much smaller than the automata generated by existing approaches. In order to ensure the correctness of the construction, this entry contains a complete formalisation and verification of the translation. Furthermore from this basis executable code is generated.</div></td>
+ <td class="abstract mathjax_process">Recently, Javier Esparza and Jan Kretinsky proposed a new method directly translating linear temporal logic (LTL) formulas to deterministic (generalized) Rabin automata. Compared to the existing approaches of constructing a non-deterministic Buechi-automaton in the first step and then applying a determinization procedure (e.g. some variant of Safra's construction) in a second step, this new approach preserves a relation between the formula and the states of the resulting automaton. While the old approach produced a monolithic structure, the new method is compositional. Furthermore, in some cases the resulting automata are much smaller than the automata generated by existing approaches. In order to ensure the correctness of the construction, this entry contains a complete formalisation and verification of the translation. Furthermore from this basis executable code is generated.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-09-23]: Enable code export for the eager unfolding optimisation and reduce running time of the generated tool. Moreover, add support for the mlton SML compiler.<br>
[2016-03-24]: Make use of the LTL entry and include the simplifier.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LTL_to_DRA-AFP,
author = {Salomon Sickert},
title = {Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata},
journal = {Archive of Formal Proofs},
month = sep,
year = 2015,
note = {\url{http://isa-afp.org/entries/LTL_to_DRA.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Boolean_Expression_Checkers.html">Boolean_Expression_Checkers</a>, <a href="KBPs.html">KBPs</a>, <a href="List-Index.html">List-Index</a>, <a href="LTL.html">LTL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LTL_to_DRA/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LTL_to_DRA/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LTL_to_DRA/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LTL_to_DRA-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LTL_to_DRA-2019-06-11.tar.gz">
afp-LTL_to_DRA-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LTL_to_DRA-2018-08-16.tar.gz">
afp-LTL_to_DRA-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LTL_to_DRA-2017-10-10.tar.gz">
afp-LTL_to_DRA-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LTL_to_DRA-2016-12-17.tar.gz">
afp-LTL_to_DRA-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-LTL_to_DRA-2016-02-22.tar.gz">
afp-LTL_to_DRA-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-LTL_to_DRA-2015-09-04.tar.gz">
afp-LTL_to_DRA-2015-09-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LTL_to_GBA.html b/web/entries/LTL_to_GBA.html
--- a/web/entries/LTL_to_GBA.html
+++ b/web/entries/LTL_to_GBA.html
@@ -1,246 +1,246 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Converting Linear-Time Temporal Logic to Generalized Büchi Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>onverting
<font class="first">L</font>inear-Time
<font class="first">T</font>emporal
<font class="first">L</font>ogic
to
<font class="first">G</font>eneralized
<font class="first">B</font>üchi
<font class="first">A</font>utomata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Converting Linear-Time Temporal Logic to Generalized Büchi Automata</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Alexander Schimpf (schimpfa /at/ informatik /dot/ uni-freiburg /dot/ de) and
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-05-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize linear-time temporal logic (LTL) and the algorithm by Gerth
et al. to convert LTL formulas to generalized Büchi automata.
We also formalize some syntactic rewrite rules that can be applied to
optimize the LTL formula before conversion.
Moreover, we integrate the Stuttering Equivalence AFP-Entry by Stefan
Merz, adapting the lemma that next-free LTL formulas cannot distinguish
between stuttering equivalent runs to our setting.
<p>
We use the Isabelle Refinement and Collection framework, as well as the
Autoref tool, to obtain a refined version of our algorithm, from which
-efficiently executable code can be extracted.</div></td>
+efficiently executable code can be extracted.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LTL_to_GBA-AFP,
author = {Alexander Schimpf and Peter Lammich},
title = {Converting Linear-Time Temporal Logic to Generalized Büchi Automata},
journal = {Archive of Formal Proofs},
month = may,
year = 2014,
note = {\url{http://isa-afp.org/entries/LTL_to_GBA.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CAVA_Automata.html">CAVA_Automata</a>, <a href="LTL.html">LTL</a>, <a href="Stuttering_Equivalence.html">Stuttering_Equivalence</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LTL_to_GBA/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LTL_to_GBA/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LTL_to_GBA/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LTL_to_GBA-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LTL_to_GBA-2019-06-11.tar.gz">
afp-LTL_to_GBA-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LTL_to_GBA-2018-08-16.tar.gz">
afp-LTL_to_GBA-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LTL_to_GBA-2017-10-10.tar.gz">
afp-LTL_to_GBA-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LTL_to_GBA-2016-12-17.tar.gz">
afp-LTL_to_GBA-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-LTL_to_GBA-2016-02-22.tar.gz">
afp-LTL_to_GBA-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-LTL_to_GBA-2015-05-27.tar.gz">
afp-LTL_to_GBA-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-LTL_to_GBA-2014-08-28.tar.gz">
afp-LTL_to_GBA-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-LTL_to_GBA-2014-05-29.tar.gz">
afp-LTL_to_GBA-2014-05-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lam-ml-Normalization.html b/web/entries/Lam-ml-Normalization.html
--- a/web/entries/Lam-ml-Normalization.html
+++ b/web/entries/Lam-ml-Normalization.html
@@ -1,258 +1,258 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Strong Normalization of Moggis's Computational Metalanguage - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>trong
<font class="first">N</font>ormalization
of
<font class="first">M</font>oggis's
<font class="first">C</font>omputational
<font class="first">M</font>etalanguage
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Strong Normalization of Moggis's Computational Metalanguage</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Christian Doczkal (doczkal /at/ ps /dot/ uni-saarland /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-08-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Handling variable binding is one of the main difficulties in formal proofs. In this context, Moggi's computational metalanguage serves as an interesting case study. It features monadic types and a commuting conversion rule that rearranges the binding structure. Lindley and Stark have given an elegant proof of strong normalization for this calculus. The key construction in their proof is a notion of relational TT-lifting, using stacks of elimination contexts to obtain a Girard-Tait style logical relation. I give a formalization of their proof in Isabelle/HOL-Nominal with a particular emphasis on the treatment of bound variables.</div></td>
+ <td class="abstract mathjax_process">Handling variable binding is one of the main difficulties in formal proofs. In this context, Moggi's computational metalanguage serves as an interesting case study. It features monadic types and a commuting conversion rule that rearranges the binding structure. Lindley and Stark have given an elegant proof of strong normalization for this calculus. The key construction in their proof is a notion of relational TT-lifting, using stacks of elimination contexts to obtain a Girard-Tait style logical relation. I give a formalization of their proof in Isabelle/HOL-Nominal with a particular emphasis on the treatment of bound variables.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lam-ml-Normalization-AFP,
author = {Christian Doczkal},
title = {Strong Normalization of Moggi's Computational Metalanguage},
journal = {Archive of Formal Proofs},
month = aug,
year = 2010,
note = {\url{http://isa-afp.org/entries/Lam-ml-Normalization.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lam-ml-Normalization/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lam-ml-Normalization/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lam-ml-Normalization/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lam-ml-Normalization-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lam-ml-Normalization-2019-06-11.tar.gz">
afp-Lam-ml-Normalization-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lam-ml-Normalization-2018-08-16.tar.gz">
afp-Lam-ml-Normalization-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lam-ml-Normalization-2017-10-10.tar.gz">
afp-Lam-ml-Normalization-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lam-ml-Normalization-2016-12-17.tar.gz">
afp-Lam-ml-Normalization-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Lam-ml-Normalization-2016-02-22.tar.gz">
afp-Lam-ml-Normalization-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Lam-ml-Normalization-2015-05-27.tar.gz">
afp-Lam-ml-Normalization-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Lam-ml-Normalization-2014-08-28.tar.gz">
afp-Lam-ml-Normalization-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Lam-ml-Normalization-2013-12-11.tar.gz">
afp-Lam-ml-Normalization-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Lam-ml-Normalization-2013-11-17.tar.gz">
afp-Lam-ml-Normalization-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Lam-ml-Normalization-2013-02-16.tar.gz">
afp-Lam-ml-Normalization-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Lam-ml-Normalization-2012-05-24.tar.gz">
afp-Lam-ml-Normalization-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Lam-ml-Normalization-2011-10-11.tar.gz">
afp-Lam-ml-Normalization-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Lam-ml-Normalization-2011-02-11.tar.gz">
afp-Lam-ml-Normalization-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Lam-ml-Normalization-2010-09-01.tar.gz">
afp-Lam-ml-Normalization-2010-09-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LambdaAuth.html b/web/entries/LambdaAuth.html
--- a/web/entries/LambdaAuth.html
+++ b/web/entries/LambdaAuth.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Generic Authenticated Data Structures - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">G</font>eneric
<font class="first">A</font>uthenticated
<font class="first">D</font>ata
<font class="first">S</font>tructures
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Generic Authenticated Data Structures</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Matthias Brun and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-05-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Authenticated data structures are a technique for outsourcing data
storage and maintenance to an untrusted server. The server is required
to produce an efficiently checkable and cryptographically secure proof
that it carried out precisely the requested computation. <a
href="https://doi.org/10.1145/2535838.2535851">Miller et
al.</a> introduced &lambda;&bull; (pronounced
<i>lambda auth</i>)&mdash;a functional programming
language with a built-in primitive authentication construct, which
supports a wide range of user-specified authenticated data structures
while guaranteeing certain correctness and security properties for all
well-typed programs. We formalize &lambda;&bull; and prove its
correctness and security properties. With Isabelle's help, we
uncover and repair several mistakes in the informal proofs and lemma
statements. Our findings are summarized in a <a
href="http://people.inf.ethz.ch/trayteld/papers/lambdaauth/lambdaauth.pdf">paper
-draft</a>.</div></td>
+draft</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LambdaAuth-AFP,
author = {Matthias Brun and Dmitriy Traytel},
title = {Formalization of Generic Authenticated Data Structures},
journal = {Archive of Formal Proofs},
month = may,
year = 2019,
note = {\url{http://isa-afp.org/entries/LambdaAuth.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Nominal2.html">Nominal2</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LambdaAuth/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LambdaAuth/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LambdaAuth/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LambdaAuth-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LambdaAuth-2019-06-11.tar.gz">
afp-LambdaAuth-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LambdaAuth-2019-05-15.tar.gz">
afp-LambdaAuth-2019-05-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LambdaMu.html b/web/entries/LambdaMu.html
--- a/web/entries/LambdaMu.html
+++ b/web/entries/LambdaMu.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The LambdaMu-calculus - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">L</font>ambdaMu-calculus
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The LambdaMu-calculus</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Cristina Matache (cris /dot/ matache /at/ gmail /dot/ com),
Victor B. F. Gomes (vb358 /at/ cl /dot/ cam /dot/ ac /dot/ uk) and
Dominic P. Mulligan (Dominic /dot/ Mulligan /at/ arm /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-08-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The propositions-as-types correspondence is ordinarily presented as
linking the metatheory of typed λ-calculi and the proof theory of
intuitionistic logic. Griffin observed that this correspondence could
be extended to classical logic through the use of control operators.
This observation set off a flurry of further research, leading to the
development of Parigot's λμ-calculus. In this work, we formalise the
λμ-calculus in Isabelle/HOL and prove several metatheoretical properties
-such as type preservation and progress.</div></td>
+such as type preservation and progress.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LambdaMu-AFP,
author = {Cristina Matache and Victor B. F. Gomes and Dominic P. Mulligan},
title = {The LambdaMu-calculus},
journal = {Archive of Formal Proofs},
month = aug,
year = 2017,
note = {\url{http://isa-afp.org/entries/LambdaMu.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LambdaMu/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LambdaMu/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LambdaMu/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LambdaMu-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LambdaMu-2019-06-11.tar.gz">
afp-LambdaMu-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LambdaMu-2018-08-16.tar.gz">
afp-LambdaMu-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LambdaMu-2017-10-10.tar.gz">
afp-LambdaMu-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LambdaMu-2017-08-21.tar.gz">
afp-LambdaMu-2017-08-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lambda_Free_EPO.html b/web/entries/Lambda_Free_EPO.html
--- a/web/entries/Lambda_Free_EPO.html
+++ b/web/entries/Lambda_Free_EPO.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
the
<font class="first">E</font>mbedding
<font class="first">P</font>ath
<font class="first">O</font>rder
for
<font class="first">L</font>ambda-Free
<font class="first">H</font>igher-Order
<font class="first">T</font>erms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Alexander Bentkamp (bentkamp /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-10-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This Isabelle/HOL formalization defines the Embedding Path Order (EPO)
for higher-order terms without lambda-abstraction and proves many
useful properties about it. In contrast to the lambda-free recursive
path orders, it does not fully coincide with RPO on first-order terms,
-but it is compatible with arbitrary higher-order contexts.</div></td>
+but it is compatible with arbitrary higher-order contexts.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lambda_Free_EPO-AFP,
author = {Alexander Bentkamp},
title = {Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms},
journal = {Archive of Formal Proofs},
month = oct,
year = 2018,
note = {\url{http://isa-afp.org/entries/Lambda_Free_EPO.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Lambda_Free_RPOs.html">Lambda_Free_RPOs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lambda_Free_EPO/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lambda_Free_EPO/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lambda_Free_EPO/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lambda_Free_EPO-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lambda_Free_EPO-2019-06-11.tar.gz">
afp-Lambda_Free_EPO-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lambda_Free_EPO-2018-10-21.tar.gz">
afp-Lambda_Free_EPO-2018-10-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lambda_Free_KBOs.html b/web/entries/Lambda_Free_KBOs.html
--- a/web/entries/Lambda_Free_KBOs.html
+++ b/web/entries/Lambda_Free_KBOs.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">K</font>nuth–Bendix
<font class="first">O</font>rders
for
<font class="first">L</font>ambda-Free
<font class="first">H</font>igher-Order
<font class="first">T</font>erms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Heiko Becker (hbecker /at/ mpi-sws /dot/ org),
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl),
Uwe Waldmann (uwe /at/ mpi-inf /dot/ mpg /dot/ de) and
Daniel Wand (dwand /at/ mpi-inf /dot/ mpg /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-11-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This Isabelle/HOL formalization defines Knuth–Bendix orders for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard transfinite KBO with subterm coefficients on first-order terms. It appears promising as the basis of a higher-order superposition calculus.</div></td>
+ <td class="abstract mathjax_process">This Isabelle/HOL formalization defines Knuth–Bendix orders for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard transfinite KBO with subterm coefficients on first-order terms. It appears promising as the basis of a higher-order superposition calculus.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lambda_Free_KBOs-AFP,
author = {Heiko Becker and Jasmin Christian Blanchette and Uwe Waldmann and Daniel Wand},
title = {Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms},
journal = {Archive of Formal Proofs},
month = nov,
year = 2016,
note = {\url{http://isa-afp.org/entries/Lambda_Free_KBOs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Lambda_Free_RPOs.html">Lambda_Free_RPOs</a>, <a href="Nested_Multisets_Ordinals.html">Nested_Multisets_Ordinals</a>, <a href="Polynomials.html">Polynomials</a>, <a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lambda_Free_KBOs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lambda_Free_KBOs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lambda_Free_KBOs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lambda_Free_KBOs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lambda_Free_KBOs-2019-06-11.tar.gz">
afp-Lambda_Free_KBOs-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lambda_Free_KBOs-2018-08-16.tar.gz">
afp-Lambda_Free_KBOs-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lambda_Free_KBOs-2017-10-10.tar.gz">
afp-Lambda_Free_KBOs-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lambda_Free_KBOs-2016-12-17.tar.gz">
afp-Lambda_Free_KBOs-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lambda_Free_RPOs.html b/web/entries/Lambda_Free_RPOs.html
--- a/web/entries/Lambda_Free_RPOs.html
+++ b/web/entries/Lambda_Free_RPOs.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">R</font>ecursive
<font class="first">P</font>ath
<font class="first">O</font>rders
for
<font class="first">L</font>ambda-Free
<font class="first">H</font>igher-Order
<font class="first">T</font>erms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl),
Uwe Waldmann (uwe /at/ mpi-inf /dot/ mpg /dot/ de) and
Daniel Wand (dwand /at/ mpi-inf /dot/ mpg /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-09-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This Isabelle/HOL formalization defines recursive path orders (RPOs) for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard RPO on first-order terms also in the presence of currying, distinguishing it from previous work. An optimized variant is formalized as well. It appears promising as the basis of a higher-order superposition calculus.</div></td>
+ <td class="abstract mathjax_process">This Isabelle/HOL formalization defines recursive path orders (RPOs) for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard RPO on first-order terms also in the presence of currying, distinguishing it from previous work. An optimized variant is formalized as well. It appears promising as the basis of a higher-order superposition calculus.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lambda_Free_RPOs-AFP,
author = {Jasmin Christian Blanchette and Uwe Waldmann and Daniel Wand},
title = {Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms},
journal = {Archive of Formal Proofs},
month = sep,
year = 2016,
note = {\url{http://isa-afp.org/entries/Lambda_Free_RPOs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Nested_Multisets_Ordinals.html">Nested_Multisets_Ordinals</a> </td></tr>
<tr><td class="datahead">Used by:</td>
- <td class="data"><a href="Higher_Order_Terms.html">Higher_Order_Terms</a>, <a href="Lambda_Free_EPO.html">Lambda_Free_EPO</a>, <a href="Lambda_Free_KBOs.html">Lambda_Free_KBOs</a> </td></tr>
+ <td class="data"><a href="Functional_Ordered_Resolution_Prover.html">Functional_Ordered_Resolution_Prover</a>, <a href="Higher_Order_Terms.html">Higher_Order_Terms</a>, <a href="Lambda_Free_EPO.html">Lambda_Free_EPO</a>, <a href="Lambda_Free_KBOs.html">Lambda_Free_KBOs</a>, <a href="Saturation_Framework.html">Saturation_Framework</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lambda_Free_RPOs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lambda_Free_RPOs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lambda_Free_RPOs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lambda_Free_RPOs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lambda_Free_RPOs-2019-06-11.tar.gz">
afp-Lambda_Free_RPOs-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lambda_Free_RPOs-2018-08-16.tar.gz">
afp-Lambda_Free_RPOs-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lambda_Free_RPOs-2017-10-10.tar.gz">
afp-Lambda_Free_RPOs-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lambda_Free_RPOs-2016-12-17.tar.gz">
afp-Lambda_Free_RPOs-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Landau_Symbols.html b/web/entries/Landau_Symbols.html
--- a/web/entries/Landau_Symbols.html
+++ b/web/entries/Landau_Symbols.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Landau Symbols - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>andau
<font class="first">S</font>ymbols
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Landau Symbols</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-07-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This entry provides Landau symbols to describe and reason about the asymptotic growth of functions for sufficiently large inputs. A number of simplification procedures are provided for additional convenience: cancelling of dominated terms in sums under a Landau symbol, cancelling of common factors in products, and a decision procedure for Landau expressions containing products of powers of functions like x, ln(x), ln(ln(x)) etc.</div></td>
+ <td class="abstract mathjax_process">This entry provides Landau symbols to describe and reason about the asymptotic growth of functions for sufficiently large inputs. A number of simplification procedures are provided for additional convenience: cancelling of dominated terms in sums under a Landau symbol, cancelling of common factors in products, and a decision procedure for Landau expressions containing products of powers of functions like x, ln(x), ln(ln(x)) etc.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Landau_Symbols-AFP,
author = {Manuel Eberl},
title = {Landau Symbols},
journal = {Archive of Formal Proofs},
month = jul,
year = 2015,
note = {\url{http://isa-afp.org/entries/Landau_Symbols.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Akra_Bazzi.html">Akra_Bazzi</a>, <a href="Catalan_Numbers.html">Catalan_Numbers</a>, <a href="Comparison_Sort_Lower_Bound.html">Comparison_Sort_Lower_Bound</a>, <a href="CryptHOL.html">CryptHOL</a>, <a href="Dirichlet_L.html">Dirichlet_L</a>, <a href="Dirichlet_Series.html">Dirichlet_Series</a>, <a href="Error_Function.html">Error_Function</a>, <a href="Euler_MacLaurin.html">Euler_MacLaurin</a>, <a href="Quick_Sort_Cost.html">Quick_Sort_Cost</a>, <a href="Random_BSTs.html">Random_BSTs</a>, <a href="Stirling_Formula.html">Stirling_Formula</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Landau_Symbols/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Landau_Symbols/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Landau_Symbols/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Landau_Symbols-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Landau_Symbols-2019-06-11.tar.gz">
afp-Landau_Symbols-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Landau_Symbols-2018-08-16.tar.gz">
afp-Landau_Symbols-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Landau_Symbols-2017-10-10.tar.gz">
afp-Landau_Symbols-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Landau_Symbols-2016-12-17.tar.gz">
afp-Landau_Symbols-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Landau_Symbols-2016-02-22.tar.gz">
afp-Landau_Symbols-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Landau_Symbols-2015-07-15.tar.gz">
afp-Landau_Symbols-2015-07-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Laplace_Transform.html b/web/entries/Laplace_Transform.html
--- a/web/entries/Laplace_Transform.html
+++ b/web/entries/Laplace_Transform.html
@@ -1,192 +1,192 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Laplace Transform - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>aplace
<font class="first">T</font>ransform
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Laplace Transform</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-08-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry formalizes the Laplace transform and concrete Laplace
transforms for arithmetic functions, frequency shift, integration and
(higher) differentiation in the time domain. It proves Lerch's
lemma and uniqueness of the Laplace transform for continuous
functions. In order to formalize the foundational assumptions, this
entry contains a formalization of piecewise continuous functions and
-functions of exponential order.</div></td>
+functions of exponential order.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Laplace_Transform-AFP,
author = {Fabian Immler},
title = {Laplace Transform},
journal = {Archive of Formal Proofs},
month = aug,
year = 2019,
note = {\url{http://isa-afp.org/entries/Laplace_Transform.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Laplace_Transform/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Laplace_Transform/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Laplace_Transform/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Laplace_Transform-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Laplace_Transform-2019-08-16.tar.gz">
afp-Laplace_Transform-2019-08-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Latin_Square.html b/web/entries/Latin_Square.html
--- a/web/entries/Latin_Square.html
+++ b/web/entries/Latin_Square.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Latin Square - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>atin
<font class="first">S</font>quare
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Latin Square</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Alexander Bentkamp (bentkamp /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
-A Latin Square is a n x n table filled with integers from 1 to n where each number appears exactly once in each row and each column. A Latin Rectangle is a partially filled n x n table with r filled rows and n-r empty rows, such that each number appears at most once in each row and each column. The main result of this theory is that any Latin Rectangle can be completed to a Latin Square.</div></td>
+ <td class="abstract mathjax_process">
+A Latin Square is a n x n table filled with integers from 1 to n where each number appears exactly once in each row and each column. A Latin Rectangle is a partially filled n x n table with r filled rows and n-r empty rows, such that each number appears at most once in each row and each column. The main result of this theory is that any Latin Rectangle can be completed to a Latin Square.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Latin_Square-AFP,
author = {Alexander Bentkamp},
title = {Latin Square},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Latin_Square.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Marriage.html">Marriage</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Latin_Square/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Latin_Square/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Latin_Square/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Latin_Square-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Latin_Square-2019-06-11.tar.gz">
afp-Latin_Square-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Latin_Square-2018-08-16.tar.gz">
afp-Latin_Square-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Latin_Square-2017-10-10.tar.gz">
afp-Latin_Square-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Latin_Square-2016-12-17.tar.gz">
afp-Latin_Square-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Latin_Square-2016-02-22.tar.gz">
afp-Latin_Square-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Latin_Square-2015-12-03.tar.gz">
afp-Latin_Square-2015-12-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LatticeProperties.html b/web/entries/LatticeProperties.html
--- a/web/entries/LatticeProperties.html
+++ b/web/entries/LatticeProperties.html
@@ -1,259 +1,259 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lattice Properties - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>attice
<font class="first">P</font>roperties
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lattice Properties</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Viorel Preoteasa (viorel /dot/ preoteasa /at/ aalto /dot/ fi)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-09-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This formalization introduces and collects some algebraic structures based on lattices and complete lattices for use in other developments. The structures introduced are modular, and lattice ordered groups. In addition to the results proved for the new lattices, this formalization also introduces theorems about latices and complete lattices in general.</div></td>
+ <td class="abstract mathjax_process">This formalization introduces and collects some algebraic structures based on lattices and complete lattices for use in other developments. The structures introduced are modular, and lattice ordered groups. In addition to the results proved for the new lattices, this formalization also introduces theorems about latices and complete lattices in general.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2012-01-05]: Removed the theory about distributive complete lattices which is in the standard library now.
Added a theory about well founded and transitive relations and a result about fixpoints in complete lattices and well founded relations.
Moved the results about conjunctive and disjunctive functions to a new theory.
Removed the syntactic classes for inf and sup which are in the standard library now.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LatticeProperties-AFP,
author = {Viorel Preoteasa},
title = {Lattice Properties},
journal = {Archive of Formal Proofs},
month = sep,
year = 2011,
note = {\url{http://isa-afp.org/entries/LatticeProperties.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="DataRefinementIBP.html">DataRefinementIBP</a>, <a href="MonoBoolTranAlgebra.html">MonoBoolTranAlgebra</a>, <a href="PseudoHoops.html">PseudoHoops</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LatticeProperties/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LatticeProperties/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LatticeProperties/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LatticeProperties-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LatticeProperties-2019-06-28.tar.gz">
afp-LatticeProperties-2019-06-28.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-LatticeProperties-2019-06-11.tar.gz">
afp-LatticeProperties-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LatticeProperties-2018-08-16.tar.gz">
afp-LatticeProperties-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LatticeProperties-2017-10-10.tar.gz">
afp-LatticeProperties-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LatticeProperties-2016-12-17.tar.gz">
afp-LatticeProperties-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-LatticeProperties-2016-02-22.tar.gz">
afp-LatticeProperties-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-LatticeProperties-2015-05-27.tar.gz">
afp-LatticeProperties-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-LatticeProperties-2014-08-28.tar.gz">
afp-LatticeProperties-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-LatticeProperties-2013-12-11.tar.gz">
afp-LatticeProperties-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-LatticeProperties-2013-11-17.tar.gz">
afp-LatticeProperties-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-LatticeProperties-2013-02-16.tar.gz">
afp-LatticeProperties-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-LatticeProperties-2012-05-24.tar.gz">
afp-LatticeProperties-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-LatticeProperties-2011-10-11.tar.gz">
afp-LatticeProperties-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-LatticeProperties-2011-09-27.tar.gz">
afp-LatticeProperties-2011-09-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Launchbury.html b/web/entries/Launchbury.html
--- a/web/entries/Launchbury.html
+++ b/web/entries/Launchbury.html
@@ -1,268 +1,268 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Correctness of Launchbury's Natural Semantics for Lazy Evaluation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">C</font>orrectness
of
<font class="first">L</font>aunchbury's
<font class="first">N</font>atural
<font class="first">S</font>emantics
for
<font class="first">L</font>azy
<font class="first">E</font>valuation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Correctness of Launchbury's Natural Semantics for Lazy Evaluation</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Joachim Breitner (joachim /at/ cis /dot/ upenn /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-01-31</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">In his seminal paper "Natural Semantics for Lazy Evaluation", John Launchbury proves his semantics correct with respect to a denotational semantics, and outlines an adequacy proof. We have formalized both semantics and machine-checked the correctness proof, clarifying some details. Furthermore, we provide a new and more direct adequacy proof that does not require intermediate operational semantics.</div></td>
+ <td class="abstract mathjax_process">In his seminal paper "Natural Semantics for Lazy Evaluation", John Launchbury proves his semantics correct with respect to a denotational semantics, and outlines an adequacy proof. We have formalized both semantics and machine-checked the correctness proof, clarifying some details. Furthermore, we provide a new and more direct adequacy proof that does not require intermediate operational semantics.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2014-05-24]: Added the proof of adequacy, as well as simplified and improved the existing proofs. Adjusted abstract accordingly.
[2015-03-16]: Booleans and if-then-else added to syntax and semantics, making this entry suitable to be used by the entry "Call_Arity".</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Launchbury-AFP,
author = {Joachim Breitner},
title = {The Correctness of Launchbury's Natural Semantics for Lazy Evaluation},
journal = {Archive of Formal Proofs},
month = jan,
year = 2013,
note = {\url{http://isa-afp.org/entries/Launchbury.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="FinFun.html">FinFun</a>, <a href="Nominal2.html">Nominal2</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Call_Arity.html">Call_Arity</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Launchbury/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Launchbury/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Launchbury/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Launchbury-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Launchbury-2019-06-11.tar.gz">
afp-Launchbury-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Launchbury-2018-08-16.tar.gz">
afp-Launchbury-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Launchbury-2017-10-10.tar.gz">
afp-Launchbury-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Launchbury-2016-12-17.tar.gz">
afp-Launchbury-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Launchbury-2016-02-22.tar.gz">
afp-Launchbury-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Launchbury-2015-05-27.tar.gz">
afp-Launchbury-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Launchbury-2014-08-28.tar.gz">
afp-Launchbury-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Launchbury-2014-05-25.tar.gz">
afp-Launchbury-2014-05-25.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Launchbury-2014-05-24.tar.gz">
afp-Launchbury-2014-05-24.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Launchbury-2013-12-11.tar.gz">
afp-Launchbury-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Launchbury-2013-11-17.tar.gz">
afp-Launchbury-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Launchbury-2013-02-25.tar.gz">
afp-Launchbury-2013-02-25.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Launchbury-2013-02-24.tar.gz">
afp-Launchbury-2013-02-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lazy-Lists-II.html b/web/entries/Lazy-Lists-II.html
--- a/web/entries/Lazy-Lists-II.html
+++ b/web/entries/Lazy-Lists-II.html
@@ -1,291 +1,291 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lazy Lists II - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>azy
<font class="first">L</font>ists
<font class="first">I</font>I
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lazy Lists II</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Stefan Friedrich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-04-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory contains some useful extensions to the LList (lazy list) theory by <a href="http://www.cl.cam.ac.uk/~lp15/">Larry Paulson</a>, including finite, infinite, and positive llists over an alphabet, as well as the new constants take and drop and the prefix order of llists. Finally, the notions of safety and liveness in the sense of Alpern and Schneider (1985) are defined.</div></td>
+ <td class="abstract mathjax_process">This theory contains some useful extensions to the LList (lazy list) theory by <a href="http://www.cl.cam.ac.uk/~lp15/">Larry Paulson</a>, including finite, infinite, and positive llists over an alphabet, as well as the new constants take and drop and the prefix order of llists. Finally, the notions of safety and liveness in the sense of Alpern and Schneider (1985) are defined.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lazy-Lists-II-AFP,
author = {Stefan Friedrich},
title = {Lazy Lists II},
journal = {Archive of Formal Proofs},
month = apr,
year = 2004,
note = {\url{http://isa-afp.org/entries/Lazy-Lists-II.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Topology.html">Topology</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lazy-Lists-II/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lazy-Lists-II/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lazy-Lists-II/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lazy-Lists-II-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lazy-Lists-II-2019-06-11.tar.gz">
afp-Lazy-Lists-II-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lazy-Lists-II-2018-08-16.tar.gz">
afp-Lazy-Lists-II-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lazy-Lists-II-2017-10-10.tar.gz">
afp-Lazy-Lists-II-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lazy-Lists-II-2016-12-17.tar.gz">
afp-Lazy-Lists-II-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Lazy-Lists-II-2016-02-22.tar.gz">
afp-Lazy-Lists-II-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Lazy-Lists-II-2015-05-27.tar.gz">
afp-Lazy-Lists-II-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Lazy-Lists-II-2014-08-28.tar.gz">
afp-Lazy-Lists-II-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Lazy-Lists-II-2013-12-11.tar.gz">
afp-Lazy-Lists-II-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Lazy-Lists-II-2013-11-17.tar.gz">
afp-Lazy-Lists-II-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Lazy-Lists-II-2013-02-16.tar.gz">
afp-Lazy-Lists-II-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Lazy-Lists-II-2012-05-24.tar.gz">
afp-Lazy-Lists-II-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Lazy-Lists-II-2011-10-11.tar.gz">
afp-Lazy-Lists-II-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Lazy-Lists-II-2011-02-11.tar.gz">
afp-Lazy-Lists-II-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Lazy-Lists-II-2010-07-01.tar.gz">
afp-Lazy-Lists-II-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Lazy-Lists-II-2009-12-12.tar.gz">
afp-Lazy-Lists-II-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Lazy-Lists-II-2009-04-29.tar.gz">
afp-Lazy-Lists-II-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Lazy-Lists-II-2008-06-10.tar.gz">
afp-Lazy-Lists-II-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Lazy-Lists-II-2007-11-27.tar.gz">
afp-Lazy-Lists-II-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Lazy-Lists-II-2005-10-14.tar.gz">
afp-Lazy-Lists-II-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Lazy-Lists-II-2004-05-21.tar.gz">
afp-Lazy-Lists-II-2004-05-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Lazy-Lists-II-2004-04-27.tar.gz">
afp-Lazy-Lists-II-2004-04-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lazy_Case.html b/web/entries/Lazy_Case.html
--- a/web/entries/Lazy_Case.html
+++ b/web/entries/Lazy_Case.html
@@ -1,216 +1,216 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lazifying case constants - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>azifying
case
constants
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lazifying case constants</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-04-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Isabelle's code generator performs various adaptations for target
languages. Among others, case statements are printed as match
expressions. Internally, this is a sophisticated procedure, because in
HOL, case statements are represented as nested calls to the case
combinators as generated by the datatype package. Furthermore, the
procedure relies on laziness of match expressions in the target
language, i.e., that branches guarded by patterns that fail to match
are not evaluated. Similarly, <tt>if-then-else</tt> is
printed to the corresponding construct in the target language. This
entry provides tooling to replace these special cases in the code
generator by ignoring these target language features, instead printing
-case expressions and <tt>if-then-else</tt> as functions.</div></td>
+case expressions and <tt>if-then-else</tt> as functions.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lazy_Case-AFP,
author = {Lars Hupel},
title = {Lazifying case constants},
journal = {Archive of Formal Proofs},
month = apr,
year = 2017,
note = {\url{http://isa-afp.org/entries/Lazy_Case.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Dict_Construction.html">Dict_Construction</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lazy_Case/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lazy_Case/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lazy_Case/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lazy_Case-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lazy_Case-2019-06-11.tar.gz">
afp-Lazy_Case-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lazy_Case-2018-08-16.tar.gz">
afp-Lazy_Case-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lazy_Case-2017-10-10.tar.gz">
afp-Lazy_Case-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lazy_Case-2017-04-20.tar.gz">
afp-Lazy_Case-2017-04-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lehmer.html b/web/entries/Lehmer.html
--- a/web/entries/Lehmer.html
+++ b/web/entries/Lehmer.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lehmer's Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ehmer's
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lehmer's Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a> and
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-07-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">In 1927, Lehmer presented criteria for primality, based on the converse of Fermat's little theorem. This work formalizes the second criterion from Lehmer's paper, a necessary and sufficient condition for primality.
+ <td class="abstract mathjax_process">In 1927, Lehmer presented criteria for primality, based on the converse of Fermat's little theorem. This work formalizes the second criterion from Lehmer's paper, a necessary and sufficient condition for primality.
<p>
As a side product we formalize some properties of Euler's phi-function,
-the notion of the order of an element of a group, and the cyclicity of the multiplicative group of a finite field.</div></td>
+the notion of the order of an element of a group, and the cyclicity of the multiplicative group of a finite field.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lehmer-AFP,
author = {Simon Wimmer and Lars Noschinski},
title = {Lehmer's Theorem},
journal = {Archive of Formal Proofs},
month = jul,
year = 2013,
note = {\url{http://isa-afp.org/entries/Lehmer.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Pratt_Certificate.html">Pratt_Certificate</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lehmer/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lehmer/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lehmer/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lehmer-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lehmer-2019-06-11.tar.gz">
afp-Lehmer-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lehmer-2018-08-16.tar.gz">
afp-Lehmer-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lehmer-2017-10-10.tar.gz">
afp-Lehmer-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lehmer-2016-12-17.tar.gz">
afp-Lehmer-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Lehmer-2016-02-22.tar.gz">
afp-Lehmer-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Lehmer-2015-05-27.tar.gz">
afp-Lehmer-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Lehmer-2014-08-28.tar.gz">
afp-Lehmer-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Lehmer-2013-12-11.tar.gz">
afp-Lehmer-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Lehmer-2013-11-17.tar.gz">
afp-Lehmer-2013-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lifting_Definition_Option.html b/web/entries/Lifting_Definition_Option.html
--- a/web/entries/Lifting_Definition_Option.html
+++ b/web/entries/Lifting_Definition_Option.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lifting Definition Option - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ifting
<font class="first">D</font>efinition
<font class="first">O</font>ption
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lifting Definition Option</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-10-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We implemented a command that can be used to easily generate
elements of a restricted type <tt>{x :: 'a. P x}</tt>,
provided the definition is of the form
<tt>f ys = (if check ys then Some(generate ys :: 'a) else None)</tt> where
<tt>ys</tt> is a list of variables <tt>y1 ... yn</tt> and
<tt>check ys ==> P(generate ys)</tt> can be proved.
<p>
In principle, such a definition is also directly possible using the
<tt>lift_definition</tt> command. However, then this definition will not be
suitable for code-generation. To this end, we automated a more complex
construction of Joachim Breitner which is amenable for code-generation, and
where the test <tt>check ys</tt> will only be performed once. In the
automation, one auxiliary type is created, and Isabelle's lifting- and
-transfer-package is invoked several times.</div></td>
+transfer-package is invoked several times.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lifting_Definition_Option-AFP,
author = {René Thiemann},
title = {Lifting Definition Option},
journal = {Archive of Formal Proofs},
month = oct,
year = 2014,
note = {\url{http://isa-afp.org/entries/Lifting_Definition_Option.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lifting_Definition_Option/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lifting_Definition_Option/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lifting_Definition_Option/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lifting_Definition_Option-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lifting_Definition_Option-2019-06-11.tar.gz">
afp-Lifting_Definition_Option-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lifting_Definition_Option-2018-08-16.tar.gz">
afp-Lifting_Definition_Option-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lifting_Definition_Option-2017-10-10.tar.gz">
afp-Lifting_Definition_Option-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lifting_Definition_Option-2016-12-17.tar.gz">
afp-Lifting_Definition_Option-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Lifting_Definition_Option-2016-02-22.tar.gz">
afp-Lifting_Definition_Option-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Lifting_Definition_Option-2015-05-27.tar.gz">
afp-Lifting_Definition_Option-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Lifting_Definition_Option-2014-10-15.tar.gz">
afp-Lifting_Definition_Option-2014-10-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LightweightJava.html b/web/entries/LightweightJava.html
--- a/web/entries/LightweightJava.html
+++ b/web/entries/LightweightJava.html
@@ -1,246 +1,246 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lightweight Java - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ightweight
<font class="first">J</font>ava
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lightweight Java</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://rok.strnisa.com/lj/">Rok Strniša</a> and
<a href="http://research.microsoft.com/people/mattpark/">Matthew Parkinson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-02-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">A fully-formalized and extensible minimal imperative fragment of Java.</div></td>
+ <td class="abstract mathjax_process">A fully-formalized and extensible minimal imperative fragment of Java.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LightweightJava-AFP,
author = {Rok Strniša and Matthew Parkinson},
title = {Lightweight Java},
journal = {Archive of Formal Proofs},
month = feb,
year = 2011,
note = {\url{http://isa-afp.org/entries/LightweightJava.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LightweightJava/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LightweightJava/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LightweightJava/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LightweightJava-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LightweightJava-2019-06-11.tar.gz">
afp-LightweightJava-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LightweightJava-2018-08-16.tar.gz">
afp-LightweightJava-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LightweightJava-2017-10-10.tar.gz">
afp-LightweightJava-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LightweightJava-2016-12-17.tar.gz">
afp-LightweightJava-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-LightweightJava-2016-02-22.tar.gz">
afp-LightweightJava-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-LightweightJava-2015-05-27.tar.gz">
afp-LightweightJava-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-LightweightJava-2014-08-28.tar.gz">
afp-LightweightJava-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-LightweightJava-2013-12-11.tar.gz">
afp-LightweightJava-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-LightweightJava-2013-11-17.tar.gz">
afp-LightweightJava-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-LightweightJava-2013-02-16.tar.gz">
afp-LightweightJava-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-LightweightJava-2012-05-24.tar.gz">
afp-LightweightJava-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-LightweightJava-2011-10-11.tar.gz">
afp-LightweightJava-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-LightweightJava-2011-03-02.tar.gz">
afp-LightweightJava-2011-03-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/LinearQuantifierElim.html b/web/entries/LinearQuantifierElim.html
--- a/web/entries/LinearQuantifierElim.html
+++ b/web/entries/LinearQuantifierElim.html
@@ -1,291 +1,291 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Quantifier Elimination for Linear Arithmetic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">Q</font>uantifier
<font class="first">E</font>limination
for
<font class="first">L</font>inear
<font class="first">A</font>rithmetic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Quantifier Elimination for Linear Arithmetic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-01-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This article formalizes quantifier elimination procedures for dense linear orders, linear real arithmetic and Presburger arithmetic. In each case both a DNF-based non-elementary algorithm and one or more (doubly) exponential NNF-based algorithms are formalized, including the well-known algorithms by Ferrante and Rackoff and by Cooper. The NNF-based algorithms for dense linear orders are new but based on Ferrante and Rackoff and on an algorithm by Loos and Weispfenning which simulates infinitesimals. All algorithms are directly executable. In particular, they yield reflective quantifier elimination procedures for HOL itself. The formalization makes heavy use of locales and is therefore highly modular.</div></td>
+ <td class="abstract mathjax_process">This article formalizes quantifier elimination procedures for dense linear orders, linear real arithmetic and Presburger arithmetic. In each case both a DNF-based non-elementary algorithm and one or more (doubly) exponential NNF-based algorithms are formalized, including the well-known algorithms by Ferrante and Rackoff and by Cooper. The NNF-based algorithms for dense linear orders are new but based on Ferrante and Rackoff and on an algorithm by Loos and Weispfenning which simulates infinitesimals. All algorithms are directly executable. In particular, they yield reflective quantifier elimination procedures for HOL itself. The formalization makes heavy use of locales and is therefore highly modular.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LinearQuantifierElim-AFP,
author = {Tobias Nipkow},
title = {Quantifier Elimination for Linear Arithmetic},
journal = {Archive of Formal Proofs},
month = jan,
year = 2008,
note = {\url{http://isa-afp.org/entries/LinearQuantifierElim.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LinearQuantifierElim/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LinearQuantifierElim/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LinearQuantifierElim/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LinearQuantifierElim-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LinearQuantifierElim-2019-06-11.tar.gz">
afp-LinearQuantifierElim-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LinearQuantifierElim-2018-08-16.tar.gz">
afp-LinearQuantifierElim-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LinearQuantifierElim-2017-10-10.tar.gz">
afp-LinearQuantifierElim-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LinearQuantifierElim-2016-12-17.tar.gz">
afp-LinearQuantifierElim-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-LinearQuantifierElim-2016-02-22.tar.gz">
afp-LinearQuantifierElim-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-LinearQuantifierElim-2015-05-27.tar.gz">
afp-LinearQuantifierElim-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-LinearQuantifierElim-2014-08-28.tar.gz">
afp-LinearQuantifierElim-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-LinearQuantifierElim-2013-12-11.tar.gz">
afp-LinearQuantifierElim-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-LinearQuantifierElim-2013-11-17.tar.gz">
afp-LinearQuantifierElim-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-LinearQuantifierElim-2013-03-02.tar.gz">
afp-LinearQuantifierElim-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-LinearQuantifierElim-2013-02-16.tar.gz">
afp-LinearQuantifierElim-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-LinearQuantifierElim-2012-05-24.tar.gz">
afp-LinearQuantifierElim-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-LinearQuantifierElim-2011-10-11.tar.gz">
afp-LinearQuantifierElim-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-LinearQuantifierElim-2011-02-11.tar.gz">
afp-LinearQuantifierElim-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-LinearQuantifierElim-2010-07-01.tar.gz">
afp-LinearQuantifierElim-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-LinearQuantifierElim-2009-12-12.tar.gz">
afp-LinearQuantifierElim-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-LinearQuantifierElim-2009-04-29.tar.gz">
afp-LinearQuantifierElim-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-LinearQuantifierElim-2008-06-10.tar.gz">
afp-LinearQuantifierElim-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-LinearQuantifierElim-2008-02-12.tar.gz">
afp-LinearQuantifierElim-2008-02-12.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-LinearQuantifierElim-2008-01-24.tar.gz">
afp-LinearQuantifierElim-2008-01-24.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-LinearQuantifierElim-2008-01-11.tar.gz">
afp-LinearQuantifierElim-2008-01-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Linear_Inequalities.html b/web/entries/Linear_Inequalities.html
--- a/web/entries/Linear_Inequalities.html
+++ b/web/entries/Linear_Inequalities.html
@@ -1,199 +1,199 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Linear Inequalities - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>inear
<font class="first">I</font>nequalities
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Linear Inequalities</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/users/bottesch/">Ralph Bottesch</a>,
Alban Reynaud and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-06-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize results about linear inequalities, mainly from
Schrijver's book. The main results are the proof of the
fundamental theorem on linear inequalities, Farkas' lemma,
Carathéodory's theorem, the Farkas-Minkowski-Weyl theorem, the
decomposition theorem of polyhedra, and Meyer's result that the
integer hull of a polyhedron is a polyhedron itself. Several theorems
include bounds on the appearing numbers, and in particular we provide
-an a-priori bound on mixed-integer solutions of linear inequalities.</div></td>
+an a-priori bound on mixed-integer solutions of linear inequalities.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Linear_Inequalities-AFP,
author = {Ralph Bottesch and Alban Reynaud and René Thiemann},
title = {Linear Inequalities},
journal = {Archive of Formal Proofs},
month = jun,
year = 2019,
note = {\url{http://isa-afp.org/entries/Linear_Inequalities.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="LLL_Basis_Reduction.html">LLL_Basis_Reduction</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Linear_Programming.html">Linear_Programming</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Linear_Inequalities/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Linear_Inequalities/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Linear_Inequalities/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Linear_Inequalities-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Linear_Inequalities-2019-06-24.tar.gz">
afp-Linear_Inequalities-2019-06-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Linear_Programming.html b/web/entries/Linear_Programming.html
--- a/web/entries/Linear_Programming.html
+++ b/web/entries/Linear_Programming.html
@@ -1,194 +1,194 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Linear Programming - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>inear
<font class="first">P</font>rogramming
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Linear Programming</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.parsert.com/">Julian Parsert</a> and
<a href="http://cl-informatik.uibk.ac.at/cek/">Cezary Kaliszyk</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-08-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We use the previous formalization of the general simplex algorithm to
formulate an algorithm for solving linear programs. We encode the
linear programs using only linear constraints. Solving these
constraints also solves the original linear program. This algorithm is
proven to be sound by applying the weak duality theorem which is also
-part of this formalization.</div></td>
+part of this formalization.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Linear_Programming-AFP,
author = {Julian Parsert and Cezary Kaliszyk},
title = {Linear Programming},
journal = {Archive of Formal Proofs},
month = aug,
year = 2019,
note = {\url{http://isa-afp.org/entries/Linear_Programming.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Farkas.html">Farkas</a>, <a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a>, <a href="Linear_Inequalities.html">Linear_Inequalities</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Linear_Programming/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Linear_Programming/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Linear_Programming/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Linear_Programming-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Linear_Programming-2019-09-23.tar.gz">
afp-Linear_Programming-2019-09-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Linear_Recurrences.html b/web/entries/Linear_Recurrences.html
--- a/web/entries/Linear_Recurrences.html
+++ b/web/entries/Linear_Recurrences.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Linear Recurrences - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>inear
<font class="first">R</font>ecurrences
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Linear Recurrences</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p> Linear recurrences with constant coefficients are an
interesting class of recurrence equations that can be solved
explicitly. The most famous example is certainly the Fibonacci
numbers with the equation <i>f</i>(<i>n</i>) =
<i>f</i>(<i>n</i>-1) +
<i>f</i>(<i>n</i> - 2) and the quite
non-obvious closed form
(<i>&phi;</i><sup><i>n</i></sup>
-
(-<i>&phi;</i>)<sup>-<i>n</i></sup>)
/ &radic;<span style="text-decoration:
overline">5</span> where &phi; is the golden ratio.
</p> <p> In this work, I build on existing tools in
Isabelle &ndash; such as formal power series and polynomial
factorisation algorithms &ndash; to develop a theory of these
recurrences and derive a fully executable solver for them that can be
-exported to programming languages like Haskell. </p></div></td>
+exported to programming languages like Haskell. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Linear_Recurrences-AFP,
author = {Manuel Eberl},
title = {Linear Recurrences},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Linear_Recurrences.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Count_Complex_Roots.html">Count_Complex_Roots</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Linear_Recurrences/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Linear_Recurrences/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Linear_Recurrences/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Linear_Recurrences-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Linear_Recurrences-2019-06-11.tar.gz">
afp-Linear_Recurrences-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Linear_Recurrences-2018-08-16.tar.gz">
afp-Linear_Recurrences-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Linear_Recurrences-2017-10-17.tar.gz">
afp-Linear_Recurrences-2017-10-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
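The Linear_Recurrences abstract above spells out its Fibonacci example in HTML entities. For readability, the same statement in plain LaTeX (the kind of markup the MathJax setup on these pages is configured to render) is:

    f(n) = f(n-1) + f(n-2), \qquad
    f(n) = \frac{\varphi^{\,n} - (-\varphi)^{-n}}{\sqrt{5}}, \qquad
    \varphi = \frac{1 + \sqrt{5}}{2}

This is only a transcription of the formula already quoted in the abstract, not additional material from the entry itself.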
diff --git a/web/entries/Liouville_Numbers.html b/web/entries/Liouville_Numbers.html
--- a/web/entries/Liouville_Numbers.html
+++ b/web/entries/Liouville_Numbers.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Liouville numbers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>iouville
numbers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Liouville numbers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
Liouville numbers are a class of transcendental numbers that can be approximated
particularly well with rational numbers. Historically, they were the first
numbers whose transcendence was proven.
</p><p>
In this entry, we define the concept of Liouville numbers as well as the
standard construction to obtain Liouville numbers (including Liouville's
constant) and we prove their most important properties: irrationality and
transcendence.
</p><p>
The proof is very elementary and requires only standard arithmetic, the Mean
Value Theorem for polynomials, and the boundedness of polynomials on compact
intervals.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Liouville_Numbers-AFP,
author = {Manuel Eberl},
title = {Liouville numbers},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Liouville_Numbers.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Liouville_Numbers/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Liouville_Numbers/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Liouville_Numbers/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Liouville_Numbers-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Liouville_Numbers-2019-06-11.tar.gz">
afp-Liouville_Numbers-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Liouville_Numbers-2018-08-16.tar.gz">
afp-Liouville_Numbers-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Liouville_Numbers-2017-10-10.tar.gz">
afp-Liouville_Numbers-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Liouville_Numbers-2016-12-17.tar.gz">
afp-Liouville_Numbers-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Liouville_Numbers-2016-02-22.tar.gz">
afp-Liouville_Numbers-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Liouville_Numbers-2016-01-05.tar.gz">
afp-Liouville_Numbers-2016-01-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/List-Index.html b/web/entries/List-Index.html
--- a/web/entries/List-Index.html
+++ b/web/entries/List-Index.html
@@ -1,257 +1,257 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>List Index - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ist
<font class="first">I</font>ndex
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">List Index</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-02-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory provides functions for finding the index of an element in a list, by predicate and by value.</div></td>
+ <td class="abstract mathjax_process">This theory provides functions for finding the index of an element in a list, by predicate and by value.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{List-Index-AFP,
author = {Tobias Nipkow},
title = {List Index},
journal = {Archive of Formal Proofs},
month = feb,
year = 2010,
note = {\url{http://isa-afp.org/entries/List-Index.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Affine_Arithmetic.html">Affine_Arithmetic</a>, <a href="Comparison_Sort_Lower_Bound.html">Comparison_Sort_Lower_Bound</a>, <a href="Formula_Derivatives.html">Formula_Derivatives</a>, <a href="Higher_Order_Terms.html">Higher_Order_Terms</a>, <a href="Jinja.html">Jinja</a>, <a href="List_Update.html">List_Update</a>, <a href="LTL_to_DRA.html">LTL_to_DRA</a>, <a href="MSO_Regex_Equivalence.html">MSO_Regex_Equivalence</a>, <a href="Nested_Multisets_Ordinals.html">Nested_Multisets_Ordinals</a>, <a href="Ordinary_Differential_Equations.html">Ordinary_Differential_Equations</a>, <a href="Planarity_Certificates.html">Planarity_Certificates</a>, <a href="Quick_Sort_Cost.html">Quick_Sort_Cost</a>, <a href="Randomised_Social_Choice.html">Randomised_Social_Choice</a>, <a href="Refine_Imperative_HOL.html">Refine_Imperative_HOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List-Index/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/List-Index/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List-Index/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-List-Index-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-List-Index-2019-06-11.tar.gz">
afp-List-Index-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-List-Index-2018-08-16.tar.gz">
afp-List-Index-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-List-Index-2017-10-10.tar.gz">
afp-List-Index-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-List-Index-2016-12-17.tar.gz">
afp-List-Index-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-List-Index-2016-02-22.tar.gz">
afp-List-Index-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-List-Index-2015-05-27.tar.gz">
afp-List-Index-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-List-Index-2014-08-28.tar.gz">
afp-List-Index-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-List-Index-2013-12-11.tar.gz">
afp-List-Index-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-List-Index-2013-11-17.tar.gz">
afp-List-Index-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-List-Index-2013-02-16.tar.gz">
afp-List-Index-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-List-Index-2012-05-24.tar.gz">
afp-List-Index-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-List-Index-2011-10-11.tar.gz">
afp-List-Index-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-List-Index-2011-02-11.tar.gz">
afp-List-Index-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-List-Index-2010-07-01.tar.gz">
afp-List-Index-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-List-Index-2010-02-20.tar.gz">
afp-List-Index-2010-02-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
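The List-Index abstract mentions index lookup both by predicate and by value. A minimal Scala sketch of the two flavours (names here are illustrative only and are not the constants defined in the Isabelle theory):

    // First index whose element satisfies p, and first index of a given value;
    // both return -1 when nothing matches (the AFP theory has its own conventions).
    def indexWhereP[A](xs: List[A], p: A => Boolean): Int = xs.indexWhere(p)
    def indexOfV[A](xs: List[A], v: A): Int = xs.indexOf(v)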
diff --git a/web/entries/List-Infinite.html b/web/entries/List-Infinite.html
--- a/web/entries/List-Infinite.html
+++ b/web/entries/List-Infinite.html
@@ -1,252 +1,252 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Infinite Lists - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nfinite
<font class="first">L</font>ists
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Infinite Lists</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
David Trachtenherz
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-02-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We introduce a theory of infinite lists in HOL formalized as functions over naturals (folder ListInf, theories ListInf and ListInf_Prefix). It also provides additional results for finite lists (theory ListInf/List2), natural numbers (folder CommonArith, esp. division/modulo, naturals with infinity), sets (folder CommonSet, esp. cutting/truncating sets, traversing sets of naturals).</div></td>
+ <td class="abstract mathjax_process">We introduce a theory of infinite lists in HOL formalized as functions over naturals (folder ListInf, theories ListInf and ListInf_Prefix). It also provides additional results for finite lists (theory ListInf/List2), natural numbers (folder CommonArith, esp. division/modulo, naturals with infinity), sets (folder CommonSet, esp. cutting/truncating sets, traversing sets of naturals).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{List-Infinite-AFP,
author = {David Trachtenherz},
title = {Infinite Lists},
journal = {Archive of Formal Proofs},
month = feb,
year = 2011,
note = {\url{http://isa-afp.org/entries/List-Infinite.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Nat-Interval-Logic.html">Nat-Interval-Logic</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List-Infinite/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/List-Infinite/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List-Infinite/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-List-Infinite-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-List-Infinite-2019-06-11.tar.gz">
afp-List-Infinite-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-List-Infinite-2018-08-16.tar.gz">
afp-List-Infinite-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-List-Infinite-2017-10-10.tar.gz">
afp-List-Infinite-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-List-Infinite-2016-12-17.tar.gz">
afp-List-Infinite-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-List-Infinite-2016-02-22.tar.gz">
afp-List-Infinite-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-List-Infinite-2015-05-27.tar.gz">
afp-List-Infinite-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-List-Infinite-2014-08-28.tar.gz">
afp-List-Infinite-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-List-Infinite-2013-12-11.tar.gz">
afp-List-Infinite-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-List-Infinite-2013-11-17.tar.gz">
afp-List-Infinite-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-List-Infinite-2013-03-02.tar.gz">
afp-List-Infinite-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-List-Infinite-2013-02-16.tar.gz">
afp-List-Infinite-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-List-Infinite-2012-05-24.tar.gz">
afp-List-Infinite-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-List-Infinite-2011-10-11.tar.gz">
afp-List-Infinite-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-List-Infinite-2011-02-24.tar.gz">
afp-List-Infinite-2011-02-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
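The List-Infinite abstract models infinite lists as functions over the naturals. A minimal Scala sketch of that representation, assuming nothing about the actual theory beyond the abstract's description:

    object ILst {
      // An infinite list is a total function from indices to elements.
      type ILst[A] = Int => A
      def const[A](a: A): ILst[A] = _ => a            // constant stream
      def nats: ILst[Int] = n => n                    // 0, 1, 2, ...
      def take[A](xs: ILst[A], n: Int): List[A] =     // finite prefix, for inspection
        (0 until n).map(xs).toList
    }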
diff --git a/web/entries/List_Interleaving.html b/web/entries/List_Interleaving.html
--- a/web/entries/List_Interleaving.html
+++ b/web/entries/List_Interleaving.html
@@ -1,238 +1,238 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Reasoning about Lists via List Interleaving - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>easoning
about
<font class="first">L</font>ists
via
<font class="first">L</font>ist
<font class="first">I</font>nterleaving
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Reasoning about Lists via List Interleaving</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-06-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
Among the various mathematical tools introduced in his outstanding work on
Communicating Sequential Processes, Hoare has defined "interleaves" as the
predicate satisfied by any three lists such that the first list may be
split into sublists alternately extracted from the other two, whatever
criterion is used to extract an item from one list or the other
at each step.
</p><p>
This paper enriches Hoare's definition by identifying this criterion with
the truth value of a predicate taking as inputs the head and the tail of
the first list. This enhanced "interleaves" predicate turns out to permit
the proof of equalities between lists without the need for induction.
Some rules that allow one to infer "interleaves" statements without induction,
particularly applying to the addition or removal of a prefix to the input
lists, are also proven. Finally, a stronger version of the predicate, named
"Interleaves", is shown to fulfil further rules applying to the addition or
removal of a suffix to the input lists.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{List_Interleaving-AFP,
author = {Pasquale Noce},
title = {Reasoning about Lists via List Interleaving},
journal = {Archive of Formal Proofs},
month = jun,
year = 2015,
note = {\url{http://isa-afp.org/entries/List_Interleaving.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Noninterference_Ipurge_Unwinding.html">Noninterference_Ipurge_Unwinding</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List_Interleaving/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/List_Interleaving/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List_Interleaving/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-List_Interleaving-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-List_Interleaving-2019-06-11.tar.gz">
afp-List_Interleaving-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-List_Interleaving-2018-08-16.tar.gz">
afp-List_Interleaving-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-List_Interleaving-2017-10-10.tar.gz">
afp-List_Interleaving-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-List_Interleaving-2016-12-17.tar.gz">
afp-List_Interleaving-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-List_Interleaving-2016-02-22.tar.gz">
afp-List_Interleaving-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-List_Interleaving-2015-06-13.tar.gz">
afp-List_Interleaving-2015-06-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/List_Inversions.html b/web/entries/List_Inversions.html
--- a/web/entries/List_Inversions.html
+++ b/web/entries/List_Inversions.html
@@ -1,202 +1,202 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Inversions of a List - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">I</font>nversions
of
a
<font class="first">L</font>ist
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Inversions of a List</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-02-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry defines the set of <em>inversions</em>
of a list, i.e. the pairs of indices that violate sortedness. It also
proves the correctness of the well-known
<em>O</em>(<em>n log n</em>)
divide-and-conquer algorithm to compute the number of
-inversions.</p></div></td>
+inversions.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{List_Inversions-AFP,
author = {Manuel Eberl},
title = {The Inversions of a List},
journal = {Archive of Formal Proofs},
month = feb,
year = 2019,
note = {\url{http://isa-afp.org/entries/List_Inversions.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List_Inversions/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/List_Inversions/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List_Inversions/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-List_Inversions-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-List_Inversions-2019-06-11.tar.gz">
afp-List_Inversions-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-List_Inversions-2019-02-21.tar.gz">
afp-List_Inversions-2019-02-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
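The List_Inversions abstract refers to the well-known O(n log n) divide-and-conquer algorithm for counting inversions. The sketch below shows the usual merge-sort formulation of that algorithm in Scala; it is an illustration of the technique, not the formalised program verified in the entry:

    object Inversions {
      // Counts pairs (i, j) with i < j and xs(i) > xs(j) in O(n log n).
      def count(xs: Vector[Int]): Long = sortAndCount(xs)._2

      private def sortAndCount(xs: Vector[Int]): (Vector[Int], Long) =
        if (xs.length <= 1) (xs, 0L)
        else {
          val (l, r) = xs.splitAt(xs.length / 2)
          val (ls, cl) = sortAndCount(l)
          val (rs, cr) = sortAndCount(r)
          val (merged, cm) = merge(ls, rs)
          (merged, cl + cr + cm)
        }

      // When an element of the right half is emitted before the remaining
      // elements of the left half, each of those left elements forms one
      // inversion with it.
      private def merge(l: Vector[Int], r: Vector[Int]): (Vector[Int], Long) = {
        val out = Vector.newBuilder[Int]
        var i = 0; var j = 0; var inv = 0L
        while (i < l.length && j < r.length)
          if (l(i) <= r(j)) { out += l(i); i += 1 }
          else { out += r(j); j += 1; inv += (l.length - i) }
        out ++= l.drop(i); out ++= r.drop(j)
        (out.result(), inv)
      }
    }

For example, count(Vector(2, 4, 1, 3, 5)) yields 3, for the pairs (2,1), (4,1) and (4,3).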
diff --git a/web/entries/List_Update.html b/web/entries/List_Update.html
--- a/web/entries/List_Update.html
+++ b/web/entries/List_Update.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Analysis of List Update Algorithms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>nalysis
of
<font class="first">L</font>ist
<font class="first">U</font>pdate
<font class="first">A</font>lgorithms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Analysis of List Update Algorithms</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://in.tum.de/~haslbema/">Maximilian P.L. Haslbeck</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-02-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
These theories formalize the quantitative analysis of a number of classical algorithms for the list update problem: 2-competitiveness of move-to-front, the lower bound of 2 for the competitiveness of deterministic list update algorithms and 1.6-competitiveness of the randomized COMB algorithm, the best randomized list update algorithm known to date.
The material is based on the first two chapters of <i>Online Computation
and Competitive Analysis</i> by Borodin and El-Yaniv.
</p>
<p>
For an informal description see the FSTTCS 2016 publication
<a href="http://www21.in.tum.de/~nipkow/pubs/fsttcs16.html">Verified Analysis of List Update Algorithms</a>
by Haslbeck and Nipkow.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{List_Update-AFP,
author = {Maximilian P.L. Haslbeck and Tobias Nipkow},
title = {Analysis of List Update Algorithms},
journal = {Archive of Formal Proofs},
month = feb,
year = 2016,
note = {\url{http://isa-afp.org/entries/List_Update.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="List-Index.html">List-Index</a>, <a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List_Update/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/List_Update/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/List_Update/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-List_Update-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-List_Update-2019-06-11.tar.gz">
afp-List_Update-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-List_Update-2018-08-16.tar.gz">
afp-List_Update-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-List_Update-2017-10-10.tar.gz">
afp-List_Update-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-List_Update-2016-12-17.tar.gz">
afp-List_Update-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-List_Update-2016-10-15.tar.gz">
afp-List_Update-2016-10-15.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-List_Update-2016-02-23.tar.gz">
afp-List_Update-2016-02-23.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-List_Update-2016-02-22.tar.gz">
afp-List_Update-2016-02-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
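The List_Update abstract analyses move-to-front (MTF) under the standard cost model of the list update problem: serving a request costs the position of the requested item, after which MTF moves that item to the front. A toy Scala simulation of this cost model, assuming every request occurs in the list (only to make the setting concrete; the competitive-analysis proofs are in the Isabelle theories):

    object MTF {
      def cost[A](initial: List[A], requests: List[A]): Int = {
        var lst = initial
        var total = 0
        for (x <- requests) {
          total += lst.indexOf(x) + 1        // access cost = 1-based position of x
          lst = x :: lst.filterNot(_ == x)   // move the accessed item to the front
        }
        total
      }
    }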
diff --git a/web/entries/LocalLexing.html b/web/entries/LocalLexing.html
--- a/web/entries/LocalLexing.html
+++ b/web/entries/LocalLexing.html
@@ -1,209 +1,209 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Local Lexing - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ocal
<font class="first">L</font>exing
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Local Lexing</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Steven Obua (steven /at/ recursivemind /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-04-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This formalisation accompanies the paper <a
href="https://arxiv.org/abs/1702.03277">Local
Lexing</a> which introduces a novel parsing concept of the same
name. The paper also gives a high-level algorithm for local lexing as
an extension of Earley's algorithm. This formalisation proves the
algorithm to be correct with respect to its local lexing semantics. As
a special case, this formalisation thus also contains a proof of the
correctness of Earley's algorithm. The paper contains a short
-outline of how this formalisation is organised.</div></td>
+outline of how this formalisation is organised.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{LocalLexing-AFP,
author = {Steven Obua},
title = {Local Lexing},
journal = {Archive of Formal Proofs},
month = apr,
year = 2017,
note = {\url{http://isa-afp.org/entries/LocalLexing.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LocalLexing/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/LocalLexing/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/LocalLexing/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-LocalLexing-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-LocalLexing-2019-06-11.tar.gz">
afp-LocalLexing-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-LocalLexing-2018-08-16.tar.gz">
afp-LocalLexing-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-LocalLexing-2017-10-10.tar.gz">
afp-LocalLexing-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-LocalLexing-2017-04-28.tar.gz">
afp-LocalLexing-2017-04-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Localization_Ring.html b/web/entries/Localization_Ring.html
--- a/web/entries/Localization_Ring.html
+++ b/web/entries/Localization_Ring.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Localization of a Commutative Ring - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">L</font>ocalization
of
a
<font class="first">C</font>ommutative
<font class="first">R</font>ing
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Localization of a Commutative Ring</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://sites.google.com/site/anthonybordg/">Anthony Bordg</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-06-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the localization of a commutative ring R with respect to
a multiplicative subset (i.e. a submonoid of R seen as a
multiplicative monoid). This localization is itself a commutative ring
and we build the natural homomorphism of rings from R to its
-localization.</div></td>
+localization.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Localization_Ring-AFP,
author = {Anthony Bordg},
title = {The Localization of a Commutative Ring},
journal = {Archive of Formal Proofs},
month = jun,
year = 2018,
note = {\url{http://isa-afp.org/entries/Localization_Ring.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Localization_Ring/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Localization_Ring/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Localization_Ring/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Localization_Ring-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Localization_Ring-2019-06-11.tar.gz">
afp-Localization_Ring-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Localization_Ring-2018-08-16.tar.gz">
afp-Localization_Ring-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Localization_Ring-2018-06-17.tar.gz">
afp-Localization_Ring-2018-06-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
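For orientation, the construction described in the Localization_Ring abstract is the standard one from commutative algebra: given a commutative ring R and a multiplicative submonoid S, one takes

    S^{-1}R = (R \times S)/\sim, \qquad
    (r, s) \sim (r', s') \iff \exists\, t \in S.\; t\,(r s' - r' s) = 0,

and the natural ring homomorphism from R to S^{-1}R sends r to the class of (r, 1). This is the textbook definition; the precise formulation used in the AFP entry may differ in presentation.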
diff --git a/web/entries/Locally-Nameless-Sigma.html b/web/entries/Locally-Nameless-Sigma.html
--- a/web/entries/Locally-Nameless-Sigma.html
+++ b/web/entries/Locally-Nameless-Sigma.html
@@ -1,264 +1,264 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Locally Nameless Sigma Calculus - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ocally
<font class="first">N</font>ameless
<font class="first">S</font>igma
<font class="first">C</font>alculus
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Locally Nameless Sigma Calculus</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Ludovic Henrio (Ludovic /dot/ Henrio /at/ sophia /dot/ inria /dot/ fr),
Florian Kammüller (flokam /at/ cs /dot/ tu-berlin /dot/ de),
Bianca Lutz (sowilo /at/ cs /dot/ tu-berlin /dot/ de) and
Henry Sudhof (hsudhof /at/ cs /dot/ tu-berlin /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-04-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present a Theory of Objects based on the original functional sigma-calculus by Abadi and Cardelli but with an additional parameter to methods. We prove confluence of the operational semantics following the outline of Nipkow's proof of confluence for the lambda-calculus reusing his theory Commutation, a generic diamond lemma reduction. We furthermore formalize a simple type system for our sigma-calculus including a proof of type safety. The entire development uses the concept of Locally Nameless representation for binders. We reuse an earlier proof of confluence for a simpler sigma-calculus based on de Bruijn indices and lists to represent objects.</div></td>
+ <td class="abstract mathjax_process">We present a Theory of Objects based on the original functional sigma-calculus by Abadi and Cardelli but with an additional parameter to methods. We prove confluence of the operational semantics following the outline of Nipkow's proof of confluence for the lambda-calculus reusing his theory Commutation, a generic diamond lemma reduction. We furthermore formalize a simple type system for our sigma-calculus including a proof of type safety. The entire development uses the concept of Locally Nameless representation for binders. We reuse an earlier proof of confluence for a simpler sigma-calculus based on de Bruijn indices and lists to represent objects.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Locally-Nameless-Sigma-AFP,
author = {Ludovic Henrio and Florian Kammüller and Bianca Lutz and Henry Sudhof},
title = {Locally Nameless Sigma Calculus},
journal = {Archive of Formal Proofs},
month = apr,
year = 2010,
note = {\url{http://isa-afp.org/entries/Locally-Nameless-Sigma.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Applicative_Lifting.html">Applicative_Lifting</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Locally-Nameless-Sigma/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Locally-Nameless-Sigma/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Locally-Nameless-Sigma/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Locally-Nameless-Sigma-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Locally-Nameless-Sigma-2019-06-11.tar.gz">
afp-Locally-Nameless-Sigma-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Locally-Nameless-Sigma-2018-08-16.tar.gz">
afp-Locally-Nameless-Sigma-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Locally-Nameless-Sigma-2017-10-10.tar.gz">
afp-Locally-Nameless-Sigma-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Locally-Nameless-Sigma-2016-12-17.tar.gz">
afp-Locally-Nameless-Sigma-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Locally-Nameless-Sigma-2016-02-22.tar.gz">
afp-Locally-Nameless-Sigma-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Locally-Nameless-Sigma-2015-05-27.tar.gz">
afp-Locally-Nameless-Sigma-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Locally-Nameless-Sigma-2014-08-28.tar.gz">
afp-Locally-Nameless-Sigma-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Locally-Nameless-Sigma-2013-12-11.tar.gz">
afp-Locally-Nameless-Sigma-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Locally-Nameless-Sigma-2013-11-17.tar.gz">
afp-Locally-Nameless-Sigma-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Locally-Nameless-Sigma-2013-02-16.tar.gz">
afp-Locally-Nameless-Sigma-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Locally-Nameless-Sigma-2012-05-24.tar.gz">
afp-Locally-Nameless-Sigma-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Locally-Nameless-Sigma-2011-10-11.tar.gz">
afp-Locally-Nameless-Sigma-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Locally-Nameless-Sigma-2011-02-11.tar.gz">
afp-Locally-Nameless-Sigma-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Locally-Nameless-Sigma-2010-07-01.tar.gz">
afp-Locally-Nameless-Sigma-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Locally-Nameless-Sigma-2010-05-03.tar.gz">
afp-Locally-Nameless-Sigma-2010-05-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lowe_Ontological_Argument.html b/web/entries/Lowe_Ontological_Argument.html
--- a/web/entries/Lowe_Ontological_Argument.html
+++ b/web/entries/Lowe_Ontological_Argument.html
@@ -1,222 +1,222 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>omputer-assisted
<font class="first">R</font>econstruction
and
<font class="first">A</font>ssessment
of
<font class="first">E</font>.
<font class="first">J</font>.
<font class="first">L</font>owe's
<font class="first">M</font>odal
<font class="first">O</font>ntological
<font class="first">A</font>rgument
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
David Fuenmayor (davfuenmayor /at/ gmail /dot/ com) and
<a href="http://christoph-benzmueller.de">Christoph Benzmüller</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-09-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Computers may help us to understand --not just verify-- philosophical
arguments. By utilizing modern proof assistants in an iterative
interpretive process, we can reconstruct and assess an argument by
fully formal means. Through the mechanization of a variant of St.
Anselm's ontological argument by E. J. Lowe, which is a
paradigmatic example of a natural-language argument with strong ties
to metaphysics and religion, we offer an ideal showcase for our
-computer-assisted interpretive method.</div></td>
+computer-assisted interpretive method.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lowe_Ontological_Argument-AFP,
author = {David Fuenmayor and Christoph Benzmüller},
title = {Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument},
journal = {Archive of Formal Proofs},
month = sep,
year = 2017,
note = {\url{http://isa-afp.org/entries/Lowe_Ontological_Argument.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lowe_Ontological_Argument/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lowe_Ontological_Argument/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lowe_Ontological_Argument/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lowe_Ontological_Argument-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lowe_Ontological_Argument-2019-06-11.tar.gz">
afp-Lowe_Ontological_Argument-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lowe_Ontological_Argument-2018-08-16.tar.gz">
afp-Lowe_Ontological_Argument-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lowe_Ontological_Argument-2017-10-16.tar.gz">
afp-Lowe_Ontological_Argument-2017-10-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lower_Semicontinuous.html b/web/entries/Lower_Semicontinuous.html
--- a/web/entries/Lower_Semicontinuous.html
+++ b/web/entries/Lower_Semicontinuous.html
@@ -1,247 +1,247 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lower Semicontinuous Functions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ower
<font class="first">S</font>emicontinuous
<font class="first">F</font>unctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lower Semicontinuous Functions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Bogdan Grechuk (grechukbogdan /at/ yandex /dot/ ru)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-01-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We define the notions of lower and upper semicontinuity for functions from a metric space to the extended real line. We prove that a function is both lower and upper semicontinuous if and only if it is continuous. We also give several equivalent characterizations of lower semicontinuity. In particular, we prove that a function is lower semicontinuous if and only if its epigraph is a closed set. Also, we introduce the notion of the lower semicontinuous hull of an arbitrary function and prove its basic properties.</div></td>
+ <td class="abstract mathjax_process">We define the notions of lower and upper semicontinuity for functions from a metric space to the extended real line. We prove that a function is both lower and upper semicontinuous if and only if it is continuous. We also give several equivalent characterizations of lower semicontinuity. In particular, we prove that a function is lower semicontinuous if and only if its epigraph is a closed set. Also, we introduce the notion of the lower semicontinuous hull of an arbitrary function and prove its basic properties.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lower_Semicontinuous-AFP,
author = {Bogdan Grechuk},
title = {Lower Semicontinuous Functions},
journal = {Archive of Formal Proofs},
month = jan,
year = 2011,
note = {\url{http://isa-afp.org/entries/Lower_Semicontinuous.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lower_Semicontinuous/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lower_Semicontinuous/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lower_Semicontinuous/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lower_Semicontinuous-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lower_Semicontinuous-2019-06-11.tar.gz">
afp-Lower_Semicontinuous-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lower_Semicontinuous-2018-08-16.tar.gz">
afp-Lower_Semicontinuous-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lower_Semicontinuous-2017-10-10.tar.gz">
afp-Lower_Semicontinuous-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lower_Semicontinuous-2016-12-17.tar.gz">
afp-Lower_Semicontinuous-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Lower_Semicontinuous-2016-02-22.tar.gz">
afp-Lower_Semicontinuous-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Lower_Semicontinuous-2015-05-27.tar.gz">
afp-Lower_Semicontinuous-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Lower_Semicontinuous-2014-08-28.tar.gz">
afp-Lower_Semicontinuous-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Lower_Semicontinuous-2013-12-11.tar.gz">
afp-Lower_Semicontinuous-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Lower_Semicontinuous-2013-11-17.tar.gz">
afp-Lower_Semicontinuous-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Lower_Semicontinuous-2013-02-16.tar.gz">
afp-Lower_Semicontinuous-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Lower_Semicontinuous-2012-05-24.tar.gz">
afp-Lower_Semicontinuous-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Lower_Semicontinuous-2011-10-11.tar.gz">
afp-Lower_Semicontinuous-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Lower_Semicontinuous-2011-02-11.tar.gz">
afp-Lower_Semicontinuous-2011-02-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lp.html b/web/entries/Lp.html
--- a/web/entries/Lp.html
+++ b/web/entries/Lp.html
@@ -1,205 +1,205 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Lp spaces - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>p
spaces
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Lp spaces</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Sebastien Gouezel
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-10-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
-Lp is the space of functions whose p-th power is integrable. It is one of the most fundamental Banach spaces that is used in analysis and probability. We develop a framework for function spaces, and then implement the Lp spaces in this framework using the existing integration theory in Isabelle/HOL. Our development contains most fundamental properties of Lp spaces, notably the Hölder and Minkowski inequalities, completeness of Lp, duality, stability under almost sure convergence, multiplication of functions in Lp and Lq, stability under conditional expectation.</div></td>
+ <td class="abstract mathjax_process">
+Lp is the space of functions whose p-th power is integrable. It is one of the most fundamental Banach spaces that is used in analysis and probability. We develop a framework for function spaces, and then implement the Lp spaces in this framework using the existing integration theory in Isabelle/HOL. Our development contains most fundamental properties of Lp spaces, notably the Hölder and Minkowski inequalities, completeness of Lp, duality, stability under almost sure convergence, multiplication of functions in Lp and Lq, stability under conditional expectation.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Lp-AFP,
author = {Sebastien Gouezel},
title = {Lp spaces},
journal = {Archive of Formal Proofs},
month = oct,
year = 2016,
note = {\url{http://isa-afp.org/entries/Lp.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Ergodic_Theory.html">Ergodic_Theory</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Fourier.html">Fourier</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lp/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Lp/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Lp/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Lp-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Lp-2019-06-11.tar.gz">
afp-Lp-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Lp-2018-08-16.tar.gz">
afp-Lp-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Lp-2017-10-10.tar.gz">
afp-Lp-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Lp-2016-12-17.tar.gz">
afp-Lp-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Lucas_Theorem.html b/web/entries/Lucas_Theorem.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Lucas_Theorem.html
@@ -0,0 +1,192 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="utf-8">
+<title>Lucas's Theorem - Archive of Formal Proofs
+</title>
+<link rel="stylesheet" type="text/css" href="../front.css">
+<link rel="icon" href="../images/favicon.ico" type="image/icon">
+<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
+<!-- MathJax for LaTeX support in abstracts -->
+<script>
+MathJax = {
+ tex: {
+ inlineMath: [['$', '$'], ['\\(', '\\)']]
+ },
+ processEscapes: true,
+ svg: {
+ fontCache: 'global'
+ }
+};
+</script>
+<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
+</head>
+
+<body class="mathjax_ignore">
+
+<table width="100%">
+<tbody>
+<tr>
+
+<!-- Navigation -->
+<td width="20%" align="center" valign="top">
+ <p>&nbsp;</p>
+ <a href="https://www.isa-afp.org/">
+ <img src="../images/isabelle.png" width="100" height="88" border=0>
+ </a>
+ <p>&nbsp;</p>
+ <p>&nbsp;</p>
+ <table class="nav" width="80%">
+ <tr>
+ <td class="nav" width="100%"><a href="../index.html">Home</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../about.html">About</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../submitting.html">Submission</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../updating.html">Updating Entries</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../using.html">Using Entries</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../search.html">Search</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../statistics.html">Statistics</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../topics.html">Index</a></td>
+ </tr>
+ <tr>
+ <td class="nav"><a href="../download.html">Download</a></td>
+ </tr>
+ </table>
+ <p>&nbsp;</p>
+ <p>&nbsp;</p>
+</td>
+
+
+<!-- Content -->
+<td width="80%" valign="top">
+<div align="center">
+ <p>&nbsp;</p>
+ <h1> <font class="first">L</font>ucas's
+
+ <font class="first">T</font>heorem
+
+</h1>
+ <p>&nbsp;</p>
+
+<table width="80%" class="data">
+<tbody>
+<tr>
+ <td class="datahead" width="20%">Title:</td>
+ <td class="data" width="80%">Lucas's Theorem</td>
+</tr>
+
+<tr>
+ <td class="datahead">
+ Author:
+ </td>
+ <td class="data">
+ Chelsea Edmonds (cle47 /at/ cam /dot/ ac /dot/ uk)
+ </td>
+</tr>
+
+
+
+<tr>
+ <td class="datahead">Submission date:</td>
+ <td class="data">2020-04-07</td>
+</tr>
+
+<tr>
+ <td class="datahead" valign="top">Abstract:</td>
+ <td class="abstract mathjax_process">
+This work presents a formalisation of a generating function proof for
+Lucas's theorem. We first outline extensions to the existing
+Formal Power Series (FPS) library, including an equivalence relation
+for coefficients modulo <em>n</em>, an alternate binomial theorem statement,
+and a formalised proof of the Freshman's dream (mod <em>p</em>) lemma.
+The second part of the work presents the formal proof of Lucas's
+Theorem. Working backwards, the formalisation first proves a well
+known corollary of the theorem which is easier to formalise, and then
+applies induction to prove the original theorem statement. The proof
+of the corollary aims to provide a good example of a formalised
+generating function equivalence proof using the FPS library. The final
+theorem statement is intended to be integrated into the formalised
+proof of Hilbert's 10th Problem.</td>
+</tr>
+
+
+<tr>
+ <td class="datahead" valign="top">BibTeX:</td>
+ <td class="formatted">
+ <pre>@article{Lucas_Theorem-AFP,
+ author = {Chelsea Edmonds},
+ title = {Lucas's Theorem},
+ journal = {Archive of Formal Proofs},
+ month = apr,
+ year = 2020,
+ note = {\url{http://isa-afp.org/entries/Lucas_Theorem.html},
+ Formal proof development},
+ ISSN = {2150-914x},
+}</pre>
+ </td>
+</tr>
+
+ <tr><td class="datahead">License:</td>
+ <td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
+
+
+
+
+
+
+ </tbody>
+</table>
+
+<p></p>
+
+<table class="links">
+ <tbody>
+ <tr>
+ <td class="links">
+ <a href="../browser_info/current/AFP/Lucas_Theorem/outline.pdf">Proof outline</a><br>
+ <a href="../browser_info/current/AFP/Lucas_Theorem/document.pdf">Proof document</a>
+ </td>
+ </tr>
+ <tr>
+ <td class="links">
+ <a href="../browser_info/current/AFP/Lucas_Theorem/index.html">Browse theories</a>
+ </td></tr>
+ <tr>
+ <td class="links">
+ <a href="../release/afp-Lucas_Theorem-current.tar.gz">Download this entry</a>
+ </td>
+ </tr>
+
+
+ <tr><td class="links">Older releases:
+ None
+ </td></tr>
+
+ </tbody>
+</table>
+
+</div>
+</td>
+
+</tr>
+</tbody>
+</table>
+
+<script src="../jquery.min.js"></script>
+<script src="../script.js"></script>
+
+</body>
+</html>
\ No newline at end of file
diff --git a/web/entries/MFMC_Countable.html b/web/entries/MFMC_Countable.html
--- a/web/entries/MFMC_Countable.html
+++ b/web/entries/MFMC_Countable.html
@@ -1,249 +1,249 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ormal
<font class="first">P</font>roof
of
the
<font class="first">M</font>ax-Flow
<font class="first">M</font>in-Cut
<font class="first">T</font>heorem
for
<font class="first">C</font>ountable
<font class="first">N</font>etworks
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This article formalises a proof of the maximum-flow minimal-cut
theorem for networks with countably many edges. A network is a
directed graph with non-negative real-valued edge labels and two
dedicated vertices, the source and the sink. A flow in a network
assigns non-negative real numbers to the edges such that for all
vertices except for the source and the sink, the sum of values on
incoming edges equals the sum of values on outgoing edges. A cut is a
subset of the vertices which contains the source, but not the sink.
Our theorem states that in every network, there is a flow and a cut
such that the flow saturates all the edges going out of the cut and is
zero on all the incoming edges. The proof is based on the paper
<em>The Max-Flow Min-Cut theorem for countable networks</em> by
Aharoni et al. Additionally, we prove a characterisation of the
lifting operation for relations on discrete probability distributions,
which leads to a concise proof of its distributivity over relation
-composition.</div></td>
+composition.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2017-09-06]:
derive characterisation for the lifting operations on discrete distributions from finite version of the max-flow min-cut theorem
(revision a7a198f5bab0)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{MFMC_Countable-AFP,
author = {Andreas Lochbihler},
title = {A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/MFMC_Countable.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="EdmondsKarp_Maxflow.html">EdmondsKarp_Maxflow</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Probabilistic_While.html">Probabilistic_While</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MFMC_Countable/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/MFMC_Countable/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MFMC_Countable/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-MFMC_Countable-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-MFMC_Countable-2019-06-11.tar.gz">
afp-MFMC_Countable-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-MFMC_Countable-2018-08-16.tar.gz">
afp-MFMC_Countable-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-MFMC_Countable-2017-10-10.tar.gz">
afp-MFMC_Countable-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-MFMC_Countable-2016-12-17.tar.gz">
afp-MFMC_Countable-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-MFMC_Countable-2016-05-09.tar.gz">
afp-MFMC_Countable-2016-05-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/MFODL_Monitor_Optimized.html b/web/entries/MFODL_Monitor_Optimized.html
--- a/web/entries/MFODL_Monitor_Optimized.html
+++ b/web/entries/MFODL_Monitor_Optimized.html
@@ -1,237 +1,237 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
an
<font class="first">O</font>ptimized
<font class="first">M</font>onitoring
<font class="first">A</font>lgorithm
for
<font class="first">M</font>etric
<font class="first">F</font>irst-Order
<font class="first">D</font>ynamic
<font class="first">L</font>ogic
with
<font class="first">A</font>ggregations
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Thibault Dardinier,
Lukas Heimes,
Martin Raszyk (martin /dot/ raszyk /at/ inf /dot/ ethz /dot/ ch),
Joshua Schneider and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-04-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A monitor is a runtime verification tool that solves the following
problem: Given a stream of time-stamped events and a policy formulated
in a specification language, decide whether the policy is satisfied at
every point in the stream. We verify the correctness of an executable
monitor for specifications given as formulas in metric first-order
dynamic logic (MFODL), which combines the features of metric
first-order temporal logic (MFOTL) and metric dynamic logic. Thus,
MFODL supports real-time constraints, first-order parameters, and
regular expressions. Additionally, the monitor supports aggregation
operations such as count and sum. This formalization, which is
described in a <a
href="http://people.inf.ethz.ch/trayteld/papers/ijcar20-verimonplus/verimonplus.pdf">
forthcoming paper at IJCAR 2020</a>, significantly extends <a
href="https://www.isa-afp.org/entries/MFOTL_Monitor.html">previous
work on a verified monitor</a> for MFOTL. Apart from the
addition of regular expressions and aggregations, we implemented <a
href="https://www.isa-afp.org/entries/Generic_Join.html">multi-way
joins</a> and a specialized sliding window algorithm to further
-optimize the monitor.</div></td>
+optimize the monitor.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{MFODL_Monitor_Optimized-AFP,
author = {Thibault Dardinier and Lukas Heimes and Martin Raszyk and Joshua Schneider and Dmitriy Traytel},
title = {Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations},
journal = {Archive of Formal Proofs},
month = apr,
year = 2020,
note = {\url{http://isa-afp.org/entries/MFODL_Monitor_Optimized.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Containers.html">Containers</a>, <a href="Generic_Join.html">Generic_Join</a>, <a href="IEEE_Floating_Point.html">IEEE_Floating_Point</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MFODL_Monitor_Optimized/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/MFODL_Monitor_Optimized/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MFODL_Monitor_Optimized/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-MFODL_Monitor_Optimized-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-MFODL_Monitor_Optimized-2020-04-12.tar.gz">
afp-MFODL_Monitor_Optimized-2020-04-12.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-MFODL_Monitor_Optimized-2020-04-11.tar.gz">
afp-MFODL_Monitor_Optimized-2020-04-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/MFOTL_Monitor.html b/web/entries/MFOTL_Monitor.html
--- a/web/entries/MFOTL_Monitor.html
+++ b/web/entries/MFOTL_Monitor.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
a
<font class="first">M</font>onitoring
<font class="first">A</font>lgorithm
for
<font class="first">M</font>etric
<font class="first">F</font>irst-Order
<font class="first">T</font>emporal
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Joshua Schneider and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-07-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A monitor is a runtime verification tool that solves the following
problem: Given a stream of time-stamped events and a policy formulated
in a specification language, decide whether the policy is satisfied at
every point in the stream. We verify the correctness of an executable
monitor for specifications given as formulas in metric first-order
temporal logic (MFOTL), an expressive extension of linear temporal
logic with real-time constraints and first-order quantification. The
verified monitor implements a simplified variant of the algorithm used
in the efficient MonPoly monitoring tool. The formalization is
presented in a forthcoming <a
href="http://people.inf.ethz.ch/trayteld/papers/rv19-verimon/verimon.pdf">RV
2019 paper</a>, which also compares the output of the verified
monitor to that of other monitoring tools on randomly generated
inputs. This case study revealed several errors in the optimized but
-unverified tools.</div></td>
+unverified tools.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{MFOTL_Monitor-AFP,
author = {Joshua Schneider and Dmitriy Traytel},
title = {Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic},
journal = {Archive of Formal Proofs},
month = jul,
year = 2019,
note = {\url{http://isa-afp.org/entries/MFOTL_Monitor.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Containers.html">Containers</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Generic_Join.html">Generic_Join</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MFOTL_Monitor/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/MFOTL_Monitor/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MFOTL_Monitor/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-MFOTL_Monitor-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-MFOTL_Monitor-2019-07-05.tar.gz">
afp-MFOTL_Monitor-2019-07-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/MSO_Regex_Equivalence.html b/web/entries/MSO_Regex_Equivalence.html
--- a/web/entries/MSO_Regex_Equivalence.html
+++ b/web/entries/MSO_Regex_Equivalence.html
@@ -1,260 +1,260 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>ecision
<font class="first">P</font>rocedures
for
<font class="first">M</font>SO
on
<font class="first">W</font>ords
<font class="first">B</font>ased
on
<font class="first">D</font>erivatives
of
<font class="first">R</font>egular
<font class="first">E</font>xpressions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-06-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Monadic second-order logic on finite words (MSO) is a decidable yet
expressive logic into which many decision problems can be encoded. Since MSO
formulas correspond to regular languages, equivalence of MSO formulas can be
reduced to the equivalence of some regular structures (e.g. automata). We
verify an executable decision procedure for MSO formulas that is not based
on automata but on regular expressions.
<p>
Decision procedures for regular expression equivalence have been formalized
before, usually based on Brzozowski derivatives. Yet, for a straightforward
embedding of MSO formulas into regular expressions an extension of regular
expressions with a projection operation is required. We prove total
correctness and completeness of an equivalence checker for regular
expressions extended in that way. We also define a language-preserving
translation of formulas into regular expressions with respect to two
different semantics of MSO.
<p>
-The formalization is described in this <a href="http://www21.in.tum.de/~nipkow/pubs/icfp13.html">ICFP 2013 functional pearl</a>.</div></td>
+The formalization is described in this <a href="http://www21.in.tum.de/~nipkow/pubs/icfp13.html">ICFP 2013 functional pearl</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{MSO_Regex_Equivalence-AFP,
author = {Dmitriy Traytel and Tobias Nipkow},
title = {Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions},
journal = {Archive of Formal Proofs},
month = jun,
year = 2014,
note = {\url{http://isa-afp.org/entries/MSO_Regex_Equivalence.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Deriving.html">Deriving</a>, <a href="List-Index.html">List-Index</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MSO_Regex_Equivalence/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/MSO_Regex_Equivalence/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MSO_Regex_Equivalence/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-MSO_Regex_Equivalence-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-MSO_Regex_Equivalence-2019-06-11.tar.gz">
afp-MSO_Regex_Equivalence-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-MSO_Regex_Equivalence-2018-08-16.tar.gz">
afp-MSO_Regex_Equivalence-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-MSO_Regex_Equivalence-2017-10-10.tar.gz">
afp-MSO_Regex_Equivalence-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-MSO_Regex_Equivalence-2016-12-17.tar.gz">
afp-MSO_Regex_Equivalence-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-MSO_Regex_Equivalence-2016-02-22.tar.gz">
afp-MSO_Regex_Equivalence-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-MSO_Regex_Equivalence-2015-05-27.tar.gz">
afp-MSO_Regex_Equivalence-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-MSO_Regex_Equivalence-2014-08-28.tar.gz">
afp-MSO_Regex_Equivalence-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-MSO_Regex_Equivalence-2014-06-12.tar.gz">
afp-MSO_Regex_Equivalence-2014-06-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Markov_Models.html b/web/entries/Markov_Models.html
--- a/web/entries/Markov_Models.html
+++ b/web/entries/Markov_Models.html
@@ -1,258 +1,258 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Markov Models - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>arkov
<font class="first">M</font>odels
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Markov Models</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-01-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This is a formalization of Markov models in Isabelle/HOL. It
+ <td class="abstract mathjax_process">This is a formalization of Markov models in Isabelle/HOL. It
builds on Isabelle's probability theory. The available models are
currently Discrete-Time Markov Chains and extensions of them with
rewards.
<p>
As application of these models we formalize probabilistic model
checking of pCTL formulas, analysis of IPv4 address allocation in
ZeroConf and an analysis of the anonymity of the Crowds protocol.
-<a href="http://arxiv.org/abs/1212.3870">See here for the corresponding paper.</a></div></td>
+<a href="http://arxiv.org/abs/1212.3870">See here for the corresponding paper.</a></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Markov_Models-AFP,
author = {Johannes Hölzl and Tobias Nipkow},
title = {Markov Models},
journal = {Archive of Formal Proofs},
month = jan,
year = 2012,
note = {\url{http://isa-afp.org/entries/Markov_Models.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a>, <a href="Gauss-Jordan-Elim-Fun.html">Gauss-Jordan-Elim-Fun</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Hidden_Markov_Models.html">Hidden_Markov_Models</a>, <a href="Probabilistic_Noninterference.html">Probabilistic_Noninterference</a>, <a href="Probabilistic_Timed_Automata.html">Probabilistic_Timed_Automata</a>, <a href="Stochastic_Matrices.html">Stochastic_Matrices</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Markov_Models/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Markov_Models/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Markov_Models/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Markov_Models-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Markov_Models-2019-06-11.tar.gz">
afp-Markov_Models-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Markov_Models-2018-08-16.tar.gz">
afp-Markov_Models-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Markov_Models-2017-10-10.tar.gz">
afp-Markov_Models-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Markov_Models-2016-12-17.tar.gz">
afp-Markov_Models-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Markov_Models-2016-02-22.tar.gz">
afp-Markov_Models-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Markov_Models-2015-05-27.tar.gz">
afp-Markov_Models-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Markov_Models-2014-08-28.tar.gz">
afp-Markov_Models-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Markov_Models-2013-12-11.tar.gz">
afp-Markov_Models-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Markov_Models-2013-11-17.tar.gz">
afp-Markov_Models-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Markov_Models-2013-02-16.tar.gz">
afp-Markov_Models-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Markov_Models-2012-05-24.tar.gz">
afp-Markov_Models-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Markov_Models-2012-01-08.tar.gz">
afp-Markov_Models-2012-01-08.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Markov_Models-2012-01-05.tar.gz">
afp-Markov_Models-2012-01-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Marriage.html b/web/entries/Marriage.html
--- a/web/entries/Marriage.html
+++ b/web/entries/Marriage.html
@@ -1,259 +1,259 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hall's Marriage Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>all's
<font class="first">M</font>arriage
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Hall's Marriage Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Dongchen Jiang (dongchenjiang /at/ googlemail /dot/ com) and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-12-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Two proofs of Hall's Marriage Theorem: one due to Halmos and Vaughan, one due to Rado.</div></td>
+ <td class="abstract mathjax_process">Two proofs of Hall's Marriage Theorem: one due to Halmos and Vaughan, one due to Rado.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2011-09-09]: Added Rado's proof</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Marriage-AFP,
author = {Dongchen Jiang and Tobias Nipkow},
title = {Hall's Marriage Theorem},
journal = {Archive of Formal Proofs},
month = dec,
year = 2010,
note = {\url{http://isa-afp.org/entries/Marriage.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Latin_Square.html">Latin_Square</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Marriage/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Marriage/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Marriage/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Marriage-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Marriage-2019-06-11.tar.gz">
afp-Marriage-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Marriage-2018-08-16.tar.gz">
afp-Marriage-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Marriage-2017-10-10.tar.gz">
afp-Marriage-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Marriage-2016-12-17.tar.gz">
afp-Marriage-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Marriage-2016-02-22.tar.gz">
afp-Marriage-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Marriage-2015-05-27.tar.gz">
afp-Marriage-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Marriage-2014-08-28.tar.gz">
afp-Marriage-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Marriage-2013-12-11.tar.gz">
afp-Marriage-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Marriage-2013-11-17.tar.gz">
afp-Marriage-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Marriage-2013-02-16.tar.gz">
afp-Marriage-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Marriage-2012-05-24.tar.gz">
afp-Marriage-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Marriage-2011-10-11.tar.gz">
afp-Marriage-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Marriage-2011-02-11.tar.gz">
afp-Marriage-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Marriage-2010-12-17.tar.gz">
afp-Marriage-2010-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Mason_Stothers.html b/web/entries/Mason_Stothers.html
--- a/web/entries/Mason_Stothers.html
+++ b/web/entries/Mason_Stothers.html
@@ -1,219 +1,219 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Mason–Stothers Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">M</font>ason–Stothers
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Mason–Stothers Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-12-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article provides a formalisation of Snyder’s simple and
elegant proof of the Mason&ndash;Stothers theorem, which is the
polynomial analogue of the famous abc Conjecture for integers.
Remarkably, Snyder found this very elegant proof when he was still a
high-school student.</p> <p>In short, the statement of the
theorem is that three non-zero coprime polynomials
<em>A</em>, <em>B</em>, <em>C</em>
over a field which sum to 0 and do not all have vanishing derivatives
fulfil max{deg(<em>A</em>), deg(<em>B</em>),
deg(<em>C</em>)} < deg(rad(<em>ABC</em>))
where rad(<em>P</em>) denotes the
<em>radical</em> of <em>P</em>,
i.&thinsp;e. the product of all unique irreducible factors of
<em>P</em>.</p> <p>This theorem also implies a
kind of polynomial analogue of Fermat’s Last Theorem for polynomials:
except for trivial cases,
<em>A<sup>n</sup></em> +
<em>B<sup>n</sup></em> +
<em>C<sup>n</sup></em> = 0 implies
n&nbsp;&le;&nbsp;2 for coprime polynomials
<em>A</em>, <em>B</em>, <em>C</em>
-over a field.</p></div></td>
+over a field.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Mason_Stothers-AFP,
author = {Manuel Eberl},
title = {The Mason–Stothers Theorem},
journal = {Archive of Formal Proofs},
month = dec,
year = 2017,
note = {\url{http://isa-afp.org/entries/Mason_Stothers.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Mason_Stothers/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Mason_Stothers/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Mason_Stothers/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Mason_Stothers-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Mason_Stothers-2019-06-11.tar.gz">
afp-Mason_Stothers-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Mason_Stothers-2018-08-16.tar.gz">
afp-Mason_Stothers-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Mason_Stothers-2017-12-22.tar.gz">
afp-Mason_Stothers-2017-12-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Matrix.html b/web/entries/Matrix.html
--- a/web/entries/Matrix.html
+++ b/web/entries/Matrix.html
@@ -1,288 +1,288 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Executable Matrix Operations on Matrices of Arbitrary Dimensions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>xecutable
<font class="first">M</font>atrix
<font class="first">O</font>perations
on
<font class="first">M</font>atrices
of
<font class="first">A</font>rbitrary
<font class="first">D</font>imensions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Executable Matrix Operations on Matrices of Arbitrary Dimensions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-06-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We provide the operations of matrix addition, multiplication,
transposition, and matrix comparisons as executable functions over
ordered semirings. Moreover, it is proven that strongly normalizing
(monotone) orders can be lifted to strongly normalizing (monotone) orders
over matrices. We further show that the standard semirings over the
naturals, integers, and rationals, as well as the arctic semirings
satisfy the axioms that are required by our matrix theory. Our
formalization is part of the <a
href="http://cl-informatik.uibk.ac.at/software/ceta">CeTA</a> system
which contains several termination techniques. The provided theories have
been essential to formalize matrix-interpretations and arctic
-interpretations.</div></td>
+interpretations.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2010-09-17]: Moved theory on arbitrary (ordered) semirings to Abstract Rewriting.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Matrix-AFP,
author = {Christian Sternagel and René Thiemann},
title = {Executable Matrix Operations on Matrices of Arbitrary Dimensions},
journal = {Archive of Formal Proofs},
month = jun,
year = 2010,
note = {\url{http://isa-afp.org/entries/Matrix.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Matrix_Tensor.html">Matrix_Tensor</a>, <a href="Polynomials.html">Polynomials</a>, <a href="Transitive-Closure.html">Transitive-Closure</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Matrix/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Matrix/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Matrix/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Matrix-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Matrix-2019-06-11.tar.gz">
afp-Matrix-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Matrix-2018-08-16.tar.gz">
afp-Matrix-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Matrix-2017-10-10.tar.gz">
afp-Matrix-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Matrix-2016-12-17.tar.gz">
afp-Matrix-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Matrix-2016-02-22.tar.gz">
afp-Matrix-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Matrix-2015-05-27.tar.gz">
afp-Matrix-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Matrix-2014-08-28.tar.gz">
afp-Matrix-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Matrix-2013-12-11.tar.gz">
afp-Matrix-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Matrix-2013-11-17.tar.gz">
afp-Matrix-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Matrix-2013-02-16.tar.gz">
afp-Matrix-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Matrix-2012-05-24.tar.gz">
afp-Matrix-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Matrix-2011-10-11.tar.gz">
afp-Matrix-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Matrix-2011-02-11.tar.gz">
afp-Matrix-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Matrix-2010-07-01.tar.gz">
afp-Matrix-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Matrix-2010-06-17.tar.gz">
afp-Matrix-2010-06-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Matrix_Tensor.html b/web/entries/Matrix_Tensor.html
--- a/web/entries/Matrix_Tensor.html
+++ b/web/entries/Matrix_Tensor.html
@@ -1,227 +1,227 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Tensor Product of Matrices - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>ensor
<font class="first">P</font>roduct
of
<font class="first">M</font>atrices
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Tensor Product of Matrices</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
T.V.H. Prathamesh (prathamesh /at/ imsc /dot/ res /dot/ in)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-01-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In this work, the Kronecker tensor product of matrices and the proofs of
some of its properties are formalized. Properties which have been formalized
include associativity of the tensor product and the mixed-product
-property.</div></td>
+property.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Matrix_Tensor-AFP,
author = {T.V.H. Prathamesh},
title = {Tensor Product of Matrices},
journal = {Archive of Formal Proofs},
month = jan,
year = 2016,
note = {\url{http://isa-afp.org/entries/Matrix_Tensor.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Matrix.html">Matrix</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Knot_Theory.html">Knot_Theory</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Matrix_Tensor/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Matrix_Tensor/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Matrix_Tensor/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Matrix_Tensor-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Matrix_Tensor-2019-06-11.tar.gz">
afp-Matrix_Tensor-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Matrix_Tensor-2018-08-16.tar.gz">
afp-Matrix_Tensor-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Matrix_Tensor-2017-10-10.tar.gz">
afp-Matrix_Tensor-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Matrix_Tensor-2016-12-17.tar.gz">
afp-Matrix_Tensor-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Matrix_Tensor-2016-02-22.tar.gz">
afp-Matrix_Tensor-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Matrix_Tensor-2016-01-19.tar.gz">
afp-Matrix_Tensor-2016-01-19.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Matrix_Tensor-2016-01-18.tar.gz">
afp-Matrix_Tensor-2016-01-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Matroids.html b/web/entries/Matroids.html
--- a/web/entries/Matroids.html
+++ b/web/entries/Matroids.html
@@ -1,198 +1,198 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Matroids - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>atroids
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Matroids</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Jonas Keinholz
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-11-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article defines the combinatorial structures known as
<em>Independence Systems</em> and
<em>Matroids</em> and provides basic concepts and theorems
related to them. These structures play an important role in
combinatorial optimisation, e. g. greedy algorithms such as
Kruskal's algorithm. The development is based on Oxley's
<a href="http://www.math.lsu.edu/~oxley/survey4.pdf">`What
-is a Matroid?'</a>.</p></div></td>
+is a Matroid?'</a>.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Matroids-AFP,
author = {Jonas Keinholz},
title = {Matroids},
journal = {Archive of Formal Proofs},
month = nov,
year = 2018,
note = {\url{http://isa-afp.org/entries/Matroids.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Kruskal.html">Kruskal</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Matroids/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Matroids/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Matroids/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Matroids-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Matroids-2019-06-11.tar.gz">
afp-Matroids-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Matroids-2018-11-20.tar.gz">
afp-Matroids-2018-11-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Max-Card-Matching.html b/web/entries/Max-Card-Matching.html
--- a/web/entries/Max-Card-Matching.html
+++ b/web/entries/Max-Card-Matching.html
@@ -1,269 +1,269 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Maximum Cardinality Matching - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>aximum
<font class="first">C</font>ardinality
<font class="first">M</font>atching
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Maximum Cardinality Matching</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Christine Rizkallah
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-07-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
A <em>matching</em> in a graph <i>G</i> is a subset <i>M</i> of the
edges of <i>G</i> such that no two share an endpoint. A matching has maximum
cardinality if its cardinality is at least as large as that of any other
matching. An <em>odd-set cover</em> <i>OSC</i> of a graph <i>G</i> is a
labeling of the nodes of <i>G</i> with integers such that every edge of
<i>G</i> is either incident to a node labeled 1 or connects two nodes
labeled with the same number <i>i &ge; 2</i>.
</p><p>
This article proves Edmonds' theorem:<br>
Let <i>M</i> be a matching in a graph <i>G</i> and let <i>OSC</i> be an
odd-set cover of <i>G</i>.
For any <i>i &ge; 0</i>, let <var>n(i)</var> be the number of nodes
labeled <i>i</i>. If <i>|M| = n(1) +
&sum;<sub>i &ge; 2</sub>(n(i) div 2)</i>,
then <i>M</i> is a maximum cardinality matching.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Max-Card-Matching-AFP,
author = {Christine Rizkallah},
title = {Maximum Cardinality Matching},
journal = {Archive of Formal Proofs},
month = jul,
year = 2011,
note = {\url{http://isa-afp.org/entries/Max-Card-Matching.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Max-Card-Matching/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Max-Card-Matching/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Max-Card-Matching/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Max-Card-Matching-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Max-Card-Matching-2019-06-11.tar.gz">
afp-Max-Card-Matching-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Max-Card-Matching-2018-08-16.tar.gz">
afp-Max-Card-Matching-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Max-Card-Matching-2017-10-10.tar.gz">
afp-Max-Card-Matching-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Max-Card-Matching-2016-12-17.tar.gz">
afp-Max-Card-Matching-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Max-Card-Matching-2016-02-22.tar.gz">
afp-Max-Card-Matching-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Max-Card-Matching-2015-05-27.tar.gz">
afp-Max-Card-Matching-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Max-Card-Matching-2014-08-28.tar.gz">
afp-Max-Card-Matching-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Max-Card-Matching-2013-12-11.tar.gz">
afp-Max-Card-Matching-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Max-Card-Matching-2013-11-17.tar.gz">
afp-Max-Card-Matching-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Max-Card-Matching-2013-02-16.tar.gz">
afp-Max-Card-Matching-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Max-Card-Matching-2012-05-24.tar.gz">
afp-Max-Card-Matching-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Max-Card-Matching-2011-10-11.tar.gz">
afp-Max-Card-Matching-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Max-Card-Matching-2011-08-19.tar.gz">
afp-Max-Card-Matching-2011-08-19.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Max-Card-Matching-2011-08-15.tar.gz">
afp-Max-Card-Matching-2011-08-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Median_Of_Medians_Selection.html b/web/entries/Median_Of_Medians_Selection.html
--- a/web/entries/Median_Of_Medians_Selection.html
+++ b/web/entries/Median_Of_Medians_Selection.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Median-of-Medians Selection Algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">M</font>edian-of-Medians
<font class="first">S</font>election
<font class="first">A</font>lgorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Median-of-Medians Selection Algorithm</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-12-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry provides an executable functional implementation
of the Median-of-Medians algorithm for selecting the
<em>k</em>-th smallest element of an unsorted list
deterministically in linear time. The size bounds for the recursive
call that lead to the linear upper bound on the run-time of the
-algorithm are also proven. </p></div></td>
+algorithm are also proven. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Median_Of_Medians_Selection-AFP,
author = {Manuel Eberl},
title = {The Median-of-Medians Selection Algorithm},
journal = {Archive of Formal Proofs},
month = dec,
year = 2017,
note = {\url{http://isa-afp.org/entries/Median_Of_Medians_Selection.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="KD_Tree.html">KD_Tree</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Median_Of_Medians_Selection/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Median_Of_Medians_Selection/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Median_Of_Medians_Selection/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Median_Of_Medians_Selection-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Median_Of_Medians_Selection-2019-06-11.tar.gz">
afp-Median_Of_Medians_Selection-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Median_Of_Medians_Selection-2018-08-16.tar.gz">
afp-Median_Of_Medians_Selection-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Median_Of_Medians_Selection-2017-12-22.tar.gz">
afp-Median_Of_Medians_Selection-2017-12-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Menger.html b/web/entries/Menger.html
--- a/web/entries/Menger.html
+++ b/web/entries/Menger.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Menger's Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>enger's
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Menger's Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://logic.las.tu-berlin.de/Members/Dittmann/">Christoph Dittmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-02-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formalization of Menger's Theorem for directed and
undirected graphs in Isabelle/HOL. This well-known result shows that
if two non-adjacent distinct vertices u, v in a directed graph have no
separator smaller than n, then there exist n internally
vertex-disjoint paths from u to v. The version for undirected graphs
follows immediately because undirected graphs are a special case of
-directed graphs.</div></td>
+directed graphs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Menger-AFP,
author = {Christoph Dittmann},
title = {Menger's Theorem},
journal = {Archive of Formal Proofs},
month = feb,
year = 2017,
note = {\url{http://isa-afp.org/entries/Menger.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Menger/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Menger/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Menger/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Menger-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Menger-2019-06-11.tar.gz">
afp-Menger-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Menger-2018-08-16.tar.gz">
afp-Menger-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Menger-2017-10-10.tar.gz">
afp-Menger-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Menger-2017-02-27.tar.gz">
afp-Menger-2017-02-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Mersenne_Primes.html b/web/entries/Mersenne_Primes.html
--- a/web/entries/Mersenne_Primes.html
+++ b/web/entries/Mersenne_Primes.html
@@ -1,202 +1,202 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Mersenne primes and the Lucas–Lehmer test - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>ersenne
primes
and
the
<font class="first">L</font>ucas–Lehmer
test
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Mersenne primes and the Lucas–Lehmer test</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-01-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article provides formal proofs of basic properties of
Mersenne numbers, i. e. numbers of the form
2<sup><em>n</em></sup> - 1, and especially of
Mersenne primes.</p> <p>In particular, an efficient,
verified, and executable version of the Lucas&ndash;Lehmer test is
developed. This test decides primality for Mersenne numbers in time
-polynomial in <em>n</em>.</p></div></td>
+polynomial in <em>n</em>.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Mersenne_Primes-AFP,
author = {Manuel Eberl},
title = {Mersenne primes and the Lucas–Lehmer test},
journal = {Archive of Formal Proofs},
month = jan,
year = 2020,
note = {\url{http://isa-afp.org/entries/Mersenne_Primes.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Native_Word.html">Native_Word</a>, <a href="Pell.html">Pell</a>, <a href="Probabilistic_Prime_Tests.html">Probabilistic_Prime_Tests</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Mersenne_Primes/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Mersenne_Primes/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Mersenne_Primes/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Mersenne_Primes-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Mersenne_Primes-2020-01-20.tar.gz">
afp-Mersenne_Primes-2020-01-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/MiniML.html b/web/entries/MiniML.html
--- a/web/entries/MiniML.html
+++ b/web/entries/MiniML.html
@@ -1,301 +1,301 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Mini ML - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>ini
<font class="first">M</font>L
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Mini ML</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Wolfgang Naraschewski and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-03-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory defines the type inference rules and the type inference algorithm <i>W</i> for MiniML (simply-typed lambda terms with <tt>let</tt>) due to Milner. It proves the soundness and completeness of <i>W</i> w.r.t. the rules.</div></td>
+ <td class="abstract mathjax_process">This theory defines the type inference rules and the type inference algorithm <i>W</i> for MiniML (simply-typed lambda terms with <tt>let</tt>) due to Milner. It proves the soundness and completeness of <i>W</i> w.r.t. the rules.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{MiniML-AFP,
author = {Wolfgang Naraschewski and Tobias Nipkow},
title = {Mini ML},
journal = {Archive of Formal Proofs},
month = mar,
year = 2004,
note = {\url{http://isa-afp.org/entries/MiniML.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MiniML/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/MiniML/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MiniML/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-MiniML-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-MiniML-2019-06-11.tar.gz">
afp-MiniML-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-MiniML-2019-04-18.tar.gz">
afp-MiniML-2019-04-18.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-MiniML-2018-08-16.tar.gz">
afp-MiniML-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-MiniML-2017-10-10.tar.gz">
afp-MiniML-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-MiniML-2016-12-17.tar.gz">
afp-MiniML-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-MiniML-2016-02-22.tar.gz">
afp-MiniML-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-MiniML-2015-05-27.tar.gz">
afp-MiniML-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-MiniML-2014-08-28.tar.gz">
afp-MiniML-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-MiniML-2013-12-11.tar.gz">
afp-MiniML-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-MiniML-2013-11-17.tar.gz">
afp-MiniML-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-MiniML-2013-03-02.tar.gz">
afp-MiniML-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-MiniML-2013-02-16.tar.gz">
afp-MiniML-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-MiniML-2012-05-24.tar.gz">
afp-MiniML-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-MiniML-2011-10-11.tar.gz">
afp-MiniML-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-MiniML-2011-02-11.tar.gz">
afp-MiniML-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-MiniML-2010-07-01.tar.gz">
afp-MiniML-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-MiniML-2009-12-12.tar.gz">
afp-MiniML-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-MiniML-2009-04-29.tar.gz">
afp-MiniML-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-MiniML-2008-06-10.tar.gz">
afp-MiniML-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-MiniML-2007-11-27.tar.gz">
afp-MiniML-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-MiniML-2005-10-14.tar.gz">
afp-MiniML-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-MiniML-2004-05-21.tar.gz">
afp-MiniML-2004-05-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-MiniML-2004-04-20.tar.gz">
afp-MiniML-2004-04-20.tar.gz
</a>
</li>
<li>Isabelle 2003:
<a href="../release/afp-MiniML-2004-03-23.tar.gz">
afp-MiniML-2004-03-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Minimal_SSA.html b/web/entries/Minimal_SSA.html
--- a/web/entries/Minimal_SSA.html
+++ b/web/entries/Minimal_SSA.html
@@ -1,226 +1,226 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Minimal Static Single Assignment Form - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>inimal
<font class="first">S</font>tatic
<font class="first">S</font>ingle
<font class="first">A</font>ssignment
<font class="first">F</font>orm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Minimal Static Single Assignment Form</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Max Wagner (max /at/ trollbu /dot/ de) and
<a href="http://pp.ipd.kit.edu/person.php?id=88">Denis Lohner</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-01-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This formalization is an extension to <a
href="https://www.isa-afp.org/entries/Formal_SSA.html">"Verified
Construction of Static Single Assignment Form"</a>. In
their work, the authors have shown that <a
href="https://doi.org/10.1007/978-3-642-37051-9_6">Braun
et al.'s static single assignment (SSA) construction
algorithm</a> produces minimal SSA form for input programs with
a reducible control flow graph (CFG). However, Braun et al. also
proposed an extension to their algorithm that they claim produces
minimal SSA form even for irreducible CFGs.<br> In this
formalization we support that claim by giving a mechanized proof.
</p>
<p>As the extension of Braun et al.'s algorithm
aims for removing so-called redundant strongly connected components of
phi functions, we show that this suffices to guarantee minimality
according to <a href="https://doi.org/10.1145/115372.115320">Cytron et
-al.</a>.</p></div></td>
+al.</a>.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Minimal_SSA-AFP,
author = {Max Wagner and Denis Lohner},
title = {Minimal Static Single Assignment Form},
journal = {Archive of Formal Proofs},
month = jan,
year = 2017,
note = {\url{http://isa-afp.org/entries/Minimal_SSA.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Formal_SSA.html">Formal_SSA</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Minimal_SSA/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Minimal_SSA/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Minimal_SSA/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Minimal_SSA-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Minimal_SSA-2019-06-11.tar.gz">
afp-Minimal_SSA-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Minimal_SSA-2018-08-16.tar.gz">
afp-Minimal_SSA-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Minimal_SSA-2017-10-10.tar.gz">
afp-Minimal_SSA-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Minimal_SSA-2017-01-19.tar.gz">
afp-Minimal_SSA-2017-01-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Minkowskis_Theorem.html b/web/entries/Minkowskis_Theorem.html
--- a/web/entries/Minkowskis_Theorem.html
+++ b/web/entries/Minkowskis_Theorem.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Minkowski's Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>inkowski's
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Minkowski's Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-07-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>Minkowski's theorem relates a subset of
&#8477;<sup>n</sup>, the Lebesgue measure, and the
integer lattice &#8484;<sup>n</sup>: It states that
any convex subset of &#8477;<sup>n</sup> with volume
greater than 2<sup>n</sup> contains at least one lattice
point from &#8484;<sup>n</sup>\{0}, i.&thinsp;e. a
non-zero point with integer coefficients.</p> <p>A
related theorem which directly implies this is Blichfeldt's
theorem, which states that any subset of
&#8477;<sup>n</sup> with a volume greater than 1
contains two different points whose difference vector has integer
components.</p> <p>The entry contains a proof of both
-theorems.</p></div></td>
+theorems.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Minkowskis_Theorem-AFP,
author = {Manuel Eberl},
title = {Minkowski's Theorem},
journal = {Archive of Formal Proofs},
month = jul,
year = 2017,
note = {\url{http://isa-afp.org/entries/Minkowskis_Theorem.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Minkowskis_Theorem/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Minkowskis_Theorem/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Minkowskis_Theorem/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Minkowskis_Theorem-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Minkowskis_Theorem-2019-06-11.tar.gz">
afp-Minkowskis_Theorem-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Minkowskis_Theorem-2018-08-16.tar.gz">
afp-Minkowskis_Theorem-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Minkowskis_Theorem-2017-10-10.tar.gz">
afp-Minkowskis_Theorem-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Minkowskis_Theorem-2017-07-15.tar.gz">
afp-Minkowskis_Theorem-2017-07-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Minsky_Machines.html b/web/entries/Minsky_Machines.html
--- a/web/entries/Minsky_Machines.html
+++ b/web/entries/Minsky_Machines.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Minsky Machines - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>insky
<font class="first">M</font>achines
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Minsky Machines</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Bertram Felgenhauer (int-e /at/ gmx /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-08-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p> We formalize undecidability results for Minsky machines. To
this end, we also formalize recursive inseparability.
</p><p> We start by proving that Minsky machines can
compute arbitrary primitive recursive and recursive functions. We then
show that there is a deterministic Minsky machine with one argument
and two final states such that the set of inputs that are accepted in
one state is recursively inseparable from the set of inputs that are
accepted in the other state. </p><p> As a corollary, the
set of Minsky configurations that reach the first state but not the
second is recursively inseparable from the set of Minsky configurations
that reach the second state but not the first. In particular, both
these sets are undecidable. </p><p> We do
<em>not</em> prove that recursive functions can simulate
-Minsky machines. </p></div></td>
+Minsky machines. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Minsky_Machines-AFP,
author = {Bertram Felgenhauer},
title = {Minsky Machines},
journal = {Archive of Formal Proofs},
month = aug,
year = 2018,
note = {\url{http://isa-afp.org/entries/Minsky_Machines.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a>, <a href="Recursion-Theory-I.html">Recursion-Theory-I</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Minsky_Machines/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Minsky_Machines/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Minsky_Machines/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Minsky_Machines-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Minsky_Machines-2019-06-11.tar.gz">
afp-Minsky_Machines-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Minsky_Machines-2018-08-16.tar.gz">
afp-Minsky_Machines-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Minsky_Machines-2018-08-14.tar.gz">
afp-Minsky_Machines-2018-08-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
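The central notion in the Minsky_Machines abstract above, recursive inseparability, has the following standard definition (given here informally; the entry's formalisation may phrase it differently).

    % Disjoint sets A, B of naturals are recursively inseparable iff no
    % recursive (decidable) set C contains A and is disjoint from B:
    \[ \neg\, \exists\, C \subseteq \mathbb{N}.\; C \text{ recursive} \;\wedge\; A \subseteq C \;\wedge\; B \cap C = \emptyset \]

In particular, two recursively inseparable sets are both undecidable, which is how the corollary stated in the abstract yields the undecidability results.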
diff --git a/web/entries/Modal_Logics_for_NTS.html b/web/entries/Modal_Logics_for_NTS.html
--- a/web/entries/Modal_Logics_for_NTS.html
+++ b/web/entries/Modal_Logics_for_NTS.html
@@ -1,239 +1,239 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Modal Logics for Nominal Transition Systems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>odal
<font class="first">L</font>ogics
for
<font class="first">N</font>ominal
<font class="first">T</font>ransition
<font class="first">S</font>ystems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Modal Logics for Nominal Transition Systems</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Tjark Weber (tjark /dot/ weber /at/ it /dot/ uu /dot/ se),
Lars-Henrik Eriksson (lhe /at/ it /dot/ uu /dot/ se),
Joachim Parrow (joachim /dot/ parrow /at/ it /dot/ uu /dot/ se),
Johannes Borgström (johannes /dot/ borgstrom /at/ it /dot/ uu /dot/ se) and
Ramunas Gutkovas (ramunas /dot/ gutkovas /at/ it /dot/ uu /dot/ se)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-10-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize a uniform semantic substrate for a wide variety of
process calculi where states and action labels can be from arbitrary
nominal sets. A Hennessy-Milner logic for these systems is defined,
and proved adequate for bisimulation equivalence. A main novelty is
the construction of an infinitary nominal data type to model formulas
with (finitely supported) infinite conjunctions and actions that may
contain binding names. The logic is generalized to treat different
bisimulation variants such as early, late and open in a systematic
-way.</div></td>
+way.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2017-01-29]:
Formalization of weak bisimilarity added
(revision c87cc2057d9c)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Modal_Logics_for_NTS-AFP,
author = {Tjark Weber and Lars-Henrik Eriksson and Joachim Parrow and Johannes Borgström and Ramunas Gutkovas},
title = {Modal Logics for Nominal Transition Systems},
journal = {Archive of Formal Proofs},
month = oct,
year = 2016,
note = {\url{http://isa-afp.org/entries/Modal_Logics_for_NTS.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Nominal2.html">Nominal2</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Modal_Logics_for_NTS/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Modal_Logics_for_NTS/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Modal_Logics_for_NTS/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Modal_Logics_for_NTS-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Modal_Logics_for_NTS-2019-06-11.tar.gz">
afp-Modal_Logics_for_NTS-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Modal_Logics_for_NTS-2018-08-16.tar.gz">
afp-Modal_Logics_for_NTS-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Modal_Logics_for_NTS-2017-10-10.tar.gz">
afp-Modal_Logics_for_NTS-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Modal_Logics_for_NTS-2016-12-17.tar.gz">
afp-Modal_Logics_for_NTS-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Modal_Logics_for_NTS-2016-10-27.tar.gz">
afp-Modal_Logics_for_NTS-2016-10-27.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Modal_Logics_for_NTS-2016-10-25.tar.gz">
afp-Modal_Logics_for_NTS-2016-10-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Modular_Assembly_Kit_Security.html b/web/entries/Modular_Assembly_Kit_Security.html
--- a/web/entries/Modular_Assembly_Kit_Security.html
+++ b/web/entries/Modular_Assembly_Kit_Security.html
@@ -1,225 +1,225 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>n
<font class="first">I</font>sabelle/HOL
<font class="first">F</font>ormalization
of
the
<font class="first">M</font>odular
<font class="first">A</font>ssembly
<font class="first">K</font>it
for
<font class="first">S</font>ecurity
<font class="first">P</font>roperties
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Oliver Bračevac (bracevac /at/ st /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Richard Gay (gay /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Sylvia Grewe (grewe /at/ st /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Heiko Mantel (mantel /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Henning Sudbrock (sudbrock /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de) and
Markus Tasch (tasch /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-05-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The "Modular Assembly Kit for Security Properties" (MAKS) is
a framework for both the definition and verification of possibilistic
information-flow security properties at the specification level. MAKS
supports the uniform representation of a wide range of possibilistic
information-flow properties and provides support for the verification
of such properties via unwinding results and compositionality results.
-We provide a formalization of this framework in Isabelle/HOL.</div></td>
+We provide a formalization of this framework in Isabelle/HOL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Modular_Assembly_Kit_Security-AFP,
author = {Oliver Bračevac and Richard Gay and Sylvia Grewe and Heiko Mantel and Henning Sudbrock and Markus Tasch},
title = {An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties},
journal = {Archive of Formal Proofs},
month = may,
year = 2018,
note = {\url{http://isa-afp.org/entries/Modular_Assembly_Kit_Security.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Modular_Assembly_Kit_Security/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Modular_Assembly_Kit_Security/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Modular_Assembly_Kit_Security/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Modular_Assembly_Kit_Security-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Modular_Assembly_Kit_Security-2019-06-11.tar.gz">
afp-Modular_Assembly_Kit_Security-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Modular_Assembly_Kit_Security-2018-08-16.tar.gz">
afp-Modular_Assembly_Kit_Security-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Modular_Assembly_Kit_Security-2018-05-09.tar.gz">
afp-Modular_Assembly_Kit_Security-2018-05-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Monad_Memo_DP.html b/web/entries/Monad_Memo_DP.html
--- a/web/entries/Monad_Memo_DP.html
+++ b/web/entries/Monad_Memo_DP.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Monadification, Memoization and Dynamic Programming - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>onadification,
<font class="first">M</font>emoization
and
<font class="first">D</font>ynamic
<font class="first">P</font>rogramming
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Monadification, Memoization and Dynamic Programming</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>,
Shuwei Hu (shuwei /dot/ hu /at/ tum /dot/ de) and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-05-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a lightweight framework for the automatic verified
(functional or imperative) memoization of recursive functions. Our
tool can turn a pure Isabelle/HOL function definition into a
monadified version in a state monad or the Imperative HOL heap monad,
and prove a correspondence theorem. We provide a variety of memory
implementations for the two types of monads. A number of simple
techniques allow us to achieve bottom-up computation and
space-efficient memoization. The framework’s utility is demonstrated
on a number of representative dynamic programming problems. A detailed
-description of our work can be found in the accompanying paper [2].</div></td>
+description of our work can be found in the accompanying paper [2].</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Monad_Memo_DP-AFP,
author = {Simon Wimmer and Shuwei Hu and Tobias Nipkow},
title = {Monadification, Memoization and Dynamic Programming},
journal = {Archive of Formal Proofs},
month = may,
year = 2018,
note = {\url{http://isa-afp.org/entries/Monad_Memo_DP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Show.html">Show</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Hidden_Markov_Models.html">Hidden_Markov_Models</a>, <a href="Optimal_BST.html">Optimal_BST</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Monad_Memo_DP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Monad_Memo_DP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Monad_Memo_DP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Monad_Memo_DP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Monad_Memo_DP-2019-06-11.tar.gz">
afp-Monad_Memo_DP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Monad_Memo_DP-2018-08-16.tar.gz">
afp-Monad_Memo_DP-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Monad_Memo_DP-2018-05-23.tar.gz">
afp-Monad_Memo_DP-2018-05-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
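To illustrate the monadification step described in the Monad_Memo_DP abstract above, the following Scala sketch threads a memo table through a recursive function in state-monad style. It is a hypothetical analogue for intuition only; the entry itself works in Isabelle/HOL and proves a correspondence theorem relating the monadified definition back to the pure one.

    object MemoSketch {
      type Memo = Map[Int, BigInt]

      // fib in "monadified" style: the memo table is passed in and returned,
      // mimicking a state monad; known results are looked up before recursing.
      def fib(n: Int, memo: Memo): (BigInt, Memo) =
        memo.get(n) match {
          case Some(v) => (v, memo)
          case None =>
            val (result, memo2) =
              if (n < 2) (BigInt(n), memo)
              else {
                val (a, m1) = fib(n - 1, memo)
                val (b, m2) = fib(n - 2, m1)
                (a + b, m2)
              }
            (result, memo2 + (n -> result))
        }

      def main(args: Array[String]): Unit =
        println(fib(90, Map.empty)._1) // linearly many calls thanks to the memo table
    }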
diff --git a/web/entries/Monad_Normalisation.html b/web/entries/Monad_Normalisation.html
--- a/web/entries/Monad_Normalisation.html
+++ b/web/entries/Monad_Normalisation.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Monad normalisation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>onad
normalisation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Monad normalisation</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Joshua Schneider,
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a> and
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The usual monad laws can directly be used as rewrite rules for Isabelle’s
simplifier to normalise monadic HOL terms and decide equivalences.
In a commutative monad, however, the commutativity law is a
higher-order permutative rewrite rule that makes the simplifier loop.
This AFP entry implements a simproc that normalises monadic
expressions in commutative monads using ordered rewriting. The
simproc can also permute computations across control operators like if
-and case.</div></td>
+and case.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Monad_Normalisation-AFP,
author = {Joshua Schneider and Manuel Eberl and Andreas Lochbihler},
title = {Monad normalisation},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Monad_Normalisation.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CryptHOL.html">CryptHOL</a>, <a href="Randomised_BSTs.html">Randomised_BSTs</a>, <a href="Skip_Lists.html">Skip_Lists</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Monad_Normalisation/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Monad_Normalisation/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Monad_Normalisation/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Monad_Normalisation-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Monad_Normalisation-2019-06-11.tar.gz">
afp-Monad_Normalisation-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Monad_Normalisation-2018-08-16.tar.gz">
afp-Monad_Normalisation-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Monad_Normalisation-2017-10-10.tar.gz">
afp-Monad_Normalisation-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Monad_Normalisation-2017-05-11.tar.gz">
afp-Monad_Normalisation-2017-05-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
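The looping problem mentioned in the Monad_Normalisation abstract above stems from the commutativity law of a commutative monad, shown below in do-notation. Its right-hand side is again an instance of the left-hand side, so a naive simplifier can rewrite forever; ordered rewriting, as used by the entry's simproc, applies the rule only when it decreases the term in a fixed term order.

    \[ \mathit{do}\,\{x \leftarrow p;\; y \leftarrow q;\; f\,x\,y\} \;=\; \mathit{do}\,\{y \leftarrow q;\; x \leftarrow p;\; f\,x\,y\} \]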
diff --git a/web/entries/MonoBoolTranAlgebra.html b/web/entries/MonoBoolTranAlgebra.html
--- a/web/entries/MonoBoolTranAlgebra.html
+++ b/web/entries/MonoBoolTranAlgebra.html
@@ -1,253 +1,253 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Algebra of Monotonic Boolean Transformers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>lgebra
of
<font class="first">M</font>onotonic
<font class="first">B</font>oolean
<font class="first">T</font>ransformers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Algebra of Monotonic Boolean Transformers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Viorel Preoteasa (viorel /dot/ preoteasa /at/ aalto /dot/ fi)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-09-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Algebras of imperative programming languages have been successful in reasoning about programs. In general an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We formalize here an algebra which can be used to model total correctness, refinement, demonic and angelic choice. The basic model of this algebra are monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself).</div></td>
+ <td class="abstract mathjax_process">Algebras of imperative programming languages have been successful in reasoning about programs. In general, an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We formalize here an algebra which can be used to model total correctness, refinement, demonic and angelic choice. The basic models of this algebra are monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{MonoBoolTranAlgebra-AFP,
author = {Viorel Preoteasa},
title = {Algebra of Monotonic Boolean Transformers},
journal = {Archive of Formal Proofs},
month = sep,
year = 2011,
note = {\url{http://isa-afp.org/entries/MonoBoolTranAlgebra.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="LatticeProperties.html">LatticeProperties</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MonoBoolTranAlgebra/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/MonoBoolTranAlgebra/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MonoBoolTranAlgebra/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-MonoBoolTranAlgebra-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-MonoBoolTranAlgebra-2019-06-11.tar.gz">
afp-MonoBoolTranAlgebra-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-MonoBoolTranAlgebra-2018-08-16.tar.gz">
afp-MonoBoolTranAlgebra-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-MonoBoolTranAlgebra-2017-10-10.tar.gz">
afp-MonoBoolTranAlgebra-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-MonoBoolTranAlgebra-2016-12-17.tar.gz">
afp-MonoBoolTranAlgebra-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-MonoBoolTranAlgebra-2016-02-22.tar.gz">
afp-MonoBoolTranAlgebra-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-MonoBoolTranAlgebra-2015-05-27.tar.gz">
afp-MonoBoolTranAlgebra-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-MonoBoolTranAlgebra-2014-08-28.tar.gz">
afp-MonoBoolTranAlgebra-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-MonoBoolTranAlgebra-2013-12-11.tar.gz">
afp-MonoBoolTranAlgebra-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-MonoBoolTranAlgebra-2013-11-17.tar.gz">
afp-MonoBoolTranAlgebra-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-MonoBoolTranAlgebra-2013-02-16.tar.gz">
afp-MonoBoolTranAlgebra-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-MonoBoolTranAlgebra-2012-05-24.tar.gz">
afp-MonoBoolTranAlgebra-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-MonoBoolTranAlgebra-2011-10-11.tar.gz">
afp-MonoBoolTranAlgebra-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-MonoBoolTranAlgebra-2011-09-27.tar.gz">
afp-MonoBoolTranAlgebra-2011-09-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/MonoidalCategory.html b/web/entries/MonoidalCategory.html
--- a/web/entries/MonoidalCategory.html
+++ b/web/entries/MonoidalCategory.html
@@ -1,233 +1,233 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Monoidal Categories - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>onoidal
<font class="first">C</font>ategories
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Monoidal Categories</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Eugene W. Stark (stark /at/ cs /dot/ stonybrook /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Building on the formalization of basic category theory set out in the
author's previous AFP article, the present article formalizes
some basic aspects of the theory of monoidal categories. Among the
notions defined here are monoidal category, monoidal functor, and
equivalence of monoidal categories. The main theorems formalized are
MacLane's coherence theorem and the constructions of the free
monoidal category and free strict monoidal category generated by a
given category. The coherence theorem is proved syntactically, using
a structurally recursive approach to reduction of terms that might
have some novel aspects. We also give proofs of some results given by
Etingof et al., which may prove useful in a formal setting. In
particular, we show that the left and right unitors need not be taken
as given data in the definition of monoidal category, nor does the
definition of monoidal functor need to take as given a specific
isomorphism expressing the preservation of the unit object. Our
definitions of monoidal category and monoidal functor are stated so as
-to take advantage of the economy afforded by these facts.</div></td>
+to take advantage of the economy afforded by these facts.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2017-05-18]:
Integrated material from MonoidalCategory/Category3Adapter into Category3/ and deleted adapter.
(revision 015543cdd069)<br>
[2018-05-29]:
Modifications required due to 'Category3' changes. Introduced notation for "in hom".
(revision 8318366d4575)<br>
[2020-02-15]:
Cosmetic improvements.
(revision a51840d36867)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{MonoidalCategory-AFP,
author = {Eugene W. Stark},
title = {Monoidal Categories},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/MonoidalCategory.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Category3.html">Category3</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Bicategory.html">Bicategory</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MonoidalCategory/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/MonoidalCategory/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MonoidalCategory/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-MonoidalCategory-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-MonoidalCategory-2019-06-11.tar.gz">
afp-MonoidalCategory-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-MonoidalCategory-2018-08-16.tar.gz">
afp-MonoidalCategory-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-MonoidalCategory-2017-10-10.tar.gz">
afp-MonoidalCategory-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-MonoidalCategory-2017-05-05.tar.gz">
afp-MonoidalCategory-2017-05-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Monomorphic_Monad.html b/web/entries/Monomorphic_Monad.html
--- a/web/entries/Monomorphic_Monad.html
+++ b/web/entries/Monomorphic_Monad.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Effect polymorphism in higher-order logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>ffect
polymorphism
in
higher-order
logic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Effect polymorphism in higher-order logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The notion of a monad cannot be expressed within higher-order logic
(HOL) due to type system restrictions. We show that if a monad is used
with values of only one type, this notion can be formalised in HOL.
Based on this idea, we develop a library of effect specifications and
implementations of monads and monad transformers. Hence, we can
abstract over the concrete monad in HOL definitions and thus use the
same definition for different (combinations of) effects. We illustrate
the usefulness of effect polymorphism with a monadic interpreter for a
-simple language.</div></td>
+simple language.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-02-15]:
added further specifications and implementations of non-determinism;
more examples
(revision bc5399eea78e)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Monomorphic_Monad-AFP,
author = {Andreas Lochbihler},
title = {Effect polymorphism in higher-order logic},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Monomorphic_Monad.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CryptHOL.html">CryptHOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Monomorphic_Monad/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Monomorphic_Monad/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Monomorphic_Monad/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Monomorphic_Monad-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Monomorphic_Monad-2019-06-11.tar.gz">
afp-Monomorphic_Monad-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Monomorphic_Monad-2018-08-16.tar.gz">
afp-Monomorphic_Monad-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Monomorphic_Monad-2017-10-10.tar.gz">
afp-Monomorphic_Monad-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Monomorphic_Monad-2017-05-11.tar.gz">
afp-Monomorphic_Monad-2017-05-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/MuchAdoAboutTwo.html b/web/entries/MuchAdoAboutTwo.html
--- a/web/entries/MuchAdoAboutTwo.html
+++ b/web/entries/MuchAdoAboutTwo.html
@@ -1,279 +1,279 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Much Ado About Two - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>uch
<font class="first">A</font>do
<font class="first">A</font>bout
<font class="first">T</font>wo
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Much Ado About Two</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~boehmes/">Sascha Böhme</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2007-11-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This article is an Isabelle formalisation of a paper with the same title. In a similar way as Knuth's 0-1-principle for sorting algorithms, that paper develops a 0-1-2-principle for parallel prefix computations.</div></td>
+ <td class="abstract mathjax_process">This article is an Isabelle formalisation of a paper with the same title. In a manner similar to Knuth's 0-1-principle for sorting algorithms, that paper develops a 0-1-2-principle for parallel prefix computations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{MuchAdoAboutTwo-AFP,
author = {Sascha Böhme},
title = {Much Ado About Two},
journal = {Archive of Formal Proofs},
month = nov,
year = 2007,
note = {\url{http://isa-afp.org/entries/MuchAdoAboutTwo.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MuchAdoAboutTwo/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/MuchAdoAboutTwo/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/MuchAdoAboutTwo/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-MuchAdoAboutTwo-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-MuchAdoAboutTwo-2019-06-11.tar.gz">
afp-MuchAdoAboutTwo-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-MuchAdoAboutTwo-2018-08-16.tar.gz">
afp-MuchAdoAboutTwo-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-MuchAdoAboutTwo-2017-10-10.tar.gz">
afp-MuchAdoAboutTwo-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-MuchAdoAboutTwo-2016-12-17.tar.gz">
afp-MuchAdoAboutTwo-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-MuchAdoAboutTwo-2016-02-22.tar.gz">
afp-MuchAdoAboutTwo-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-MuchAdoAboutTwo-2015-05-27.tar.gz">
afp-MuchAdoAboutTwo-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-MuchAdoAboutTwo-2014-08-28.tar.gz">
afp-MuchAdoAboutTwo-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-MuchAdoAboutTwo-2013-12-11.tar.gz">
afp-MuchAdoAboutTwo-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-MuchAdoAboutTwo-2013-11-17.tar.gz">
afp-MuchAdoAboutTwo-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-MuchAdoAboutTwo-2013-03-02.tar.gz">
afp-MuchAdoAboutTwo-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-MuchAdoAboutTwo-2013-02-16.tar.gz">
afp-MuchAdoAboutTwo-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-MuchAdoAboutTwo-2012-05-24.tar.gz">
afp-MuchAdoAboutTwo-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-MuchAdoAboutTwo-2011-10-11.tar.gz">
afp-MuchAdoAboutTwo-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-MuchAdoAboutTwo-2011-02-11.tar.gz">
afp-MuchAdoAboutTwo-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-MuchAdoAboutTwo-2010-07-01.tar.gz">
afp-MuchAdoAboutTwo-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-MuchAdoAboutTwo-2009-12-12.tar.gz">
afp-MuchAdoAboutTwo-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-MuchAdoAboutTwo-2009-04-29.tar.gz">
afp-MuchAdoAboutTwo-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-MuchAdoAboutTwo-2008-06-10.tar.gz">
afp-MuchAdoAboutTwo-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-MuchAdoAboutTwo-2007-11-27.tar.gz">
afp-MuchAdoAboutTwo-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Multi_Party_Computation.html b/web/entries/Multi_Party_Computation.html
--- a/web/entries/Multi_Party_Computation.html
+++ b/web/entries/Multi_Party_Computation.html
@@ -1,205 +1,205 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Multi-Party Computation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>ulti-Party
<font class="first">C</font>omputation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Multi-Party Computation</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://homepages.inf.ed.ac.uk/da/">David Aspinall</a> and
<a href="https://www.turing.ac.uk/people/doctoral-students/david-butler">David Butler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-05-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We use CryptHOL to consider Multi-Party Computation (MPC) protocols.
MPC was first considered by Yao in 1983 and recent advances in
efficiency and an increased demand mean it is now deployed in the real
world. Security is considered using the real/ideal world paradigm. We
first define security in the semi-honest security setting where
parties are assumed not to deviate from the protocol transcript. In
this setting we prove multiple Oblivious Transfer (OT) protocols
secure and then show security for the gates of the GMW protocol. We
then define malicious security, a stronger notion of security
where parties are assumed to be fully corrupted by an adversary. In
this setting we again consider OT, as it is a fundamental building
-block of almost all MPC protocols.</div></td>
+block of almost all MPC protocols.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Multi_Party_Computation-AFP,
author = {David Aspinall and David Butler},
title = {Multi-Party Computation},
journal = {Archive of Formal Proofs},
month = may,
year = 2019,
note = {\url{http://isa-afp.org/entries/Multi_Party_Computation.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Game_Based_Crypto.html">Game_Based_Crypto</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Multi_Party_Computation/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Multi_Party_Computation/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Multi_Party_Computation/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Multi_Party_Computation-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Multi_Party_Computation-2019-06-11.tar.gz">
afp-Multi_Party_Computation-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Multi_Party_Computation-2019-05-10.tar.gz">
afp-Multi_Party_Computation-2019-05-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Multirelations.html b/web/entries/Multirelations.html
--- a/web/entries/Multirelations.html
+++ b/web/entries/Multirelations.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Binary Multirelations - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>inary
<font class="first">M</font>ultirelations
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Binary Multirelations</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.sci.kagoshima-u.ac.jp/~furusawa/">Hitoshi Furusawa</a> and
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-06-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Binary multirelations associate elements of a set with its subsets; hence
they are binary relations from a set to its power set. Applications include
alternating automata, models and logics for games, program semantics with
dual demonic and angelic nondeterministic choices and concurrent dynamic
logics. This proof document supports an arXiv article that formalises the
basic algebra of multirelations and proposes axiom systems for them,
-ranging from weak bi-monoids to weak bi-quantales.</div></td>
+ranging from weak bi-monoids to weak bi-quantales.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Multirelations-AFP,
author = {Hitoshi Furusawa and Georg Struth},
title = {Binary Multirelations},
journal = {Archive of Formal Proofs},
month = jun,
year = 2015,
note = {\url{http://isa-afp.org/entries/Multirelations.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Kleene_Algebra.html">Kleene_Algebra</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Multirelations/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Multirelations/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Multirelations/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Multirelations-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Multirelations-2019-06-11.tar.gz">
afp-Multirelations-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Multirelations-2018-08-16.tar.gz">
afp-Multirelations-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Multirelations-2017-10-10.tar.gz">
afp-Multirelations-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Multirelations-2016-12-17.tar.gz">
afp-Multirelations-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Multirelations-2016-02-22.tar.gz">
afp-Multirelations-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Multirelations-2015-06-13.tar.gz">
afp-Multirelations-2015-06-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Myhill-Nerode.html b/web/entries/Myhill-Nerode.html
--- a/web/entries/Myhill-Nerode.html
+++ b/web/entries/Myhill-Nerode.html
@@ -1,267 +1,267 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Myhill-Nerode Theorem Based on Regular Expressions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">M</font>yhill-Nerode
<font class="first">T</font>heorem
<font class="first">B</font>ased
on
<font class="first">R</font>egular
<font class="first">E</font>xpressions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Myhill-Nerode Theorem Based on Regular Expressions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Chunhan Wu,
Xingyuan Zhang and
<a href="http://www.inf.kcl.ac.uk/staff/urbanc/">Christian Urban</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-08-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">There are many proofs of the Myhill-Nerode theorem using automata. In this library we give a proof entirely based on regular expressions, since regularity of languages can be conveniently defined using regular expressions (it is more painful in HOL to define regularity in terms of automata). We prove the first direction of the Myhill-Nerode theorem by solving equational systems that involve regular expressions. For the second direction we give two proofs: one using tagging-functions and another using partial derivatives. We also establish various closure properties of regular languages. Most details of the theories are described in our ITP 2011 paper.</div></td>
+ <td class="abstract mathjax_process">There are many proofs of the Myhill-Nerode theorem using automata. In this library we give a proof entirely based on regular expressions, since regularity of languages can be conveniently defined using regular expressions (it is more painful in HOL to define regularity in terms of automata). We prove the first direction of the Myhill-Nerode theorem by solving equational systems that involve regular expressions. For the second direction we give two proofs: one using tagging-functions and another using partial derivatives. We also establish various closure properties of regular languages. Most details of the theories are described in our ITP 2011 paper.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Myhill-Nerode-AFP,
author = {Chunhan Wu and Xingyuan Zhang and Christian Urban},
title = {The Myhill-Nerode Theorem Based on Regular Expressions},
journal = {Archive of Formal Proofs},
month = aug,
year = 2011,
note = {\url{http://isa-afp.org/entries/Myhill-Nerode.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a>, <a href="Open_Induction.html">Open_Induction</a>, <a href="Regular-Sets.html">Regular-Sets</a>, <a href="Well_Quasi_Orders.html">Well_Quasi_Orders</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Myhill-Nerode/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Myhill-Nerode/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Myhill-Nerode/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Myhill-Nerode-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Myhill-Nerode-2019-06-11.tar.gz">
afp-Myhill-Nerode-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Myhill-Nerode-2018-08-16.tar.gz">
afp-Myhill-Nerode-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Myhill-Nerode-2017-10-10.tar.gz">
afp-Myhill-Nerode-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Myhill-Nerode-2016-12-17.tar.gz">
afp-Myhill-Nerode-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Myhill-Nerode-2016-02-22.tar.gz">
afp-Myhill-Nerode-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Myhill-Nerode-2015-05-27.tar.gz">
afp-Myhill-Nerode-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Myhill-Nerode-2014-08-28.tar.gz">
afp-Myhill-Nerode-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Myhill-Nerode-2013-12-11.tar.gz">
afp-Myhill-Nerode-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Myhill-Nerode-2013-11-17.tar.gz">
afp-Myhill-Nerode-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Myhill-Nerode-2013-03-02.tar.gz">
afp-Myhill-Nerode-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Myhill-Nerode-2013-02-16.tar.gz">
afp-Myhill-Nerode-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Myhill-Nerode-2012-05-24.tar.gz">
afp-Myhill-Nerode-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Myhill-Nerode-2011-10-11.tar.gz">
afp-Myhill-Nerode-2011-10-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
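As illustrative background for the derivative-based proof mentioned in the abstract above, here is a minimal Scala sketch of Brzozowski derivatives of regular expressions, a close relative of the partial derivatives used in the entry's second proof; nothing here is taken from the Isabelle/HOL development and all names are invented for this sketch.

// Illustrative sketch only: regular expressions, nullability, and derivatives.
object RexpDerivatives {
  sealed trait Rexp
  case object Zero extends Rexp                         // empty language
  case object One extends Rexp                          // language containing only ""
  final case class Chr(c: Char) extends Rexp
  final case class Alt(r: Rexp, s: Rexp) extends Rexp
  final case class Cat(r: Rexp, s: Rexp) extends Rexp   // concatenation
  final case class Star(r: Rexp) extends Rexp

  // Does the language of r contain the empty word?
  def nullable(r: Rexp): Boolean = r match {
    case Zero      => false
    case One       => true
    case Chr(_)    => false
    case Alt(a, b) => nullable(a) || nullable(b)
    case Cat(a, b) => nullable(a) && nullable(b)
    case Star(_)   => true
  }

  // deriv(c, r) accepts exactly the words w such that c followed by w is in L(r).
  def deriv(c: Char, r: Rexp): Rexp = r match {
    case Zero | One => Zero
    case Chr(d)     => if (c == d) One else Zero
    case Alt(a, b)  => Alt(deriv(c, a), deriv(c, b))
    case Cat(a, b)  =>
      if (nullable(a)) Alt(Cat(deriv(c, a), b), deriv(c, b))
      else Cat(deriv(c, a), b)
    case Star(a)    => Cat(deriv(c, a), Star(a))
  }

  def matches(r: Rexp, w: String): Boolean =
    nullable(w.foldLeft(r)((acc, c) => deriv(c, acc)))
}

For example, matches(Star(Alt(Chr('a'), Chr('b'))), "abba") evaluates to true.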
diff --git a/web/entries/Name_Carrying_Type_Inference.html b/web/entries/Name_Carrying_Type_Inference.html
--- a/web/entries/Name_Carrying_Type_Inference.html
+++ b/web/entries/Name_Carrying_Type_Inference.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erified
<font class="first">M</font>etatheory
and
<font class="first">T</font>ype
<font class="first">I</font>nference
for
a
<font class="first">N</font>ame-Carrying
<font class="first">S</font>imply-Typed
<font class="first">L</font>ambda
<font class="first">C</font>alculus
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Michael Rawson (michaelrawson76 /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-07-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
I formalise a Church-style simply-typed
\(\lambda\)-calculus, extended with pairs, a unit value, and
projection functions, and show some metatheory of the calculus, such
as the subject reduction property. Particular attention is paid to the
treatment of names in the calculus. A nominal style of binding is
used, but I use a manual approach over Nominal Isabelle in order to
extract an executable type inference algorithm. More information can
be found in my <a
href="http://www.openthesis.org/documents/Verified-Metatheory-Type-Inference-Simply-603182.html">undergraduate
-dissertation</a>.</div></td>
+dissertation</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Name_Carrying_Type_Inference-AFP,
author = {Michael Rawson},
title = {Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus},
journal = {Archive of Formal Proofs},
month = jul,
year = 2017,
note = {\url{http://isa-afp.org/entries/Name_Carrying_Type_Inference.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Name_Carrying_Type_Inference/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Name_Carrying_Type_Inference/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Name_Carrying_Type_Inference/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Name_Carrying_Type_Inference-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Name_Carrying_Type_Inference-2019-06-11.tar.gz">
afp-Name_Carrying_Type_Inference-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Name_Carrying_Type_Inference-2018-08-16.tar.gz">
afp-Name_Carrying_Type_Inference-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Name_Carrying_Type_Inference-2017-10-10.tar.gz">
afp-Name_Carrying_Type_Inference-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Name_Carrying_Type_Inference-2017-07-15.tar.gz">
afp-Name_Carrying_Type_Inference-2017-07-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
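Because the calculus in the entry above is Church-style (binders carry type annotations), type checking needs no unification. The following Scala sketch of a checker for such a calculus with unit, pairs, and projections is purely illustrative, uses plain string names, and takes nothing from the Isabelle/HOL development.

// Illustrative sketch only: Church-style simply-typed lambda calculus
// with unit, pairs, and projections, and a straightforward type checker.
object StlcCheck {
  sealed trait Ty
  case object UnitT extends Ty
  final case class Arrow(a: Ty, b: Ty) extends Ty
  final case class Prod(a: Ty, b: Ty) extends Ty

  sealed trait Term
  final case class Var(x: String) extends Term
  final case class Lam(x: String, ty: Ty, body: Term) extends Term  // annotated binder
  final case class App(f: Term, a: Term) extends Term
  case object UnitV extends Term
  final case class Pair(a: Term, b: Term) extends Term
  final case class Fst(p: Term) extends Term
  final case class Snd(p: Term) extends Term

  // Returns Some(type) if the term is well typed in the given context, else None.
  def typeOf(ctx: Map[String, Ty], t: Term): Option[Ty] = t match {
    case Var(x)        => ctx.get(x)
    case Lam(x, ty, b) => typeOf(ctx + (x -> ty), b).map(Arrow(ty, _))
    case App(f, a) =>
      (typeOf(ctx, f), typeOf(ctx, a)) match {
        case (Some(Arrow(dom, cod)), Some(argTy)) if dom == argTy => Some(cod)
        case _ => None
      }
    case UnitV      => Some(UnitT)
    case Pair(a, b) => for (ta <- typeOf(ctx, a); tb <- typeOf(ctx, b)) yield Prod(ta, tb)
    case Fst(p)     => typeOf(ctx, p).collect { case Prod(a, _) => a }
    case Snd(p)     => typeOf(ctx, p).collect { case Prod(_, b) => b }
  }
}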
diff --git a/web/entries/Nat-Interval-Logic.html b/web/entries/Nat-Interval-Logic.html
--- a/web/entries/Nat-Interval-Logic.html
+++ b/web/entries/Nat-Interval-Logic.html
@@ -1,257 +1,257 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Interval Temporal Logic on Natural Numbers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nterval
<font class="first">T</font>emporal
<font class="first">L</font>ogic
on
<font class="first">N</font>atural
<font class="first">N</font>umbers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Interval Temporal Logic on Natural Numbers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
David Trachtenherz
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-02-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We introduce a theory of temporal logic operators using sets of natural numbers as time domain, formalized in a shallow embedding manner. The theory comprises special natural intervals (theory IL_Interval: open and closed intervals, continuous and modulo intervals, interval traversing results), operators for shifting intervals to left/right on the number axis as well as expanding/contracting intervals by constant factors (theory IL_IntervalOperators.thy), and ultimately definitions and results for unary and binary temporal operators on arbitrary natural sets (theory IL_TemporalOperators).</div></td>
+ <td class="abstract mathjax_process">We introduce a theory of temporal logic operators using sets of natural numbers as time domain, formalized in a shallow embedding manner. The theory comprises special natural intervals (theory IL_Interval: open and closed intervals, continuous and modulo intervals, interval traversing results), operators for shifting intervals to left/right on the number axis as well as expanding/contracting intervals by constant factors (theory IL_IntervalOperators.thy), and ultimately definitions and results for unary and binary temporal operators on arbitrary natural sets (theory IL_TemporalOperators).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Nat-Interval-Logic-AFP,
author = {David Trachtenherz},
title = {Interval Temporal Logic on Natural Numbers},
journal = {Archive of Formal Proofs},
month = feb,
year = 2011,
note = {\url{http://isa-afp.org/entries/Nat-Interval-Logic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="List-Infinite.html">List-Infinite</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="AutoFocus-Stream.html">AutoFocus-Stream</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Nat-Interval-Logic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Nat-Interval-Logic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Nat-Interval-Logic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Nat-Interval-Logic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Nat-Interval-Logic-2019-06-11.tar.gz">
afp-Nat-Interval-Logic-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Nat-Interval-Logic-2018-08-16.tar.gz">
afp-Nat-Interval-Logic-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Nat-Interval-Logic-2017-10-10.tar.gz">
afp-Nat-Interval-Logic-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Nat-Interval-Logic-2016-12-17.tar.gz">
afp-Nat-Interval-Logic-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Nat-Interval-Logic-2016-02-22.tar.gz">
afp-Nat-Interval-Logic-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Nat-Interval-Logic-2015-05-27.tar.gz">
afp-Nat-Interval-Logic-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Nat-Interval-Logic-2014-08-28.tar.gz">
afp-Nat-Interval-Logic-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Nat-Interval-Logic-2013-12-11.tar.gz">
afp-Nat-Interval-Logic-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Nat-Interval-Logic-2013-11-17.tar.gz">
afp-Nat-Interval-Logic-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Nat-Interval-Logic-2013-02-16.tar.gz">
afp-Nat-Interval-Logic-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Nat-Interval-Logic-2012-05-24.tar.gz">
afp-Nat-Interval-Logic-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Nat-Interval-Logic-2011-10-11.tar.gz">
afp-Nat-Interval-Logic-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Nat-Interval-Logic-2011-02-24.tar.gz">
afp-Nat-Interval-Logic-2011-02-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
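As a small illustration of the kind of operators the abstract above describes, and not code taken from the AFP theories, the following Scala sketch treats a time domain as a set of natural numbers and defines "always", "eventually", and interval shifting/expansion over it; the names are invented for this sketch.

// Illustrative sketch only: temporal operators over a time domain given
// as a set of natural numbers.
object IntervalOps {
  type Time = Set[Int]

  // "Always" and "eventually" of a predicate over the time domain i.
  def always(i: Time)(p: Int => Boolean): Boolean     = i.forall(p)
  def eventually(i: Time)(p: Int => Boolean): Boolean = i.exists(p)

  // Shifting a time domain along the number axis and expanding it by a factor.
  def shiftRight(i: Time, k: Int): Time = i.map(_ + k)
  def shiftLeft(i: Time, k: Int): Time  = i.collect { case t if t >= k => t - k }
  def expand(i: Time, k: Int): Time     = i.map(_ * k)
}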
diff --git a/web/entries/Native_Word.html b/web/entries/Native_Word.html
--- a/web/entries/Native_Word.html
+++ b/web/entries/Native_Word.html
@@ -1,251 +1,251 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Native Word - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">N</font>ative
<font class="first">W</font>ord
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Native Word</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~lammich">Peter Lammich</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-09-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This entry makes machine words and machine arithmetic available for code generation from Isabelle/HOL. It provides a common abstraction that hides the differences between the different target languages. The code generator maps these operations to the APIs of the target languages. Apart from that, we extend the available bit operations on types int and integer, and map them to the operations in the target languages.</div></td>
+ <td class="abstract mathjax_process">This entry makes machine words and machine arithmetic available for code generation from Isabelle/HOL. It provides a common abstraction that hides the differences between the different target languages. The code generator maps these operations to the APIs of the target languages. Apart from that, we extend the available bit operations on types int and integer, and map them to the operations in the target languages.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2013-11-06]:
added conversion function between native words and characters
(revision fd23d9a7fe3a)<br>
[2014-03-31]:
added words of default size in the target language (by Peter Lammich)
(revision 25caf5065833)<br>
[2014-10-06]:
proper test setup with compilation and execution of tests in all target languages
(revision 5d7a1c9ae047)<br>
[2017-09-02]:
added 64-bit words (revision c89f86244e3c)<br>
[2018-07-15]:
added cast operators for default-size words (revision fc1f1fb8dd30)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Native_Word-AFP,
author = {Andreas Lochbihler},
title = {Native Word},
journal = {Archive of Formal Proofs},
month = sep,
year = 2013,
note = {\url{http://isa-afp.org/entries/Native_Word.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Collections.html">Collections</a>, <a href="Datatype_Order_Generator.html">Datatype_Order_Generator</a>, <a href="Iptables_Semantics.html">Iptables_Semantics</a>, <a href="JinjaThreads.html">JinjaThreads</a>, <a href="Mersenne_Primes.html">Mersenne_Primes</a>, <a href="ROBDD.html">ROBDD</a>, <a href="Separation_Logic_Imperative_HOL.html">Separation_Logic_Imperative_HOL</a>, <a href="WebAssembly.html">WebAssembly</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Native_Word/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Native_Word/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Native_Word/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Native_Word-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Native_Word-2019-06-11.tar.gz">
afp-Native_Word-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Native_Word-2018-08-16.tar.gz">
afp-Native_Word-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Native_Word-2017-10-10.tar.gz">
afp-Native_Word-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Native_Word-2016-12-17.tar.gz">
afp-Native_Word-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Native_Word-2016-02-22.tar.gz">
afp-Native_Word-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Native_Word-2015-05-27.tar.gz">
afp-Native_Word-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Native_Word-2014-08-28.tar.gz">
afp-Native_Word-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Native_Word-2013-12-11.tar.gz">
afp-Native_Word-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Native_Word-2013-11-17.tar.gz">
afp-Native_Word-2013-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
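To illustrate the general idea of mapping word operations onto target-language APIs described in the abstract above, the following Scala sketch realises a few unsigned 32-bit operations on top of the JVM's signed Int. It is an illustration under that reading only; the actual code-generator serialisations used by the entry are not reproduced here.

// Illustration only: unsigned 32-bit word operations on the JVM's signed Int.
object Uint32Ops {
  def add(a: Int, b: Int): Int  = a + b                 // two's-complement addition already wraps modulo 2^32
  def mul(a: Int, b: Int): Int  = a * b
  def divU(a: Int, b: Int): Int = java.lang.Integer.divideUnsigned(a, b)
  def remU(a: Int, b: Int): Int = java.lang.Integer.remainderUnsigned(a, b)
  def lessU(a: Int, b: Int): Boolean = java.lang.Integer.compareUnsigned(a, b) < 0
  def shiftRightU(a: Int, n: Int): Int = a >>> n        // logical shift gives unsigned semantics
  def toUnsignedString(a: Int): String = java.lang.Integer.toUnsignedString(a)
}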
diff --git a/web/entries/Nested_Multisets_Ordinals.html b/web/entries/Nested_Multisets_Ordinals.html
--- a/web/entries/Nested_Multisets_Ordinals.html
+++ b/web/entries/Nested_Multisets_Ordinals.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">N</font>ested
<font class="first">M</font>ultisets,
<font class="first">H</font>ereditary
<font class="first">M</font>ultisets,
and
<font class="first">S</font>yntactic
<font class="first">O</font>rdinals
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl),
Mathias Fleury (fleury /at/ mpi-inf /dot/ mpg /dot/ de) and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-11-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This Isabelle/HOL formalization introduces a nested multiset datatype and defines Dershowitz and Manna's nested multiset order. The order is proved well founded and linear. By removing one constructor, we transform the nested multisets into hereditary multisets. These are isomorphic to the syntactic ordinals—the ordinals can be recursively expressed in Cantor normal form. Addition, subtraction, multiplication, and linear orders are provided on this type.</div></td>
+ <td class="abstract mathjax_process">This Isabelle/HOL formalization introduces a nested multiset datatype and defines Dershowitz and Manna's nested multiset order. The order is proved well founded and linear. By removing one constructor, we transform the nested multisets into hereditary multisets. These are isomorphic to the syntactic ordinals—the ordinals can be recursively expressed in Cantor normal form. Addition, subtraction, multiplication, and linear orders are provided on this type.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Nested_Multisets_Ordinals-AFP,
author = {Jasmin Christian Blanchette and Mathias Fleury and Dmitriy Traytel},
title = {Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals},
journal = {Archive of Formal Proofs},
month = nov,
year = 2016,
note = {\url{http://isa-afp.org/entries/Nested_Multisets_Ordinals.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="List-Index.html">List-Index</a>, <a href="Ordinal.html">Ordinal</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Functional_Ordered_Resolution_Prover.html">Functional_Ordered_Resolution_Prover</a>, <a href="Lambda_Free_KBOs.html">Lambda_Free_KBOs</a>, <a href="Lambda_Free_RPOs.html">Lambda_Free_RPOs</a>, <a href="Ordered_Resolution_Prover.html">Ordered_Resolution_Prover</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Nested_Multisets_Ordinals/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Nested_Multisets_Ordinals/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Nested_Multisets_Ordinals/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Nested_Multisets_Ordinals-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Nested_Multisets_Ordinals-2019-06-11.tar.gz">
afp-Nested_Multisets_Ordinals-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Nested_Multisets_Ordinals-2018-08-16.tar.gz">
afp-Nested_Multisets_Ordinals-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Nested_Multisets_Ordinals-2017-10-10.tar.gz">
afp-Nested_Multisets_Ordinals-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Nested_Multisets_Ordinals-2016-12-17.tar.gz">
afp-Nested_Multisets_Ordinals-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
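Purely as an illustration of the hereditary-multiset view of syntactic ordinals described in the abstract above (this is not the Isabelle/HOL development, and the names are invented here): a hereditary multiset is a finite multiset whose elements are again hereditary multisets, and under the reading "a multiset of exponents of a Cantor normal form", the natural (Hessenberg) sum of the corresponding ordinals is multiset union. The interesting part, the well-founded nested multiset order, is omitted from this sketch.

// Illustration only: hereditary multisets with natural sum as multiset union.
object HereditaryMultisets {
  // A finite multiset as a map from elements to positive multiplicities.
  final case class HMSet(elems: Map[HMSet, Int])

  val zero: HMSet = HMSet(Map.empty)        // the ordinal 0
  val one: HMSet  = HMSet(Map(zero -> 1))   // omega^0 = 1

  // Multiset union, i.e. pointwise sum of multiplicities.
  def union(a: HMSet, b: HMSet): HMSet =
    HMSet(b.elems.foldLeft(a.elems) { case (acc, (k, n)) =>
      acc.updated(k, acc.getOrElse(k, 0) + n)
    })
}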
diff --git a/web/entries/Network_Security_Policy_Verification.html b/web/entries/Network_Security_Policy_Verification.html
--- a/web/entries/Network_Security_Policy_Verification.html
+++ b/web/entries/Network_Security_Policy_Verification.html
@@ -1,266 +1,266 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Network Security Policy Verification - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">N</font>etwork
<font class="first">S</font>ecurity
<font class="first">P</font>olicy
<font class="first">V</font>erification
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Network Security Policy Verification</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-07-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a unified theory for verifying network security policies.
A security policy is represented as a directed graph.
To check high-level security goals, security invariants over the policy are
expressed. We cover monotonic security invariants, i.e. prohibiting more does not harm
security. We provide the following contributions for the security invariant theory.
<ul>
<li>Secure auto-completion of scenario-specific knowledge, which eases usability.</li>
<li>Security violations can be repaired by tightening the policy iff the
security invariants hold for the deny-all policy.</li>
<li>An algorithm to compute a security policy.</li>
<li>A formalization of stateful connection semantics in network security mechanisms.</li>
<li>An algorithm to compute a secure stateful implementation of a policy.</li>
<li>An executable implementation of all the theory.</li>
<li>Examples, ranging from an aircraft cabin data network to the analysis
of a large real-world firewall.</li>
<li>More examples: A fully automated translation of high-level security goals to both
firewall and SDN configurations (see Examples/Distributed_WebApp.thy).</li>
</ul>
For a detailed description, see
<ul>
<li>C. Diekmann, A. Korsten, and G. Carle.
<a href="http://www.net.in.tum.de/fileadmin/bibtex/publications/papers/diekmann2015mansdnnfv.pdf">Demonstrating
topoS: Theorem-prover-based synthesis of secure network configurations.</a>
In 2nd International Workshop on Management of SDN and NFV Systems, manSDN/NFV, Barcelona, Spain, November 2015.</li>
<li>C. Diekmann, S.-A. Posselt, H. Niedermayer, H. Kinkelin, O. Hanka, and G. Carle.
<a href="http://www.net.in.tum.de/pub/diekmann/forte14.pdf">Verifying Security Policies using Host Attributes.</a>
In FORTE, 34th IFIP International Conference on Formal Techniques for Distributed Objects,
Components and Systems, Berlin, Germany, June 2014.</li>
<li>C. Diekmann, L. Hupel, and G. Carle. Directed Security Policies:
<a href="http://rvg.web.cse.unsw.edu.au/eptcs/paper.cgi?ESSS2014.3">A Stateful Network Implementation.</a>
In J. Pang and Y. Liu, editors, Engineering Safety and Security Systems,
volume 150 of Electronic Proceedings in Theoretical Computer Science,
pages 20-34, Singapore, May 2014. Open Publishing Association.</li>
-</ul></div></td>
+</ul></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-04-14]:
Added Distributed WebApp example and improved graphviz visualization
(revision 4dde08ca2ab8)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Network_Security_Policy_Verification-AFP,
author = {Cornelius Diekmann},
title = {Network Security Policy Verification},
journal = {Archive of Formal Proofs},
month = jul,
year = 2014,
note = {\url{http://isa-afp.org/entries/Network_Security_Policy_Verification.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a>, <a href="Transitive-Closure.html">Transitive-Closure</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Network_Security_Policy_Verification/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Network_Security_Policy_Verification/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Network_Security_Policy_Verification/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Network_Security_Policy_Verification-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Network_Security_Policy_Verification-2019-06-11.tar.gz">
afp-Network_Security_Policy_Verification-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Network_Security_Policy_Verification-2018-08-16.tar.gz">
afp-Network_Security_Policy_Verification-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Network_Security_Policy_Verification-2017-10-10.tar.gz">
afp-Network_Security_Policy_Verification-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Network_Security_Policy_Verification-2016-12-17.tar.gz">
afp-Network_Security_Policy_Verification-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Network_Security_Policy_Verification-2016-02-22.tar.gz">
afp-Network_Security_Policy_Verification-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Network_Security_Policy_Verification-2015-05-27.tar.gz">
afp-Network_Security_Policy_Verification-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Network_Security_Policy_Verification-2014-08-28.tar.gz">
afp-Network_Security_Policy_Verification-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Network_Security_Policy_Verification-2014-07-09.tar.gz">
afp-Network_Security_Policy_Verification-2014-07-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
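To illustrate the basic modelling idea from the abstract above, and nothing more, the following Scala sketch represents a policy as a directed graph over hosts and checks one monotonic invariant edge-wise. The host attribute and the example invariant are invented for this sketch and are not taken from the entry.

// Sketch of the modelling idea only: a policy as a directed graph plus one
// example of a monotonic security invariant checked over its edges.
object PolicySketch {
  type Host = String
  final case class Policy(hosts: Set[Host], allowed: Set[(Host, Host)])

  // Example attribute: a security level per host (every host is assumed to have one).
  // Example invariant: information may only flow from lower to higher levels.
  def levelsRespected(level: Map[Host, Int])(p: Policy): Boolean =
    p.allowed.forall { case (src, dst) => level(src) <= level(dst) }

  // Monotonicity in the sense of the abstract: prohibiting more (removing
  // allowed edges) can never turn a holding invariant into a violation.
  def tighten(p: Policy, drop: Set[(Host, Host)]): Policy =
    p.copy(allowed = p.allowed -- drop)
}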
diff --git a/web/entries/Neumann_Morgenstern_Utility.html b/web/entries/Neumann_Morgenstern_Utility.html
--- a/web/entries/Neumann_Morgenstern_Utility.html
+++ b/web/entries/Neumann_Morgenstern_Utility.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Von-Neumann-Morgenstern Utility Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>on-Neumann-Morgenstern
<font class="first">U</font>tility
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Von-Neumann-Morgenstern Utility Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.parsert.com/">Julian Parsert</a> and
<a href="http://cl-informatik.uibk.ac.at/cek/">Cezary Kaliszyk</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-07-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Utility functions form an essential part of game theory and economics.
In order to guarantee the existence of utility functions, sufficient
properties are usually assumed in an axiomatic manner. One
famous and very common set of such assumptions is that of expected
utility theory. Here, the rationality, continuity, and independence of
preferences are assumed. The von-Neumann-Morgenstern Utility theorem
shows that these assumptions are necessary and sufficient for an
expected utility function to exist. This theorem was proven by
Neumann and Morgenstern in ``Theory of Games and Economic
Behavior'' which is regarded as one of the most influential
works in game theory. The formalization includes formal definitions of
the underlying concepts including continuity and independence of
-preferences.</div></td>
+preferences.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Neumann_Morgenstern_Utility-AFP,
author = {Julian Parsert and Cezary Kaliszyk},
title = {Von-Neumann-Morgenstern Utility Theorem},
journal = {Archive of Formal Proofs},
month = jul,
year = 2018,
note = {\url{http://isa-afp.org/entries/Neumann_Morgenstern_Utility.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="First_Welfare_Theorem.html">First_Welfare_Theorem</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Neumann_Morgenstern_Utility/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Neumann_Morgenstern_Utility/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Neumann_Morgenstern_Utility/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Neumann_Morgenstern_Utility-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Neumann_Morgenstern_Utility-2019-06-11.tar.gz">
afp-Neumann_Morgenstern_Utility-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Neumann_Morgenstern_Utility-2018-08-16.tar.gz">
afp-Neumann_Morgenstern_Utility-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Neumann_Morgenstern_Utility-2018-07-04.tar.gz">
afp-Neumann_Morgenstern_Utility-2018-07-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
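As a small illustration of the expected-utility notion the theorem above is about, and not part of the AFP formalisation, the following Scala sketch treats a lottery as a finite probability distribution over outcomes and computes the expected value of a utility function over it; all names are invented for this sketch.

// Illustration only: lotteries, expected utility, and the preference it induces.
object ExpectedUtility {
  type Lottery[A] = Map[A, Double]   // probabilities, assumed to sum to 1

  def expectedUtility[A](lottery: Lottery[A])(u: A => Double): Double =
    lottery.iterator.map { case (outcome, p) => p * u(outcome) }.sum

  // The preference induced by u: l is weakly preferred to m iff its
  // expected utility is at least as large.
  def prefers[A](u: A => Double)(l: Lottery[A], m: Lottery[A]): Boolean =
    expectedUtility(l)(u) >= expectedUtility(m)(u)
}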
diff --git a/web/entries/No_FTL_observers.html b/web/entries/No_FTL_observers.html
--- a/web/entries/No_FTL_observers.html
+++ b/web/entries/No_FTL_observers.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>No Faster-Than-Light Observers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">N</font>o
<font class="first">F</font>aster-Than-Light
<font class="first">O</font>bservers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">No Faster-Than-Light Observers</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Mike Stannett (m /dot/ stannett /at/ sheffield /dot/ ac /dot/ uk) and
<a href="http://www.renyi.hu/~nemeti/">István Németi</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-04-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We provide a formal proof within First Order Relativity Theory that no
observer can travel faster than the speed of light. Originally
reported in Stannett & Németi (2014) "Using Isabelle/HOL to verify
first-order relativity theory", Journal of Automated Reasoning 52(4),
-pp. 361-378.</div></td>
+pp. 361-378.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{No_FTL_observers-AFP,
author = {Mike Stannett and István Németi},
title = {No Faster-Than-Light Observers},
journal = {Archive of Formal Proofs},
month = apr,
year = 2016,
note = {\url{http://isa-afp.org/entries/No_FTL_observers.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/No_FTL_observers/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/No_FTL_observers/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/No_FTL_observers/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-No_FTL_observers-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-No_FTL_observers-2019-06-11.tar.gz">
afp-No_FTL_observers-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-No_FTL_observers-2018-08-16.tar.gz">
afp-No_FTL_observers-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-No_FTL_observers-2017-10-10.tar.gz">
afp-No_FTL_observers-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-No_FTL_observers-2016-12-17.tar.gz">
afp-No_FTL_observers-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-No_FTL_observers-2016-04-28.tar.gz">
afp-No_FTL_observers-2016-04-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Nominal2.html b/web/entries/Nominal2.html
--- a/web/entries/Nominal2.html
+++ b/web/entries/Nominal2.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Nominal 2 - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">N</font>ominal
<font class="first">2</font>
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Nominal 2</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.inf.kcl.ac.uk/staff/urbanc/">Christian Urban</a>,
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a> and
<a href="http://cl-informatik.uibk.ac.at/cek/">Cezary Kaliszyk</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-02-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>Dealing with binders, renaming of bound variables, capture-avoiding
substitution, etc., is very often a major problem in formal
proofs, especially in proofs by structural and rule
induction. Nominal Isabelle is designed to make such proofs easy to
formalise: it provides an infrastructure for declaring nominal
datatypes (that is, alpha-equivalence classes) and for defining
functions over them by structural recursion. It also provides
induction principles that have Barendregt’s variable convention
already built in.
</p><p>
This entry can be used as a more advanced replacement for
HOL/Nominal in the Isabelle distribution.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Nominal2-AFP,
author = {Christian Urban and Stefan Berghofer and Cezary Kaliszyk},
title = {Nominal 2},
journal = {Archive of Formal Proofs},
month = feb,
year = 2013,
note = {\url{http://isa-afp.org/entries/Nominal2.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="FinFun.html">FinFun</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Incompleteness.html">Incompleteness</a>, <a href="LambdaAuth.html">LambdaAuth</a>, <a href="Launchbury.html">Launchbury</a>, <a href="Modal_Logics_for_NTS.html">Modal_Logics_for_NTS</a>, <a href="Rewriting_Z.html">Rewriting_Z</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Nominal2/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Nominal2/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Nominal2/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Nominal2-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Nominal2-2019-06-11.tar.gz">
afp-Nominal2-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Nominal2-2018-08-16.tar.gz">
afp-Nominal2-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Nominal2-2017-10-10.tar.gz">
afp-Nominal2-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Nominal2-2016-12-17.tar.gz">
afp-Nominal2-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Nominal2-2016-04-24.tar.gz">
afp-Nominal2-2016-04-24.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Nominal2-2013-11-17.tar.gz">
afp-Nominal2-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Nominal2-2013-02-24.tar.gz">
afp-Nominal2-2013-02-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Noninterference_CSP.html b/web/entries/Noninterference_CSP.html
--- a/web/entries/Noninterference_CSP.html
+++ b/web/entries/Noninterference_CSP.html
@@ -1,258 +1,258 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Noninterference Security in Communicating Sequential Processes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">N</font>oninterference
<font class="first">S</font>ecurity
in
<font class="first">C</font>ommunicating
<font class="first">S</font>equential
<font class="first">P</font>rocesses
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Noninterference Security in Communicating Sequential Processes</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-05-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
An extension of classical noninterference security for deterministic
state machines, as introduced by Goguen and Meseguer and elegantly
formalized by Rushby, to nondeterministic systems should satisfy two
fundamental requirements: it should be based on a mathematically precise
theory of nondeterminism, and should be equivalent to (or at least not
weaker than) the classical notion in the degenerate deterministic case.
</p>
<p>
This paper proposes a definition of noninterference security applying
to Hoare's Communicating Sequential Processes (CSP) in the general case of
a possibly intransitive noninterference policy, and proves the
equivalence of this security property to classical noninterference
security for processes representing deterministic state machines.
</p>
<p>
Furthermore, McCullough's generalized noninterference security is shown
to be weaker than both the proposed notion of CSP noninterference security
for a generic process, and classical noninterference security for processes
representing deterministic state machines. This renders CSP noninterference
security preferable as an extension of classical noninterference security
to nondeterministic systems.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Noninterference_CSP-AFP,
author = {Pasquale Noce},
title = {Noninterference Security in Communicating Sequential Processes},
journal = {Archive of Formal Proofs},
month = may,
year = 2014,
note = {\url{http://isa-afp.org/entries/Noninterference_CSP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Noninterference_Ipurge_Unwinding.html">Noninterference_Ipurge_Unwinding</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_CSP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Noninterference_CSP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_CSP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Noninterference_CSP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Noninterference_CSP-2019-06-11.tar.gz">
afp-Noninterference_CSP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Noninterference_CSP-2018-08-16.tar.gz">
afp-Noninterference_CSP-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Noninterference_CSP-2017-10-10.tar.gz">
afp-Noninterference_CSP-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Noninterference_CSP-2016-12-17.tar.gz">
afp-Noninterference_CSP-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Noninterference_CSP-2016-02-22.tar.gz">
afp-Noninterference_CSP-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Noninterference_CSP-2015-06-13.tar.gz">
afp-Noninterference_CSP-2015-06-13.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Noninterference_CSP-2015-05-27.tar.gz">
afp-Noninterference_CSP-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Noninterference_CSP-2014-08-28.tar.gz">
afp-Noninterference_CSP-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Noninterference_CSP-2014-05-24.tar.gz">
afp-Noninterference_CSP-2014-05-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Noninterference_Concurrent_Composition.html b/web/entries/Noninterference_Concurrent_Composition.html
--- a/web/entries/Noninterference_Concurrent_Composition.html
+++ b/web/entries/Noninterference_Concurrent_Composition.html
@@ -1,236 +1,236 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Conservation of CSP Noninterference Security under Concurrent Composition - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>onservation
of
<font class="first">C</font>SP
<font class="first">N</font>oninterference
<font class="first">S</font>ecurity
under
<font class="first">C</font>oncurrent
<font class="first">C</font>omposition
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Conservation of CSP Noninterference Security under Concurrent Composition</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>In his outstanding work on Communicating Sequential Processes,
Hoare has defined two fundamental binary operations that allow the
input processes to be composed into another, typically more complex,
process: sequential composition and concurrent composition. In
particular, the output of the latter operation is a process in which
any event not shared by both operands can occur whenever the operand
that admits the event can engage in it, whereas any event shared by
both operands can occur just in case both can engage in it.</p>
<p>This paper formalizes Hoare's definition of concurrent composition
and proves, in the general case of a possibly intransitive policy,
that CSP noninterference security is conserved under this operation.
This result, along with the previous analogous one concerning
sequential composition, enables the construction of increasingly
complex processes enforcing noninterference security by composing,
sequentially or concurrently, simpler secure processes, whose security
can in turn be proven using either the definition of security or
-unwinding theorems.</p></div></td>
+unwinding theorems.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Noninterference_Concurrent_Composition-AFP,
author = {Pasquale Noce},
title = {Conservation of CSP Noninterference Security under Concurrent Composition},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Noninterference_Concurrent_Composition.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Noninterference_Sequential_Composition.html">Noninterference_Sequential_Composition</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Concurrent_Composition/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Noninterference_Concurrent_Composition/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Concurrent_Composition/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Noninterference_Concurrent_Composition-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Noninterference_Concurrent_Composition-2019-06-11.tar.gz">
afp-Noninterference_Concurrent_Composition-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Noninterference_Concurrent_Composition-2018-08-16.tar.gz">
afp-Noninterference_Concurrent_Composition-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Noninterference_Concurrent_Composition-2017-10-10.tar.gz">
afp-Noninterference_Concurrent_Composition-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Noninterference_Concurrent_Composition-2016-12-17.tar.gz">
afp-Noninterference_Concurrent_Composition-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Noninterference_Concurrent_Composition-2016-06-13.tar.gz">
afp-Noninterference_Concurrent_Composition-2016-06-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Noninterference_Generic_Unwinding.html b/web/entries/Noninterference_Generic_Unwinding.html
--- a/web/entries/Noninterference_Generic_Unwinding.html
+++ b/web/entries/Noninterference_Generic_Unwinding.html
@@ -1,256 +1,256 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Generic Unwinding Theorem for CSP Noninterference Security - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">G</font>eneric
<font class="first">U</font>nwinding
<font class="first">T</font>heorem
for
<font class="first">C</font>SP
<font class="first">N</font>oninterference
<font class="first">S</font>ecurity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Generic Unwinding Theorem for CSP Noninterference Security</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-06-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
The classical definition of noninterference security for a deterministic state
machine with outputs requires considering the outputs produced by machine
actions after any trace, i.e. any indefinitely long sequence of actions, of the
machine. In order to render the verification of the security of such a machine
more straightforward, a sufficient condition for security is needed such that
only individual actions, rather than unbounded sequences of actions, have to
be considered.
</p><p>
By extending previous results applying to transitive noninterference policies,
Rushby has proven an unwinding theorem that provides a sufficient condition of
this kind in the general case of a possibly intransitive policy. This condition
has to be satisfied by a generic function mapping security domains into
equivalence relations over machine states.
</p><p>
An analogous problem arises for CSP noninterference security, whose definition
requires considering any possible future, i.e. any indefinitely long sequence of
subsequent events and any indefinitely large set of refused events associated
with that sequence, for each process trace.
</p><p>
This paper provides a sufficient condition for CSP noninterference security,
which indeed requires considering just individual accepted and refused events
and applies to the general case of a possibly intransitive policy. This
condition follows Rushby's condition for classical noninterference security, and
has to be satisfied by a generic function mapping security domains into
equivalence relations over process traces; hence its name, Generic Unwinding
Theorem. Variants of this theorem applying to deterministic processes and trace
set processes are also proven. Finally, the sufficient condition for security
expressed by the theorem is shown not to be a necessary condition as well, viz.
there exists a secure process such that no domain-relation map satisfying the
condition exists.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Noninterference_Generic_Unwinding-AFP,
author = {Pasquale Noce},
title = {The Generic Unwinding Theorem for CSP Noninterference Security},
journal = {Archive of Formal Proofs},
month = jun,
year = 2015,
note = {\url{http://isa-afp.org/entries/Noninterference_Generic_Unwinding.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Noninterference_Ipurge_Unwinding.html">Noninterference_Ipurge_Unwinding</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Generic_Unwinding/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Noninterference_Generic_Unwinding/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Generic_Unwinding/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Noninterference_Generic_Unwinding-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Noninterference_Generic_Unwinding-2019-06-11.tar.gz">
afp-Noninterference_Generic_Unwinding-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Noninterference_Generic_Unwinding-2018-08-16.tar.gz">
afp-Noninterference_Generic_Unwinding-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Noninterference_Generic_Unwinding-2017-10-10.tar.gz">
afp-Noninterference_Generic_Unwinding-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Noninterference_Generic_Unwinding-2016-12-17.tar.gz">
afp-Noninterference_Generic_Unwinding-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Noninterference_Generic_Unwinding-2016-02-22.tar.gz">
afp-Noninterference_Generic_Unwinding-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Noninterference_Generic_Unwinding-2015-06-13.tar.gz">
afp-Noninterference_Generic_Unwinding-2015-06-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Noninterference_Inductive_Unwinding.html b/web/entries/Noninterference_Inductive_Unwinding.html
--- a/web/entries/Noninterference_Inductive_Unwinding.html
+++ b/web/entries/Noninterference_Inductive_Unwinding.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Inductive Unwinding Theorem for CSP Noninterference Security - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">I</font>nductive
<font class="first">U</font>nwinding
<font class="first">T</font>heorem
for
<font class="first">C</font>SP
<font class="first">N</font>oninterference
<font class="first">S</font>ecurity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Inductive Unwinding Theorem for CSP Noninterference Security</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-08-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
The necessary and sufficient condition for CSP noninterference security stated by the Ipurge Unwinding Theorem is expressed in terms of a pair of event lists varying over the set of process traces. This does not render it suitable for the subsequent application of rule induction in the case of a process defined inductively, since rule induction applies instead to a single variable ranging over an inductively defined set.
</p><p>
Starting from the Ipurge Unwinding Theorem, this paper derives a necessary and sufficient condition for CSP noninterference security that involves a single event list varying over the set of process traces, and is thus suitable for rule induction; hence its name, Inductive Unwinding Theorem. Similarly to the Ipurge Unwinding Theorem, the new theorem only requires considering individual accepted and refused events for each process trace, and applies to the general case of a possibly intransitive noninterference policy. Specific variants of this theorem are additionally proven for deterministic processes and trace set processes.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Noninterference_Inductive_Unwinding-AFP,
author = {Pasquale Noce},
title = {The Inductive Unwinding Theorem for CSP Noninterference Security},
journal = {Archive of Formal Proofs},
month = aug,
year = 2015,
note = {\url{http://isa-afp.org/entries/Noninterference_Inductive_Unwinding.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Noninterference_Ipurge_Unwinding.html">Noninterference_Ipurge_Unwinding</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Inductive_Unwinding/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Noninterference_Inductive_Unwinding/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Inductive_Unwinding/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Noninterference_Inductive_Unwinding-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Noninterference_Inductive_Unwinding-2019-06-11.tar.gz">
afp-Noninterference_Inductive_Unwinding-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Noninterference_Inductive_Unwinding-2018-08-16.tar.gz">
afp-Noninterference_Inductive_Unwinding-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Noninterference_Inductive_Unwinding-2017-10-10.tar.gz">
afp-Noninterference_Inductive_Unwinding-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Noninterference_Inductive_Unwinding-2016-12-17.tar.gz">
afp-Noninterference_Inductive_Unwinding-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Noninterference_Inductive_Unwinding-2016-02-22.tar.gz">
afp-Noninterference_Inductive_Unwinding-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Noninterference_Inductive_Unwinding-2015-08-19.tar.gz">
afp-Noninterference_Inductive_Unwinding-2015-08-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Noninterference_Ipurge_Unwinding.html b/web/entries/Noninterference_Ipurge_Unwinding.html
--- a/web/entries/Noninterference_Ipurge_Unwinding.html
+++ b/web/entries/Noninterference_Ipurge_Unwinding.html
@@ -1,257 +1,257 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Ipurge Unwinding Theorem for CSP Noninterference Security - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">I</font>purge
<font class="first">U</font>nwinding
<font class="first">T</font>heorem
for
<font class="first">C</font>SP
<font class="first">N</font>oninterference
<font class="first">S</font>ecurity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Ipurge Unwinding Theorem for CSP Noninterference Security</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-06-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
The definition of noninterference security for Communicating Sequential
Processes requires considering any possible future, i.e. any indefinitely long
sequence of subsequent events and any indefinitely large set of refused events
associated with that sequence, for each process trace. In order to render the
verification of the security of a process more straightforward, a sufficient
condition for security is needed such that only individual accepted and refused
events, rather than unbounded sequences and sets of events, have to be
considered.
</p><p>
Of course, if such a sufficient condition were necessary as well, it would be
even more valuable, since it would make it possible to prove not only that a
process is secure by verifying that the condition holds, but also that a process
is not secure by verifying that the condition fails to hold.
</p><p>
This paper provides a necessary and sufficient condition for CSP noninterference
security, which indeed requires considering just individual accepted and refused
events and applies to the general case of a possibly intransitive policy. This
condition follows Rushby's output consistency for deterministic state machines
with outputs, and has to be satisfied by a specific function mapping security
domains into equivalence relations over process traces. The definition of this
function makes use of an intransitive purge function following Rushby's;
hence the name given to the condition, Ipurge Unwinding Theorem.
</p><p>
Furthermore, in accordance with Hoare's formal definition of deterministic
processes, it is shown that a process is deterministic just in case it is a
trace set process, i.e. it may be identified by means of a trace set alone,
matching the set of its traces, in place of a failures-divergences pair. Then,
variants of the Ipurge Unwinding Theorem are proven for deterministic processes
and trace set processes.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Noninterference_Ipurge_Unwinding-AFP,
author = {Pasquale Noce},
title = {The Ipurge Unwinding Theorem for CSP Noninterference Security},
journal = {Archive of Formal Proofs},
month = jun,
year = 2015,
note = {\url{http://isa-afp.org/entries/Noninterference_Ipurge_Unwinding.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="List_Interleaving.html">List_Interleaving</a>, <a href="Noninterference_CSP.html">Noninterference_CSP</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Noninterference_Generic_Unwinding.html">Noninterference_Generic_Unwinding</a>, <a href="Noninterference_Inductive_Unwinding.html">Noninterference_Inductive_Unwinding</a>, <a href="Noninterference_Sequential_Composition.html">Noninterference_Sequential_Composition</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Ipurge_Unwinding/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Noninterference_Ipurge_Unwinding/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Ipurge_Unwinding/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Noninterference_Ipurge_Unwinding-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Noninterference_Ipurge_Unwinding-2019-06-11.tar.gz">
afp-Noninterference_Ipurge_Unwinding-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Noninterference_Ipurge_Unwinding-2018-08-16.tar.gz">
afp-Noninterference_Ipurge_Unwinding-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Noninterference_Ipurge_Unwinding-2017-10-10.tar.gz">
afp-Noninterference_Ipurge_Unwinding-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Noninterference_Ipurge_Unwinding-2016-12-17.tar.gz">
afp-Noninterference_Ipurge_Unwinding-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Noninterference_Ipurge_Unwinding-2016-02-22.tar.gz">
afp-Noninterference_Ipurge_Unwinding-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Noninterference_Ipurge_Unwinding-2015-06-13.tar.gz">
afp-Noninterference_Ipurge_Unwinding-2015-06-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Noninterference_Sequential_Composition.html b/web/entries/Noninterference_Sequential_Composition.html
--- a/web/entries/Noninterference_Sequential_Composition.html
+++ b/web/entries/Noninterference_Sequential_Composition.html
@@ -1,237 +1,237 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Conservation of CSP Noninterference Security under Sequential Composition - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>onservation
of
<font class="first">C</font>SP
<font class="first">N</font>oninterference
<font class="first">S</font>ecurity
under
<font class="first">S</font>equential
<font class="first">C</font>omposition
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Conservation of CSP Noninterference Security under Sequential Composition</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-04-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>In his outstanding work on Communicating Sequential Processes, Hoare
has defined two fundamental binary operations that allow the input
processes to be composed into another, typically more complex, process:
sequential composition and concurrent composition. In particular, the
output of the former operation is a process that initially behaves
like the first operand, and then like the second operand once the
execution of the first one has terminated successfully, provided that
it does.</p>
<p>This paper formalizes Hoare's definition of sequential
composition and proves, in the general case of a possibly intransitive
policy, that CSP noninterference security is conserved under this
operation, provided that successful termination cannot be affected by
confidential events and cannot occur as an alternative to other events
in the traces of the first operand. Both of these assumptions are
shown, by means of counterexamples, to be necessary for the theorem to
-hold.</p></div></td>
+hold.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Noninterference_Sequential_Composition-AFP,
author = {Pasquale Noce},
title = {Conservation of CSP Noninterference Security under Sequential Composition},
journal = {Archive of Formal Proofs},
month = apr,
year = 2016,
note = {\url{http://isa-afp.org/entries/Noninterference_Sequential_Composition.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Noninterference_Ipurge_Unwinding.html">Noninterference_Ipurge_Unwinding</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Noninterference_Concurrent_Composition.html">Noninterference_Concurrent_Composition</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Sequential_Composition/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Noninterference_Sequential_Composition/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Noninterference_Sequential_Composition/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Noninterference_Sequential_Composition-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Noninterference_Sequential_Composition-2019-06-11.tar.gz">
afp-Noninterference_Sequential_Composition-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Noninterference_Sequential_Composition-2018-08-16.tar.gz">
afp-Noninterference_Sequential_Composition-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Noninterference_Sequential_Composition-2017-10-10.tar.gz">
afp-Noninterference_Sequential_Composition-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Noninterference_Sequential_Composition-2016-12-17.tar.gz">
afp-Noninterference_Sequential_Composition-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Noninterference_Sequential_Composition-2016-04-26.tar.gz">
afp-Noninterference_Sequential_Composition-2016-04-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/NormByEval.html b/web/entries/NormByEval.html
--- a/web/entries/NormByEval.html
+++ b/web/entries/NormByEval.html
@@ -1,278 +1,278 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Normalization by Evaluation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">N</font>ormalization
by
<font class="first">E</font>valuation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Normalization by Evaluation</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.linta.de/~aehlig/">Klaus Aehlig</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-02-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This article formalizes normalization by evaluation as implemented in Isabelle. Lambda calculus plus term rewriting is compiled into a functional program with pattern matching. It is proved that the result of a successful evaluation is a) correct, i.e. equivalent to the input, and b) in normal form.</div></td>
+ <td class="abstract mathjax_process">This article formalizes normalization by evaluation as implemented in Isabelle. Lambda calculus plus term rewriting is compiled into a functional program with pattern matching. It is proved that the result of a successful evaluation is a) correct, i.e. equivalent to the input, and b) in normal form.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{NormByEval-AFP,
author = {Klaus Aehlig and Tobias Nipkow},
title = {Normalization by Evaluation},
journal = {Archive of Formal Proofs},
month = feb,
year = 2008,
note = {\url{http://isa-afp.org/entries/NormByEval.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/NormByEval/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/NormByEval/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/NormByEval/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-NormByEval-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-NormByEval-2019-06-11.tar.gz">
afp-NormByEval-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-NormByEval-2018-08-16.tar.gz">
afp-NormByEval-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-NormByEval-2017-10-10.tar.gz">
afp-NormByEval-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-NormByEval-2016-12-17.tar.gz">
afp-NormByEval-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-NormByEval-2016-02-22.tar.gz">
afp-NormByEval-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-NormByEval-2015-05-27.tar.gz">
afp-NormByEval-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-NormByEval-2014-08-28.tar.gz">
afp-NormByEval-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-NormByEval-2013-12-11.tar.gz">
afp-NormByEval-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-NormByEval-2013-11-17.tar.gz">
afp-NormByEval-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-NormByEval-2013-02-16.tar.gz">
afp-NormByEval-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-NormByEval-2012-05-24.tar.gz">
afp-NormByEval-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-NormByEval-2011-10-11.tar.gz">
afp-NormByEval-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-NormByEval-2011-02-11.tar.gz">
afp-NormByEval-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-NormByEval-2010-07-01.tar.gz">
afp-NormByEval-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-NormByEval-2009-12-12.tar.gz">
afp-NormByEval-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-NormByEval-2009-04-29.tar.gz">
afp-NormByEval-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-NormByEval-2008-06-10.tar.gz">
afp-NormByEval-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-NormByEval-2008-02-22.tar.gz">
afp-NormByEval-2008-02-22.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-NormByEval-2008-02-18.tar.gz">
afp-NormByEval-2008-02-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Nullstellensatz.html b/web/entries/Nullstellensatz.html
--- a/web/entries/Nullstellensatz.html
+++ b/web/entries/Nullstellensatz.html
@@ -1,199 +1,199 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hilbert's Nullstellensatz - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>ilbert's
<font class="first">N</font>ullstellensatz
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Hilbert's Nullstellensatz</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-06-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry formalizes Hilbert's Nullstellensatz, an important
theorem in algebraic geometry that can be viewed as the generalization
of the Fundamental Theorem of Algebra to multivariate polynomials: If
a set of (multivariate) polynomials over an algebraically closed field
has no common zero, then the ideal it generates is the entire
polynomial ring. The formalization proves several equivalent versions
of this celebrated theorem: the weak Nullstellensatz, the strong
Nullstellensatz (connecting algebraic varieties and radical ideals),
and the field-theoretic Nullstellensatz. The formalization follows
Chapter 4.1 of <a
href="https://link.springer.com/book/10.1007/978-0-387-35651-8">Ideals,
-Varieties, and Algorithms</a> by Cox, Little and O'Shea.</div></td>
+Varieties, and Algorithms</a> by Cox, Little and O'Shea.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Nullstellensatz-AFP,
author = {Alexander Maletzky},
title = {Hilbert's Nullstellensatz},
journal = {Archive of Formal Proofs},
month = jun,
year = 2019,
note = {\url{http://isa-afp.org/entries/Nullstellensatz.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Groebner_Bases.html">Groebner_Bases</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Nullstellensatz/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Nullstellensatz/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Nullstellensatz/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Nullstellensatz-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Nullstellensatz-2019-06-17.tar.gz">
afp-Nullstellensatz-2019-06-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Octonions.html b/web/entries/Octonions.html
--- a/web/entries/Octonions.html
+++ b/web/entries/Octonions.html
@@ -1,196 +1,196 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Octonions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>ctonions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Octonions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~ak2110/">Angeliki Koutsoukou-Argyraki</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-09-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We develop the basic theory of Octonions, including various identities
and properties of the octonions and of the octonionic product, a
description of 7D isometries and representations of orthogonal
transformations. To this end we first develop the theory of the vector
cross product in 7 dimensions. The development of the theory of
Octonions is inspired by that of the theory of Quaternions by Lawrence
Paulson. However, we do not work within the type class real_algebra_1
-because the octonionic product is not associative.</div></td>
+because the octonionic product is not associative.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Octonions-AFP,
author = {Angeliki Koutsoukou-Argyraki},
title = {Octonions},
journal = {Archive of Formal Proofs},
month = sep,
year = 2018,
note = {\url{http://isa-afp.org/entries/Octonions.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Octonions/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Octonions/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Octonions/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Octonions-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Octonions-2019-06-11.tar.gz">
afp-Octonions-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Octonions-2018-09-16.tar.gz">
afp-Octonions-2018-09-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/OpSets.html b/web/entries/OpSets.html
--- a/web/entries/OpSets.html
+++ b/web/entries/OpSets.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>OpSets: Sequential Specifications for Replicated Datatypes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>pSets:
<font class="first">S</font>equential
<font class="first">S</font>pecifications
for
<font class="first">R</font>eplicated
<font class="first">D</font>atatypes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">OpSets: Sequential Specifications for Replicated Datatypes</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Martin Kleppmann (mk428 /at/ cl /dot/ cam /dot/ ac /dot/ uk),
Victor B. F. Gomes (vb358 /at/ cl /dot/ cam /dot/ ac /dot/ uk),
Dominic P. Mulligan (Dominic /dot/ Mulligan /at/ arm /dot/ com) and
Alastair R. Beresford (arb33 /at/ cl /dot/ cam /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-05-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We introduce OpSets, an executable framework for specifying and
reasoning about the semantics of replicated datatypes that provide
eventual consistency in a distributed system, and for mechanically
verifying algorithms that implement these datatypes. Our approach is
simple but expressive, allowing us to succinctly specify a variety of
abstract datatypes, including maps, sets, lists, text, graphs, trees,
and registers. Our datatypes are also composable, enabling the
construction of complex data structures. To demonstrate the utility of
OpSets for analysing replication algorithms, we highlight an important
correctness property for collaborative text editing that has
traditionally been overlooked; algorithms that do not satisfy this
property can exhibit awkward interleaving of text. We use OpSets to
specify this correctness property and prove that although one existing
replication algorithm satisfies this property, several other published
-algorithms do not.</div></td>
+algorithms do not.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{OpSets-AFP,
author = {Martin Kleppmann and Victor B. F. Gomes and Dominic P. Mulligan and Alastair R. Beresford},
title = {OpSets: Sequential Specifications for Replicated Datatypes},
journal = {Archive of Formal Proofs},
month = may,
year = 2018,
note = {\url{http://isa-afp.org/entries/OpSets.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/OpSets/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/OpSets/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/OpSets/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-OpSets-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-OpSets-2019-06-11.tar.gz">
afp-OpSets-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-OpSets-2018-08-16.tar.gz">
afp-OpSets-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-OpSets-2018-05-25.tar.gz">
afp-OpSets-2018-05-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Open_Induction.html b/web/entries/Open_Induction.html
--- a/web/entries/Open_Induction.html
+++ b/web/entries/Open_Induction.html
@@ -1,240 +1,240 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Open Induction - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>pen
<font class="first">I</font>nduction
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Open Induction</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Mizuhito Ogawa and
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-11-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A proof of the open induction schema based on J.-C. Raoult, Proving open properties by induction, <i>Information Processing Letters</i> 29, 1988, pp. 19-23.
-<p>This research was supported by the Austrian Science Fund (FWF): J3202.</p></div></td>
+<p>This research was supported by the Austrian Science Fund (FWF): J3202.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Open_Induction-AFP,
author = {Mizuhito Ogawa and Christian Sternagel},
title = {Open Induction},
journal = {Archive of Formal Proofs},
month = nov,
year = 2012,
note = {\url{http://isa-afp.org/entries/Open_Induction.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Decreasing-Diagrams-II.html">Decreasing-Diagrams-II</a>, <a href="Functional_Ordered_Resolution_Prover.html">Functional_Ordered_Resolution_Prover</a>, <a href="Myhill-Nerode.html">Myhill-Nerode</a>, <a href="Well_Quasi_Orders.html">Well_Quasi_Orders</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Open_Induction/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Open_Induction/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Open_Induction/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Open_Induction-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Open_Induction-2019-06-11.tar.gz">
afp-Open_Induction-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Open_Induction-2018-08-16.tar.gz">
afp-Open_Induction-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Open_Induction-2017-10-10.tar.gz">
afp-Open_Induction-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Open_Induction-2016-12-17.tar.gz">
afp-Open_Induction-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Open_Induction-2016-02-22.tar.gz">
afp-Open_Induction-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Open_Induction-2015-05-27.tar.gz">
afp-Open_Induction-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Open_Induction-2014-08-28.tar.gz">
afp-Open_Induction-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Open_Induction-2013-12-11.tar.gz">
afp-Open_Induction-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Open_Induction-2013-11-17.tar.gz">
afp-Open_Induction-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Open_Induction-2013-03-02.tar.gz">
afp-Open_Induction-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Open_Induction-2013-02-16.tar.gz">
afp-Open_Induction-2013-02-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Optics.html b/web/entries/Optics.html
--- a/web/entries/Optics.html
+++ b/web/entries/Optics.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Optics - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>ptics
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Optics</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www-users.cs.york.ac.uk/~simonf/">Simon Foster</a> and
Frank Zeyda
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Lenses provide an abstract interface for manipulating data types
through spatially-separated views. They are defined abstractly in
terms of two functions, <em>get</em>, which returns a value
from the source type, and <em>put</em>, which updates the
value. We mechanise the underlying theory of lenses, in terms of an
algebraic hierarchy of lenses, including well-behaved and very
well-behaved lenses, each lens class being characterised by a set of
lens laws. We also mechanise a lens algebra in Isabelle that enables
their composition and comparison, so as to allow construction of
complex lenses. This is accompanied by a large library of algebraic
laws. Moreover, we show how the lens classes can be applied by
-instantiating them with a number of Isabelle data types.</div></td>
+instantiating them with a number of Isabelle data types.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2020-03-02]:
Added partial bijective and symmetric lenses.
Improved alphabet command generating additional lenses and results.
Several additional lens relations, including observational equivalence.
Additional theorems throughout.
Adaptations for Isabelle 2020.
(revision 44e2e5c)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Optics-AFP,
author = {Simon Foster and Frank Zeyda},
title = {Optics},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Optics.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Optics/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Optics/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Optics/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Optics-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Optics-2019-06-11.tar.gz">
afp-Optics-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Optics-2018-08-16.tar.gz">
afp-Optics-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Optics-2017-10-10.tar.gz">
afp-Optics-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Optics-2017-06-01.tar.gz">
afp-Optics-2017-06-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Optimal_BST.html b/web/entries/Optimal_BST.html
--- a/web/entries/Optimal_BST.html
+++ b/web/entries/Optimal_BST.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Optimal Binary Search Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>ptimal
<font class="first">B</font>inary
<font class="first">S</font>earch
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Optimal Binary Search Trees</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a> and
Dániel Somogyi
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-05-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This article formalizes recursive algorithms for the construction
of optimal binary search trees given fixed access frequencies.
We follow Knuth (1971), Yao (1980) and Mehlhorn (1984).
The algorithms are memoized with the help of the AFP article
<a href="Monad_Memo_DP.html">Monadification, Memoization and Dynamic Programming</a>,
-thus yielding dynamic programming algorithms.</div></td>
+thus yielding dynamic programming algorithms.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Optimal_BST-AFP,
author = {Tobias Nipkow and Dániel Somogyi},
title = {Optimal Binary Search Trees},
journal = {Archive of Formal Proofs},
month = may,
year = 2018,
note = {\url{http://isa-afp.org/entries/Optimal_BST.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Monad_Memo_DP.html">Monad_Memo_DP</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Optimal_BST/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Optimal_BST/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Optimal_BST/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Optimal_BST-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Optimal_BST-2019-06-11.tar.gz">
afp-Optimal_BST-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Optimal_BST-2018-08-16.tar.gz">
afp-Optimal_BST-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Optimal_BST-2018-05-29.tar.gz">
afp-Optimal_BST-2018-05-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Orbit_Stabiliser.html b/web/entries/Orbit_Stabiliser.html
--- a/web/entries/Orbit_Stabiliser.html
+++ b/web/entries/Orbit_Stabiliser.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Orbit-Stabiliser Theorem with Application to Rotational Symmetries - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>rbit-Stabiliser
<font class="first">T</font>heorem
with
<font class="first">A</font>pplication
to
<font class="first">R</font>otational
<font class="first">S</font>ymmetries
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Orbit-Stabiliser Theorem with Application to Rotational Symmetries</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Jonas Rädle (jonas /dot/ raedle /at/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-08-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The Orbit-Stabiliser theorem is a basic result in the algebra of
groups that factors the order of a group into the sizes of its orbits
and stabilisers. We formalize the notion of a group action and the
related concepts of orbits and stabilisers. This allows us to prove
the orbit-stabiliser theorem. In the second part of this work, we
formalize the tetrahedral group and use the orbit-stabiliser theorem
to prove that there are twelve (orientation-preserving) rotations of
-the tetrahedron.</div></td>
+the tetrahedron.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Orbit_Stabiliser-AFP,
author = {Jonas Rädle},
title = {Orbit-Stabiliser Theorem with Application to Rotational Symmetries},
journal = {Archive of Formal Proofs},
month = aug,
year = 2017,
note = {\url{http://isa-afp.org/entries/Orbit_Stabiliser.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Orbit_Stabiliser/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Orbit_Stabiliser/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Orbit_Stabiliser/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Orbit_Stabiliser-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Orbit_Stabiliser-2019-06-11.tar.gz">
afp-Orbit_Stabiliser-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Orbit_Stabiliser-2018-08-16.tar.gz">
afp-Orbit_Stabiliser-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Orbit_Stabiliser-2017-10-10.tar.gz">
afp-Orbit_Stabiliser-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Orbit_Stabiliser-2017-08-23.tar.gz">
afp-Orbit_Stabiliser-2017-08-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Order_Lattice_Props.html b/web/entries/Order_Lattice_Props.html
--- a/web/entries/Order_Lattice_Props.html
+++ b/web/entries/Order_Lattice_Props.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Properties of Orderings and Lattices - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>roperties
of
<font class="first">O</font>rderings
and
<font class="first">L</font>attices
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Properties of Orderings and Lattices</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-12-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
These components add further fundamental order and lattice-theoretic
concepts and properties to Isabelle's libraries. By and large, they
follow the introductory sections of the Compendium of Continuous
Lattices, covering directed and filtered sets, down-closed and
up-closed sets, ideals and filters, Galois connections, closure and
co-closure operators. Some emphasis is on duality and morphisms
between structures, as in the Compendium. To this end, three ad-hoc
-approaches to duality are compared.</div></td>
+approaches to duality are compared.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Order_Lattice_Props-AFP,
author = {Georg Struth},
title = {Properties of Orderings and Lattices},
journal = {Archive of Formal Proofs},
month = dec,
year = 2018,
note = {\url{http://isa-afp.org/entries/Order_Lattice_Props.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Quantales.html">Quantales</a>, <a href="Transformer_Semantics.html">Transformer_Semantics</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Order_Lattice_Props/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Order_Lattice_Props/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Order_Lattice_Props/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Order_Lattice_Props-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Order_Lattice_Props-2019-06-28.tar.gz">
afp-Order_Lattice_Props-2019-06-28.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Order_Lattice_Props-2019-06-11.tar.gz">
afp-Order_Lattice_Props-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Order_Lattice_Props-2018-12-19.tar.gz">
afp-Order_Lattice_Props-2018-12-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Ordered_Resolution_Prover.html b/web/entries/Ordered_Resolution_Prover.html
--- a/web/entries/Ordered_Resolution_Prover.html
+++ b/web/entries/Ordered_Resolution_Prover.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Bachmair and Ganzinger's Ordered Resolution Prover - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">B</font>achmair
and
<font class="first">G</font>anzinger's
<font class="first">O</font>rdered
<font class="first">R</font>esolution
<font class="first">P</font>rover
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Bachmair and Ganzinger's Ordered Resolution Prover</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a>,
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl),
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a> and
Uwe Waldmann (uwe /at/ mpi-inf /dot/ mpg /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-01-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This Isabelle/HOL formalization covers Sections 2 to 4 of Bachmair and
Ganzinger's "Resolution Theorem Proving" chapter in the
<em>Handbook of Automated Reasoning</em>. This includes
soundness and completeness of unordered and ordered variants of ground
resolution with and without literal selection, the standard redundancy
criterion, a general framework for refutational theorem proving, and
-soundness and completeness of an abstract first-order prover.</div></td>
+soundness and completeness of an abstract first-order prover.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Ordered_Resolution_Prover-AFP,
author = {Anders Schlichtkrull and Jasmin Christian Blanchette and Dmitriy Traytel and Uwe Waldmann},
title = {Formalization of Bachmair and Ganzinger's Ordered Resolution Prover},
journal = {Archive of Formal Proofs},
month = jan,
year = 2018,
note = {\url{http://isa-afp.org/entries/Ordered_Resolution_Prover.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a>, <a href="Nested_Multisets_Ordinals.html">Nested_Multisets_Ordinals</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Functional_Ordered_Resolution_Prover.html">Functional_Ordered_Resolution_Prover</a>, <a href="Saturation_Framework.html">Saturation_Framework</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ordered_Resolution_Prover/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Ordered_Resolution_Prover/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ordered_Resolution_Prover/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Ordered_Resolution_Prover-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Ordered_Resolution_Prover-2019-06-11.tar.gz">
afp-Ordered_Resolution_Prover-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Ordered_Resolution_Prover-2018-08-16.tar.gz">
afp-Ordered_Resolution_Prover-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Ordered_Resolution_Prover-2018-01-22.tar.gz">
afp-Ordered_Resolution_Prover-2018-01-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Ordinal.html b/web/entries/Ordinal.html
--- a/web/entries/Ordinal.html
+++ b/web/entries/Ordinal.html
@@ -1,277 +1,277 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Countable Ordinals - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>ountable
<font class="first">O</font>rdinals
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Countable Ordinals</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Brian Huffman
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2005-11-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This development defines a well-ordered type of countable ordinals. It includes notions of continuous and normal functions, recursively defined functions over ordinals, least fixed-points, and derivatives. Much of ordinal arithmetic is formalized, including exponentials and logarithms. The development concludes with formalizations of Cantor Normal Form and Veblen hierarchies over normal functions.</div></td>
+ <td class="abstract mathjax_process">This development defines a well-ordered type of countable ordinals. It includes notions of continuous and normal functions, recursively defined functions over ordinals, least fixed-points, and derivatives. Much of ordinal arithmetic is formalized, including exponentials and logarithms. The development concludes with formalizations of Cantor Normal Form and Veblen hierarchies over normal functions.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Ordinal-AFP,
author = {Brian Huffman},
title = {Countable Ordinals},
journal = {Archive of Formal Proofs},
month = nov,
year = 2005,
note = {\url{http://isa-afp.org/entries/Ordinal.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Nested_Multisets_Ordinals.html">Nested_Multisets_Ordinals</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ordinal/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Ordinal/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ordinal/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Ordinal-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Ordinal-2019-06-11.tar.gz">
afp-Ordinal-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Ordinal-2018-08-16.tar.gz">
afp-Ordinal-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Ordinal-2017-10-10.tar.gz">
afp-Ordinal-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Ordinal-2016-12-17.tar.gz">
afp-Ordinal-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Ordinal-2016-02-22.tar.gz">
afp-Ordinal-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Ordinal-2015-05-27.tar.gz">
afp-Ordinal-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Ordinal-2014-08-28.tar.gz">
afp-Ordinal-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Ordinal-2013-12-11.tar.gz">
afp-Ordinal-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Ordinal-2013-11-17.tar.gz">
afp-Ordinal-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Ordinal-2013-02-16.tar.gz">
afp-Ordinal-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Ordinal-2012-05-24.tar.gz">
afp-Ordinal-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Ordinal-2011-10-11.tar.gz">
afp-Ordinal-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Ordinal-2011-02-11.tar.gz">
afp-Ordinal-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Ordinal-2010-07-01.tar.gz">
afp-Ordinal-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Ordinal-2009-12-12.tar.gz">
afp-Ordinal-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Ordinal-2009-04-29.tar.gz">
afp-Ordinal-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Ordinal-2008-06-10.tar.gz">
afp-Ordinal-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Ordinal-2007-11-27.tar.gz">
afp-Ordinal-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Ordinal-2005-11-16.tar.gz">
afp-Ordinal-2005-11-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Ordinals_and_Cardinals.html b/web/entries/Ordinals_and_Cardinals.html
--- a/web/entries/Ordinals_and_Cardinals.html
+++ b/web/entries/Ordinals_and_Cardinals.html
@@ -1,271 +1,271 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Ordinals and Cardinals - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>rdinals
and
<font class="first">C</font>ardinals
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Ordinals and Cardinals</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-09-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
-        <td class="abstract"><div class="mathjax_process">We develop a basic theory of ordinals and cardinals in Isabelle/HOL, up to the point where some cardinality facts relevant for the ``working mathematician" become available. Unlike in set theory, here we do not have at hand canonical notions of ordinal and cardinal. Therefore, here an ordinal is merely a well-order relation and a cardinal is an ordinal minimal w.r.t. order embedding on its field.</div></td>
+        <td class="abstract mathjax_process">We develop a basic theory of ordinals and cardinals in Isabelle/HOL, up to the point where some cardinality facts relevant for the ``working mathematician" become available. Unlike in set theory, here we do not have at hand canonical notions of ordinal and cardinal. Therefore, here an ordinal is merely a well-order relation and a cardinal is an ordinal minimal w.r.t. order embedding on its field.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2012-09-25]: This entry has been discontinued because it is now part of the Isabelle distribution.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Ordinals_and_Cardinals-AFP,
author = {Andrei Popescu},
title = {Ordinals and Cardinals},
journal = {Archive of Formal Proofs},
month = sep,
year = 2009,
note = {\url{http://isa-afp.org/entries/Ordinals_and_Cardinals.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ordinals_and_Cardinals/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Ordinals_and_Cardinals/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ordinals_and_Cardinals/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Ordinals_and_Cardinals-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Ordinals_and_Cardinals-2019-06-11.tar.gz">
afp-Ordinals_and_Cardinals-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Ordinals_and_Cardinals-2018-08-16.tar.gz">
afp-Ordinals_and_Cardinals-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Ordinals_and_Cardinals-2017-10-10.tar.gz">
afp-Ordinals_and_Cardinals-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Ordinals_and_Cardinals-2016-12-17.tar.gz">
afp-Ordinals_and_Cardinals-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Ordinals_and_Cardinals-2016-02-22.tar.gz">
afp-Ordinals_and_Cardinals-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Ordinals_and_Cardinals-2015-05-27.tar.gz">
afp-Ordinals_and_Cardinals-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Ordinals_and_Cardinals-2014-08-28.tar.gz">
afp-Ordinals_and_Cardinals-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Ordinals_and_Cardinals-2013-12-11.tar.gz">
afp-Ordinals_and_Cardinals-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Ordinals_and_Cardinals-2013-11-17.tar.gz">
afp-Ordinals_and_Cardinals-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Ordinals_and_Cardinals-2013-02-16.tar.gz">
afp-Ordinals_and_Cardinals-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Ordinals_and_Cardinals-2012-05-24.tar.gz">
afp-Ordinals_and_Cardinals-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Ordinals_and_Cardinals-2011-10-11.tar.gz">
afp-Ordinals_and_Cardinals-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Ordinals_and_Cardinals-2011-02-11.tar.gz">
afp-Ordinals_and_Cardinals-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Ordinals_and_Cardinals-2010-07-01.tar.gz">
afp-Ordinals_and_Cardinals-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Ordinals_and_Cardinals-2009-12-12.tar.gz">
afp-Ordinals_and_Cardinals-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Ordinals_and_Cardinals-2009-09-09.tar.gz">
afp-Ordinals_and_Cardinals-2009-09-09.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Ordinals_and_Cardinals-2009-09-07.tar.gz">
afp-Ordinals_and_Cardinals-2009-09-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Ordinary_Differential_Equations.html b/web/entries/Ordinary_Differential_Equations.html
--- a/web/entries/Ordinary_Differential_Equations.html
+++ b/web/entries/Ordinary_Differential_Equations.html
@@ -1,268 +1,268 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Ordinary Differential Equations - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">O</font>rdinary
<font class="first">D</font>ifferential
<font class="first">E</font>quations
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Ordinary Differential Equations</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a> and
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-04-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>Session Ordinary-Differential-Equations formalizes ordinary differential equations (ODEs) and initial value
problems. This work comprises proofs for local and global existence of unique solutions
(Picard-Lindelöf theorem). Moreover, it contains a formalization of the (continuous or even
differentiable) dependence of solutions on their initial conditions, i.e. the <i>flow</i> of ODEs.</p>
<p>
Not in the generated document are the following sessions:
<ul>
<li> HOL-ODE-Numerics:
Rigorous numerical algorithms for computing enclosures of solutions based on Runge-Kutta methods
and affine arithmetic. Reachability analysis with splitting and reduction at hyperplanes.</li>
<li> HOL-ODE-Examples:
Applications of the numerical algorithms to concrete systems of ODEs.</li>
<li> Lorenz_C0, Lorenz_C1:
Verified algorithms for checking C1-information according to Tucker's proof,
computation of C0-information.</li>
</ul>
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2014-02-13]: added an implementation of the Euler method based on affine arithmetic<br>
[2016-04-14]: added flow and variational equation<br>
[2016-08-03]: numerical algorithms for reachability analysis (using second-order Runge-Kutta methods, splitting, and reduction) implemented using Lammich's framework for automatic refinement<br>
[2017-09-20]: added Poincaré map and propagation of variational equation in
reachability analysis, verified algorithms for C1-information and computations
for C0-information of the Lorenz attractor.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Ordinary_Differential_Equations-AFP,
author = {Fabian Immler and Johannes Hölzl},
title = {Ordinary Differential Equations},
journal = {Archive of Formal Proofs},
month = apr,
year = 2012,
note = {\url{http://isa-afp.org/entries/Ordinary_Differential_Equations.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Affine_Arithmetic.html">Affine_Arithmetic</a>, <a href="List-Index.html">List-Index</a>, <a href="Triangle.html">Triangle</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Differential_Dynamic_Logic.html">Differential_Dynamic_Logic</a>, <a href="Hybrid_Systems_VCs.html">Hybrid_Systems_VCs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ordinary_Differential_Equations/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Ordinary_Differential_Equations/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ordinary_Differential_Equations/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Ordinary_Differential_Equations-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Ordinary_Differential_Equations-2019-06-11.tar.gz">
afp-Ordinary_Differential_Equations-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Ordinary_Differential_Equations-2018-08-16.tar.gz">
afp-Ordinary_Differential_Equations-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Ordinary_Differential_Equations-2017-10-10.tar.gz">
afp-Ordinary_Differential_Equations-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Ordinary_Differential_Equations-2016-12-17.tar.gz">
afp-Ordinary_Differential_Equations-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Ordinary_Differential_Equations-2016-02-22.tar.gz">
afp-Ordinary_Differential_Equations-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Ordinary_Differential_Equations-2015-05-27.tar.gz">
afp-Ordinary_Differential_Equations-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Ordinary_Differential_Equations-2014-08-28.tar.gz">
afp-Ordinary_Differential_Equations-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Ordinary_Differential_Equations-2013-12-11.tar.gz">
afp-Ordinary_Differential_Equations-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Ordinary_Differential_Equations-2013-11-17.tar.gz">
afp-Ordinary_Differential_Equations-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Ordinary_Differential_Equations-2013-02-16.tar.gz">
afp-Ordinary_Differential_Equations-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Ordinary_Differential_Equations-2012-05-24.tar.gz">
afp-Ordinary_Differential_Equations-2012-05-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/PCF.html b/web/entries/PCF.html
--- a/web/entries/PCF.html
+++ b/web/entries/PCF.html
@@ -1,248 +1,248 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Logical Relations for PCF - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">L</font>ogical
<font class="first">R</font>elations
for
<font class="first">P</font>CF
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Logical Relations for PCF</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-07-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We apply Andy Pitts's methods of defining relations over domains to
+ <td class="abstract mathjax_process">We apply Andy Pitts's methods of defining relations over domains to
several classical results in the literature. We show that the Y
combinator coincides with the domain-theoretic fixpoint operator,
that parallel-or and the Plotkin existential are not definable in
PCF, that the continuation semantics for PCF coincides with the
direct semantics, and that our domain-theoretic semantics for PCF is
adequate for reasoning about contextual equivalence in an
operational semantics. Our version of PCF is untyped and has both
strict and non-strict function abstractions. The development is
-carried out in HOLCF.</div></td>
+carried out in HOLCF.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{PCF-AFP,
author = {Peter Gammie},
title = {Logical Relations for PCF},
journal = {Archive of Formal Proofs},
month = jul,
year = 2012,
note = {\url{http://isa-afp.org/entries/PCF.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PCF/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/PCF/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PCF/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-PCF-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-PCF-2019-06-11.tar.gz">
afp-PCF-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-PCF-2018-08-16.tar.gz">
afp-PCF-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-PCF-2017-10-10.tar.gz">
afp-PCF-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-PCF-2016-12-17.tar.gz">
afp-PCF-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-PCF-2016-02-22.tar.gz">
afp-PCF-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-PCF-2015-05-27.tar.gz">
afp-PCF-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-PCF-2014-08-28.tar.gz">
afp-PCF-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-PCF-2013-12-11.tar.gz">
afp-PCF-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-PCF-2013-11-17.tar.gz">
afp-PCF-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-PCF-2013-02-16.tar.gz">
afp-PCF-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-PCF-2012-07-03.tar.gz">
afp-PCF-2012-07-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/PLM.html b/web/entries/PLM.html
--- a/web/entries/PLM.html
+++ b/web/entries/PLM.html
@@ -1,253 +1,253 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>epresentation
and
<font class="first">P</font>artial
<font class="first">A</font>utomation
of
the
<font class="first">P</font>rincipia
<font class="first">L</font>ogico-Metaphysica
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Daniel Kirchner (daniel /at/ ekpyron /dot/ org)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-09-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p> We present an embedding of the second-order fragment of the
Theory of Abstract Objects as described in Edward Zalta's
upcoming work <a
href="https://mally.stanford.edu/principia.pdf">Principia
Logico-Metaphysica (PLM)</a> in the automated reasoning
framework Isabelle/HOL. The Theory of Abstract Objects is a
metaphysical theory that reifies property patterns, as they for
example occur in the abstract reasoning of mathematics, as
<b>abstract objects</b> and provides an axiomatic
framework that allows one to reason about these objects. It thereby serves
as a fundamental metaphysical theory that can be used to axiomatize
and describe a wide range of philosophical objects, such as Platonic
forms or Leibniz' concepts, and has the ambition to function as a
foundational theory of mathematics. The target theory of our embedding
as described in chapters 7-9 of PLM employs a modal relational type
theory as logical foundation for which a representation in functional
type theory is <a
href="https://mally.stanford.edu/Papers/rtt.pdf">known to
be challenging</a>. </p> <p> Nevertheless we arrive
at a functioning representation of the theory in the functional logic
of Isabelle/HOL based on a semantical representation of an Aczel-model
of the theory. Based on this representation we construct an
implementation of the deductive system of PLM, which allows theorems of PLM
to be found and verified automatically and interactively.
</p> <p> Our work thereby supports the concept of shallow
semantical embeddings of logical systems in HOL as a universal tool
for logical reasoning <a
href="http://www.mi.fu-berlin.de/inf/groups/ag-ki/publications/Universal-Reasoning/1703_09620_pd.pdf">as
promoted by Christoph Benzm&uuml;ller</a>. </p>
<p> The most notable result of the presented work is the
discovery of a previously unknown paradox in the formulation of the
Theory of Abstract Objects. The embedding of the theory in
Isabelle/HOL played a vital part in this discovery. Furthermore it was
possible to immediately offer several options to modify the theory to
guarantee its consistency. Thereby our work could provide a
significant contribution to the development of a proper grounding for
-object theory. </p></div></td>
+object theory. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{PLM-AFP,
author = {Daniel Kirchner},
title = {Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = sep,
year = 2017,
note = {\url{http://isa-afp.org/entries/PLM.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PLM/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/PLM/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PLM/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-PLM-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-PLM-2019-06-11.tar.gz">
afp-PLM-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-PLM-2018-08-16.tar.gz">
afp-PLM-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-PLM-2017-10-10.tar.gz">
afp-PLM-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-PLM-2017-09-19.tar.gz">
afp-PLM-2017-09-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/POPLmark-deBruijn.html b/web/entries/POPLmark-deBruijn.html
--- a/web/entries/POPLmark-deBruijn.html
+++ b/web/entries/POPLmark-deBruijn.html
@@ -1,283 +1,283 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>POPLmark Challenge Via de Bruijn Indices - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>OPLmark
<font class="first">C</font>hallenge
<font class="first">V</font>ia
de
<font class="first">B</font>ruijn
<font class="first">I</font>ndices
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">POPLmark Challenge Via de Bruijn Indices</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2007-08-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present a solution to the POPLmark challenge designed by Aydemir et al., which has as a goal the formalization of the meta-theory of System F<sub>&lt;:</sub>. The formalization is carried out in the theorem prover Isabelle/HOL using an encoding based on de Bruijn indices. We start with a relatively simple formalization covering only the basic features of System F<sub>&lt;:</sub>, and explain how it can be extended to also cover records and more advanced binding constructs.</div></td>
+ <td class="abstract mathjax_process">We present a solution to the POPLmark challenge designed by Aydemir et al., which has as a goal the formalization of the meta-theory of System F<sub>&lt;:</sub>. The formalization is carried out in the theorem prover Isabelle/HOL using an encoding based on de Bruijn indices. We start with a relatively simple formalization covering only the basic features of System F<sub>&lt;:</sub>, and explain how it can be extended to also cover records and more advanced binding constructs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{POPLmark-deBruijn-AFP,
author = {Stefan Berghofer},
title = {POPLmark Challenge Via de Bruijn Indices},
journal = {Archive of Formal Proofs},
month = aug,
year = 2007,
note = {\url{http://isa-afp.org/entries/POPLmark-deBruijn.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/POPLmark-deBruijn/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/POPLmark-deBruijn/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/POPLmark-deBruijn/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-POPLmark-deBruijn-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-POPLmark-deBruijn-2019-06-11.tar.gz">
afp-POPLmark-deBruijn-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-POPLmark-deBruijn-2018-08-16.tar.gz">
afp-POPLmark-deBruijn-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-POPLmark-deBruijn-2017-10-10.tar.gz">
afp-POPLmark-deBruijn-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-POPLmark-deBruijn-2016-12-17.tar.gz">
afp-POPLmark-deBruijn-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-POPLmark-deBruijn-2016-02-22.tar.gz">
afp-POPLmark-deBruijn-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-POPLmark-deBruijn-2015-05-27.tar.gz">
afp-POPLmark-deBruijn-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-POPLmark-deBruijn-2014-08-28.tar.gz">
afp-POPLmark-deBruijn-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-POPLmark-deBruijn-2013-12-11.tar.gz">
afp-POPLmark-deBruijn-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-POPLmark-deBruijn-2013-11-17.tar.gz">
afp-POPLmark-deBruijn-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-POPLmark-deBruijn-2013-03-02.tar.gz">
afp-POPLmark-deBruijn-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-POPLmark-deBruijn-2013-02-16.tar.gz">
afp-POPLmark-deBruijn-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-POPLmark-deBruijn-2012-05-24.tar.gz">
afp-POPLmark-deBruijn-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-POPLmark-deBruijn-2011-10-11.tar.gz">
afp-POPLmark-deBruijn-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-POPLmark-deBruijn-2011-02-11.tar.gz">
afp-POPLmark-deBruijn-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-POPLmark-deBruijn-2010-07-01.tar.gz">
afp-POPLmark-deBruijn-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-POPLmark-deBruijn-2009-12-12.tar.gz">
afp-POPLmark-deBruijn-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-POPLmark-deBruijn-2009-04-29.tar.gz">
afp-POPLmark-deBruijn-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-POPLmark-deBruijn-2008-06-10.tar.gz">
afp-POPLmark-deBruijn-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-POPLmark-deBruijn-2007-11-27.tar.gz">
afp-POPLmark-deBruijn-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/PSemigroupsConvolution.html b/web/entries/PSemigroupsConvolution.html
--- a/web/entries/PSemigroupsConvolution.html
+++ b/web/entries/PSemigroupsConvolution.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Partial Semigroups and Convolution Algebras - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>artial
<font class="first">S</font>emigroups
and
<font class="first">C</font>onvolution
<font class="first">A</font>lgebras
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Partial Semigroups and Convolution Algebras</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Brijesh Dongol (brijesh /dot/ dongol /at/ brunel /dot/ ac /dot/ uk),
Victor B. F. Gomes (vb358 /at/ cl /dot/ cam /dot/ ac /dot/ uk),
Ian J. Hayes (ian /dot/ hayes /at/ itee /dot/ uq /dot/ edu /dot/ au) and
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-06-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Partial Semigroups are relevant to the foundations of quantum
mechanics and combinatorics as well as to interval and separation
logics. Convolution algebras can be understood either as algebras of
generalised binary modalities over ternary Kripke frames, in
particular over partial semigroups, or as algebras of quantale-valued
functions which are equipped with a convolution-style operation of
multiplication that is parametrised by a ternary relation. Convolution
algebras provide algebraic semantics for various substructural logics,
including categorial, relevance and linear logics, for separation
logic and for interval logics; they cover quantitative and qualitative
applications. These mathematical components for partial semigroups and
convolution algebras provide uniform foundations from which models of
computation based on relations, program traces or pomsets, and
verification components for separation or interval temporal logics can
-be built with little effort.</div></td>
+be built with little effort.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{PSemigroupsConvolution-AFP,
author = {Brijesh Dongol and Victor B. F. Gomes and Ian J. Hayes and Georg Struth},
title = {Partial Semigroups and Convolution Algebras},
journal = {Archive of Formal Proofs},
month = jun,
year = 2017,
note = {\url{http://isa-afp.org/entries/PSemigroupsConvolution.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PSemigroupsConvolution/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/PSemigroupsConvolution/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PSemigroupsConvolution/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-PSemigroupsConvolution-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-PSemigroupsConvolution-2019-06-11.tar.gz">
afp-PSemigroupsConvolution-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-PSemigroupsConvolution-2018-08-16.tar.gz">
afp-PSemigroupsConvolution-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-PSemigroupsConvolution-2017-10-10.tar.gz">
afp-PSemigroupsConvolution-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-PSemigroupsConvolution-2017-06-13.tar.gz">
afp-PSemigroupsConvolution-2017-06-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Pairing_Heap.html b/web/entries/Pairing_Heap.html
--- a/web/entries/Pairing_Heap.html
+++ b/web/entries/Pairing_Heap.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Pairing Heap - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>airing
<font class="first">H</font>eap
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Pairing Heap</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Hauke Brinkop (hauke /dot/ brinkop /at/ googlemail /dot/ com) and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-07-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This library defines three different versions of pairing heaps: a
functional version of the original design based on binary
trees [Fredman et al. 1986], the version by Okasaki [1998] and
a modified version of the latter that is free of structural invariants.
<p>
The amortized complexity of pairing heaps is analyzed in the AFP article
-<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.</div></td>
+<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">Origin:</td>
<td class="abstract">This library was extracted from Amortized Complexity and extended.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Pairing_Heap-AFP,
author = {Hauke Brinkop and Tobias Nipkow},
title = {Pairing Heap},
journal = {Archive of Formal Proofs},
month = jul,
year = 2016,
note = {\url{http://isa-afp.org/entries/Pairing_Heap.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Amortized_Complexity.html">Amortized_Complexity</a>, <a href="CakeML_Codegen.html">CakeML_Codegen</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pairing_Heap/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Pairing_Heap/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pairing_Heap/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Pairing_Heap-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Pairing_Heap-2019-06-11.tar.gz">
afp-Pairing_Heap-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Pairing_Heap-2018-08-16.tar.gz">
afp-Pairing_Heap-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Pairing_Heap-2017-10-10.tar.gz">
afp-Pairing_Heap-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Pairing_Heap-2016-12-17.tar.gz">
afp-Pairing_Heap-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Paraconsistency.html b/web/entries/Paraconsistency.html
--- a/web/entries/Paraconsistency.html
+++ b/web/entries/Paraconsistency.html
@@ -1,219 +1,219 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Paraconsistency - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>araconsistency
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Paraconsistency</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a> and
<a href="https://people.compute.dtu.dk/jovi/">Jørgen Villadsen</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-12-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Paraconsistency is about handling inconsistency in a coherent way. In
classical and intuitionistic logic everything follows from an
inconsistent theory. A paraconsistent logic avoids the explosion.
Quite a few applications in computer science and engineering are
discussed in the Intelligent Systems Reference Library Volume 110:
Towards Paraconsistent Engineering (Springer 2016). We formalize a
paraconsistent many-valued logic that we motivated and described in a
special issue on logical approaches to paraconsistency (Journal of
Applied Non-Classical Logics 2005). We limit ourselves to the
propositional fragment of the higher-order logic. The logic is based
on so-called key equalities and has a countably infinite number of
truth values. We prove theorems in the logic using the definition of
validity. We verify truth tables and also counterexamples for
non-theorems. We prove meta-theorems about the logic and finally we
-investigate a case study.</div></td>
+investigate a case study.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Paraconsistency-AFP,
author = {Anders Schlichtkrull and Jørgen Villadsen},
title = {Paraconsistency},
journal = {Archive of Formal Proofs},
month = dec,
year = 2016,
note = {\url{http://isa-afp.org/entries/Paraconsistency.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Paraconsistency/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Paraconsistency/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Paraconsistency/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Paraconsistency-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Paraconsistency-2019-06-11.tar.gz">
afp-Paraconsistency-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Paraconsistency-2018-08-16.tar.gz">
afp-Paraconsistency-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Paraconsistency-2017-10-10.tar.gz">
afp-Paraconsistency-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Paraconsistency-2016-12-17.tar.gz">
afp-Paraconsistency-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Paraconsistency-2016-12-08.tar.gz">
afp-Paraconsistency-2016-12-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Parity_Game.html b/web/entries/Parity_Game.html
--- a/web/entries/Parity_Game.html
+++ b/web/entries/Parity_Game.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Positional Determinacy of Parity Games - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>ositional
<font class="first">D</font>eterminacy
of
<font class="first">P</font>arity
<font class="first">G</font>ames
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Positional Determinacy of Parity Games</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://logic.las.tu-berlin.de/Members/Dittmann/">Christoph Dittmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-11-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formalization of parity games (a two-player game on
directed graphs) and a proof of their positional determinacy in
-Isabelle/HOL. This proof works for both finite and infinite games.</div></td>
+Isabelle/HOL. This proof works for both finite and infinite games.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Parity_Game-AFP,
author = {Christoph Dittmann},
title = {Positional Determinacy of Parity Games},
journal = {Archive of Formal Proofs},
month = nov,
year = 2015,
note = {\url{http://isa-afp.org/entries/Parity_Game.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a>, <a href="Graph_Theory.html">Graph_Theory</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Parity_Game/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Parity_Game/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Parity_Game/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Parity_Game-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Parity_Game-2019-06-11.tar.gz">
afp-Parity_Game-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Parity_Game-2018-08-16.tar.gz">
afp-Parity_Game-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Parity_Game-2017-10-10.tar.gz">
afp-Parity_Game-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Parity_Game-2016-12-17.tar.gz">
afp-Parity_Game-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Parity_Game-2016-02-22.tar.gz">
afp-Parity_Game-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Parity_Game-2015-11-02.tar.gz">
afp-Parity_Game-2015-11-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Partial_Function_MR.html b/web/entries/Partial_Function_MR.html
--- a/web/entries/Partial_Function_MR.html
+++ b/web/entries/Partial_Function_MR.html
@@ -1,226 +1,226 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Mutually Recursive Partial Functions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">M</font>utually
<font class="first">R</font>ecursive
<font class="first">P</font>artial
<font class="first">F</font>unctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Mutually Recursive Partial Functions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-02-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We provide a wrapper around the partial-function command that supports mutual recursion.</div></td>
+ <td class="abstract mathjax_process">We provide a wrapper around the partial-function command that supports mutual recursion.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Partial_Function_MR-AFP,
author = {René Thiemann},
title = {Mutually Recursive Partial Functions},
journal = {Archive of Formal Proofs},
month = feb,
year = 2014,
note = {\url{http://isa-afp.org/entries/Partial_Function_MR.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Certification_Monads.html">Certification_Monads</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Partial_Function_MR/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Partial_Function_MR/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Partial_Function_MR/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Partial_Function_MR-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Partial_Function_MR-2019-06-11.tar.gz">
afp-Partial_Function_MR-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Partial_Function_MR-2018-08-16.tar.gz">
afp-Partial_Function_MR-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Partial_Function_MR-2017-10-10.tar.gz">
afp-Partial_Function_MR-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Partial_Function_MR-2016-12-17.tar.gz">
afp-Partial_Function_MR-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Partial_Function_MR-2016-02-22.tar.gz">
afp-Partial_Function_MR-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Partial_Function_MR-2015-05-27.tar.gz">
afp-Partial_Function_MR-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Partial_Function_MR-2014-08-28.tar.gz">
afp-Partial_Function_MR-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Partial_Function_MR-2014-02-19.tar.gz">
afp-Partial_Function_MR-2014-02-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Partial_Order_Reduction.html b/web/entries/Partial_Order_Reduction.html
--- a/web/entries/Partial_Order_Reduction.html
+++ b/web/entries/Partial_Order_Reduction.html
@@ -1,200 +1,200 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Partial Order Reduction - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>artial
<font class="first">O</font>rder
<font class="first">R</font>eduction
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Partial Order Reduction</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~brunnerj/">Julian Brunner</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-06-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides a formalization of the abstract theory of ample
set partial order reduction. The formalization includes transition
systems with actions, trace theory, as well as basics on finite,
infinite, and lazy sequences. We also provide a basic framework for
static analysis on concurrent systems with respect to the ample set
-condition.</div></td>
+condition.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Partial_Order_Reduction-AFP,
author = {Julian Brunner},
title = {Partial Order Reduction},
journal = {Archive of Formal Proofs},
month = jun,
year = 2018,
note = {\url{http://isa-afp.org/entries/Partial_Order_Reduction.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a>, <a href="Stuttering_Equivalence.html">Stuttering_Equivalence</a>, <a href="Transition_Systems_and_Automata.html">Transition_Systems_and_Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Partial_Order_Reduction/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Partial_Order_Reduction/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Partial_Order_Reduction/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Partial_Order_Reduction-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Partial_Order_Reduction-2019-06-11.tar.gz">
afp-Partial_Order_Reduction-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Partial_Order_Reduction-2018-08-16.tar.gz">
afp-Partial_Order_Reduction-2018-08-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Password_Authentication_Protocol.html b/web/entries/Password_Authentication_Protocol.html
--- a/web/entries/Password_Authentication_Protocol.html
+++ b/web/entries/Password_Authentication_Protocol.html
@@ -1,233 +1,233 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erification
of
a
<font class="first">D</font>iffie-Hellman
<font class="first">P</font>assword-based
<font class="first">A</font>uthentication
<font class="first">P</font>rotocol
by
<font class="first">E</font>xtending
the
<font class="first">I</font>nductive
<font class="first">M</font>ethod
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-01-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This paper constructs a formal model of a Diffie-Hellman
password-based authentication protocol between a user and a smart
card, and proves its security. The protocol provides for the dispatch
of the user's password to the smart card on a secure messaging
channel established by means of Password Authenticated Connection
Establishment (PACE), where the mapping method being used is Chip
Authentication Mapping. By applying and suitably extending
Paulson's Inductive Method, this paper proves that the protocol
establishes trustworthy secure messaging channels, preserves the
secrecy of users' passwords, and provides an effective mutual
authentication service. What is more, these security properties turn
out to hold independently of the secrecy of the PACE authentication
-key.</div></td>
+key.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Password_Authentication_Protocol-AFP,
author = {Pasquale Noce},
title = {Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method},
journal = {Archive of Formal Proofs},
month = jan,
year = 2017,
note = {\url{http://isa-afp.org/entries/Password_Authentication_Protocol.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Password_Authentication_Protocol/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Password_Authentication_Protocol/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Password_Authentication_Protocol/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Password_Authentication_Protocol-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Password_Authentication_Protocol-2019-06-11.tar.gz">
afp-Password_Authentication_Protocol-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Password_Authentication_Protocol-2018-08-16.tar.gz">
afp-Password_Authentication_Protocol-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Password_Authentication_Protocol-2017-10-10.tar.gz">
afp-Password_Authentication_Protocol-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Password_Authentication_Protocol-2017-01-06.tar.gz">
afp-Password_Authentication_Protocol-2017-01-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Pell.html b/web/entries/Pell.html
--- a/web/entries/Pell.html
+++ b/web/entries/Pell.html
@@ -1,225 +1,225 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Pell's Equation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>ell's
<font class="first">E</font>quation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Pell's Equation</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-06-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p> This article gives the basic theory of Pell's equation
<em>x</em><sup>2</sup> = 1 +
<em>D</em>&thinsp;<em>y</em><sup>2</sup>,
where
<em>D</em>&thinsp;&isin;&thinsp;&#8469; is
a parameter and <em>x</em>, <em>y</em> are
integer variables. </p> <p> The main result that is proven
is the following: If <em>D</em> is not a perfect square,
then there exists a <em>fundamental solution</em>
(<em>x</em><sub>0</sub>,
<em>y</em><sub>0</sub>) that is not the
trivial solution (1, 0) and which generates all other solutions
(<em>x</em>, <em>y</em>) in the sense that
there exists some
<em>n</em>&thinsp;&isin;&thinsp;&#8469;
such that |<em>x</em>| +
|<em>y</em>|&thinsp;&radic;<span
style="text-decoration:
overline"><em>D</em></span> =
(<em>x</em><sub>0</sub> +
<em>y</em><sub>0</sub>&thinsp;&radic;<span
style="text-decoration:
overline"><em>D</em></span>)<sup><em>n</em></sup>.
This also implies that the set of solutions is infinite, and it gives
us an explicit and executable characterisation of all the solutions.
</p> <p> Based on this, simple executable algorithms for
computing the fundamental solution and the infinite sequence of all
-non-negative solutions are also provided. </p></div></td>
+non-negative solutions are also provided. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Pell-AFP,
author = {Manuel Eberl},
title = {Pell's Equation},
journal = {Archive of Formal Proofs},
month = jun,
year = 2018,
note = {\url{http://isa-afp.org/entries/Pell.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Mersenne_Primes.html">Mersenne_Primes</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pell/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Pell/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pell/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Pell-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Pell-2019-06-11.tar.gz">
afp-Pell-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Pell-2018-08-16.tar.gz">
afp-Pell-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Pell-2018-06-25.tar.gz">
afp-Pell-2018-06-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Perfect-Number-Thm.html b/web/entries/Perfect-Number-Thm.html
--- a/web/entries/Perfect-Number-Thm.html
+++ b/web/entries/Perfect-Number-Thm.html
@@ -1,262 +1,262 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Perfect Number Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>erfect
<font class="first">N</font>umber
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Perfect Number Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Mark Ijbema (ijbema /at/ fmf /dot/ nl)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-11-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">These theories present the mechanised proof of the Perfect Number Theorem.</div></td>
+ <td class="abstract mathjax_process">These theories present the mechanised proof of the Perfect Number Theorem.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Perfect-Number-Thm-AFP,
author = {Mark Ijbema},
title = {Perfect Number Theorem},
journal = {Archive of Formal Proofs},
month = nov,
year = 2009,
note = {\url{http://isa-afp.org/entries/Perfect-Number-Thm.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Perfect-Number-Thm/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Perfect-Number-Thm/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Perfect-Number-Thm/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Perfect-Number-Thm-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Perfect-Number-Thm-2019-06-11.tar.gz">
afp-Perfect-Number-Thm-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Perfect-Number-Thm-2018-08-16.tar.gz">
afp-Perfect-Number-Thm-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Perfect-Number-Thm-2017-10-10.tar.gz">
afp-Perfect-Number-Thm-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Perfect-Number-Thm-2016-12-17.tar.gz">
afp-Perfect-Number-Thm-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Perfect-Number-Thm-2016-02-22.tar.gz">
afp-Perfect-Number-Thm-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Perfect-Number-Thm-2015-05-27.tar.gz">
afp-Perfect-Number-Thm-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Perfect-Number-Thm-2014-08-28.tar.gz">
afp-Perfect-Number-Thm-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Perfect-Number-Thm-2013-12-11.tar.gz">
afp-Perfect-Number-Thm-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Perfect-Number-Thm-2013-11-17.tar.gz">
afp-Perfect-Number-Thm-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Perfect-Number-Thm-2013-02-16.tar.gz">
afp-Perfect-Number-Thm-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Perfect-Number-Thm-2012-05-24.tar.gz">
afp-Perfect-Number-Thm-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Perfect-Number-Thm-2011-10-11.tar.gz">
afp-Perfect-Number-Thm-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Perfect-Number-Thm-2011-02-11.tar.gz">
afp-Perfect-Number-Thm-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Perfect-Number-Thm-2010-07-01.tar.gz">
afp-Perfect-Number-Thm-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Perfect-Number-Thm-2009-12-12.tar.gz">
afp-Perfect-Number-Thm-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Perfect-Number-Thm-2009-11-24.tar.gz">
afp-Perfect-Number-Thm-2009-11-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Perron_Frobenius.html b/web/entries/Perron_Frobenius.html
--- a/web/entries/Perron_Frobenius.html
+++ b/web/entries/Perron_Frobenius.html
@@ -1,252 +1,252 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Perron-Frobenius Theorem for Spectral Radius Analysis - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>erron-Frobenius
<font class="first">T</font>heorem
for
<font class="first">S</font>pectral
<font class="first">R</font>adius
<font class="first">A</font>nalysis
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Perron-Frobenius Theorem for Spectral Radius Analysis</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>,
<a href="http://www21.in.tum.de/~kuncar/">Ondřej Kunčar</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a> and
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>The spectral radius of a matrix A is the maximum norm of all
eigenvalues of A. In previous work we already formalized that for a
complex matrix A, the values in A<sup>n</sup> grow polynomially in n
if and only if the spectral radius is at most one. One problem with
the above characterization is the determination of all
<em>complex</em> eigenvalues. In case A contains only non-negative
real values, a simplification is possible with the help of the
Perron&ndash;Frobenius theorem, which tells us that it suffices to consider only
the <em>real</em> eigenvalues of A, i.e., applying Sturm's method can
decide the polynomial growth of A<sup>n</sup>. </p><p> We formalize
the Perron&ndash;Frobenius theorem based on a proof via Brouwer's fixpoint
theorem, which is available in the HOL multivariate analysis (HMA)
library. Since the results on the spectral radius are based on matrices
in the Jordan normal form (JNF) library, we further develop a
connection which allows us to easily transfer theorems between HMA and
JNF. With this connection we derive the combined result: if A is a
non-negative real matrix, and no real eigenvalue of A is strictly
-larger than one, then A<sup>n</sup> is polynomially bounded in n. </p></div></td>
+larger than one, then A<sup>n</sup> is polynomially bounded in n. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2017-10-18]:
added Perron-Frobenius theorem for irreducible matrices with generalization
(revision bda1f1ce8a1c)<br/>
[2018-05-17]:
prove conjecture of CPP'18 paper: Jordan blocks of spectral radius have maximum size
(revision ffdb3794e5d5)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Perron_Frobenius-AFP,
author = {Jose Divasón and Ondřej Kunčar and René Thiemann and Akihisa Yamada},
title = {Perron-Frobenius Theorem for Spectral Radius Analysis},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/Perron_Frobenius.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a>, <a href="Rank_Nullity_Theorem.html">Rank_Nullity_Theorem</a>, <a href="Sturm_Sequences.html">Sturm_Sequences</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="LLL_Factorization.html">LLL_Factorization</a>, <a href="Stochastic_Matrices.html">Stochastic_Matrices</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Perron_Frobenius/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Perron_Frobenius/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Perron_Frobenius/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Perron_Frobenius-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Perron_Frobenius-2019-06-11.tar.gz">
afp-Perron_Frobenius-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Perron_Frobenius-2018-08-16.tar.gz">
afp-Perron_Frobenius-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Perron_Frobenius-2017-10-18.tar.gz">
afp-Perron_Frobenius-2017-10-18.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Perron_Frobenius-2017-10-10.tar.gz">
afp-Perron_Frobenius-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Perron_Frobenius-2016-12-17.tar.gz">
afp-Perron_Frobenius-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Perron_Frobenius-2016-05-20.tar.gz">
afp-Perron_Frobenius-2016-05-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Pi_Calculus.html b/web/entries/Pi_Calculus.html
--- a/web/entries/Pi_Calculus.html
+++ b/web/entries/Pi_Calculus.html
@@ -1,245 +1,245 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The pi-calculus in nominal logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
pi-calculus
in
nominal
logic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The pi-calculus in nominal logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.itu.dk/people/jebe">Jesper Bengtson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-05-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalise the pi-calculus using the nominal datatype package, based on ideas from the nominal logic by Pitts et al., and demonstrate an implementation in Isabelle/HOL. The purpose is to derive powerful induction rules for the semantics in order to conduct machine checkable proofs, closely following the intuitive arguments found in manual proofs. In this way we have covered many of the standard theorems of bisimulation equivalence and congruence, both late and early, and both strong and weak in a uniform manner. We thus provide one of the most extensive formalisations of a the pi-calculus ever done inside a theorem prover.
+ <td class="abstract mathjax_process">We formalise the pi-calculus using the nominal datatype package, based on ideas from the nominal logic by Pitts et al., and demonstrate an implementation in Isabelle/HOL. The purpose is to derive powerful induction rules for the semantics in order to conduct machine checkable proofs, closely following the intuitive arguments found in manual proofs. In this way we have covered many of the standard theorems of bisimulation equivalence and congruence, both late and early, and both strong and weak in a uniform manner. We thus provide one of the most extensive formalisations of a the pi-calculus ever done inside a theorem prover.
<p>
A significant gain in our formulation is that agents are identified up to alpha-equivalence, thereby greatly reducing the arguments about bound names. This is a normal strategy for manual proofs about the pi-calculus, but that kind of hand waving has previously been difficult to incorporate smoothly in an interactive theorem prover. We show how the nominal logic formalism and its support in Isabelle accomplishes this and thus significantly reduces the tedium of conducting completely formal proofs. This improves on previous work using weak higher order abstract syntax since we do not need extra assumptions to filter out exotic terms and can keep all arguments within a familiar first-order logic.
<p>
-This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.</div></td>
+This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Pi_Calculus-AFP,
author = {Jesper Bengtson},
title = {The pi-calculus in nominal logic},
journal = {Archive of Formal Proofs},
month = may,
year = 2012,
note = {\url{http://isa-afp.org/entries/Pi_Calculus.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pi_Calculus/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Pi_Calculus/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pi_Calculus/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Pi_Calculus-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Pi_Calculus-2019-06-11.tar.gz">
afp-Pi_Calculus-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Pi_Calculus-2018-08-16.tar.gz">
afp-Pi_Calculus-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Pi_Calculus-2017-10-10.tar.gz">
afp-Pi_Calculus-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Pi_Calculus-2016-12-17.tar.gz">
afp-Pi_Calculus-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Pi_Calculus-2016-02-22.tar.gz">
afp-Pi_Calculus-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Pi_Calculus-2015-05-27.tar.gz">
afp-Pi_Calculus-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Pi_Calculus-2014-08-28.tar.gz">
afp-Pi_Calculus-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Pi_Calculus-2013-12-11.tar.gz">
afp-Pi_Calculus-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Pi_Calculus-2013-11-17.tar.gz">
afp-Pi_Calculus-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Pi_Calculus-2013-02-16.tar.gz">
afp-Pi_Calculus-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Pi_Calculus-2012-06-14.tar.gz">
afp-Pi_Calculus-2012-06-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Pi_Transcendental.html b/web/entries/Pi_Transcendental.html
--- a/web/entries/Pi_Transcendental.html
+++ b/web/entries/Pi_Transcendental.html
@@ -1,202 +1,202 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Transcendence of π - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">T</font>ranscendence
of
π
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Transcendence of π</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-09-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry shows the transcendence of &pi; based on the
classic proof using the fundamental theorem of symmetric polynomials
first given by von Lindemann in 1882, but the formalisation mostly
follows the version by Niven. The proof reuses much of the machinery
developed in the AFP entry on the transcendence of
-<em>e</em>.</p></div></td>
+<em>e</em>.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Pi_Transcendental-AFP,
author = {Manuel Eberl},
title = {The Transcendence of π},
journal = {Archive of Formal Proofs},
month = sep,
year = 2018,
note = {\url{http://isa-afp.org/entries/Pi_Transcendental.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="E_Transcendental.html">E_Transcendental</a>, <a href="Symmetric_Polynomials.html">Symmetric_Polynomials</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pi_Transcendental/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Pi_Transcendental/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pi_Transcendental/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Pi_Transcendental-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Pi_Transcendental-2019-06-11.tar.gz">
afp-Pi_Transcendental-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Pi_Transcendental-2018-10-02.tar.gz">
afp-Pi_Transcendental-2018-10-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Planarity_Certificates.html b/web/entries/Planarity_Certificates.html
--- a/web/entries/Planarity_Certificates.html
+++ b/web/entries/Planarity_Certificates.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Planarity Certificates - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>lanarity
<font class="first">C</font>ertificates
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Planarity Certificates</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-11-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This development provides a formalization of planarity based on
combinatorial maps and proves that Kuratowski's theorem implies
combinatorial planarity.
Moreover, it contains verified implementations of programs checking
certificates for planarity (i.e., a combinatorial map) or non-planarity
-(i.e., a Kuratowski subgraph).</div></td>
+(i.e., a Kuratowski subgraph).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Planarity_Certificates-AFP,
author = {Lars Noschinski},
title = {Planarity Certificates},
journal = {Archive of Formal Proofs},
month = nov,
year = 2015,
note = {\url{http://isa-afp.org/entries/Planarity_Certificates.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Case_Labeling.html">Case_Labeling</a>, <a href="Graph_Theory.html">Graph_Theory</a>, <a href="List-Index.html">List-Index</a>, <a href="Simpl.html">Simpl</a>, <a href="Transitive-Closure.html">Transitive-Closure</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Planarity_Certificates/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Planarity_Certificates/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Planarity_Certificates/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Planarity_Certificates-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Planarity_Certificates-2019-06-11.tar.gz">
afp-Planarity_Certificates-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Planarity_Certificates-2018-08-16.tar.gz">
afp-Planarity_Certificates-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Planarity_Certificates-2017-10-10.tar.gz">
afp-Planarity_Certificates-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Planarity_Certificates-2016-12-17.tar.gz">
afp-Planarity_Certificates-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Planarity_Certificates-2016-02-22.tar.gz">
afp-Planarity_Certificates-2016-02-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Poincare_Bendixson.html b/web/entries/Poincare_Bendixson.html
--- a/web/entries/Poincare_Bendixson.html
+++ b/web/entries/Poincare_Bendixson.html
@@ -1,199 +1,199 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Poincaré-Bendixson Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">P</font>oincaré-Bendixson
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Poincaré-Bendixson Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a> and
<a href="https://www.cs.cmu.edu/~yongkiat/">Yong Kiam Tan</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-12-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The Poincaré-Bendixson theorem is a classical result in the study of
(continuous) dynamical systems. Colloquially, it restricts the
possible behaviors of planar dynamical systems: such systems cannot be
chaotic. In practice, it is a useful tool for proving the existence of
(limiting) periodic behavior in planar systems. The theorem is an
interesting and challenging benchmark for formalized mathematics
because proofs in the literature rely on geometric sketches and only
hint at symmetric cases. It also requires a substantial background of
mathematical theories, e.g., the Jordan curve theorem, real analysis,
ordinary differential equations, and limiting (long-term) behavior of
-dynamical systems.</div></td>
+dynamical systems.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Poincare_Bendixson-AFP,
author = {Fabian Immler and Yong Kiam Tan},
title = {The Poincaré-Bendixson Theorem},
journal = {Archive of Formal Proofs},
month = dec,
year = 2019,
note = {\url{http://isa-afp.org/entries/Poincare_Bendixson.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Poincare_Bendixson/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Poincare_Bendixson/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Poincare_Bendixson/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Poincare_Bendixson-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Poincare_Bendixson-2019-12-18.tar.gz">
afp-Poincare_Bendixson-2019-12-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Poincare_Disc.html b/web/entries/Poincare_Disc.html
--- a/web/entries/Poincare_Disc.html
+++ b/web/entries/Poincare_Disc.html
@@ -1,201 +1,201 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Poincaré Disc Model - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>oincaré
<font class="first">D</font>isc
<font class="first">M</font>odel
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Poincaré Disc Model</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://poincare.matf.bg.ac.rs/~danijela">Danijela Simić</a>,
Filip Marić (filip /at/ matf /dot/ bg /dot/ ac /dot/ rs) and
Pierre Boutry (boutry /at/ unistra /dot/ fr)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-12-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We describe formalization of the Poincaré disc model of hyperbolic
geometry within the Isabelle/HOL proof assistant. The model is defined
within the extended complex plane (one-dimensional complex projective
space &#8450;P1), formalized in the AFP entry “Complex Geometry”.
Points, lines, congruence of pairs of points, betweenness of triples
of points, circles, and isometries are defined within the model. It is
shown that the model satisfies all of Tarski's axioms except
Euclid's axiom; instead, it satisfies its negation and
the limiting parallels axiom (which proves it to be a model of
-hyperbolic geometry).</div></td>
+hyperbolic geometry).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Poincare_Disc-AFP,
author = {Danijela Simić and Filip Marić and Pierre Boutry},
title = {Poincaré Disc Model},
journal = {Archive of Formal Proofs},
month = dec,
year = 2019,
note = {\url{http://isa-afp.org/entries/Poincare_Disc.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Complex_Geometry.html">Complex_Geometry</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Poincare_Disc/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Poincare_Disc/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Poincare_Disc/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Poincare_Disc-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Poincare_Disc-2020-01-17.tar.gz">
afp-Poincare_Disc-2020-01-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Polynomial_Factorization.html b/web/entries/Polynomial_Factorization.html
--- a/web/entries/Polynomial_Factorization.html
+++ b/web/entries/Polynomial_Factorization.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Polynomial Factorization - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>olynomial
<font class="first">F</font>actorization
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Polynomial Factorization</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a> and
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-01-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Based on existing libraries for polynomial interpolation and matrices,
we formalized several factorization algorithms for polynomials, including
Kronecker's algorithm for integer polynomials,
Yun's square-free factorization algorithm for field polynomials, and
Berlekamp's algorithm for polynomials over finite fields.
By combining the last one with Hensel's lifting,
we derive an efficient factorization algorithm for integer polynomials,
which is then lifted to rational polynomials by mechanizing Gauss' lemma.
Finally, we assembled a combined factorization algorithm for rational polynomials,
which combines all the mentioned algorithms and additionally uses the explicit formula for roots
of quadratic polynomials and a rational root test.
<p>
As side products, we developed division algorithms for polynomials over integral domains,
-as well as primality-testing and prime-factorization algorithms for integers.</div></td>
+as well as primality-testing and prime-factorization algorithms for integers.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Polynomial_Factorization-AFP,
author = {René Thiemann and Akihisa Yamada},
title = {Polynomial Factorization},
journal = {Archive of Formal Proofs},
month = jan,
year = 2016,
note = {\url{http://isa-afp.org/entries/Polynomial_Factorization.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Partial_Function_MR.html">Partial_Function_MR</a>, <a href="Polynomial_Interpolation.html">Polynomial_Interpolation</a>, <a href="Show.html">Show</a>, <a href="Sqrt_Babylonian.html">Sqrt_Babylonian</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Dirichlet_Series.html">Dirichlet_Series</a>, <a href="Functional_Ordered_Resolution_Prover.html">Functional_Ordered_Resolution_Prover</a>, <a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a>, <a href="Linear_Recurrences.html">Linear_Recurrences</a>, <a href="Perron_Frobenius.html">Perron_Frobenius</a>, <a href="Subresultants.html">Subresultants</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Polynomial_Factorization/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Polynomial_Factorization/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Polynomial_Factorization/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Polynomial_Factorization-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Polynomial_Factorization-2019-06-11.tar.gz">
afp-Polynomial_Factorization-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Polynomial_Factorization-2018-08-16.tar.gz">
afp-Polynomial_Factorization-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Polynomial_Factorization-2017-10-10.tar.gz">
afp-Polynomial_Factorization-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Polynomial_Factorization-2016-12-17.tar.gz">
afp-Polynomial_Factorization-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Polynomial_Factorization-2016-02-22.tar.gz">
afp-Polynomial_Factorization-2016-02-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
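
The abstract above mentions a rational root test as one ingredient of the combined factorization algorithm. A minimal Python sketch of that single ingredient (illustrative only, with made-up function names; not the Isabelle/HOL formalization) relies on the fact that every rational root p/q of an integer polynomial, written in lowest terms, satisfies p | a_0 and q | a_n:

    from fractions import Fraction

    def divisors(n):
        """All positive divisors of a non-zero integer."""
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    def rational_roots(coeffs):
        """coeffs[i] is the coefficient of x^i; assumes coeffs[0] != 0 and coeffs[-1] != 0."""
        a0, an = coeffs[0], coeffs[-1]
        candidates = {Fraction(s * p, q)
                      for p in divisors(a0) for q in divisors(an) for s in (1, -1)}
        return {r for r in candidates
                if sum(c * r ** i for i, c in enumerate(coeffs)) == 0}

    # 2x^3 - 3x^2 - 3x + 2 has the rational roots -1, 1/2 and 2.
    print(sorted(rational_roots([2, -3, -3, 2])))

This test only finds linear factors; per the abstract, the Berlekamp–Hensel combination covers full factorization.
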
diff --git a/web/entries/Polynomial_Interpolation.html b/web/entries/Polynomial_Interpolation.html
--- a/web/entries/Polynomial_Interpolation.html
+++ b/web/entries/Polynomial_Interpolation.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Polynomial Interpolation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>olynomial
<font class="first">I</font>nterpolation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Polynomial Interpolation</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a> and
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-01-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalized three algorithms for polynomial interpolation over arbitrary
fields: Lagrange's explicit expression, the recursive algorithm of Neville
and Aitken, and the Newton interpolation in combination with an efficient
implementation of divided differences. Variants of these algorithms for
integer polynomials are also available, where sometimes the interpolation
can fail; e.g., there is no linear integer polynomial <i>p</i> such that
<i>p(0) = 0</i> and <i>p(2) = 1</i>. Moreover, for the Newton interpolation
for integer polynomials, we proved that all intermediate results that are
computed during the algorithm must be integers. This admits an early
failure detection in the implementation. Finally, we proved the uniqueness
of polynomial interpolation.
<p>
The development also contains improved code equations to speed up the
-division of integers in target languages.</div></td>
+division of integers in target languages.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Polynomial_Interpolation-AFP,
author = {René Thiemann and Akihisa Yamada},
title = {Polynomial Interpolation},
journal = {Archive of Formal Proofs},
month = jan,
year = 2016,
note = {\url{http://isa-afp.org/entries/Polynomial_Interpolation.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Sqrt_Babylonian.html">Sqrt_Babylonian</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Deep_Learning.html">Deep_Learning</a>, <a href="Gauss_Sums.html">Gauss_Sums</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Polynomial_Interpolation/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Polynomial_Interpolation/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Polynomial_Interpolation/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Polynomial_Interpolation-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Polynomial_Interpolation-2019-06-11.tar.gz">
afp-Polynomial_Interpolation-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Polynomial_Interpolation-2018-08-16.tar.gz">
afp-Polynomial_Interpolation-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Polynomial_Interpolation-2017-10-10.tar.gz">
afp-Polynomial_Interpolation-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Polynomial_Interpolation-2016-12-17.tar.gz">
afp-Polynomial_Interpolation-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Polynomial_Interpolation-2016-02-22.tar.gz">
afp-Polynomial_Interpolation-2016-02-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
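
Of the three interpolation algorithms named in the abstract above, Newton's form with divided differences is the easiest to sketch. The Python fragment below is an illustration only (over exact rationals rather than an arbitrary field, with invented names), not the formalized development:

    from fractions import Fraction

    def newton_coefficients(xs, ys):
        """In-place divided-difference table; returns the Newton coefficients."""
        c = [Fraction(y) for y in ys]
        for j in range(1, len(xs)):
            for i in range(len(xs) - 1, j - 1, -1):
                c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
        return c

    def newton_eval(xs, c, x):
        """Horner-style evaluation of c[0] + c[1](x-x0) + c[2](x-x0)(x-x1) + ..."""
        acc = c[-1]
        for i in range(len(c) - 2, -1, -1):
            acc = acc * (x - xs[i]) + c[i]
        return acc

    # Interpolate f(x) = x^2 through x = 0, 1, 3 and evaluate at x = 2.
    xs = [Fraction(0), Fraction(1), Fraction(3)]
    ys = [x * x for x in xs]
    assert newton_eval(xs, newton_coefficients(xs, ys), Fraction(2)) == 4

As the abstract notes, the entry proves that when an interpolating integer polynomial exists all intermediate divided differences are integers, which justifies aborting the integer variant as soon as a non-integer appears.
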
diff --git a/web/entries/Polynomials.html b/web/entries/Polynomials.html
--- a/web/entries/Polynomials.html
+++ b/web/entries/Polynomials.html
@@ -1,296 +1,296 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Executable Multivariate Polynomials - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>xecutable
<font class="first">M</font>ultivariate
<font class="first">P</font>olynomials
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Executable Multivariate Polynomials</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com),
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>,
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>,
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>,
<a href="http://isabelle.in.tum.de/~haftmann">Florian Haftmann</a>,
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
Alexander Bentkamp (bentkamp /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-08-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We define multivariate polynomials over arbitrary (ordered) semirings in
combination with (executable) operations like addition, multiplication,
and substitution. We also define (weak) monotonicity of polynomials and
comparison of polynomials where we provide standard estimations like
absolute positiveness or the more recent approach of Neurauter, Zankl,
and Middeldorp. Moreover, it is proven that strongly normalizing
(monotone) orders can be lifted to strongly normalizing (monotone) orders
over polynomials. Our formalization was performed as part of the <a
href="http://cl-informatik.uibk.ac.at/software/ceta">IsaFoR/CeTA-system</a>
which contains several termination techniques. The provided theories have
been essential to formalize polynomial interpretations.
<p>
This formalization also contains an abstract representation as coefficient functions with finite
support and a type of power-products. If this type is ordered by a linear (term) ordering, various
additional notions, such as leading power-product, leading coefficient etc., are introduced as
well. Furthermore, a lot of generic properties of, and functions on, multivariate polynomials are
formalized, including the substitution and evaluation homomorphisms, embeddings of polynomial rings
into larger rings (i.e. with one additional indeterminate), homogenization and dehomogenization of
-polynomials, and the canonical isomorphism between R[X,Y] and R[X][Y].</div></td>
+polynomials, and the canonical isomorphism between R[X,Y] and R[X][Y].</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2010-09-17]: Moved theories on arbitrary (ordered) semirings to Abstract Rewriting.<br>
[2016-10-28]: Added abstract representation of polynomials and authors Maletzky/Immler.<br>
[2018-01-23]: Added authors Haftmann, Lochbihler after incorporating
their formalization of multivariate polynomials based on Polynomial mappings.
Moved material from Bentkamp's entry "Deep Learning".<br>
[2019-04-18]: Added material about polynomials whose power-products are represented themselves
by polynomial mappings.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Polynomials-AFP,
author = {Christian Sternagel and René Thiemann and Alexander Maletzky and Fabian Immler and Florian Haftmann and Andreas Lochbihler and Alexander Bentkamp},
title = {Executable Multivariate Polynomials},
journal = {Archive of Formal Proofs},
month = aug,
year = 2010,
note = {\url{http://isa-afp.org/entries/Polynomials.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a>, <a href="Matrix.html">Matrix</a>, <a href="Show.html">Show</a>, <a href="Well_Quasi_Orders.html">Well_Quasi_Orders</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Deep_Learning.html">Deep_Learning</a>, <a href="Groebner_Bases.html">Groebner_Bases</a>, <a href="Lambda_Free_KBOs.html">Lambda_Free_KBOs</a>, <a href="Symmetric_Polynomials.html">Symmetric_Polynomials</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Polynomials/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Polynomials/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Polynomials/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Polynomials-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Polynomials-2020-01-14.tar.gz">
afp-Polynomials-2020-01-14.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Polynomials-2019-06-11.tar.gz">
afp-Polynomials-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Polynomials-2018-08-16.tar.gz">
afp-Polynomials-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Polynomials-2017-10-10.tar.gz">
afp-Polynomials-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Polynomials-2016-12-17.tar.gz">
afp-Polynomials-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Polynomials-2016-02-22.tar.gz">
afp-Polynomials-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Polynomials-2015-05-27.tar.gz">
afp-Polynomials-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Polynomials-2014-08-28.tar.gz">
afp-Polynomials-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Polynomials-2013-12-11.tar.gz">
afp-Polynomials-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Polynomials-2013-11-17.tar.gz">
afp-Polynomials-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Polynomials-2013-02-16.tar.gz">
afp-Polynomials-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Polynomials-2012-05-24.tar.gz">
afp-Polynomials-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Polynomials-2011-10-11.tar.gz">
afp-Polynomials-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Polynomials-2011-02-11.tar.gz">
afp-Polynomials-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Polynomials-2010-08-11.tar.gz">
afp-Polynomials-2010-08-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
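
The abstract above describes multivariate polynomials as coefficient functions with finite support over a type of power-products. A small Python sketch of that representation (exponent tuples as power-products, integer coefficients; purely illustrative, not the AFP encoding) with executable addition and multiplication:

    from collections import Counter

    def poly_add(p, q):
        """Pointwise sum of two {power-product: coefficient} mappings."""
        r = Counter(p)
        for m, c in q.items():
            r[m] += c
        return {m: c for m, c in r.items() if c != 0}

    def poly_mul(p, q):
        """Multiply by adding exponent vectors and convolving coefficients."""
        r = Counter()
        for m1, c1 in p.items():
            for m2, c2 in q.items():
                r[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
        return {m: c for m, c in r.items() if c != 0}

    # Power-products are exponent tuples: (1, 0) is x, (0, 1) is y.
    x, y = {(1, 0): 1}, {(0, 1): 1}
    x_plus_y = poly_add(x, y)
    assert poly_mul(x_plus_y, x_plus_y) == {(2, 0): 1, (1, 1): 2, (0, 2): 1}
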
diff --git a/web/entries/Pop_Refinement.html b/web/entries/Pop_Refinement.html
--- a/web/entries/Pop_Refinement.html
+++ b/web/entries/Pop_Refinement.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Pop-Refinement - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>op-Refinement
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Pop-Refinement</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.kestrel.edu/~coglio">Alessandro Coglio</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-07-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Pop-refinement is an approach to stepwise refinement, carried out inside an interactive theorem prover by constructing a monotonically decreasing sequence of predicates over deeply embedded target programs. The sequence starts with a predicate that characterizes the possible implementations, and ends with a predicate that characterizes a unique program in explicit syntactic form. Pop-refinement enables more requirements (e.g. program-level and non-functional) to be captured in the initial specification and preserved through refinement. Security requirements expressed as hyperproperties (i.e. predicates over sets of traces) are always preserved by pop-refinement, unlike the popular notion of refinement as trace set inclusion. Two simple examples in Isabelle/HOL are presented, featuring program-level requirements, non-functional requirements, and hyperproperties.</div></td>
+ <td class="abstract mathjax_process">Pop-refinement is an approach to stepwise refinement, carried out inside an interactive theorem prover by constructing a monotonically decreasing sequence of predicates over deeply embedded target programs. The sequence starts with a predicate that characterizes the possible implementations, and ends with a predicate that characterizes a unique program in explicit syntactic form. Pop-refinement enables more requirements (e.g. program-level and non-functional) to be captured in the initial specification and preserved through refinement. Security requirements expressed as hyperproperties (i.e. predicates over sets of traces) are always preserved by pop-refinement, unlike the popular notion of refinement as trace set inclusion. Two simple examples in Isabelle/HOL are presented, featuring program-level requirements, non-functional requirements, and hyperproperties.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Pop_Refinement-AFP,
author = {Alessandro Coglio},
title = {Pop-Refinement},
journal = {Archive of Formal Proofs},
month = jul,
year = 2014,
note = {\url{http://isa-afp.org/entries/Pop_Refinement.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pop_Refinement/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Pop_Refinement/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pop_Refinement/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Pop_Refinement-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Pop_Refinement-2019-06-11.tar.gz">
afp-Pop_Refinement-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Pop_Refinement-2018-08-16.tar.gz">
afp-Pop_Refinement-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Pop_Refinement-2017-10-10.tar.gz">
afp-Pop_Refinement-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Pop_Refinement-2016-12-17.tar.gz">
afp-Pop_Refinement-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Pop_Refinement-2016-02-22.tar.gz">
afp-Pop_Refinement-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Pop_Refinement-2015-05-27.tar.gz">
afp-Pop_Refinement-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Pop_Refinement-2014-08-28.tar.gz">
afp-Pop_Refinement-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Pop_Refinement-2014-07-03.tar.gz">
afp-Pop_Refinement-2014-07-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
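
As a toy illustration of the idea described in the abstract above (my own, not one of the Isabelle/HOL examples in the entry), the following Python fragment builds a shrinking chain of predicates over deeply embedded programs, where each predicate implies the previous one and the last pins down a unique program in explicit syntactic form:

    def evaluate(prog, x):
        """Deep embedding: programs are expression trees over one input x."""
        op, *args = prog
        if op == 'var':
            return x
        if op == 'const':
            return args[0]
        if op == 'add':
            return evaluate(args[0], x) + evaluate(args[1], x)

    def spec0(prog):
        """Initial spec: the program doubles its input (checked on sample inputs)."""
        return all(evaluate(prog, x) == 2 * x for x in range(5))

    def spec1(prog):
        """Refined spec: additionally, no literal constants occur (a program-level requirement)."""
        return spec0(prog) and 'const' not in str(prog)

    def spec2(prog):
        """Final spec: a unique program in explicit syntactic form."""
        return prog == ('add', ('var',), ('var',))

    p = ('add', ('var',), ('var',))
    # The chain shrinks: every program satisfying spec2 satisfies spec1, and so on.
    assert spec2(p) and spec1(p) and spec0(p)
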
diff --git a/web/entries/Posix-Lexing.html b/web/entries/Posix-Lexing.html
--- a/web/entries/Posix-Lexing.html
+++ b/web/entries/Posix-Lexing.html
@@ -1,230 +1,230 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>POSIX Lexing with Derivatives of Regular Expressions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>OSIX
<font class="first">L</font>exing
with
<font class="first">D</font>erivatives
of
<font class="first">R</font>egular
<font class="first">E</font>xpressions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">POSIX Lexing with Derivatives of Regular Expressions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://kcl.academia.edu/FahadAusaf">Fahad Ausaf</a>,
<a href="https://rd.host.cs.st-andrews.ac.uk">Roy Dyckhoff</a> and
<a href="http://www.inf.kcl.ac.uk/staff/urbanc/">Christian Urban</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Brzozowski introduced the notion of derivatives for regular
expressions. They can be used for a very simple regular expression
matching algorithm. Sulzmann and Lu cleverly extended this algorithm
in order to deal with POSIX matching, which is the underlying
disambiguation strategy for regular expressions needed in lexers. In
this entry we give our inductive definition of what a POSIX value is
and show (i) that such a value is unique (for given regular expression
and string being matched) and (ii) that Sulzmann and Lu's algorithm
always generates such a value (provided that the regular expression
matches the string). We also prove the correctness of an optimised
-version of the POSIX matching algorithm.</div></td>
+version of the POSIX matching algorithm.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Posix-Lexing-AFP,
author = {Fahad Ausaf and Roy Dyckhoff and Christian Urban},
title = {POSIX Lexing with Derivatives of Regular Expressions},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/Posix-Lexing.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Posix-Lexing/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Posix-Lexing/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Posix-Lexing/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Posix-Lexing-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Posix-Lexing-2019-06-11.tar.gz">
afp-Posix-Lexing-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Posix-Lexing-2018-08-16.tar.gz">
afp-Posix-Lexing-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Posix-Lexing-2017-10-10.tar.gz">
afp-Posix-Lexing-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Posix-Lexing-2016-12-17.tar.gz">
afp-Posix-Lexing-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Posix-Lexing-2016-05-24.tar.gz">
afp-Posix-Lexing-2016-05-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
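
The abstract above builds on Brzozowski derivatives. A minimal Python sketch of plain derivative-based matching follows (matching core only, with an assumed tuple encoding of regular expressions; the POSIX value construction that the entry is actually about is not shown):

    # Regular expressions: 'zero', 'one', ('chr', c), ('alt', r, s), ('seq', r, s), ('star', r).

    def nullable(r):
        tag = r if isinstance(r, str) else r[0]
        if tag == 'alt':
            return nullable(r[1]) or nullable(r[2])
        if tag == 'seq':
            return nullable(r[1]) and nullable(r[2])
        return tag in ('one', 'star')

    def deriv(r, c):
        """Brzozowski derivative: the language of r after reading character c."""
        tag = r if isinstance(r, str) else r[0]
        if tag in ('zero', 'one'):
            return 'zero'
        if tag == 'chr':
            return 'one' if r[1] == c else 'zero'
        if tag == 'alt':
            return ('alt', deriv(r[1], c), deriv(r[2], c))
        if tag == 'seq':
            d = ('seq', deriv(r[1], c), r[2])
            return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d
        if tag == 'star':
            return ('seq', deriv(r[1], c), r)

    def matches(r, s):
        for c in s:
            r = deriv(r, c)
        return nullable(r)

    # (a|b)* matches "abba" but not "abc".
    r = ('star', ('alt', ('chr', 'a'), ('chr', 'b')))
    assert matches(r, 'abba') and not matches(r, 'abc')
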
diff --git a/web/entries/Possibilistic_Noninterference.html b/web/entries/Possibilistic_Noninterference.html
--- a/web/entries/Possibilistic_Noninterference.html
+++ b/web/entries/Possibilistic_Noninterference.html
@@ -1,244 +1,244 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Possibilistic Noninterference - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>ossibilistic
<font class="first">N</font>oninterference
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Possibilistic Noninterference</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk) and
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-09-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize a wide variety of Volpano/Smith-style noninterference
+ <td class="abstract mathjax_process">We formalize a wide variety of Volpano/Smith-style noninterference
notions for a while language with parallel composition.
We systematize and classify these notions according to
compositionality w.r.t. the language constructs. Compositionality
yields sound syntactic criteria (a.k.a. type systems) in a uniform way.
<p>
An <a href="http://www21.in.tum.de/~nipkow/pubs/cpp12.html">article</a>
about these proofs is published in the proceedings
-of the conference Certified Programs and Proofs 2012.</div></td>
+of the conference Certified Programs and Proofs 2012.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Possibilistic_Noninterference-AFP,
author = {Andrei Popescu and Johannes Hölzl},
title = {Possibilistic Noninterference},
journal = {Archive of Formal Proofs},
month = sep,
year = 2012,
note = {\url{http://isa-afp.org/entries/Possibilistic_Noninterference.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Possibilistic_Noninterference/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Possibilistic_Noninterference/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Possibilistic_Noninterference/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Possibilistic_Noninterference-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Possibilistic_Noninterference-2019-06-11.tar.gz">
afp-Possibilistic_Noninterference-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Possibilistic_Noninterference-2018-08-16.tar.gz">
afp-Possibilistic_Noninterference-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Possibilistic_Noninterference-2017-10-10.tar.gz">
afp-Possibilistic_Noninterference-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Possibilistic_Noninterference-2016-12-17.tar.gz">
afp-Possibilistic_Noninterference-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Possibilistic_Noninterference-2016-02-22.tar.gz">
afp-Possibilistic_Noninterference-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Possibilistic_Noninterference-2015-05-27.tar.gz">
afp-Possibilistic_Noninterference-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Possibilistic_Noninterference-2014-08-28.tar.gz">
afp-Possibilistic_Noninterference-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Possibilistic_Noninterference-2013-12-11.tar.gz">
afp-Possibilistic_Noninterference-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Possibilistic_Noninterference-2013-11-17.tar.gz">
afp-Possibilistic_Noninterference-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Possibilistic_Noninterference-2013-02-16.tar.gz">
afp-Possibilistic_Noninterference-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Possibilistic_Noninterference-2012-09-10.tar.gz">
afp-Possibilistic_Noninterference-2012-09-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
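
As a much-simplified illustration of the property discussed in the abstract above (deterministic straight-line programs only, so merely the degenerate case of the possibilistic notions in the entry, which treat a while language with parallel composition), one can test whether the final low-observable state is independent of the initial secret:

    def run(prog, state):
        """A program is a list of assignments (variable, expression over the state)."""
        s = dict(state)
        for var, expr in prog:
            s[var] = expr(s)
        return s

    def noninterfering(prog, low_vars, values=range(2)):
        """The low-observable result must not vary with the initial secret 'h'."""
        for low in values:
            outs = {tuple(run(prog, {'l': low, 'h': high})[v] for v in low_vars)
                    for high in values}
            if len(outs) != 1:
                return False
        return True

    secure = [('l', lambda s: s['l'] + 1)]
    leaky = [('l', lambda s: s['h'])]
    assert noninterfering(secure, ['l'])
    assert not noninterfering(leaky, ['l'])
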
diff --git a/web/entries/Pratt_Certificate.html b/web/entries/Pratt_Certificate.html
--- a/web/entries/Pratt_Certificate.html
+++ b/web/entries/Pratt_Certificate.html
@@ -1,237 +1,237 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Pratt's Primality Certificates - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>ratt's
<font class="first">P</font>rimality
<font class="first">C</font>ertificates
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Pratt's Primality Certificates</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a> and
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-07-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">In 1975, Pratt introduced a proof system for certifying primes. He showed that a number <i>p</i> is prime iff a primality certificate for <i>p</i> exists. By showing a logarithmic upper bound on the length of the certificates in size of the prime number, he concluded that the decision problem for prime numbers is in NP. This work formalizes soundness and completeness of Pratt's proof system as well as an upper bound for the size of the certificate.</div></td>
+ <td class="abstract mathjax_process">In 1975, Pratt introduced a proof system for certifying primes. He showed that a number <i>p</i> is prime iff a primality certificate for <i>p</i> exists. By showing a logarithmic upper bound on the length of the certificates in size of the prime number, he concluded that the decision problem for prime numbers is in NP. This work formalizes soundness and completeness of Pratt's proof system as well as an upper bound for the size of the certificate.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Pratt_Certificate-AFP,
author = {Simon Wimmer and Lars Noschinski},
title = {Pratt's Primality Certificates},
journal = {Archive of Formal Proofs},
month = jul,
year = 2013,
note = {\url{http://isa-afp.org/entries/Pratt_Certificate.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Lehmer.html">Lehmer</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Bertrands_Postulate.html">Bertrands_Postulate</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pratt_Certificate/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Pratt_Certificate/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Pratt_Certificate/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Pratt_Certificate-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Pratt_Certificate-2019-06-11.tar.gz">
afp-Pratt_Certificate-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Pratt_Certificate-2018-08-16.tar.gz">
afp-Pratt_Certificate-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Pratt_Certificate-2017-10-10.tar.gz">
afp-Pratt_Certificate-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Pratt_Certificate-2016-12-17.tar.gz">
afp-Pratt_Certificate-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Pratt_Certificate-2016-02-22.tar.gz">
afp-Pratt_Certificate-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Pratt_Certificate-2015-05-27.tar.gz">
afp-Pratt_Certificate-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Pratt_Certificate-2014-08-28.tar.gz">
afp-Pratt_Certificate-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Pratt_Certificate-2013-12-11.tar.gz">
afp-Pratt_Certificate-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Pratt_Certificate-2013-11-17.tar.gz">
afp-Pratt_Certificate-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Pratt_Certificate-2013-07-29.tar.gz">
afp-Pratt_Certificate-2013-07-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
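
The abstract above describes Pratt's certificates. A small Python checker for an assumed nested-tuple certificate format (p, witness, sub-certificates) illustrates the idea without reproducing the formalized proof system: p > 2 is prime iff some g has order p − 1 modulo p, which the certified factorization of p − 1 lets us verify with a few modular exponentiations.

    def check_pratt(cert):
        """cert = (p, g, subcerts): g is the witness, subcerts certify the prime factors of p - 1."""
        p, g, subcerts = cert
        if p == 2:
            return True
        m, factors = p - 1, []
        for sub in subcerts:
            q = sub[0]
            if not check_pratt(sub):
                return False
            while m % q == 0:              # strip all occurrences of the certified factor q
                m //= q
                factors.append(q)
        if m != 1:                         # p - 1 must be fully factored by certified primes
            return False
        if pow(g, p - 1, p) != 1:
            return False
        # g then has order exactly p - 1, so the group of units mod p has p - 1 elements: p is prime.
        return all(pow(g, (p - 1) // q, p) != 1 for q in set(factors))

    cert_2 = (2, 1, [])
    cert_3 = (3, 2, [cert_2])              # 3 - 1 = 2, witness 2
    cert_7 = (7, 3, [cert_2, cert_3])      # 7 - 1 = 2 * 3, witness 3
    assert check_pratt(cert_7)
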
diff --git a/web/entries/Presburger-Automata.html b/web/entries/Presburger-Automata.html
--- a/web/entries/Presburger-Automata.html
+++ b/web/entries/Presburger-Automata.html
@@ -1,265 +1,265 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalizing the Logic-Automaton Connection - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalizing
the
<font class="first">L</font>ogic-Automaton
<font class="first">C</font>onnection
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalizing the Logic-Automaton Connection</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a> and
Markus Reiter
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-12-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This work presents a formalization of a library for automata on bit strings. It forms the basis of a reflection-based decision procedure for Presburger arithmetic, which is efficiently executable thanks to Isabelle's code generator. With this work, we therefore provide a mechanized proof of a well-known connection between logic and automata theory. The formalization is also described in a publication [TPHOLs 2009].</div></td>
+ <td class="abstract mathjax_process">This work presents a formalization of a library for automata on bit strings. It forms the basis of a reflection-based decision procedure for Presburger arithmetic, which is efficiently executable thanks to Isabelle's code generator. With this work, we therefore provide a mechanized proof of a well-known connection between logic and automata theory. The formalization is also described in a publication [TPHOLs 2009].</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Presburger-Automata-AFP,
author = {Stefan Berghofer and Markus Reiter},
title = {Formalizing the Logic-Automaton Connection},
journal = {Archive of Formal Proofs},
month = dec,
year = 2009,
note = {\url{http://isa-afp.org/entries/Presburger-Automata.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Presburger-Automata/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Presburger-Automata/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Presburger-Automata/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Presburger-Automata-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Presburger-Automata-2019-06-11.tar.gz">
afp-Presburger-Automata-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Presburger-Automata-2018-08-16.tar.gz">
afp-Presburger-Automata-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Presburger-Automata-2017-10-10.tar.gz">
afp-Presburger-Automata-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Presburger-Automata-2016-12-17.tar.gz">
afp-Presburger-Automata-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Presburger-Automata-2016-02-22.tar.gz">
afp-Presburger-Automata-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Presburger-Automata-2015-05-27.tar.gz">
afp-Presburger-Automata-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Presburger-Automata-2014-08-28.tar.gz">
afp-Presburger-Automata-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Presburger-Automata-2013-12-11.tar.gz">
afp-Presburger-Automata-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Presburger-Automata-2013-11-17.tar.gz">
afp-Presburger-Automata-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Presburger-Automata-2013-03-02.tar.gz">
afp-Presburger-Automata-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Presburger-Automata-2013-02-16.tar.gz">
afp-Presburger-Automata-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Presburger-Automata-2012-05-24.tar.gz">
afp-Presburger-Automata-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Presburger-Automata-2011-10-11.tar.gz">
afp-Presburger-Automata-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Presburger-Automata-2011-02-11.tar.gz">
afp-Presburger-Automata-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Presburger-Automata-2010-07-01.tar.gz">
afp-Presburger-Automata-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Presburger-Automata-2009-12-12.tar.gz">
afp-Presburger-Automata-2009-12-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
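
The abstract above concerns automata on bit strings. As a small illustration of that encoding (not the AFP construction), the atomic formula x + y = z corresponds to an automaton over bit columns whose state is the carry; the Python fragment below simulates it on least-significant-bit-first encodings:

    def accepts_sum(x, y, z):
        """Run the carry automaton on LSB-first binary encodings of x, y, z."""
        carry = 0                                  # automaton state
        for i in range(max(x, y, z).bit_length() + 1):
            a, b, c = (x >> i) & 1, (y >> i) & 1, (z >> i) & 1
            s = a + b + carry
            if s & 1 != c:                         # output bit must equal z's bit
                return False
            carry = s >> 1                         # next state
        return carry == 0                          # accepting: no leftover carry

    assert accepts_sum(13, 29, 42)
    assert not accepts_sum(13, 29, 41)

Quantifiers are then handled by projection and complementation on such automata, which is where the library formalized in the entry comes in.
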
diff --git a/web/entries/Prim_Dijkstra_Simple.html b/web/entries/Prim_Dijkstra_Simple.html
--- a/web/entries/Prim_Dijkstra_Simple.html
+++ b/web/entries/Prim_Dijkstra_Simple.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>urely
<font class="first">F</font>unctional,
<font class="first">S</font>imple,
and
<font class="first">E</font>fficient
<font class="first">I</font>mplementation
of
<font class="first">P</font>rim
and
<font class="first">D</font>ijkstra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-06-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We verify purely functional, simple and efficient implementations of
Prim's and Dijkstra's algorithms. This constitutes the first
verification of an executable and even efficient version of
Prim's algorithm. This entry formalizes the second part of our
ITP-2019 proof pearl <em>Purely Functional, Simple and Efficient
-Priority Search Trees and Applications to Prim and Dijkstra</em>.</div></td>
+Priority Search Trees and Applications to Prim and Dijkstra</em>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Prim_Dijkstra_Simple-AFP,
author = {Peter Lammich and Tobias Nipkow},
title = {Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra},
journal = {Archive of Formal Proofs},
month = jun,
year = 2019,
note = {\url{http://isa-afp.org/entries/Prim_Dijkstra_Simple.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Priority_Search_Trees.html">Priority_Search_Trees</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prim_Dijkstra_Simple/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Prim_Dijkstra_Simple/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prim_Dijkstra_Simple/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Prim_Dijkstra_Simple-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Prim_Dijkstra_Simple-2019-06-29.tar.gz">
afp-Prim_Dijkstra_Simple-2019-06-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
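
For orientation, the abstract above concerns verified, purely functional implementations; the Python sketch below is only the textbook imperative form of Dijkstra's algorithm with a binary heap, not the priority-search-tree implementation of the entry:

    import heapq

    def dijkstra(graph, source):
        """graph: {node: [(neighbour, weight), ...]} with non-negative weights."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                   # stale queue entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
    assert dijkstra(g, 'a') == {'a': 0, 'b': 1, 'c': 3}
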
diff --git a/web/entries/Prime_Distribution_Elementary.html b/web/entries/Prime_Distribution_Elementary.html
--- a/web/entries/Prime_Distribution_Elementary.html
+++ b/web/entries/Prime_Distribution_Elementary.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Elementary Facts About the Distribution of Primes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>lementary
<font class="first">F</font>acts
<font class="first">A</font>bout
the
<font class="first">D</font>istribution
of
<font class="first">P</font>rimes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Elementary Facts About the Distribution of Primes</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-02-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry is a formalisation of Chapter 4 (and parts of
Chapter 3) of Apostol's <a
href="https://www.springer.com/de/book/9780387901633"><em>Introduction
to Analytic Number Theory</em></a>. The main topics that
are addressed are properties of the distribution of prime numbers that
can be shown in an elementary way (i.&thinsp;e. without the Prime
Number Theorem), the various equivalent forms of the PNT (which imply
each other in elementary ways), and consequences that follow from the
PNT in elementary ways. The latter include, most notably, asymptotic
bounds for the number of distinct prime factors of
<em>n</em>, the divisor function
<em>d(n)</em>, Euler's totient function
<em>&phi;(n)</em>, and
-lcm(1,&hellip;,<em>n</em>).</p></div></td>
+lcm(1,&hellip;,<em>n</em>).</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Prime_Distribution_Elementary-AFP,
author = {Manuel Eberl},
title = {Elementary Facts About the Distribution of Primes},
journal = {Archive of Formal Proofs},
month = feb,
year = 2019,
note = {\url{http://isa-afp.org/entries/Prime_Distribution_Elementary.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Prime_Number_Theorem.html">Prime_Number_Theorem</a>, <a href="Zeta_Function.html">Zeta_Function</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="IMO2019.html">IMO2019</a>, <a href="Zeta_3_Irrational.html">Zeta_3_Irrational</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prime_Distribution_Elementary/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Prime_Distribution_Elementary/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prime_Distribution_Elementary/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Prime_Distribution_Elementary-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Prime_Distribution_Elementary-2019-06-11.tar.gz">
afp-Prime_Distribution_Elementary-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Prime_Distribution_Elementary-2019-02-22.tar.gz">
afp-Prime_Distribution_Elementary-2019-02-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Prime_Harmonic_Series.html b/web/entries/Prime_Harmonic_Series.html
--- a/web/entries/Prime_Harmonic_Series.html
+++ b/web/entries/Prime_Harmonic_Series.html
@@ -1,233 +1,233 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Divergence of the Prime Harmonic Series - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">D</font>ivergence
of
the
<font class="first">P</font>rime
<font class="first">H</font>armonic
<font class="first">S</font>eries
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Divergence of the Prime Harmonic Series</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
In this work, we prove the lower bound <span class="nobr">ln(H_n) -
ln(5/3)</span> for the
partial sum of the Prime Harmonic series and, based on this, the divergence of
the Prime Harmonic Series
<span class="nobr">∑[p&thinsp;prime]&thinsp;·&thinsp;1/p.</span>
</p><p>
The proof relies on the unique squarefree decomposition of natural numbers. This
is similar to Euler's original proof (which was highly informal and morally
questionable). Its advantage over proofs by contradiction, like the famous one
by Paul Erdős, is that it provides a relatively good lower bound for the partial
sums.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Prime_Harmonic_Series-AFP,
author = {Manuel Eberl},
title = {The Divergence of the Prime Harmonic Series},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Prime_Harmonic_Series.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prime_Harmonic_Series/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Prime_Harmonic_Series/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prime_Harmonic_Series/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Prime_Harmonic_Series-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Prime_Harmonic_Series-2019-06-11.tar.gz">
afp-Prime_Harmonic_Series-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Prime_Harmonic_Series-2018-08-16.tar.gz">
afp-Prime_Harmonic_Series-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Prime_Harmonic_Series-2017-10-10.tar.gz">
afp-Prime_Harmonic_Series-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Prime_Harmonic_Series-2016-12-17.tar.gz">
afp-Prime_Harmonic_Series-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Prime_Harmonic_Series-2016-02-22.tar.gz">
afp-Prime_Harmonic_Series-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Prime_Harmonic_Series-2016-01-05.tar.gz">
afp-Prime_Harmonic_Series-2016-01-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Prime_Number_Theorem.html b/web/entries/Prime_Number_Theorem.html
--- a/web/entries/Prime_Number_Theorem.html
+++ b/web/entries/Prime_Number_Theorem.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Prime Number Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">P</font>rime
<font class="first">N</font>umber
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Prime Number Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a> and
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-09-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article provides a short proof of the Prime Number
Theorem in several equivalent forms, most notably
&pi;(<em>x</em>) ~ <em>x</em>/ln
<em>x</em> where &pi;(<em>x</em>) is the
number of primes no larger than <em>x</em>. It also
defines other basic number-theoretic functions related to primes like
Chebyshev's functions &thetasym; and &psi; and the
&ldquo;<em>n</em>-th prime number&rdquo; function
p<sub><em>n</em></sub>. We also show various
bounds and relationships between these functions. Lastly, we
derive Mertens' First and Second Theorem, i.&thinsp;e.
&sum;<sub><em>p</em>&le;<em>x</em></sub>
ln <em>p</em>/<em>p</em> = ln
<em>x</em> + <em>O</em>(1) and
&sum;<sub><em>p</em>&le;<em>x</em></sub>
1/<em>p</em> = ln ln <em>x</em> + M +
<em>O</em>(1/ln <em>x</em>). We also give
explicit bounds for the remainder terms.</p> <p>The proof
of the Prime Number Theorem builds on a library of Dirichlet series
and analytic combinatorics. We essentially follow the presentation by
Newman. The core part of the proof is a Tauberian theorem for
Dirichlet series, which is proven using complex analysis and then used
to strengthen Mertens' First Theorem to
&sum;<sub><em>p</em>&le;<em>x</em></sub>
ln <em>p</em>/<em>p</em> = ln
<em>x</em> + c + <em>o</em>(1).</p>
<p>A variant of this proof has been formalised before by
Harrison in HOL Light, and formalisations of Selberg's elementary
proof exist both by Avigad <em>et al.</em> in Isabelle and
by Carneiro in Metamath. The advantage of the analytic proof is that,
while it requires more powerful mathematical tools, it is considerably
shorter and clearer. This article attempts to provide a short and
clear formalisation of all components of that proof using the full
range of mathematical machinery available in Isabelle, staying as
-close as possible to Newman's simple paper proof.</p></div></td>
+close as possible to Newman's simple paper proof.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Prime_Number_Theorem-AFP,
author = {Manuel Eberl and Lawrence C. Paulson},
title = {The Prime Number Theorem},
journal = {Archive of Formal Proofs},
month = sep,
year = 2018,
note = {\url{http://isa-afp.org/entries/Prime_Number_Theorem.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Stirling_Formula.html">Stirling_Formula</a>, <a href="Zeta_Function.html">Zeta_Function</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Prime_Distribution_Elementary.html">Prime_Distribution_Elementary</a>, <a href="Transcendence_Series_Hancl_Rucki.html">Transcendence_Series_Hancl_Rucki</a>, <a href="Zeta_3_Irrational.html">Zeta_3_Irrational</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prime_Number_Theorem/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Prime_Number_Theorem/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prime_Number_Theorem/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Prime_Number_Theorem-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Prime_Number_Theorem-2019-06-11.tar.gz">
afp-Prime_Number_Theorem-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Prime_Number_Theorem-2018-09-20.tar.gz">
afp-Prime_Number_Theorem-2018-09-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Priority_Queue_Braun.html b/web/entries/Priority_Queue_Braun.html
--- a/web/entries/Priority_Queue_Braun.html
+++ b/web/entries/Priority_Queue_Braun.html
@@ -1,230 +1,230 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Priority Queues Based on Braun Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>riority
<font class="first">Q</font>ueues
<font class="first">B</font>ased
on
<font class="first">B</font>raun
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Priority Queues Based on Braun Trees</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-09-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry verifies priority queues based on Braun trees. Insertion
and deletion take logarithmic time and preserve the balanced nature
-of Braun trees. Two implementations of deletion are provided.</div></td>
+of Braun trees. Two implementations of deletion are provided.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2019-12-16]: Added theory Priority_Queue_Braun2 with second version of del_min</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Priority_Queue_Braun-AFP,
author = {Tobias Nipkow},
title = {Priority Queues Based on Braun Trees},
journal = {Archive of Formal Proofs},
month = sep,
year = 2014,
note = {\url{http://isa-afp.org/entries/Priority_Queue_Braun.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Priority_Queue_Braun/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Priority_Queue_Braun/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Priority_Queue_Braun/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Priority_Queue_Braun-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Priority_Queue_Braun-2019-06-11.tar.gz">
afp-Priority_Queue_Braun-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Priority_Queue_Braun-2018-08-16.tar.gz">
afp-Priority_Queue_Braun-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Priority_Queue_Braun-2017-10-10.tar.gz">
afp-Priority_Queue_Braun-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Priority_Queue_Braun-2016-12-17.tar.gz">
afp-Priority_Queue_Braun-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Priority_Queue_Braun-2016-02-22.tar.gz">
afp-Priority_Queue_Braun-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Priority_Queue_Braun-2015-05-27.tar.gz">
afp-Priority_Queue_Braun-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Priority_Queue_Braun-2014-09-04.tar.gz">
afp-Priority_Queue_Braun-2014-09-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Priority_Search_Trees.html b/web/entries/Priority_Search_Trees.html
--- a/web/entries/Priority_Search_Trees.html
+++ b/web/entries/Priority_Search_Trees.html
@@ -1,200 +1,200 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Priority Search Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>riority
<font class="first">S</font>earch
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Priority Search Trees</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-06-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a new, purely functional, simple and efficient data
structure combining a search tree and a priority queue, which we call
a <em>priority search tree</em>. The salient feature of priority search
trees is that they offer a decrease-key operation, something that is
missing from other simple, purely functional priority queue
implementations. Priority search trees can be implemented on top of
any search tree. This entry does the implementation for red-black
trees. This entry formalizes the first part of our ITP-2019 proof
pearl <em>Purely Functional, Simple and Efficient Priority
-Search Trees and Applications to Prim and Dijkstra</em>.</div></td>
+Search Trees and Applications to Prim and Dijkstra</em>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Priority_Search_Trees-AFP,
author = {Peter Lammich and Tobias Nipkow},
title = {Priority Search Trees},
journal = {Archive of Formal Proofs},
month = jun,
year = 2019,
note = {\url{http://isa-afp.org/entries/Priority_Search_Trees.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Prim_Dijkstra_Simple.html">Prim_Dijkstra_Simple</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Priority_Search_Trees/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Priority_Search_Trees/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Priority_Search_Trees/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Priority_Search_Trees-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Priority_Search_Trees-2019-06-29.tar.gz">
afp-Priority_Search_Trees-2019-06-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Probabilistic_Noninterference.html b/web/entries/Probabilistic_Noninterference.html
--- a/web/entries/Probabilistic_Noninterference.html
+++ b/web/entries/Probabilistic_Noninterference.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Probabilistic Noninterference - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>robabilistic
<font class="first">N</font>oninterference
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Probabilistic Noninterference</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk) and
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-03-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize probabilistic noninterference for a multi-threaded language with uniform scheduling, where probabilistic behaviour comes from both the scheduler and the individual threads. We define notions of probabilistic noninterference in two variants: resumption-based and trace-based. For the resumption-based notions, we prove compositionality w.r.t. the language constructs and establish sound type-system-like syntactic criteria. This is a formalization of the mathematical development presented at CPP 2013 and CALCO 2013. It is the probabilistic variant of the Possibilistic Noninterference AFP entry.</div></td>
+ <td class="abstract mathjax_process">We formalize probabilistic noninterference for a multi-threaded language with uniform scheduling, where probabilistic behaviour comes from both the scheduler and the individual threads. We define notions of probabilistic noninterference in two variants: resumption-based and trace-based. For the resumption-based notions, we prove compositionality w.r.t. the language constructs and establish sound type-system-like syntactic criteria. This is a formalization of the mathematical development presented at CPP 2013 and CALCO 2013. It is the probabilistic variant of the Possibilistic Noninterference AFP entry.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Probabilistic_Noninterference-AFP,
author = {Andrei Popescu and Johannes Hölzl},
title = {Probabilistic Noninterference},
journal = {Archive of Formal Proofs},
month = mar,
year = 2014,
note = {\url{http://isa-afp.org/entries/Probabilistic_Noninterference.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a>, <a href="Markov_Models.html">Markov_Models</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_Noninterference/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Probabilistic_Noninterference/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_Noninterference/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Probabilistic_Noninterference-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Probabilistic_Noninterference-2019-06-11.tar.gz">
afp-Probabilistic_Noninterference-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Probabilistic_Noninterference-2018-08-16.tar.gz">
afp-Probabilistic_Noninterference-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Probabilistic_Noninterference-2017-10-10.tar.gz">
afp-Probabilistic_Noninterference-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Probabilistic_Noninterference-2016-12-17.tar.gz">
afp-Probabilistic_Noninterference-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Probabilistic_Noninterference-2016-02-22.tar.gz">
afp-Probabilistic_Noninterference-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Probabilistic_Noninterference-2015-05-27.tar.gz">
afp-Probabilistic_Noninterference-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Probabilistic_Noninterference-2014-08-28.tar.gz">
afp-Probabilistic_Noninterference-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Probabilistic_Noninterference-2014-03-16.tar.gz">
afp-Probabilistic_Noninterference-2014-03-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Probabilistic_Prime_Tests.html b/web/entries/Probabilistic_Prime_Tests.html
--- a/web/entries/Probabilistic_Prime_Tests.html
+++ b/web/entries/Probabilistic_Prime_Tests.html
@@ -1,206 +1,206 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Probabilistic Primality Testing - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>robabilistic
<font class="first">P</font>rimality
<font class="first">T</font>esting
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Probabilistic Primality Testing</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Daniel Stüwe and
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-02-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>The most efficient known primality tests are
<em>probabilistic</em> in the sense that they use
randomness and may, with some probability, mistakenly classify a
composite number as prime &ndash; but never a prime number as
composite. Examples of this are the Miller&ndash;Rabin test, the
Solovay&ndash;Strassen test, and (in most cases) Fermat's
test.</p> <p>This entry defines these three tests and
proves their correctness. It also develops some of the
number-theoretic foundations, such as Carmichael numbers and the
Jacobi symbol with an efficient executable algorithm to compute
-it.</p></div></td>
+it.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Probabilistic_Prime_Tests-AFP,
author = {Daniel Stüwe and Manuel Eberl},
title = {Probabilistic Primality Testing},
journal = {Archive of Formal Proofs},
month = feb,
year = 2019,
note = {\url{http://isa-afp.org/entries/Probabilistic_Prime_Tests.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Mersenne_Primes.html">Mersenne_Primes</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_Prime_Tests/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Probabilistic_Prime_Tests/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_Prime_Tests/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Probabilistic_Prime_Tests-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Probabilistic_Prime_Tests-2019-06-11.tar.gz">
afp-Probabilistic_Prime_Tests-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Probabilistic_Prime_Tests-2019-02-15.tar.gz">
afp-Probabilistic_Prime_Tests-2019-02-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Probabilistic_System_Zoo.html b/web/entries/Probabilistic_System_Zoo.html
--- a/web/entries/Probabilistic_System_Zoo.html
+++ b/web/entries/Probabilistic_System_Zoo.html
@@ -1,226 +1,226 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Zoo of Probabilistic Systems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">Z</font>oo
of
<font class="first">P</font>robabilistic
<font class="first">S</font>ystems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Zoo of Probabilistic Systems</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>,
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-05-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Numerous models of probabilistic systems are studied in the literature.
Coalgebra has been used to classify them into system types and compare their
expressiveness. We formalize the resulting hierarchy of probabilistic system
types by modeling the semantics of the different systems as codatatypes.
This approach yields simple and concise proofs, as bisimilarity coincides
with equality for codatatypes.
<p>
-This work is described in detail in the ITP 2015 publication by the authors.</div></td>
+This work is described in detail in the ITP 2015 publication by the authors.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Probabilistic_System_Zoo-AFP,
author = {Johannes Hölzl and Andreas Lochbihler and Dmitriy Traytel},
title = {A Zoo of Probabilistic Systems},
journal = {Archive of Formal Proofs},
month = may,
year = 2015,
note = {\url{http://isa-afp.org/entries/Probabilistic_System_Zoo.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_System_Zoo/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Probabilistic_System_Zoo/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_System_Zoo/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Probabilistic_System_Zoo-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Probabilistic_System_Zoo-2019-06-11.tar.gz">
afp-Probabilistic_System_Zoo-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Probabilistic_System_Zoo-2018-08-16.tar.gz">
afp-Probabilistic_System_Zoo-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Probabilistic_System_Zoo-2017-10-10.tar.gz">
afp-Probabilistic_System_Zoo-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Probabilistic_System_Zoo-2016-12-17.tar.gz">
afp-Probabilistic_System_Zoo-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Probabilistic_System_Zoo-2016-02-22.tar.gz">
afp-Probabilistic_System_Zoo-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Probabilistic_System_Zoo-2015-05-28.tar.gz">
afp-Probabilistic_System_Zoo-2015-05-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Probabilistic_Timed_Automata.html b/web/entries/Probabilistic_Timed_Automata.html
--- a/web/entries/Probabilistic_Timed_Automata.html
+++ b/web/entries/Probabilistic_Timed_Automata.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Probabilistic Timed Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>robabilistic
<font class="first">T</font>imed
<font class="first">A</font>utomata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Probabilistic Timed Automata</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a> and
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-05-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formalization of probabilistic timed automata (PTA) for
which we try to follow the formula MDP + TA = PTA as far as possible:
our work starts from our existing formalizations of Markov decision
processes (MDP) and timed automata (TA) and combines them modularly.
We prove the fundamental result for probabilistic timed automata: the
region construction that is known from timed automata carries over to
the probabilistic setting. In particular, this allows us to prove that
minimum and maximum reachability probabilities can be computed via a
reduction to MDP model checking, including the case where one wants to
disregard unrealizable behavior. Further information can be found in
-our ITP paper [2].</div></td>
+our ITP paper [2].</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Probabilistic_Timed_Automata-AFP,
author = {Simon Wimmer and Johannes Hölzl},
title = {Probabilistic Timed Automata},
journal = {Archive of Formal Proofs},
month = may,
year = 2018,
note = {\url{http://isa-afp.org/entries/Probabilistic_Timed_Automata.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Markov_Models.html">Markov_Models</a>, <a href="Timed_Automata.html">Timed_Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_Timed_Automata/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Probabilistic_Timed_Automata/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_Timed_Automata/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Probabilistic_Timed_Automata-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Probabilistic_Timed_Automata-2019-06-11.tar.gz">
afp-Probabilistic_Timed_Automata-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Probabilistic_Timed_Automata-2018-08-16.tar.gz">
afp-Probabilistic_Timed_Automata-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Probabilistic_Timed_Automata-2018-05-25.tar.gz">
afp-Probabilistic_Timed_Automata-2018-05-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Probabilistic_While.html b/web/entries/Probabilistic_While.html
--- a/web/entries/Probabilistic_While.html
+++ b/web/entries/Probabilistic_While.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Probabilistic while loop - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>robabilistic
while
loop
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Probabilistic while loop</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This AFP entry defines a probabilistic while operator based on
sub-probability mass functions and formalises zero-one laws and variant
rules for probabilistic loop termination. As applications, we
implement probabilistic algorithms for the Bernoulli, geometric and
arbitrary uniform distributions that only use fair coin flips, and
-prove them correct and terminating with probability 1.</div></td>
+prove them correct and terminating with probability 1.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-02-02]:
Added a proof that probabilistic conditioning can be implemented by repeated sampling.
(revision 305867c4e911)<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Probabilistic_While-AFP,
author = {Andreas Lochbihler},
title = {Probabilistic while loop},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Probabilistic_While.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="MFMC_Countable.html">MFMC_Countable</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CryptHOL.html">CryptHOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_While/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Probabilistic_While/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Probabilistic_While/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Probabilistic_While-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Probabilistic_While-2019-06-11.tar.gz">
afp-Probabilistic_While-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Probabilistic_While-2018-08-16.tar.gz">
afp-Probabilistic_While-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Probabilistic_While-2017-10-10.tar.gz">
afp-Probabilistic_While-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Probabilistic_While-2017-05-11.tar.gz">
afp-Probabilistic_While-2017-05-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Program-Conflict-Analysis.html b/web/entries/Program-Conflict-Analysis.html
--- a/web/entries/Program-Conflict-Analysis.html
+++ b/web/entries/Program-Conflict-Analysis.html
@@ -1,298 +1,298 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">C</font>onflict
<font class="first">A</font>nalysis
of
<font class="first">P</font>rograms
with
<font class="first">P</font>rocedures,
<font class="first">T</font>hread
<font class="first">C</font>reation,
and
<font class="first">M</font>onitors
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
<a href="http://cs.uni-muenster.de/u/mmo/">Markus Müller-Olm</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2007-12-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">In this work we formally verify, with the Isabelle theorem prover, the soundness and precision of a static program analysis that detects conflicts (e. g. data races) in programs with procedures, thread creation and monitors. As common in static program analysis, our program model abstracts guarded branching by nondeterministic branching, but completely interprets the call-/return behavior of procedures, synchronization by monitors, and thread creation. The analysis is based on the observation that all conflicts already occur in a class of particularly restricted schedules. These restricted schedules are suited to constraint-system-based program analysis. The formalization is based upon a flowgraph-based program model with an operational semantics as reference point.</div></td>
+ <td class="abstract mathjax_process">In this work we formally verify, with the Isabelle theorem prover, the soundness and precision of a static program analysis that detects conflicts (e. g. data races) in programs with procedures, thread creation and monitors. As common in static program analysis, our program model abstracts guarded branching by nondeterministic branching, but completely interprets the call-/return behavior of procedures, synchronization by monitors, and thread creation. The analysis is based on the observation that all conflicts already occur in a class of particularly restricted schedules. These restricted schedules are suited to constraint-system-based program analysis. The formalization is based upon a flowgraph-based program model with an operational semantics as reference point.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Program-Conflict-Analysis-AFP,
author = {Peter Lammich and Markus Müller-Olm},
title = {Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors},
journal = {Archive of Formal Proofs},
month = dec,
year = 2007,
note = {\url{http://isa-afp.org/entries/Program-Conflict-Analysis.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Flow_Networks.html">Flow_Networks</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Program-Conflict-Analysis/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Program-Conflict-Analysis/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Program-Conflict-Analysis/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Program-Conflict-Analysis-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Program-Conflict-Analysis-2019-06-11.tar.gz">
afp-Program-Conflict-Analysis-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Program-Conflict-Analysis-2018-08-16.tar.gz">
afp-Program-Conflict-Analysis-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Program-Conflict-Analysis-2017-10-10.tar.gz">
afp-Program-Conflict-Analysis-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Program-Conflict-Analysis-2016-12-17.tar.gz">
afp-Program-Conflict-Analysis-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Program-Conflict-Analysis-2016-02-22.tar.gz">
afp-Program-Conflict-Analysis-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Program-Conflict-Analysis-2015-05-27.tar.gz">
afp-Program-Conflict-Analysis-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Program-Conflict-Analysis-2014-08-28.tar.gz">
afp-Program-Conflict-Analysis-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Program-Conflict-Analysis-2013-12-11.tar.gz">
afp-Program-Conflict-Analysis-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Program-Conflict-Analysis-2013-11-17.tar.gz">
afp-Program-Conflict-Analysis-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Program-Conflict-Analysis-2013-03-02.tar.gz">
afp-Program-Conflict-Analysis-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Program-Conflict-Analysis-2013-02-16.tar.gz">
afp-Program-Conflict-Analysis-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Program-Conflict-Analysis-2012-05-24.tar.gz">
afp-Program-Conflict-Analysis-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Program-Conflict-Analysis-2011-10-11.tar.gz">
afp-Program-Conflict-Analysis-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Program-Conflict-Analysis-2011-02-11.tar.gz">
afp-Program-Conflict-Analysis-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Program-Conflict-Analysis-2010-07-01.tar.gz">
afp-Program-Conflict-Analysis-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Program-Conflict-Analysis-2009-12-12.tar.gz">
afp-Program-Conflict-Analysis-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Program-Conflict-Analysis-2009-04-29.tar.gz">
afp-Program-Conflict-Analysis-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Program-Conflict-Analysis-2008-06-10.tar.gz">
afp-Program-Conflict-Analysis-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Program-Conflict-Analysis-2007-12-20.tar.gz">
afp-Program-Conflict-Analysis-2007-12-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Projective_Geometry.html b/web/entries/Projective_Geometry.html
--- a/web/entries/Projective_Geometry.html
+++ b/web/entries/Projective_Geometry.html
@@ -1,203 +1,203 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Projective Geometry - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>rojective
<font class="first">G</font>eometry
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Projective Geometry</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://sites.google.com/site/anthonybordg/">Anthony Bordg</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-06-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the basics of projective geometry. In particular, we give
a proof of the so-called Hessenberg's theorem in projective plane
geometry. We also provide a proof of the so-called Desargues's
theorem based on an axiomatization of (higher) projective space
geometry using the notion of rank of a matroid. This last approach
allows one to handle incidence relations in a homogeneous way, dealing
only with points and without the need to talk explicitly about
-lines, planes or any higher entity.</div></td>
+lines, planes or any higher entity.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Projective_Geometry-AFP,
author = {Anthony Bordg},
title = {Projective Geometry},
journal = {Archive of Formal Proofs},
month = jun,
year = 2018,
note = {\url{http://isa-afp.org/entries/Projective_Geometry.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Projective_Geometry/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Projective_Geometry/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Projective_Geometry/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Projective_Geometry-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Projective_Geometry-2019-06-11.tar.gz">
afp-Projective_Geometry-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Projective_Geometry-2018-08-16.tar.gz">
afp-Projective_Geometry-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Projective_Geometry-2018-06-15.tar.gz">
afp-Projective_Geometry-2018-06-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Promela.html b/web/entries/Promela.html
--- a/web/entries/Promela.html
+++ b/web/entries/Promela.html
@@ -1,232 +1,232 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Promela Formalization - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>romela
<font class="first">F</font>ormalization
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Promela Formalization</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
René Neumann (rene /dot/ neumann /at/ in /dot/ tum /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-05-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present an executable formalization of the language Promela, the
description language for models of the model checker SPIN. This
formalization is part of the work for a completely verified model
checker (CAVA), but also serves as a useful (and executable!)
description of the semantics of the language itself, something that is
currently missing.
The formalization uses three steps: It takes an abstract syntax tree
generated from an SML parser, removes syntactic sugar and enriches it
with type information. This further gets translated into a transition
-system, on which the semantic engine (read: successor function) operates.</div></td>
+system, on which the semantic engine (read: successor function) operates.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Promela-AFP,
author = {René Neumann},
title = {Promela Formalization},
journal = {Archive of Formal Proofs},
month = may,
year = 2014,
note = {\url{http://isa-afp.org/entries/Promela.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CAVA_Automata.html">CAVA_Automata</a>, <a href="LTL.html">LTL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Promela/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Promela/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Promela/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Promela-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Promela-2019-06-11.tar.gz">
afp-Promela-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Promela-2018-08-16.tar.gz">
afp-Promela-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Promela-2017-10-10.tar.gz">
afp-Promela-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Promela-2016-12-17.tar.gz">
afp-Promela-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Promela-2016-02-22.tar.gz">
afp-Promela-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Promela-2015-05-27.tar.gz">
afp-Promela-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Promela-2014-08-28.tar.gz">
afp-Promela-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Promela-2014-05-29.tar.gz">
afp-Promela-2014-05-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Proof_Strategy_Language.html b/web/entries/Proof_Strategy_Language.html
--- a/web/entries/Proof_Strategy_Language.html
+++ b/web/entries/Proof_Strategy_Language.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Proof Strategy Language - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>roof
<font class="first">S</font>trategy
<font class="first">L</font>anguage
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Proof Strategy Language</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Yutaka Nagashima
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-12-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Isabelle includes various automatic tools for finding proofs under
certain conditions. However, for each conjecture, knowing which
automation to use, and how to tweak its parameters, is currently
labour intensive. We have developed a language, PSL, designed to
capture high level proof strategies. PSL offloads the construction of
human-readable fast-to-replay proof scripts to automatic search,
making use of search-time information about each conjecture. Our
preliminary evaluations show that PSL reduces the labour cost of
interactive theorem proving. This submission contains the
implementation of PSL and an example theory file, Example.thy, showing
-how to write poof strategies in PSL.</div></td>
+how to write proof strategies in PSL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Proof_Strategy_Language-AFP,
author = {Yutaka Nagashima},
title = {Proof Strategy Language},
journal = {Archive of Formal Proofs},
month = dec,
year = 2016,
note = {\url{http://isa-afp.org/entries/Proof_Strategy_Language.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Proof_Strategy_Language/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Proof_Strategy_Language/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Proof_Strategy_Language/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Proof_Strategy_Language-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Proof_Strategy_Language-2019-06-11.tar.gz">
afp-Proof_Strategy_Language-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Proof_Strategy_Language-2018-08-16.tar.gz">
afp-Proof_Strategy_Language-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Proof_Strategy_Language-2017-10-10.tar.gz">
afp-Proof_Strategy_Language-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Proof_Strategy_Language-2016-12-21.tar.gz">
afp-Proof_Strategy_Language-2016-12-21.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/PropResPI.html b/web/entries/PropResPI.html
--- a/web/entries/PropResPI.html
+++ b/web/entries/PropResPI.html
@@ -1,240 +1,240 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Propositional Resolution and Prime Implicates Generation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>ropositional
<font class="first">R</font>esolution
and
<font class="first">P</font>rime
<font class="first">I</font>mplicates
<font class="first">G</font>eneration
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Propositional Resolution and Prime Implicates Generation</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://membres-lig.imag.fr/peltier/">Nicolas Peltier</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-03-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We provide formal proofs in Isabelle/HOL (using mostly structured Isar
proofs) of the soundness and completeness of the Resolution rule in
propositional logic. The completeness proofs take into account the
usual redundancy elimination rules (tautology elimination and
subsumption), and several refinements of the Resolution rule are
considered: ordered resolution (with selection functions), positive
and negative resolution, semantic resolution and unit resolution (the
latter refinement is complete only for clause sets that are Horn-
renamable). We also define a concrete procedure for computing
saturated sets and establish its soundness and completeness. The
clause sets are not assumed to be finite, so that the results can be
applied to formulas obtained by grounding sets of first-order clauses
(however, a total ordering among atoms is assumed to be given).
Next, we show that the unrestricted Resolution rule is deductive-
complete, in the sense that it is able to generate all (prime)
implicates of any set of propositional clauses (i.e., all entailment-
minimal, non-valid, clausal consequences of the considered set). The
generation of prime implicates is an important problem, with many
applications in artificial intelligence and verification (for
abductive reasoning, knowledge compilation, diagnosis, debugging
etc.). We also show that implicates can be computed in an incremental
way, by fixing an ordering among all the atoms in the considered sets
and resolving upon these atoms one by one in the considered order
(with no backtracking). This feature is critical for the efficient
computation of prime implicates. Building on these results, we provide
a procedure for computing such implicates and establish its soundness
-and completeness.</div></td>
+and completeness.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{PropResPI-AFP,
author = {Nicolas Peltier},
title = {Propositional Resolution and Prime Implicates Generation},
journal = {Archive of Formal Proofs},
month = mar,
year = 2016,
note = {\url{http://isa-afp.org/entries/PropResPI.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PropResPI/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/PropResPI/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PropResPI/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-PropResPI-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-PropResPI-2019-06-11.tar.gz">
afp-PropResPI-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-PropResPI-2018-08-16.tar.gz">
afp-PropResPI-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-PropResPI-2017-10-10.tar.gz">
afp-PropResPI-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-PropResPI-2016-12-17.tar.gz">
afp-PropResPI-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-PropResPI-2016-03-11.tar.gz">
afp-PropResPI-2016-03-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Propositional_Proof_Systems.html b/web/entries/Propositional_Proof_Systems.html
--- a/web/entries/Propositional_Proof_Systems.html
+++ b/web/entries/Propositional_Proof_Systems.html
@@ -1,209 +1,209 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Propositional Proof Systems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>ropositional
<font class="first">P</font>roof
<font class="first">S</font>ystems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Propositional Proof Systems</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://liftm.de">Julius Michaelis</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-06-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize a range of proof systems for classical propositional
logic (sequent calculus, natural deduction, Hilbert systems,
resolution) and prove the most important meta-theoretic results about
semantics and proofs: compactness, soundness, completeness,
translations between proof systems, cut-elimination, interpolation and
-model existence.</div></td>
+model existence.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Propositional_Proof_Systems-AFP,
author = {Julius Michaelis and Tobias Nipkow},
title = {Propositional Proof Systems},
journal = {Archive of Formal Proofs},
month = jun,
year = 2017,
note = {\url{http://isa-afp.org/entries/Propositional_Proof_Systems.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Propositional_Proof_Systems/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Propositional_Proof_Systems/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Propositional_Proof_Systems/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Propositional_Proof_Systems-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Propositional_Proof_Systems-2019-06-11.tar.gz">
afp-Propositional_Proof_Systems-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Propositional_Proof_Systems-2018-08-16.tar.gz">
afp-Propositional_Proof_Systems-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Propositional_Proof_Systems-2017-10-10.tar.gz">
afp-Propositional_Proof_Systems-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Propositional_Proof_Systems-2017-06-22.tar.gz">
afp-Propositional_Proof_Systems-2017-06-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Prpu_Maxflow.html b/web/entries/Prpu_Maxflow.html
--- a/web/entries/Prpu_Maxflow.html
+++ b/web/entries/Prpu_Maxflow.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalizing Push-Relabel Algorithms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalizing
<font class="first">P</font>ush-Relabel
<font class="first">A</font>lgorithms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalizing Push-Relabel Algorithms</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
S. Reza Sefidgar
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-06-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formalization of push-relabel algorithms for computing
the maximum flow in a network. We start with Goldberg et
al.'s generic push-relabel algorithm, for which we show correctness and
the time complexity bound of O(V^2E). We then derive the
relabel-to-front and FIFO implementation. Using stepwise refinement
techniques, we derive an efficient verified implementation. Our
formal proof of the abstract algorithms closely follows a standard
textbook proof. It is accessible even without being an expert in
Isabelle/HOL, the interactive theorem prover used for the
-formalization.</div></td>
+formalization.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Prpu_Maxflow-AFP,
author = {Peter Lammich and S. Reza Sefidgar},
title = {Formalizing Push-Relabel Algorithms},
journal = {Archive of Formal Proofs},
month = jun,
year = 2017,
note = {\url{http://isa-afp.org/entries/Prpu_Maxflow.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Flow_Networks.html">Flow_Networks</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prpu_Maxflow/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Prpu_Maxflow/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Prpu_Maxflow/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Prpu_Maxflow-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Prpu_Maxflow-2020-01-14.tar.gz">
afp-Prpu_Maxflow-2020-01-14.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Prpu_Maxflow-2019-06-11.tar.gz">
afp-Prpu_Maxflow-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Prpu_Maxflow-2018-08-16.tar.gz">
afp-Prpu_Maxflow-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Prpu_Maxflow-2017-10-10.tar.gz">
afp-Prpu_Maxflow-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Prpu_Maxflow-2017-06-02.tar.gz">
afp-Prpu_Maxflow-2017-06-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/PseudoHoops.html b/web/entries/PseudoHoops.html
--- a/web/entries/PseudoHoops.html
+++ b/web/entries/PseudoHoops.html
@@ -1,249 +1,249 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Pseudo Hoops - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>seudo
<font class="first">H</font>oops
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Pseudo Hoops</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
George Georgescu,
Laurentiu Leustean and
Viorel Preoteasa (viorel /dot/ preoteasa /at/ aalto /dot/ fi)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-09-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Pseudo-hoops are algebraic structures introduced by B. Bosbach under the name of complementary semigroups. In this formalization we prove some properties of pseudo-hoops and we define the basic concepts of filter and normal filter. The lattice of normal filters is isomorphic with the lattice of congruences of a pseudo-hoop. We also study some important classes of pseudo-hoops. Bounded Wajsberg pseudo-hoops are equivalent to pseudo-Wajsberg algebras and bounded basic pseudo-hoops are equivalent to pseudo-BL algebras. Some examples of pseudo-hoops are given in the last section of the formalization.</div></td>
+ <td class="abstract mathjax_process">Pseudo-hoops are algebraic structures introduced by B. Bosbach under the name of complementary semigroups. In this formalization we prove some properties of pseudo-hoops and we define the basic concepts of filter and normal filter. The lattice of normal filters is isomorphic with the lattice of congruences of a pseudo-hoop. We also study some important classes of pseudo-hoops. Bounded Wajsberg pseudo-hoops are equivalent to pseudo-Wajsberg algebras and bounded basic pseudo-hoops are equivalent to pseudo-BL algebras. Some examples of pseudo-hoops are given in the last section of the formalization.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{PseudoHoops-AFP,
author = {George Georgescu and Laurentiu Leustean and Viorel Preoteasa},
title = {Pseudo Hoops},
journal = {Archive of Formal Proofs},
month = sep,
year = 2011,
note = {\url{http://isa-afp.org/entries/PseudoHoops.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="LatticeProperties.html">LatticeProperties</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PseudoHoops/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/PseudoHoops/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/PseudoHoops/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-PseudoHoops-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-PseudoHoops-2019-06-11.tar.gz">
afp-PseudoHoops-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-PseudoHoops-2018-08-16.tar.gz">
afp-PseudoHoops-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-PseudoHoops-2017-10-10.tar.gz">
afp-PseudoHoops-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-PseudoHoops-2016-12-17.tar.gz">
afp-PseudoHoops-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-PseudoHoops-2016-02-22.tar.gz">
afp-PseudoHoops-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-PseudoHoops-2015-05-27.tar.gz">
afp-PseudoHoops-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-PseudoHoops-2014-08-28.tar.gz">
afp-PseudoHoops-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-PseudoHoops-2013-12-11.tar.gz">
afp-PseudoHoops-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-PseudoHoops-2013-11-17.tar.gz">
afp-PseudoHoops-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-PseudoHoops-2013-02-16.tar.gz">
afp-PseudoHoops-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-PseudoHoops-2012-05-24.tar.gz">
afp-PseudoHoops-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-PseudoHoops-2011-10-11.tar.gz">
afp-PseudoHoops-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-PseudoHoops-2011-09-27.tar.gz">
afp-PseudoHoops-2011-09-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Psi_Calculi.html b/web/entries/Psi_Calculi.html
--- a/web/entries/Psi_Calculi.html
+++ b/web/entries/Psi_Calculi.html
@@ -1,243 +1,243 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Psi-calculi in Isabelle - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>si-calculi
in
<font class="first">I</font>sabelle
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Psi-calculi in Isabelle</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.itu.dk/people/jebe">Jesper Bengtson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-05-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Psi-calculi are extensions of the pi-calculus, accommodating arbitrary nominal datatypes to represent not only data but also communication channels, assertions and conditions, giving it an expressive power beyond the applied pi-calculus and the concurrent constraint pi-calculus.
+ <td class="abstract mathjax_process">Psi-calculi are extensions of the pi-calculus, accommodating arbitrary nominal datatypes to represent not only data but also communication channels, assertions and conditions, giving it an expressive power beyond the applied pi-calculus and the concurrent constraint pi-calculus.
<p>
We have formalised psi-calculi in the interactive theorem prover Isabelle using its nominal datatype package. One distinctive feature is that the framework needs to treat binding sequences, as opposed to single binders, in an efficient way. While different methods for formalising single binder calculi have been proposed over the last decades, representations for such binding sequences are not very well explored.
<p>
The main effort in the formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterparts as possible. This includes treating all binding sequences as atomic elements, and creating custom induction and inversion rules that remove the bulk of manual alpha-conversions.
<p>
-This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.</div></td>
+This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Psi_Calculi-AFP,
author = {Jesper Bengtson},
title = {Psi-calculi in Isabelle},
journal = {Archive of Formal Proofs},
month = may,
year = 2012,
note = {\url{http://isa-afp.org/entries/Psi_Calculi.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Psi_Calculi/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Psi_Calculi/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Psi_Calculi/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Psi_Calculi-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Psi_Calculi-2019-06-11.tar.gz">
afp-Psi_Calculi-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Psi_Calculi-2018-08-16.tar.gz">
afp-Psi_Calculi-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Psi_Calculi-2017-10-10.tar.gz">
afp-Psi_Calculi-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Psi_Calculi-2016-12-17.tar.gz">
afp-Psi_Calculi-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Psi_Calculi-2016-02-22.tar.gz">
afp-Psi_Calculi-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Psi_Calculi-2015-05-27.tar.gz">
afp-Psi_Calculi-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Psi_Calculi-2014-08-28.tar.gz">
afp-Psi_Calculi-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Psi_Calculi-2013-12-11.tar.gz">
afp-Psi_Calculi-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Psi_Calculi-2013-11-17.tar.gz">
afp-Psi_Calculi-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Psi_Calculi-2013-02-16.tar.gz">
afp-Psi_Calculi-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Psi_Calculi-2012-06-14.tar.gz">
afp-Psi_Calculi-2012-06-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Ptolemys_Theorem.html b/web/entries/Ptolemys_Theorem.html
--- a/web/entries/Ptolemys_Theorem.html
+++ b/web/entries/Ptolemys_Theorem.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Ptolemy's Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>tolemy's
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Ptolemy's Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-08-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides an analytic proof of Ptolemy's Theorem using
polar form transformation and trigonometric identities.
In this formalization, we use ideas from John Harrison's HOL Light
formalization and the proof sketch on the Wikipedia entry of Ptolemy's Theorem.
-This theorem is the 95th theorem of the Top 100 Theorems list.</div></td>
+This theorem is the 95th theorem of the Top 100 Theorems list.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Ptolemys_Theorem-AFP,
author = {Lukas Bulwahn},
title = {Ptolemy's Theorem},
journal = {Archive of Formal Proofs},
month = aug,
year = 2016,
note = {\url{http://isa-afp.org/entries/Ptolemys_Theorem.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ptolemys_Theorem/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Ptolemys_Theorem/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ptolemys_Theorem/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Ptolemys_Theorem-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Ptolemys_Theorem-2019-06-11.tar.gz">
afp-Ptolemys_Theorem-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Ptolemys_Theorem-2018-08-16.tar.gz">
afp-Ptolemys_Theorem-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Ptolemys_Theorem-2017-10-10.tar.gz">
afp-Ptolemys_Theorem-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Ptolemys_Theorem-2016-12-17.tar.gz">
afp-Ptolemys_Theorem-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Ptolemys_Theorem-2016-08-08.tar.gz">
afp-Ptolemys_Theorem-2016-08-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/QHLProver.html b/web/entries/QHLProver.html
--- a/web/entries/QHLProver.html
+++ b/web/entries/QHLProver.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Quantum Hoare Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">Q</font>uantum
<font class="first">H</font>oare
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Quantum Hoare Logic</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Junyi Liu,
<a href="http://lcs.ios.ac.cn/~bzhan/">Bohua Zhan</a>,
Shuling Wang,
Shenggang Ying,
Tao Liu,
Yangjia Li,
Mingsheng Ying and
Naijun Zhan
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-03-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize quantum Hoare logic as given in [1]. In particular, we
specify the syntax and denotational semantics of a simple model of
quantum programs. Then, we write down the rules of quantum Hoare logic
for partial correctness, and show the soundness and completeness of
the resulting proof system. As an application, we verify the
-correctness of Grover’s algorithm.</div></td>
+correctness of Grover’s algorithm.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{QHLProver-AFP,
author = {Junyi Liu and Bohua Zhan and Shuling Wang and Shenggang Ying and Tao Liu and Yangjia Li and Mingsheng Ying and Naijun Zhan},
title = {Quantum Hoare Logic},
journal = {Archive of Formal Proofs},
month = mar,
year = 2019,
note = {\url{http://isa-afp.org/entries/QHLProver.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Deep_Learning.html">Deep_Learning</a>, <a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/QHLProver/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/QHLProver/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/QHLProver/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-QHLProver-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-QHLProver-2019-06-11.tar.gz">
afp-QHLProver-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-QHLProver-2019-03-25.tar.gz">
afp-QHLProver-2019-03-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/QR_Decomposition.html b/web/entries/QR_Decomposition.html
--- a/web/entries/QR_Decomposition.html
+++ b/web/entries/QR_Decomposition.html
@@ -1,222 +1,222 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>QR Decomposition - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">Q</font>R
<font class="first">D</font>ecomposition
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">QR Decomposition</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a> and
<a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-02-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">QR decomposition is an algorithm that decomposes a real matrix A into the product of two other matrices Q and R, where Q is orthogonal and R is invertible and upper triangular. The algorithm is useful for the least squares problem, i.e., the computation of the best approximate solution of an unsolvable system of linear equations. As a by-product, the Gram-Schmidt process has also been formalized. A refinement using immutable arrays is presented as well. The development relies, among others, on the AFP entry "Implementing field extensions of the form Q[sqrt(b)]" by René Thiemann, which allows execution of the algorithm using symbolic computations. Verified code can be generated and executed using floats as well.</div></td>
+ <td class="abstract mathjax_process">QR decomposition is an algorithm that decomposes a real matrix A into the product of two other matrices Q and R, where Q is orthogonal and R is invertible and upper triangular. The algorithm is useful for the least squares problem, i.e., the computation of the best approximate solution of an unsolvable system of linear equations. As a by-product, the Gram-Schmidt process has also been formalized. A refinement using immutable arrays is presented as well. The development relies, among others, on the AFP entry "Implementing field extensions of the form Q[sqrt(b)]" by René Thiemann, which allows execution of the algorithm using symbolic computations. Verified code can be generated and executed using floats as well.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-06-18]: The second part of the Fundamental Theorem of Linear Algebra has been generalized to more general inner product spaces.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{QR_Decomposition-AFP,
author = {Jose Divasón and Jesús Aransay},
title = {QR Decomposition},
journal = {Archive of Formal Proofs},
month = feb,
year = 2015,
note = {\url{http://isa-afp.org/entries/QR_Decomposition.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Gauss_Jordan.html">Gauss_Jordan</a>, <a href="Rank_Nullity_Theorem.html">Rank_Nullity_Theorem</a>, <a href="Real_Impl.html">Real_Impl</a>, <a href="Sqrt_Babylonian.html">Sqrt_Babylonian</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/QR_Decomposition/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/QR_Decomposition/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/QR_Decomposition/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-QR_Decomposition-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-QR_Decomposition-2019-06-11.tar.gz">
afp-QR_Decomposition-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-QR_Decomposition-2018-08-16.tar.gz">
afp-QR_Decomposition-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-QR_Decomposition-2017-10-10.tar.gz">
afp-QR_Decomposition-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-QR_Decomposition-2016-12-17.tar.gz">
afp-QR_Decomposition-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-QR_Decomposition-2016-02-22.tar.gz">
afp-QR_Decomposition-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-QR_Decomposition-2015-05-27.tar.gz">
afp-QR_Decomposition-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-QR_Decomposition-2015-02-13.tar.gz">
afp-QR_Decomposition-2015-02-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Quantales.html b/web/entries/Quantales.html
--- a/web/entries/Quantales.html
+++ b/web/entries/Quantales.html
@@ -1,195 +1,195 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Quantales - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">Q</font>uantales
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Quantales</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-12-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
These mathematical components formalise basic properties of quantales,
together with some important models, constructions, and concepts,
-including quantic nuclei and conuclei.</div></td>
+including quantic nuclei and conuclei.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Quantales-AFP,
author = {Georg Struth},
title = {Quantales},
journal = {Archive of Formal Proofs},
month = dec,
year = 2018,
note = {\url{http://isa-afp.org/entries/Quantales.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Kleene_Algebra.html">Kleene_Algebra</a>, <a href="Order_Lattice_Props.html">Order_Lattice_Props</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Transformer_Semantics.html">Transformer_Semantics</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Quantales/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Quantales/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Quantales/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Quantales-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Quantales-2019-06-11.tar.gz">
afp-Quantales-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Quantales-2018-12-19.tar.gz">
afp-Quantales-2018-12-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Quaternions.html b/web/entries/Quaternions.html
--- a/web/entries/Quaternions.html
+++ b/web/entries/Quaternions.html
@@ -1,197 +1,197 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Quaternions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">Q</font>uaternions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Quaternions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-09-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This theory is inspired by the HOL Light development of quaternions,
but follows its own route. Quaternions are developed coinductively, as
in the existing formalisation of the complex numbers. Quaternions are
quickly shown to belong to the type classes of real normed division
algebras and real inner product spaces. And therefore they inherit a
great body of facts involving algebraic laws, limits, continuity,
etc., which must be proved explicitly in the HOL Light version. The
development concludes with the geometric interpretation of the product
-of imaginary quaternions.</div></td>
+of imaginary quaternions.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Quaternions-AFP,
author = {Lawrence C. Paulson},
title = {Quaternions},
journal = {Archive of Formal Proofs},
month = sep,
year = 2018,
note = {\url{http://isa-afp.org/entries/Quaternions.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Quaternions/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Quaternions/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Quaternions/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Quaternions-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Quaternions-2019-06-11.tar.gz">
afp-Quaternions-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Quaternions-2018-09-07.tar.gz">
afp-Quaternions-2018-09-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Quick_Sort_Cost.html b/web/entries/Quick_Sort_Cost.html
--- a/web/entries/Quick_Sort_Cost.html
+++ b/web/entries/Quick_Sort_Cost.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The number of comparisons in QuickSort - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
number
of
comparisons
in
<font class="first">Q</font>uickSort
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The number of comparisons in QuickSort</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-03-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>We give a formal proof of the well-known results about the
number of comparisons performed by two variants of QuickSort: first,
the expected number of comparisons of randomised QuickSort
(i.&thinsp;e.&nbsp;QuickSort with random pivot choice) is
<em>2&thinsp;(n+1)&thinsp;H<sub>n</sub> -
4&thinsp;n</em>, which is asymptotically equivalent to
<em>2&thinsp;n ln n</em>; second, the number of
comparisons performed by the classic non-randomised QuickSort has the
-same distribution in the average case as the randomised one.</p></div></td>
+same distribution in the average case as the randomised one.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Quick_Sort_Cost-AFP,
author = {Manuel Eberl},
title = {The number of comparisons in QuickSort},
journal = {Archive of Formal Proofs},
month = mar,
year = 2017,
note = {\url{http://isa-afp.org/entries/Quick_Sort_Cost.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Comparison_Sort_Lower_Bound.html">Comparison_Sort_Lower_Bound</a>, <a href="Landau_Symbols.html">Landau_Symbols</a>, <a href="List-Index.html">List-Index</a>, <a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Random_BSTs.html">Random_BSTs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Quick_Sort_Cost/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Quick_Sort_Cost/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Quick_Sort_Cost/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Quick_Sort_Cost-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Quick_Sort_Cost-2019-06-11.tar.gz">
afp-Quick_Sort_Cost-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Quick_Sort_Cost-2018-08-16.tar.gz">
afp-Quick_Sort_Cost-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Quick_Sort_Cost-2017-10-10.tar.gz">
afp-Quick_Sort_Cost-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Quick_Sort_Cost-2017-03-16.tar.gz">
afp-Quick_Sort_Cost-2017-03-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/RIPEMD-160-SPARK.html b/web/entries/RIPEMD-160-SPARK.html
--- a/web/entries/RIPEMD-160-SPARK.html
+++ b/web/entries/RIPEMD-160-SPARK.html
@@ -1,252 +1,252 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>RIPEMD-160 - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>IPEMD-160
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">RIPEMD-160</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-01-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This work presents a verification of a SPARK/Ada implementation of the cryptographic hash function RIPEMD-160. A functional specification of RIPEMD-160 is given in Isabelle/HOL. Proofs of the verification conditions generated by the static-analysis toolset of SPARK certify the functional correctness of the implementation.</div></td>
+ <td class="abstract mathjax_process">This work presents a verification of a SPARK/Ada implementation of the cryptographic hash function RIPEMD-160. A functional specification of RIPEMD-160 is given in Isabelle/HOL. Proofs of the verification conditions generated by the static-analysis toolset of SPARK certify the functional correctness of the implementation.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-11-09]: Entry is now obsolete, moved to Isabelle distribution.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{RIPEMD-160-SPARK-AFP,
author = {Fabian Immler},
title = {RIPEMD-160},
journal = {Archive of Formal Proofs},
month = jan,
year = 2011,
note = {\url{http://isa-afp.org/entries/RIPEMD-160-SPARK.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/RIPEMD-160-SPARK/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/RIPEMD-160-SPARK/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/RIPEMD-160-SPARK/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-RIPEMD-160-SPARK-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-RIPEMD-160-SPARK-2019-06-11.tar.gz">
afp-RIPEMD-160-SPARK-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-RIPEMD-160-SPARK-2018-08-16.tar.gz">
afp-RIPEMD-160-SPARK-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-RIPEMD-160-SPARK-2017-10-10.tar.gz">
afp-RIPEMD-160-SPARK-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-RIPEMD-160-SPARK-2016-12-17.tar.gz">
afp-RIPEMD-160-SPARK-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-RIPEMD-160-SPARK-2016-02-22.tar.gz">
afp-RIPEMD-160-SPARK-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-RIPEMD-160-SPARK-2015-05-27.tar.gz">
afp-RIPEMD-160-SPARK-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-RIPEMD-160-SPARK-2014-08-28.tar.gz">
afp-RIPEMD-160-SPARK-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-RIPEMD-160-SPARK-2013-12-11.tar.gz">
afp-RIPEMD-160-SPARK-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-RIPEMD-160-SPARK-2013-11-17.tar.gz">
afp-RIPEMD-160-SPARK-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-RIPEMD-160-SPARK-2013-02-16.tar.gz">
afp-RIPEMD-160-SPARK-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-RIPEMD-160-SPARK-2012-05-24.tar.gz">
afp-RIPEMD-160-SPARK-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-RIPEMD-160-SPARK-2011-10-11.tar.gz">
afp-RIPEMD-160-SPARK-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-RIPEMD-160-SPARK-2011-02-11.tar.gz">
afp-RIPEMD-160-SPARK-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-RIPEMD-160-SPARK-2011-01-19.tar.gz">
afp-RIPEMD-160-SPARK-2011-01-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/ROBDD.html b/web/entries/ROBDD.html
--- a/web/entries/ROBDD.html
+++ b/web/entries/ROBDD.html
@@ -1,227 +1,227 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Algorithms for Reduced Ordered Binary Decision Diagrams - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>lgorithms
for
<font class="first">R</font>educed
<font class="first">O</font>rdered
<font class="first">B</font>inary
<font class="first">D</font>ecision
<font class="first">D</font>iagrams
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Algorithms for Reduced Ordered Binary Decision Diagrams</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://liftm.de">Julius Michaelis</a>,
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Maximilian Haslbeck</a>,
Peter Lammich and
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-04-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a verified and executable implementation of ROBDDs in
Isabelle/HOL. Our implementation relates pointer-based computation in
the Heap monad to operations on an abstract definition of boolean
functions. Internally, we implemented the if-then-else combinator in a
recursive fashion, following the Shannon decomposition of the argument
functions. The implementation mixes and adapts known techniques and is
-built with efficiency in mind.</div></td>
+built with efficiency in mind.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{ROBDD-AFP,
author = {Julius Michaelis and Maximilian Haslbeck and Peter Lammich and Lars Hupel},
title = {Algorithms for Reduced Ordered Binary Decision Diagrams},
journal = {Archive of Formal Proofs},
month = apr,
year = 2016,
note = {\url{http://isa-afp.org/entries/ROBDD.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a>, <a href="Collections.html">Collections</a>, <a href="Native_Word.html">Native_Word</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ROBDD/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/ROBDD/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ROBDD/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-ROBDD-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-ROBDD-2019-06-11.tar.gz">
afp-ROBDD-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-ROBDD-2018-08-16.tar.gz">
afp-ROBDD-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-ROBDD-2017-10-10.tar.gz">
afp-ROBDD-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-ROBDD-2016-12-17.tar.gz">
afp-ROBDD-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-ROBDD-2016-04-27.tar.gz">
afp-ROBDD-2016-04-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/RSAPSS.html b/web/entries/RSAPSS.html
--- a/web/entries/RSAPSS.html
+++ b/web/entries/RSAPSS.html
@@ -1,287 +1,287 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>SHA1, RSA, PSS and more - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>HA1,
<font class="first">R</font>SA,
<font class="first">P</font>SS
and
more
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">SHA1, RSA, PSS and more</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christina Lindenberg and
Kai Wirt
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2005-05-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Formal verification is becoming more and more important in computer science. However, the state-of-the-art formal verification methods in cryptography are still very rudimentary. These theories are one step towards a tool box that allows the use of formal methods in every aspect of cryptography. Moreover, we present a proof of concept for the applicability of verification techniques to a standard signature algorithm.</div></td>
+ <td class="abstract mathjax_process">Formal verification is becoming more and more important in computer science. However, the state-of-the-art formal verification methods in cryptography are still very rudimentary. These theories are one step towards a tool box that allows the use of formal methods in every aspect of cryptography. Moreover, we present a proof of concept for the applicability of verification techniques to a standard signature algorithm.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{RSAPSS-AFP,
author = {Christina Lindenberg and Kai Wirt},
title = {SHA1, RSA, PSS and more},
journal = {Archive of Formal Proofs},
month = may,
year = 2005,
note = {\url{http://isa-afp.org/entries/RSAPSS.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/RSAPSS/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/RSAPSS/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/RSAPSS/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-RSAPSS-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-RSAPSS-2019-06-11.tar.gz">
afp-RSAPSS-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-RSAPSS-2018-08-16.tar.gz">
afp-RSAPSS-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-RSAPSS-2017-10-10.tar.gz">
afp-RSAPSS-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-RSAPSS-2016-12-17.tar.gz">
afp-RSAPSS-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-RSAPSS-2016-02-22.tar.gz">
afp-RSAPSS-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-RSAPSS-2015-05-27.tar.gz">
afp-RSAPSS-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-RSAPSS-2014-08-28.tar.gz">
afp-RSAPSS-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-RSAPSS-2013-12-11.tar.gz">
afp-RSAPSS-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-RSAPSS-2013-11-17.tar.gz">
afp-RSAPSS-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-RSAPSS-2013-03-02.tar.gz">
afp-RSAPSS-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-RSAPSS-2013-02-16.tar.gz">
afp-RSAPSS-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-RSAPSS-2012-05-24.tar.gz">
afp-RSAPSS-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-RSAPSS-2011-10-11.tar.gz">
afp-RSAPSS-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-RSAPSS-2011-02-11.tar.gz">
afp-RSAPSS-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-RSAPSS-2010-07-01.tar.gz">
afp-RSAPSS-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-RSAPSS-2009-12-12.tar.gz">
afp-RSAPSS-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-RSAPSS-2009-04-29.tar.gz">
afp-RSAPSS-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-RSAPSS-2008-06-10.tar.gz">
afp-RSAPSS-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-RSAPSS-2007-11-27.tar.gz">
afp-RSAPSS-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-RSAPSS-2005-10-14.tar.gz">
afp-RSAPSS-2005-10-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Ramsey-Infinite.html b/web/entries/Ramsey-Infinite.html
--- a/web/entries/Ramsey-Infinite.html
+++ b/web/entries/Ramsey-Infinite.html
@@ -1,289 +1,289 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Ramsey's theorem, infinitary version - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>amsey's
theorem,
infinitary
version
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Ramsey's theorem, infinitary version</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Tom Ridge
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-09-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This formalization of Ramsey's theorem (infinitary version) is taken from Boolos and Jeffrey, <i>Computability and Logic</i>, 3rd edition, Chapter 26. It differs slightly from the text by assuming a slightly stronger hypothesis. In particular, the induction hypothesis is stronger, holding for any infinite subset of the naturals. This avoids the rather peculiar mapping argument between kj and aikj on p.263, which is unnecessary and slightly mars this really beautiful result.</div></td>
+ <td class="abstract mathjax_process">This formalization of Ramsey's theorem (infinitary version) is taken from Boolos and Jeffrey, <i>Computability and Logic</i>, 3rd edition, Chapter 26. It differs slightly from the text by assuming a slightly stronger hypothesis. In particular, the induction hypothesis is stronger, holding for any infinite subset of the naturals. This avoids the rather peculiar mapping argument between kj and aikj on p.263, which is unnecessary and slightly mars this really beautiful result.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Ramsey-Infinite-AFP,
author = {Tom Ridge},
title = {Ramsey's theorem, infinitary version},
journal = {Archive of Formal Proofs},
month = sep,
year = 2004,
note = {\url{http://isa-afp.org/entries/Ramsey-Infinite.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ramsey-Infinite/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Ramsey-Infinite/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ramsey-Infinite/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Ramsey-Infinite-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Ramsey-Infinite-2019-06-11.tar.gz">
afp-Ramsey-Infinite-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Ramsey-Infinite-2018-08-16.tar.gz">
afp-Ramsey-Infinite-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Ramsey-Infinite-2017-10-10.tar.gz">
afp-Ramsey-Infinite-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Ramsey-Infinite-2016-12-17.tar.gz">
afp-Ramsey-Infinite-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Ramsey-Infinite-2016-02-22.tar.gz">
afp-Ramsey-Infinite-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Ramsey-Infinite-2015-05-27.tar.gz">
afp-Ramsey-Infinite-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Ramsey-Infinite-2014-08-28.tar.gz">
afp-Ramsey-Infinite-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Ramsey-Infinite-2013-12-11.tar.gz">
afp-Ramsey-Infinite-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Ramsey-Infinite-2013-11-17.tar.gz">
afp-Ramsey-Infinite-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Ramsey-Infinite-2013-02-16.tar.gz">
afp-Ramsey-Infinite-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Ramsey-Infinite-2012-05-24.tar.gz">
afp-Ramsey-Infinite-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Ramsey-Infinite-2011-10-11.tar.gz">
afp-Ramsey-Infinite-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Ramsey-Infinite-2011-02-11.tar.gz">
afp-Ramsey-Infinite-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Ramsey-Infinite-2010-07-01.tar.gz">
afp-Ramsey-Infinite-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Ramsey-Infinite-2009-12-12.tar.gz">
afp-Ramsey-Infinite-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Ramsey-Infinite-2009-04-29.tar.gz">
afp-Ramsey-Infinite-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Ramsey-Infinite-2008-06-10.tar.gz">
afp-Ramsey-Infinite-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Ramsey-Infinite-2007-11-27.tar.gz">
afp-Ramsey-Infinite-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Ramsey-Infinite-2005-10-14.tar.gz">
afp-Ramsey-Infinite-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Ramsey-Infinite-2004-09-21.tar.gz">
afp-Ramsey-Infinite-2004-09-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Ramsey-Infinite-2004-09-20.tar.gz">
afp-Ramsey-Infinite-2004-09-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Random_BSTs.html b/web/entries/Random_BSTs.html
--- a/web/entries/Random_BSTs.html
+++ b/web/entries/Random_BSTs.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Expected Shape of Random Binary Search Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>xpected
<font class="first">S</font>hape
of
<font class="first">R</font>andom
<font class="first">B</font>inary
<font class="first">S</font>earch
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Expected Shape of Random Binary Search Trees</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-04-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry contains proofs for the textbook results about the
distributions of the height and internal path length of random binary
search trees (BSTs), i.&thinsp;e. BSTs that are formed by taking
an empty BST and inserting elements from a fixed set in random
order.</p> <p>In particular, we prove a logarithmic upper
bound on the expected height and the <em>Θ(n log n)</em>
closed-form solution for the expected internal path length in terms of
the harmonic numbers. We also show how the internal path length
-relates to the average-case cost of a lookup in a BST.</p></div></td>
+relates to the average-case cost of a lookup in a BST.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Random_BSTs-AFP,
author = {Manuel Eberl},
title = {Expected Shape of Random Binary Search Trees},
journal = {Archive of Formal Proofs},
month = apr,
year = 2017,
note = {\url{http://isa-afp.org/entries/Random_BSTs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Landau_Symbols.html">Landau_Symbols</a>, <a href="Quick_Sort_Cost.html">Quick_Sort_Cost</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Randomised_BSTs.html">Randomised_BSTs</a>, <a href="Treaps.html">Treaps</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Random_BSTs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Random_BSTs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Random_BSTs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Random_BSTs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Random_BSTs-2019-06-11.tar.gz">
afp-Random_BSTs-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Random_BSTs-2018-08-16.tar.gz">
afp-Random_BSTs-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Random_BSTs-2017-10-10.tar.gz">
afp-Random_BSTs-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Random_BSTs-2017-04-04.tar.gz">
afp-Random_BSTs-2017-04-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Random_Graph_Subgraph_Threshold.html b/web/entries/Random_Graph_Subgraph_Threshold.html
--- a/web/entries/Random_Graph_Subgraph_Threshold.html
+++ b/web/entries/Random_Graph_Subgraph_Threshold.html
@@ -1,232 +1,232 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Properties of Random Graphs -- Subgraph Containment - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">P</font>roperties
of
<font class="first">R</font>andom
<font class="first">G</font>raphs
<font class="first">-</font>-
<font class="first">S</font>ubgraph
<font class="first">C</font>ontainment
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Properties of Random Graphs -- Subgraph Containment</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-02-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Random graphs are graphs with a fixed number of vertices, where each edge is present with a fixed probability. We are interested in the probability that a random graph contains a certain pattern, for example a cycle or a clique. A very high edge probability gives rise to perhaps too many edges (which degrades performance for many algorithms), whereas a low edge probability might result in a disconnected graph. We prove a theorem about a threshold probability such that a higher edge probability will asymptotically almost surely produce a random graph with the desired subgraph.</div></td>
+ <td class="abstract mathjax_process">Random graphs are graphs with a fixed number of vertices, where each edge is present with a fixed probability. We are interested in the probability that a random graph contains a certain pattern, for example a cycle or a clique. A very high edge probability gives rise to perhaps too many edges (which degrades performance for many algorithms), whereas a low edge probability might result in a disconnected graph. We prove a theorem about a threshold probability such that a higher edge probability will asymptotically almost surely produce a random graph with the desired subgraph.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Random_Graph_Subgraph_Threshold-AFP,
author = {Lars Hupel},
title = {Properties of Random Graphs -- Subgraph Containment},
journal = {Archive of Formal Proofs},
month = feb,
year = 2014,
note = {\url{http://isa-afp.org/entries/Random_Graph_Subgraph_Threshold.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Girth_Chromatic.html">Girth_Chromatic</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Random_Graph_Subgraph_Threshold/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Random_Graph_Subgraph_Threshold/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Random_Graph_Subgraph_Threshold/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Random_Graph_Subgraph_Threshold-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Random_Graph_Subgraph_Threshold-2019-06-11.tar.gz">
afp-Random_Graph_Subgraph_Threshold-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Random_Graph_Subgraph_Threshold-2018-08-16.tar.gz">
afp-Random_Graph_Subgraph_Threshold-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Random_Graph_Subgraph_Threshold-2017-10-10.tar.gz">
afp-Random_Graph_Subgraph_Threshold-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Random_Graph_Subgraph_Threshold-2016-12-17.tar.gz">
afp-Random_Graph_Subgraph_Threshold-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Random_Graph_Subgraph_Threshold-2016-02-22.tar.gz">
afp-Random_Graph_Subgraph_Threshold-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Random_Graph_Subgraph_Threshold-2015-05-27.tar.gz">
afp-Random_Graph_Subgraph_Threshold-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Random_Graph_Subgraph_Threshold-2014-08-28.tar.gz">
afp-Random_Graph_Subgraph_Threshold-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Random_Graph_Subgraph_Threshold-2014-02-14.tar.gz">
afp-Random_Graph_Subgraph_Threshold-2014-02-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Randomised_BSTs.html b/web/entries/Randomised_BSTs.html
--- a/web/entries/Randomised_BSTs.html
+++ b/web/entries/Randomised_BSTs.html
@@ -1,203 +1,203 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Randomised Binary Search Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>andomised
<font class="first">B</font>inary
<font class="first">S</font>earch
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Randomised Binary Search Trees</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-10-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This work is a formalisation of the Randomised Binary Search
Trees introduced by Martínez and Roura, including definitions and
correctness proofs.</p> <p>Like randomised treaps, they
are a probabilistic data structure that behaves exactly as if elements
were inserted into a non-balancing BST in random order. However,
unlike treaps, they only use discrete probability distributions, but
-their use of randomness is more complicated.</p></div></td>
+their use of randomness is more complicated.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Randomised_BSTs-AFP,
author = {Manuel Eberl},
title = {Randomised Binary Search Trees},
journal = {Archive of Formal Proofs},
month = oct,
year = 2018,
note = {\url{http://isa-afp.org/entries/Randomised_BSTs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Monad_Normalisation.html">Monad_Normalisation</a>, <a href="Random_BSTs.html">Random_BSTs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Randomised_BSTs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Randomised_BSTs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Randomised_BSTs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Randomised_BSTs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Randomised_BSTs-2019-06-11.tar.gz">
afp-Randomised_BSTs-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Randomised_BSTs-2018-10-19.tar.gz">
afp-Randomised_BSTs-2018-10-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Randomised_Social_Choice.html b/web/entries/Randomised_Social_Choice.html
--- a/web/entries/Randomised_Social_Choice.html
+++ b/web/entries/Randomised_Social_Choice.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Randomised Social Choice Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>andomised
<font class="first">S</font>ocial
<font class="first">C</font>hoice
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Randomised Social Choice Theory</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This work contains a formalisation of basic Randomised Social Choice,
including Stochastic Dominance and Social Decision Schemes (SDSs)
along with some of their most important properties (Anonymity,
Neutrality, ex-post- and SD-Efficiency, SD-Strategy-Proofness) and two
particular SDSs – Random Dictatorship and Random Serial Dictatorship
(with proofs of the properties that they satisfy). Many important
properties of these concepts are also proven – such as the two
equivalent characterisations of Stochastic Dominance and the fact that
SD-efficiency of a lottery only depends on the support. The entry
also provides convenient commands to define Preference Profiles, prove
their well-formedness, and automatically derive restrictions that
sufficiently nice SDSs need to satisfy on the defined profiles.
Currently, the formalisation focuses on weak preferences and
Stochastic Dominance, but it should be easy to extend it to other
domains – such as strict preferences – or other lottery extensions –
-such as Bilinear Dominance or Pairwise Comparison.</div></td>
+such as Bilinear Dominance or Pairwise Comparison.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Randomised_Social_Choice-AFP,
author = {Manuel Eberl},
title = {Randomised Social Choice Theory},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/Randomised_Social_Choice.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="List-Index.html">List-Index</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Fishburn_Impossibility.html">Fishburn_Impossibility</a>, <a href="SDS_Impossibility.html">SDS_Impossibility</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Randomised_Social_Choice/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Randomised_Social_Choice/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Randomised_Social_Choice/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Randomised_Social_Choice-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Randomised_Social_Choice-2019-06-11.tar.gz">
afp-Randomised_Social_Choice-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Randomised_Social_Choice-2018-08-16.tar.gz">
afp-Randomised_Social_Choice-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Randomised_Social_Choice-2017-10-10.tar.gz">
afp-Randomised_Social_Choice-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Randomised_Social_Choice-2016-12-17.tar.gz">
afp-Randomised_Social_Choice-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Randomised_Social_Choice-2016-05-05.tar.gz">
afp-Randomised_Social_Choice-2016-05-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Rank_Nullity_Theorem.html b/web/entries/Rank_Nullity_Theorem.html
--- a/web/entries/Rank_Nullity_Theorem.html
+++ b/web/entries/Rank_Nullity_Theorem.html
@@ -1,248 +1,248 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Rank-Nullity Theorem in Linear Algebra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>ank-Nullity
<font class="first">T</font>heorem
in
<font class="first">L</font>inear
<font class="first">A</font>lgebra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Rank-Nullity Theorem in Linear Algebra</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a> and
<a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-01-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
-        <td class="abstract"><div class="mathjax_process">In this contribution, we present some formalizations based on the HOL-Multivariate-Analysis session of Isabelle. Firstly, a generalization of several theorems of that library is presented. Secondly, some definitions and proofs involving Linear Algebra and the four fundamental subspaces of a matrix are shown. Finally, we present a proof of the result known in Linear Algebra as the ``Rank-Nullity Theorem'', which states that, given any linear map f from a finite dimensional vector space V to a vector space W, the dimension of V is equal to the sum of the dimension of the kernel of f (which is a subspace of V) and the dimension of the range of f (which is a subspace of W). The proof presented here is based on the one given by Sheldon Axler in his book <i>Linear Algebra Done Right</i>. As a corollary of the previous theorem, and taking advantage of the relationship between linear maps and matrices, we prove that, for every matrix A (which has an associated linear map between finite dimensional vector spaces), the sum of the dimensions of its null space and its column space (the latter being equal to the range of the linear map) is equal to the number of columns of A.</div></td>
+        <td class="abstract mathjax_process">In this contribution, we present some formalizations based on the HOL-Multivariate-Analysis session of Isabelle. Firstly, a generalization of several theorems of that library is presented. Secondly, some definitions and proofs involving Linear Algebra and the four fundamental subspaces of a matrix are shown. Finally, we present a proof of the result known in Linear Algebra as the ``Rank-Nullity Theorem'', which states that, given any linear map f from a finite dimensional vector space V to a vector space W, the dimension of V is equal to the sum of the dimension of the kernel of f (which is a subspace of V) and the dimension of the range of f (which is a subspace of W). The proof presented here is based on the one given by Sheldon Axler in his book <i>Linear Algebra Done Right</i>. As a corollary of the previous theorem, and taking advantage of the relationship between linear maps and matrices, we prove that, for every matrix A (which has an associated linear map between finite dimensional vector spaces), the sum of the dimensions of its null space and its column space (the latter being equal to the range of the linear map) is equal to the number of columns of A.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2014-07-14]: Added some generalizations that allow us to formalize the Rank-Nullity Theorem over finite dimensional vector spaces, instead of over the more particular euclidean spaces. Updated abstract.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Rank_Nullity_Theorem-AFP,
author = {Jose Divasón and Jesús Aransay},
title = {Rank-Nullity Theorem in Linear Algebra},
journal = {Archive of Formal Proofs},
month = jan,
year = 2013,
note = {\url{http://isa-afp.org/entries/Rank_Nullity_Theorem.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Echelon_Form.html">Echelon_Form</a>, <a href="Gauss_Jordan.html">Gauss_Jordan</a>, <a href="Perron_Frobenius.html">Perron_Frobenius</a>, <a href="QR_Decomposition.html">QR_Decomposition</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Rank_Nullity_Theorem/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Rank_Nullity_Theorem/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Rank_Nullity_Theorem/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Rank_Nullity_Theorem-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Rank_Nullity_Theorem-2019-06-11.tar.gz">
afp-Rank_Nullity_Theorem-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Rank_Nullity_Theorem-2018-08-16.tar.gz">
afp-Rank_Nullity_Theorem-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Rank_Nullity_Theorem-2017-10-10.tar.gz">
afp-Rank_Nullity_Theorem-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Rank_Nullity_Theorem-2016-12-17.tar.gz">
afp-Rank_Nullity_Theorem-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Rank_Nullity_Theorem-2016-02-22.tar.gz">
afp-Rank_Nullity_Theorem-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Rank_Nullity_Theorem-2015-05-27.tar.gz">
afp-Rank_Nullity_Theorem-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Rank_Nullity_Theorem-2014-08-28.tar.gz">
afp-Rank_Nullity_Theorem-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Rank_Nullity_Theorem-2013-12-11.tar.gz">
afp-Rank_Nullity_Theorem-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Rank_Nullity_Theorem-2013-11-17.tar.gz">
afp-Rank_Nullity_Theorem-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Rank_Nullity_Theorem-2013-02-16.tar.gz">
afp-Rank_Nullity_Theorem-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Rank_Nullity_Theorem-2013-01-16.tar.gz">
afp-Rank_Nullity_Theorem-2013-01-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Real_Impl.html b/web/entries/Real_Impl.html
--- a/web/entries/Real_Impl.html
+++ b/web/entries/Real_Impl.html
@@ -1,247 +1,247 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Implementing field extensions of the form Q[sqrt(b)] - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>mplementing
field
extensions
of
the
form
<font class="first">Q</font>[sqrt(b)]
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Implementing field extensions of the form Q[sqrt(b)]</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-02-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We apply data refinement to implement the real numbers, where we support all
numbers in the field extension Q[sqrt(b)], i.e., all numbers of the form p +
q * sqrt(b) for rational numbers p and q and some fixed natural number b. To
this end, we also developed algorithms to precisely compute roots of a
rational number, and to perform a factorization of natural numbers which
eliminates duplicate prime factors.
<p>
Our results have been used to certify termination proofs which involve
-polynomial interpretations over the reals.</div></td>
+polynomial interpretations over the reals.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2014-07-11]: Moved NthRoot_Impl to Sqrt-Babylonian.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Real_Impl-AFP,
author = {René Thiemann},
title = {Implementing field extensions of the form Q[sqrt(b)]},
journal = {Archive of Formal Proofs},
month = feb,
year = 2014,
note = {\url{http://isa-afp.org/entries/Real_Impl.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Deriving.html">Deriving</a>, <a href="Show.html">Show</a>, <a href="Sqrt_Babylonian.html">Sqrt_Babylonian</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="QR_Decomposition.html">QR_Decomposition</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Real_Impl/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Real_Impl/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Real_Impl/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Real_Impl-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Real_Impl-2019-06-11.tar.gz">
afp-Real_Impl-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Real_Impl-2018-08-16.tar.gz">
afp-Real_Impl-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Real_Impl-2017-10-10.tar.gz">
afp-Real_Impl-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Real_Impl-2016-12-17.tar.gz">
afp-Real_Impl-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Real_Impl-2016-02-22.tar.gz">
afp-Real_Impl-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Real_Impl-2015-05-27.tar.gz">
afp-Real_Impl-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Real_Impl-2014-08-28.tar.gz">
afp-Real_Impl-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Real_Impl-2014-02-11.tar.gz">
afp-Real_Impl-2014-02-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Recursion-Theory-I.html b/web/entries/Recursion-Theory-I.html
--- a/web/entries/Recursion-Theory-I.html
+++ b/web/entries/Recursion-Theory-I.html
@@ -1,274 +1,274 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Recursion Theory I - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>ecursion
<font class="first">T</font>heory
<font class="first">I</font>
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Recursion Theory I</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Michael Nedzelsky
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-04-05</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
-        <td class="abstract"><div class="mathjax_process">This document presents the formalization of introductory material from recursion theory --- definitions and basic properties of primitive recursive functions, the Cantor pairing function and computably enumerable sets (including a proof of the existence of a one-complete computably enumerable set and a proof of Rice's theorem).</div></td>
+        <td class="abstract mathjax_process">This document presents the formalization of introductory material from recursion theory --- definitions and basic properties of primitive recursive functions, the Cantor pairing function and computably enumerable sets (including a proof of the existence of a one-complete computably enumerable set and a proof of Rice's theorem).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Recursion-Theory-I-AFP,
author = {Michael Nedzelsky},
title = {Recursion Theory I},
journal = {Archive of Formal Proofs},
month = apr,
year = 2008,
note = {\url{http://isa-afp.org/entries/Recursion-Theory-I.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Minsky_Machines.html">Minsky_Machines</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Recursion-Theory-I/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Recursion-Theory-I/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Recursion-Theory-I/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Recursion-Theory-I-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Recursion-Theory-I-2019-06-11.tar.gz">
afp-Recursion-Theory-I-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Recursion-Theory-I-2018-08-16.tar.gz">
afp-Recursion-Theory-I-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Recursion-Theory-I-2017-10-10.tar.gz">
afp-Recursion-Theory-I-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Recursion-Theory-I-2016-12-17.tar.gz">
afp-Recursion-Theory-I-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Recursion-Theory-I-2016-02-22.tar.gz">
afp-Recursion-Theory-I-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Recursion-Theory-I-2015-05-27.tar.gz">
afp-Recursion-Theory-I-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Recursion-Theory-I-2014-08-28.tar.gz">
afp-Recursion-Theory-I-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Recursion-Theory-I-2013-12-11.tar.gz">
afp-Recursion-Theory-I-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Recursion-Theory-I-2013-11-17.tar.gz">
afp-Recursion-Theory-I-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Recursion-Theory-I-2013-02-16.tar.gz">
afp-Recursion-Theory-I-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Recursion-Theory-I-2012-05-24.tar.gz">
afp-Recursion-Theory-I-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Recursion-Theory-I-2011-10-11.tar.gz">
afp-Recursion-Theory-I-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Recursion-Theory-I-2011-02-11.tar.gz">
afp-Recursion-Theory-I-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Recursion-Theory-I-2010-07-01.tar.gz">
afp-Recursion-Theory-I-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Recursion-Theory-I-2009-12-12.tar.gz">
afp-Recursion-Theory-I-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Recursion-Theory-I-2009-04-29.tar.gz">
afp-Recursion-Theory-I-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Recursion-Theory-I-2008-06-10.tar.gz">
afp-Recursion-Theory-I-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Recursion-Theory-I-2008-04-11.tar.gz">
afp-Recursion-Theory-I-2008-04-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Refine_Imperative_HOL.html b/web/entries/Refine_Imperative_HOL.html
--- a/web/entries/Refine_Imperative_HOL.html
+++ b/web/entries/Refine_Imperative_HOL.html
@@ -1,234 +1,234 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Imperative Refinement Framework - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">I</font>mperative
<font class="first">R</font>efinement
<font class="first">F</font>ramework
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Imperative Refinement Framework</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-08-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present the Imperative Refinement Framework (IRF), a tool that
supports a stepwise refinement based approach to imperative programs.
This entry is based on the material we presented in [ITP-2015,
CPP-2016]. It uses the Monadic Refinement Framework as a frontend for
the specification of the abstract programs, and Imperative/HOL as a
backend to generate executable imperative programs. The IRF comes
with tool support to synthesize imperative programs from more
abstract, functional ones, using efficient imperative implementations
for the abstract data structures. This entry also includes the
Imperative Isabelle Collection Framework (IICF), which provides a
library of re-usable imperative collection data structures. Moreover,
this entry contains a quickstart guide and a reference manual, which
provide an introduction to using the IRF for Isabelle/HOL experts. It
also provides a collection of (partly commented) practical examples,
some highlights being Dijkstra's Algorithm, Nested-DFS, and a generic
worklist algorithm with subsumption. Finally, this entry contains
benchmark scripts that compare the runtime of some examples against
reference implementations of the algorithms in Java and C++.
[ITP-2015] Peter Lammich: Refinement to Imperative/HOL. ITP 2015:
253--269 [CPP-2016] Peter Lammich: Refinement based verification of
-imperative data structures. CPP 2016: 27--36</div></td>
+imperative data structures. CPP 2016: 27--36</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Refine_Imperative_HOL-AFP,
author = {Peter Lammich},
title = {The Imperative Refinement Framework},
journal = {Archive of Formal Proofs},
month = aug,
year = 2016,
note = {\url{http://isa-afp.org/entries/Refine_Imperative_HOL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="DFS_Framework.html">DFS_Framework</a>, <a href="Dijkstra_Shortest_Path.html">Dijkstra_Shortest_Path</a>, <a href="List-Index.html">List-Index</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Flow_Networks.html">Flow_Networks</a>, <a href="Floyd_Warshall.html">Floyd_Warshall</a>, <a href="Kruskal.html">Kruskal</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Refine_Imperative_HOL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Refine_Imperative_HOL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Refine_Imperative_HOL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Refine_Imperative_HOL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Refine_Imperative_HOL-2019-06-11.tar.gz">
afp-Refine_Imperative_HOL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Refine_Imperative_HOL-2018-08-16.tar.gz">
afp-Refine_Imperative_HOL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Refine_Imperative_HOL-2017-10-10.tar.gz">
afp-Refine_Imperative_HOL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Refine_Imperative_HOL-2016-12-17.tar.gz">
afp-Refine_Imperative_HOL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Refine_Imperative_HOL-2016-08-08.tar.gz">
afp-Refine_Imperative_HOL-2016-08-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Refine_Monadic.html b/web/entries/Refine_Monadic.html
--- a/web/entries/Refine_Monadic.html
+++ b/web/entries/Refine_Monadic.html
@@ -1,280 +1,280 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Refinement for Monadic Programs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>efinement
for
<font class="first">M</font>onadic
<font class="first">P</font>rograms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Refinement for Monadic Programs</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-01-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We provide a framework for program and data refinement in Isabelle/HOL.
+ <td class="abstract mathjax_process">We provide a framework for program and data refinement in Isabelle/HOL.
The framework is based on a nondeterminism-monad with assertions, i.e.,
the monad carries a set of results or an assertion failure.
Recursion is expressed by fixed points. For convenience, we also provide
while and foreach combinators.
<p>
The framework provides tools to automatize canonical tasks, such as
verification condition generation, finding appropriate data refinement relations,
and refining an executable program to a form that is accepted by the
Isabelle/HOL code generator.
<p>
This submission comes with a collection of examples and a user-guide,
-illustrating the usage of the framework.</div></td>
+illustrating the usage of the framework.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2012-04-23] Introduced ordered FOREACH loops<br>
[2012-06] New features:
REC_rule_arb and RECT_rule_arb allow for generalizing over variables.
prepare_code_thms - command extracts code equations for recursion combinators.<br>
[2012-07] New example: Nested DFS for emptiness check of Buchi-automata with witness.<br>
New feature:
fo_rule method to apply resolution using first-order matching. Useful for arg_conf, fun_cong.<br>
[2012-08] Adaptation to ICF v2.<br>
[2012-10-05] Adaptations to include support for Automatic Refinement Framework.<br>
[2013-09] This entry now depends on Automatic Refinement<br>
[2014-06] New feature: vc_solve method to solve verification conditions.
Maintenance changes: VCG-rules for nfoldli, improved setup for FOREACH-loops.<br>
[2014-07] Now defining recursion via flat domain. Dropped many single-valued prerequisites.
Changed notion of data refinement. In single-valued case, this matches the old notion.
In non-single valued case, the new notion allows for more convenient rules.
In particular, the new definitions allow for projecting away ghost variables as a refinement step.<br>
[2014-11] New features: le-or-fail relation (leof), modular reasoning about loop invariants.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Refine_Monadic-AFP,
author = {Peter Lammich},
title = {Refinement for Monadic Programs},
journal = {Archive of Formal Proofs},
month = jan,
year = 2012,
note = {\url{http://isa-afp.org/entries/Refine_Monadic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Collections.html">Collections</a>, <a href="JinjaThreads.html">JinjaThreads</a>, <a href="Kruskal.html">Kruskal</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Refine_Monadic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Refine_Monadic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Refine_Monadic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Refine_Monadic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Refine_Monadic-2019-06-11.tar.gz">
afp-Refine_Monadic-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Refine_Monadic-2018-08-16.tar.gz">
afp-Refine_Monadic-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Refine_Monadic-2017-10-10.tar.gz">
afp-Refine_Monadic-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Refine_Monadic-2016-12-17.tar.gz">
afp-Refine_Monadic-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Refine_Monadic-2016-02-22.tar.gz">
afp-Refine_Monadic-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Refine_Monadic-2015-05-27.tar.gz">
afp-Refine_Monadic-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Refine_Monadic-2014-08-28.tar.gz">
afp-Refine_Monadic-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Refine_Monadic-2013-12-11.tar.gz">
afp-Refine_Monadic-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Refine_Monadic-2013-11-17.tar.gz">
afp-Refine_Monadic-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Refine_Monadic-2013-02-16.tar.gz">
afp-Refine_Monadic-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Refine_Monadic-2012-05-24.tar.gz">
afp-Refine_Monadic-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Refine_Monadic-2012-02-10.tar.gz">
afp-Refine_Monadic-2012-02-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/RefinementReactive.html b/web/entries/RefinementReactive.html
--- a/web/entries/RefinementReactive.html
+++ b/web/entries/RefinementReactive.html
@@ -1,240 +1,240 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of Refinement Calculus for Reactive Systems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
<font class="first">R</font>efinement
<font class="first">C</font>alculus
for
<font class="first">R</font>eactive
<font class="first">S</font>ystems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of Refinement Calculus for Reactive Systems</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Viorel Preoteasa (viorel /dot/ preoteasa /at/ aalto /dot/ fi)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-10-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formalization of refinement calculus for reactive systems.
Refinement calculus is based on monotonic predicate transformers
(monotonic functions from sets of post-states to sets of pre-states),
and it is a powerful formalism for reasoning about imperative programs.
We model reactive systems as monotonic property transformers
that transform sets of output infinite sequences into sets of input
infinite sequences. Within this semantics we can model
refinement of reactive systems, (unbounded) angelic and
demonic nondeterminism, sequential composition, and
other semantic properties. We can model systems that may
fail for some inputs, and we can model compatibility of systems.
We can specify systems that have liveness properties using
linear temporal logic, and we can refine system specifications
into systems based on symbolic transition systems, suitable
-for implementations.</div></td>
+for implementations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{RefinementReactive-AFP,
author = {Viorel Preoteasa},
title = {Formalization of Refinement Calculus for Reactive Systems},
journal = {Archive of Formal Proofs},
month = oct,
year = 2014,
note = {\url{http://isa-afp.org/entries/RefinementReactive.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/RefinementReactive/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/RefinementReactive/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/RefinementReactive/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-RefinementReactive-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-RefinementReactive-2019-06-11.tar.gz">
afp-RefinementReactive-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-RefinementReactive-2018-08-16.tar.gz">
afp-RefinementReactive-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-RefinementReactive-2017-10-10.tar.gz">
afp-RefinementReactive-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-RefinementReactive-2016-12-17.tar.gz">
afp-RefinementReactive-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-RefinementReactive-2016-02-22.tar.gz">
afp-RefinementReactive-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-RefinementReactive-2015-05-27.tar.gz">
afp-RefinementReactive-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-RefinementReactive-2014-10-08.tar.gz">
afp-RefinementReactive-2014-10-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Regex_Equivalence.html b/web/entries/Regex_Equivalence.html
--- a/web/entries/Regex_Equivalence.html
+++ b/web/entries/Regex_Equivalence.html
@@ -1,249 +1,249 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Unified Decision Procedures for Regular Expression Equivalence - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">U</font>nified
<font class="first">D</font>ecision
<font class="first">P</font>rocedures
for
<font class="first">R</font>egular
<font class="first">E</font>xpression
<font class="first">E</font>quivalence
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Unified Decision Procedures for Regular Expression Equivalence</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a> and
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-01-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize a unified framework for verified decision procedures for regular
expression equivalence. Five recently published formalizations of such
decision procedures (three based on derivatives, two on marked regular
expressions) can be obtained as instances of the framework. We discover that
the two approaches based on marked regular expressions, which were previously
thought to be the same, are different, and one seems to produce uniformly
smaller automata. The common framework makes it possible to compare the
performance of the different decision procedures in a meaningful way.
<a href="http://www21.in.tum.de/~nipkow/pubs/itp14.html">
The formalization is described in a paper of the same name presented at
-Interactive Theorem Proving 2014</a>.</div></td>
+Interactive Theorem Proving 2014</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Regex_Equivalence-AFP,
author = {Tobias Nipkow and Dmitriy Traytel},
title = {Unified Decision Procedures for Regular Expression Equivalence},
journal = {Archive of Formal Proofs},
month = jan,
year = 2014,
note = {\url{http://isa-afp.org/entries/Regex_Equivalence.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Efficient-Mergesort.html">Efficient-Mergesort</a>, <a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Regex_Equivalence/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Regex_Equivalence/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Regex_Equivalence/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Regex_Equivalence-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Regex_Equivalence-2019-06-11.tar.gz">
afp-Regex_Equivalence-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Regex_Equivalence-2018-08-16.tar.gz">
afp-Regex_Equivalence-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Regex_Equivalence-2017-10-10.tar.gz">
afp-Regex_Equivalence-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Regex_Equivalence-2016-12-17.tar.gz">
afp-Regex_Equivalence-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Regex_Equivalence-2016-02-22.tar.gz">
afp-Regex_Equivalence-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Regex_Equivalence-2015-05-27.tar.gz">
afp-Regex_Equivalence-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Regex_Equivalence-2014-11-30.tar.gz">
afp-Regex_Equivalence-2014-11-30.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Regex_Equivalence-2014-08-28.tar.gz">
afp-Regex_Equivalence-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Regex_Equivalence-2014-01-30.tar.gz">
afp-Regex_Equivalence-2014-01-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Regular-Sets.html b/web/entries/Regular-Sets.html
--- a/web/entries/Regular-Sets.html
+++ b/web/entries/Regular-Sets.html
@@ -1,276 +1,276 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Regular Sets and Expressions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>egular
<font class="first">S</font>ets
and
<font class="first">E</font>xpressions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Regular Sets and Expressions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.in.tum.de/~krauss">Alexander Krauss</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">
Contributor:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-05-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This is a library of constructions on regular expressions and languages. It provides the operations of concatenation, Kleene star and derivative on languages. Regular expressions and their meaning are defined. An executable equivalence checker for regular expressions is verified; it does not need automata but works directly on regular expressions. <i>By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, concatenation and (reflexive) transitive closure is obtained.</i> <P> Extended regular expressions with complement and intersection are also defined and an equivalence checker is provided.</div></td>
+ <td class="abstract mathjax_process">This is a library of constructions on regular expressions and languages. It provides the operations of concatenation, Kleene star and derivative on languages. Regular expressions and their meaning are defined. An executable equivalence checker for regular expressions is verified; it does not need automata but works directly on regular expressions. <i>By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, concatenation and (reflexive) transitive closure is obtained.</i> <P> Extended regular expressions with complement and intersection are also defined and an equivalence checker is provided.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2011-08-26]: Christian Urban added a theory about derivatives and partial derivatives of regular expressions<br>
[2012-05-10]: Tobias Nipkow added extended regular expressions<br>
[2012-05-10]: Tobias Nipkow added equivalence checking with partial derivatives</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Regular-Sets-AFP,
author = {Alexander Krauss and Tobias Nipkow},
title = {Regular Sets and Expressions},
journal = {Archive of Formal Proofs},
month = may,
year = 2010,
note = {\url{http://isa-afp.org/entries/Regular-Sets.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a>, <a href="Coinductive_Languages.html">Coinductive_Languages</a>, <a href="Containers.html">Containers</a>, <a href="Finite_Automata_HF.html">Finite_Automata_HF</a>, <a href="Functional-Automata.html">Functional-Automata</a>, <a href="Lambda_Free_KBOs.html">Lambda_Free_KBOs</a>, <a href="List_Update.html">List_Update</a>, <a href="Myhill-Nerode.html">Myhill-Nerode</a>, <a href="Posix-Lexing.html">Posix-Lexing</a>, <a href="Quick_Sort_Cost.html">Quick_Sort_Cost</a>, <a href="Regex_Equivalence.html">Regex_Equivalence</a>, <a href="Transitive-Closure-II.html">Transitive-Closure-II</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Regular-Sets/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Regular-Sets/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Regular-Sets/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Regular-Sets-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Regular-Sets-2019-06-11.tar.gz">
afp-Regular-Sets-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Regular-Sets-2018-08-16.tar.gz">
afp-Regular-Sets-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Regular-Sets-2017-10-10.tar.gz">
afp-Regular-Sets-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Regular-Sets-2016-12-17.tar.gz">
afp-Regular-Sets-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Regular-Sets-2016-02-22.tar.gz">
afp-Regular-Sets-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Regular-Sets-2015-05-27.tar.gz">
afp-Regular-Sets-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Regular-Sets-2014-08-28.tar.gz">
afp-Regular-Sets-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Regular-Sets-2013-12-11.tar.gz">
afp-Regular-Sets-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Regular-Sets-2013-11-17.tar.gz">
afp-Regular-Sets-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Regular-Sets-2013-03-02.tar.gz">
afp-Regular-Sets-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Regular-Sets-2013-02-16.tar.gz">
afp-Regular-Sets-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Regular-Sets-2012-05-24.tar.gz">
afp-Regular-Sets-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Regular-Sets-2011-10-11.tar.gz">
afp-Regular-Sets-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Regular-Sets-2011-02-11.tar.gz">
afp-Regular-Sets-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Regular-Sets-2010-07-01.tar.gz">
afp-Regular-Sets-2010-07-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Regular_Algebras.html b/web/entries/Regular_Algebras.html
--- a/web/entries/Regular_Algebras.html
+++ b/web/entries/Regular_Algebras.html
@@ -1,231 +1,231 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Regular Algebras - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>egular
<font class="first">A</font>lgebras
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Regular Algebras</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www-users.cs.york.ac.uk/~simonf/">Simon Foster</a> and
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-05-21</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Regular algebras axiomatise the equational theory of regular expressions as induced by
regular language identity. We use Isabelle/HOL for a detailed systematic study of regular
algebras given by Boffa, Conway, Kozen and Salomaa. We investigate the relationships between
these classes, formalise a soundness proof for the smallest class (Salomaa's) and obtain
completeness of the largest one (Boffa's) relative to a deep result by Krob. In addition
we provide a large collection of regular identities in the general setting of Boffa's axiom.
Our regular algebra hierarchy is orthogonal to the Kleene algebra hierarchy in the Archive
-of Formal Proofs; we have not aimed at an integration for pragmatic reasons.</div></td>
+of Formal Proofs; we have not aimed at an integration for pragmatic reasons.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Regular_Algebras-AFP,
author = {Simon Foster and Georg Struth},
title = {Regular Algebras},
journal = {Archive of Formal Proofs},
month = may,
year = 2014,
note = {\url{http://isa-afp.org/entries/Regular_Algebras.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Kleene_Algebra.html">Kleene_Algebra</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Regular_Algebras/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Regular_Algebras/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Regular_Algebras/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Regular_Algebras-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Regular_Algebras-2019-06-11.tar.gz">
afp-Regular_Algebras-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Regular_Algebras-2018-08-16.tar.gz">
afp-Regular_Algebras-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Regular_Algebras-2017-10-10.tar.gz">
afp-Regular_Algebras-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Regular_Algebras-2016-12-17.tar.gz">
afp-Regular_Algebras-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Regular_Algebras-2016-02-22.tar.gz">
afp-Regular_Algebras-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Regular_Algebras-2015-05-27.tar.gz">
afp-Regular_Algebras-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Regular_Algebras-2014-08-28.tar.gz">
afp-Regular_Algebras-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Regular_Algebras-2014-05-22.tar.gz">
afp-Regular_Algebras-2014-05-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Relation_Algebra.html b/web/entries/Relation_Algebra.html
--- a/web/entries/Relation_Algebra.html
+++ b/web/entries/Relation_Algebra.html
@@ -1,238 +1,238 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Relation Algebra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>elation
<font class="first">A</font>lgebra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Relation Algebra</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Alasdair Armstrong,
<a href="https://www-users.cs.york.ac.uk/~simonf/">Simon Foster</a>,
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a> and
Tjark Weber (tjark /dot/ weber /at/ it /dot/ uu /dot/ se)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-01-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Tarski's algebra of binary relations is formalised along the lines of
+ <td class="abstract mathjax_process">Tarski's algebra of binary relations is formalised along the lines of
the standard textbooks of Maddux and Schmidt and Ströhlein. This
includes relation-algebraic concepts such as subidentities, vectors and
a domain operation as well as various notions associated to functions.
Relation algebras are also expanded by a reflexive transitive closure
operation, and they are linked with Kleene algebras and models of binary
-relations and Boolean matrices.</div></td>
+relations and Boolean matrices.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Relation_Algebra-AFP,
author = {Alasdair Armstrong and Simon Foster and Georg Struth and Tjark Weber},
title = {Relation Algebra},
journal = {Archive of Formal Proofs},
month = jan,
year = 2014,
note = {\url{http://isa-afp.org/entries/Relation_Algebra.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Kleene_Algebra.html">Kleene_Algebra</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Residuated_Lattices.html">Residuated_Lattices</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Relation_Algebra/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Relation_Algebra/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Relation_Algebra/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Relation_Algebra-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Relation_Algebra-2019-06-11.tar.gz">
afp-Relation_Algebra-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Relation_Algebra-2018-08-16.tar.gz">
afp-Relation_Algebra-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Relation_Algebra-2017-10-10.tar.gz">
afp-Relation_Algebra-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Relation_Algebra-2016-12-17.tar.gz">
afp-Relation_Algebra-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Relation_Algebra-2016-02-22.tar.gz">
afp-Relation_Algebra-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Relation_Algebra-2015-05-27.tar.gz">
afp-Relation_Algebra-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Relation_Algebra-2014-08-28.tar.gz">
afp-Relation_Algebra-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Relation_Algebra-2014-01-31.tar.gz">
afp-Relation_Algebra-2014-01-31.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Relation_Algebra-2014-01-25.tar.gz">
afp-Relation_Algebra-2014-01-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Relational-Incorrectness-Logic.html b/web/entries/Relational-Incorrectness-Logic.html
--- a/web/entries/Relational-Incorrectness-Logic.html
+++ b/web/entries/Relational-Incorrectness-Logic.html
@@ -1,201 +1,201 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An Under-Approximate Relational Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>n
<font class="first">U</font>nder-Approximate
<font class="first">R</font>elational
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">An Under-Approximate Relational Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://people.eng.unimelb.edu.au/tobym/">Toby Murray</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-03-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Recently, authors have proposed under-approximate logics for reasoning
about programs. So far, all such logics have been confined to
reasoning about individual program behaviours. Yet there exist many
over-approximate relational logics for reasoning about pairs of
programs and relating their behaviours. We present the first
under-approximate relational logic, for the simple imperative language
IMP. We prove our logic is both sound and complete. Additionally, we
show how reasoning in this logic can be decomposed into non-relational
reasoning in an under-approximate Hoare logic, mirroring Beringer’s
result for over-approximate relational logics. We illustrate the
application of our logic on some small examples in which we provably
-demonstrate the presence of insecurity.</div></td>
+demonstrate the presence of insecurity.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Relational-Incorrectness-Logic-AFP,
author = {Toby Murray},
title = {An Under-Approximate Relational Logic},
journal = {Archive of Formal Proofs},
month = mar,
year = 2020,
note = {\url{http://isa-afp.org/entries/Relational-Incorrectness-Logic.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Relational-Incorrectness-Logic/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Relational-Incorrectness-Logic/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Relational-Incorrectness-Logic/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Relational-Incorrectness-Logic-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Relational-Incorrectness-Logic-2020-03-26.tar.gz">
afp-Relational-Incorrectness-Logic-2020-03-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Rep_Fin_Groups.html b/web/entries/Rep_Fin_Groups.html
--- a/web/entries/Rep_Fin_Groups.html
+++ b/web/entries/Rep_Fin_Groups.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Representations of Finite Groups - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>epresentations
of
<font class="first">F</font>inite
<font class="first">G</font>roups
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Representations of Finite Groups</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://ualberta.ca/~jsylvest/">Jeremy Sylvestre</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-08-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We provide a formal framework for the theory of representations of finite groups, as modules over the group ring. Along the way, we develop the general theory of groups (relying on the group_add class for the basics), modules, and vector spaces, to the extent required for theory of group representations. We then provide formal proofs of several important introductory theorems in the subject, including Maschke's theorem, Schur's lemma, and Frobenius reciprocity. We also prove that every irreducible representation is isomorphic to a submodule of the group ring, leading to the fact that for a finite group there are only finitely many isomorphism classes of irreducible representations. In all of this, no restriction is made on the characteristic of the ring or field of scalars until the definition of a group representation, and then the only restriction made is that the characteristic must not divide the order of the group.</div></td>
+ <td class="abstract mathjax_process">We provide a formal framework for the theory of representations of finite groups, as modules over the group ring. Along the way, we develop the general theory of groups (relying on the group_add class for the basics), modules, and vector spaces, to the extent required for theory of group representations. We then provide formal proofs of several important introductory theorems in the subject, including Maschke's theorem, Schur's lemma, and Frobenius reciprocity. We also prove that every irreducible representation is isomorphic to a submodule of the group ring, leading to the fact that for a finite group there are only finitely many isomorphism classes of irreducible representations. In all of this, no restriction is made on the characteristic of the ring or field of scalars until the definition of a group representation, and then the only restriction made is that the characteristic must not divide the order of the group.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Rep_Fin_Groups-AFP,
author = {Jeremy Sylvestre},
title = {Representations of Finite Groups},
journal = {Archive of Formal Proofs},
month = aug,
year = 2015,
note = {\url{http://isa-afp.org/entries/Rep_Fin_Groups.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Rep_Fin_Groups/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Rep_Fin_Groups/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Rep_Fin_Groups/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Rep_Fin_Groups-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Rep_Fin_Groups-2019-06-11.tar.gz">
afp-Rep_Fin_Groups-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Rep_Fin_Groups-2018-08-16.tar.gz">
afp-Rep_Fin_Groups-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Rep_Fin_Groups-2017-10-10.tar.gz">
afp-Rep_Fin_Groups-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Rep_Fin_Groups-2016-12-17.tar.gz">
afp-Rep_Fin_Groups-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Rep_Fin_Groups-2016-02-22.tar.gz">
afp-Rep_Fin_Groups-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Rep_Fin_Groups-2015-08-12.tar.gz">
afp-Rep_Fin_Groups-2015-08-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Residuated_Lattices.html b/web/entries/Residuated_Lattices.html
--- a/web/entries/Residuated_Lattices.html
+++ b/web/entries/Residuated_Lattices.html
@@ -1,230 +1,230 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Residuated Lattices - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>esiduated
<font class="first">L</font>attices
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Residuated Lattices</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Victor B. F. Gomes (vb358 /at/ cl /dot/ cam /dot/ ac /dot/ uk) and
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-04-15</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The theory of residuated lattices, first proposed by Ward and Dilworth, is
formalised in Isabelle/HOL. This includes concepts of residuated functions;
their adjoints and conjugates. It also contains necessary and sufficient
conditions for the existence of these operations in an arbitrary lattice.
The mathematical components for residuated lattices are linked to the AFP
entry for relation algebra. In particular, we prove Jonsson and Tsinakis
-conditions for a residuated boolean algebra to form a relation algebra.</div></td>
+conditions for a residuated boolean algebra to form a relation algebra.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Residuated_Lattices-AFP,
author = {Victor B. F. Gomes and Georg Struth},
title = {Residuated Lattices},
journal = {Archive of Formal Proofs},
month = apr,
year = 2015,
note = {\url{http://isa-afp.org/entries/Residuated_Lattices.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Relation_Algebra.html">Relation_Algebra</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Residuated_Lattices/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Residuated_Lattices/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Residuated_Lattices/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Residuated_Lattices-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Residuated_Lattices-2019-06-28.tar.gz">
afp-Residuated_Lattices-2019-06-28.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Residuated_Lattices-2019-06-11.tar.gz">
afp-Residuated_Lattices-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Residuated_Lattices-2018-08-16.tar.gz">
afp-Residuated_Lattices-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Residuated_Lattices-2017-10-10.tar.gz">
afp-Residuated_Lattices-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Residuated_Lattices-2016-12-17.tar.gz">
afp-Residuated_Lattices-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Residuated_Lattices-2016-02-22.tar.gz">
afp-Residuated_Lattices-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Residuated_Lattices-2015-05-27.tar.gz">
afp-Residuated_Lattices-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Residuated_Lattices-2015-04-16.tar.gz">
afp-Residuated_Lattices-2015-04-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Resolution_FOL.html b/web/entries/Resolution_FOL.html
--- a/web/entries/Resolution_FOL.html
+++ b/web/entries/Resolution_FOL.html
@@ -1,254 +1,254 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Resolution Calculus for First-Order Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">R</font>esolution
<font class="first">C</font>alculus
for
<font class="first">F</font>irst-Order
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Resolution Calculus for First-Order Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This theory is a formalization of the resolution calculus for
first-order logic. It is proven sound and complete. The soundness
proof uses the substitution lemma, which shows a correspondence
between substitutions and updates to an environment. The completeness
proof uses semantic trees, i.e. trees whose paths are partial Herbrand
interpretations. It employs Herbrand's theorem in a formulation which
states that an unsatisfiable set of clauses has a finite closed
semantic tree. It also uses the lifting lemma which lifts resolution
derivation steps from the ground world up to the first-order world.
The theory is presented in a paper in the Journal of Automated Reasoning
[Sch18] which extends a paper presented at the International Conference
on Interactive Theorem Proving [Sch16]. An earlier version was
presented in an MSc thesis [Sch15]. The formalization mostly follows
textbooks by Ben-Ari [BA12], Chang and Lee [CL73], and Leitsch [Lei97].
The theory is part of the IsaFoL project [IsaFoL]. <p>
<a name="Sch18"></a>[Sch18] Anders Schlichtkrull. "Formalization of the
Resolution Calculus for First-Order Logic". Journal of Automated
Reasoning, 2018.<br> <a name="Sch16"></a>[Sch16] Anders
Schlichtkrull. "Formalization of the Resolution Calculus for First-Order
Logic". In: ITP 2016. Vol. 9807. LNCS. Springer, 2016.<br>
<a name="Sch15"></a>[Sch15] Anders Schlichtkrull. <a href="https://people.compute.dtu.dk/andschl/Thesis.pdf">
"Formalization of Resolution Calculus in Isabelle"</a>.
<a href="https://people.compute.dtu.dk/andschl/Thesis.pdf">https://people.compute.dtu.dk/andschl/Thesis.pdf</a>.
MSc thesis. Technical University of Denmark, 2015.<br>
<a name="BA12"></a>[BA12] Mordechai Ben-Ari. <i>Mathematical Logic for
Computer Science</i>. 3rd. Springer, 2012.<br> <a
name="CL73"></a>[CL73] Chin-Liang Chang and Richard Char-Tung Lee.
<i>Symbolic Logic and Mechanical Theorem Proving</i>. 1st. Academic
Press, Inc., 1973.<br> <a name="Lei97"></a>[Lei97] Alexander
Leitsch. <i>The Resolution Calculus</i>. Texts in theoretical computer
science. Springer, 1997.<br> <a name="IsaFoL"></a>[IsaFoL]
IsaFoL authors. <a href="https://bitbucket.org/jasmin_blanchette/isafol">
IsaFoL: Isabelle Formalization of Logic</a>.
-<a href="https://bitbucket.org/jasmin_blanchette/isafol">https://bitbucket.org/jasmin_blanchette/isafol</a>.</div></td>
+<a href="https://bitbucket.org/jasmin_blanchette/isafol">https://bitbucket.org/jasmin_blanchette/isafol</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2018-01-24]: added several new versions of the soundness and completeness theorems as described in the paper [Sch18]. <br>
[2018-03-20]: added a concrete instance of the unification and completeness theorems using the First-Order Terms AFP-entry from IsaFoR as described in the papers [Sch16] and [Sch18].</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Resolution_FOL-AFP,
author = {Anders Schlichtkrull},
title = {The Resolution Calculus for First-Order Logic},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Resolution_FOL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="First_Order_Terms.html">First_Order_Terms</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Resolution_FOL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Resolution_FOL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Resolution_FOL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Resolution_FOL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Resolution_FOL-2019-06-11.tar.gz">
afp-Resolution_FOL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Resolution_FOL-2018-08-16.tar.gz">
afp-Resolution_FOL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Resolution_FOL-2017-10-10.tar.gz">
afp-Resolution_FOL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Resolution_FOL-2016-12-17.tar.gz">
afp-Resolution_FOL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Resolution_FOL-2016-06-30.tar.gz">
afp-Resolution_FOL-2016-06-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Rewriting_Z.html b/web/entries/Rewriting_Z.html
--- a/web/entries/Rewriting_Z.html
+++ b/web/entries/Rewriting_Z.html
@@ -1,217 +1,217 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Z Property - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">Z</font>
<font class="first">P</font>roperty
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Z Property</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Bertram Felgenhauer (int-e /at/ gmx /dot/ de),
Julian Nagele,
Vincent van Oostrom and
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the Z property introduced by Dehornoy and van Oostrom.
First we show that for any abstract rewrite system, Z implies
confluence. Then we give two examples of proofs using Z: confluence of
lambda-calculus with respect to beta-reduction and confluence of
-combinatory logic.</div></td>
+combinatory logic.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Rewriting_Z-AFP,
author = {Bertram Felgenhauer and Julian Nagele and Vincent van Oostrom and Christian Sternagel},
title = {The Z Property},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Rewriting_Z.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a>, <a href="Nominal2.html">Nominal2</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Rewriting_Z/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Rewriting_Z/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Rewriting_Z/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Rewriting_Z-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Rewriting_Z-2019-06-11.tar.gz">
afp-Rewriting_Z-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Rewriting_Z-2018-08-16.tar.gz">
afp-Rewriting_Z-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Rewriting_Z-2017-10-10.tar.gz">
afp-Rewriting_Z-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Rewriting_Z-2016-12-17.tar.gz">
afp-Rewriting_Z-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Rewriting_Z-2016-06-30.tar.gz">
afp-Rewriting_Z-2016-06-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Ribbon_Proofs.html b/web/entries/Ribbon_Proofs.html
--- a/web/entries/Ribbon_Proofs.html
+++ b/web/entries/Ribbon_Proofs.html
@@ -1,235 +1,235 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Ribbon Proofs - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>ibbon
<font class="first">P</font>roofs
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Ribbon Proofs</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
John Wickerson
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-01-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This document concerns the theory of ribbon proofs: a diagrammatic proof system, based on separation logic, for verifying program correctness. We include the syntax, proof rules, and soundness results for two alternative formalisations of ribbon proofs. <p> Compared to traditional proof outlines, ribbon proofs emphasise the structure of a proof, so are intelligible and pedagogical. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they may be more scalable. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs.</div></td>
+ <td class="abstract mathjax_process">This document concerns the theory of ribbon proofs: a diagrammatic proof system, based on separation logic, for verifying program correctness. We include the syntax, proof rules, and soundness results for two alternative formalisations of ribbon proofs. <p> Compared to traditional proof outlines, ribbon proofs emphasise the structure of a proof, so are intelligible and pedagogical. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they may be more scalable. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Ribbon_Proofs-AFP,
author = {John Wickerson},
title = {Ribbon Proofs},
journal = {Archive of Formal Proofs},
month = jan,
year = 2013,
note = {\url{http://isa-afp.org/entries/Ribbon_Proofs.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ribbon_Proofs/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Ribbon_Proofs/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Ribbon_Proofs/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Ribbon_Proofs-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Ribbon_Proofs-2019-06-11.tar.gz">
afp-Ribbon_Proofs-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Ribbon_Proofs-2018-08-16.tar.gz">
afp-Ribbon_Proofs-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Ribbon_Proofs-2017-10-10.tar.gz">
afp-Ribbon_Proofs-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Ribbon_Proofs-2016-12-17.tar.gz">
afp-Ribbon_Proofs-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Ribbon_Proofs-2016-02-22.tar.gz">
afp-Ribbon_Proofs-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Ribbon_Proofs-2015-05-27.tar.gz">
afp-Ribbon_Proofs-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Ribbon_Proofs-2014-08-28.tar.gz">
afp-Ribbon_Proofs-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Ribbon_Proofs-2013-12-11.tar.gz">
afp-Ribbon_Proofs-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Ribbon_Proofs-2013-11-17.tar.gz">
afp-Ribbon_Proofs-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Ribbon_Proofs-2013-03-02.tar.gz">
afp-Ribbon_Proofs-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Ribbon_Proofs-2013-02-16.tar.gz">
afp-Ribbon_Proofs-2013-02-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Robbins-Conjecture.html b/web/entries/Robbins-Conjecture.html
--- a/web/entries/Robbins-Conjecture.html
+++ b/web/entries/Robbins-Conjecture.html
@@ -1,265 +1,265 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Complete Proof of the Robbins Conjecture - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">C</font>omplete
<font class="first">P</font>roof
of
the
<font class="first">R</font>obbins
<font class="first">C</font>onjecture
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Complete Proof of the Robbins Conjecture</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Matthew Wampler-Doty
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-05-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This document gives a formalization of the proof of the Robbins conjecture, following A. Mann, <i>A Complete Proof of the Robbins Conjecture</i>, 2003.</div></td>
+ <td class="abstract mathjax_process">This document gives a formalization of the proof of the Robbins conjecture, following A. Mann, <i>A Complete Proof of the Robbins Conjecture</i>, 2003.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Robbins-Conjecture-AFP,
author = {Matthew Wampler-Doty},
title = {A Complete Proof of the Robbins Conjecture},
journal = {Archive of Formal Proofs},
month = may,
year = 2010,
note = {\url{http://isa-afp.org/entries/Robbins-Conjecture.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Robbins-Conjecture/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Robbins-Conjecture/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Robbins-Conjecture/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Robbins-Conjecture-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Robbins-Conjecture-2019-06-11.tar.gz">
afp-Robbins-Conjecture-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Robbins-Conjecture-2018-08-16.tar.gz">
afp-Robbins-Conjecture-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Robbins-Conjecture-2017-10-10.tar.gz">
afp-Robbins-Conjecture-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Robbins-Conjecture-2016-12-17.tar.gz">
afp-Robbins-Conjecture-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Robbins-Conjecture-2016-02-22.tar.gz">
afp-Robbins-Conjecture-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Robbins-Conjecture-2015-05-27.tar.gz">
afp-Robbins-Conjecture-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Robbins-Conjecture-2014-08-28.tar.gz">
afp-Robbins-Conjecture-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Robbins-Conjecture-2013-12-11.tar.gz">
afp-Robbins-Conjecture-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Robbins-Conjecture-2013-11-17.tar.gz">
afp-Robbins-Conjecture-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Robbins-Conjecture-2013-02-16.tar.gz">
afp-Robbins-Conjecture-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Robbins-Conjecture-2012-05-24.tar.gz">
afp-Robbins-Conjecture-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Robbins-Conjecture-2011-10-11.tar.gz">
afp-Robbins-Conjecture-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Robbins-Conjecture-2011-02-11.tar.gz">
afp-Robbins-Conjecture-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Robbins-Conjecture-2010-07-01.tar.gz">
afp-Robbins-Conjecture-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Robbins-Conjecture-2010-05-27.tar.gz">
afp-Robbins-Conjecture-2010-05-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Root_Balanced_Tree.html b/web/entries/Root_Balanced_Tree.html
--- a/web/entries/Root_Balanced_Tree.html
+++ b/web/entries/Root_Balanced_Tree.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Root-Balanced Tree - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>oot-Balanced
<font class="first">T</font>ree
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Root-Balanced Tree</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-08-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
Andersson introduced <em>general balanced trees</em>,
search trees based on the design principle of partial rebuilding:
perform update operations naively until the tree becomes too
unbalanced, at which point a whole subtree is rebalanced. This article
defines and analyzes a functional version of general balanced trees,
which we call <em>root-balanced trees</em>. Using a lightweight model
of execution time, amortized logarithmic complexity is verified in
the theorem prover Isabelle.
</p>
<p>
This is the Isabelle formalization of the material described in the APLAS 2017 article
<a href="http://www21.in.tum.de/~nipkow/pubs/aplas17.html">Verified Root-Balanced Trees</a>
by the same author, which also presents experimental results that show
competitiveness of root-balanced trees with AVL and red-black trees.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Root_Balanced_Tree-AFP,
author = {Tobias Nipkow},
title = {Root-Balanced Tree},
journal = {Archive of Formal Proofs},
month = aug,
year = 2017,
note = {\url{http://isa-afp.org/entries/Root_Balanced_Tree.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Amortized_Complexity.html">Amortized_Complexity</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="CakeML_Codegen.html">CakeML_Codegen</a>, <a href="Closest_Pair_Points.html">Closest_Pair_Points</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Root_Balanced_Tree/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Root_Balanced_Tree/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Root_Balanced_Tree/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Root_Balanced_Tree-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Root_Balanced_Tree-2019-06-11.tar.gz">
afp-Root_Balanced_Tree-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Root_Balanced_Tree-2018-08-16.tar.gz">
afp-Root_Balanced_Tree-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Root_Balanced_Tree-2017-10-10.tar.gz">
afp-Root_Balanced_Tree-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Root_Balanced_Tree-2017-08-20.tar.gz">
afp-Root_Balanced_Tree-2017-08-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Routing.html b/web/entries/Routing.html
--- a/web/entries/Routing.html
+++ b/web/entries/Routing.html
@@ -1,216 +1,216 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Routing - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>outing
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Routing</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://liftm.de">Julius Michaelis</a> and
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-08-31</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry contains definitions for routing with routing
tables/longest prefix matching. A routing table entry is modelled as
a record of a prefix match, a metric, an output port, and an optional
next hop. A routing table is a list of entries, sorted by prefix
length and metric. Additionally, a parser and serializer for the
output of the ip-route command, a function to create a relation from
output port to corresponding destination IP space, and a model of a
-Linux-style router are included.</div></td>
+Linux-style router are included.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Routing-AFP,
author = {Julius Michaelis and Cornelius Diekmann},
title = {Routing},
journal = {Archive of Formal Proofs},
month = aug,
year = 2016,
note = {\url{http://isa-afp.org/entries/Routing.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Simple_Firewall.html">Simple_Firewall</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Iptables_Semantics.html">Iptables_Semantics</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Routing/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Routing/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Routing/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Routing-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Routing-2019-06-11.tar.gz">
afp-Routing-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Routing-2018-08-16.tar.gz">
afp-Routing-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Routing-2017-10-10.tar.gz">
afp-Routing-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Routing-2016-12-17.tar.gz">
afp-Routing-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Routing-2016-08-31.tar.gz">
afp-Routing-2016-08-31.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Roy_Floyd_Warshall.html b/web/entries/Roy_Floyd_Warshall.html
--- a/web/entries/Roy_Floyd_Warshall.html
+++ b/web/entries/Roy_Floyd_Warshall.html
@@ -1,235 +1,235 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Transitive closure according to Roy-Floyd-Warshall - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>ransitive
closure
according
to
<font class="first">R</font>oy-Floyd-Warshall
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Transitive closure according to Roy-Floyd-Warshall</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Makarius Wenzel
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-05-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This formulation of the Roy-Floyd-Warshall algorithm for the
+ <td class="abstract mathjax_process">This formulation of the Roy-Floyd-Warshall algorithm for the
transitive closure bypasses matrices and arrays, but uses a more direct
mathematical model with adjacency functions for immediate predecessors and
successors. This can be implemented efficiently in functional programming
-languages and is particularly adequate for sparse relations.</div></td>
+languages and is particularly adequate for sparse relations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Roy_Floyd_Warshall-AFP,
author = {Makarius Wenzel},
title = {Transitive closure according to Roy-Floyd-Warshall},
journal = {Archive of Formal Proofs},
month = may,
year = 2014,
note = {\url{http://isa-afp.org/entries/Roy_Floyd_Warshall.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Roy_Floyd_Warshall/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Roy_Floyd_Warshall/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Roy_Floyd_Warshall/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Roy_Floyd_Warshall-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Roy_Floyd_Warshall-2020-01-14.tar.gz">
afp-Roy_Floyd_Warshall-2020-01-14.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Roy_Floyd_Warshall-2019-06-11.tar.gz">
afp-Roy_Floyd_Warshall-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Roy_Floyd_Warshall-2018-08-16.tar.gz">
afp-Roy_Floyd_Warshall-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Roy_Floyd_Warshall-2017-10-10.tar.gz">
afp-Roy_Floyd_Warshall-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Roy_Floyd_Warshall-2016-12-17.tar.gz">
afp-Roy_Floyd_Warshall-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Roy_Floyd_Warshall-2016-02-22.tar.gz">
afp-Roy_Floyd_Warshall-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Roy_Floyd_Warshall-2015-05-27.tar.gz">
afp-Roy_Floyd_Warshall-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Roy_Floyd_Warshall-2014-08-28.tar.gz">
afp-Roy_Floyd_Warshall-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Roy_Floyd_Warshall-2014-05-24.tar.gz">
afp-Roy_Floyd_Warshall-2014-05-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SATSolverVerification.html b/web/entries/SATSolverVerification.html
--- a/web/entries/SATSolverVerification.html
+++ b/web/entries/SATSolverVerification.html
@@ -1,278 +1,278 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formal Verification of Modern SAT Solvers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormal
<font class="first">V</font>erification
of
<font class="first">M</font>odern
<font class="first">S</font>AT
<font class="first">S</font>olvers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formal Verification of Modern SAT Solvers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Filip Marić (filip /at/ matf /dot/ bg /dot/ ac /dot/ rs)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-07-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
-              <td class="abstract"><div class="mathjax_process">This document contains formal correctness proofs of modern SAT solvers. Following (Krstic et al., 2007) and (Nieuwenhuis et al., 2006), solvers are described using state-transition systems. Several different SAT solver descriptions are given and their partial correctness and termination are proved. These include: <ul> <li> a solver based on the classical DPLL procedure (using only a backtrack-search with unit propagation),</li> <li> a very general solver with backjumping and learning (similar to the description given in (Nieuwenhuis et al., 2006)), and</li> <li> a solver with a specific conflict analysis algorithm (similar to the description given in (Krstic et al., 2007)).</li> </ul> Within the SAT solver correctness proofs, a large number of lemmas about propositional logic and CNF formulae are proved. This theory is self-contained and could be used for further exploration of properties of CNF-based SAT algorithms.</div></td>
+              <td class="abstract mathjax_process">This document contains formal correctness proofs of modern SAT solvers. Following (Krstic et al., 2007) and (Nieuwenhuis et al., 2006), solvers are described using state-transition systems. Several different SAT solver descriptions are given and their partial correctness and termination are proved. These include: <ul> <li> a solver based on the classical DPLL procedure (using only a backtrack-search with unit propagation),</li> <li> a very general solver with backjumping and learning (similar to the description given in (Nieuwenhuis et al., 2006)), and</li> <li> a solver with a specific conflict analysis algorithm (similar to the description given in (Krstic et al., 2007)).</li> </ul> Within the SAT solver correctness proofs, a large number of lemmas about propositional logic and CNF formulae are proved. This theory is self-contained and could be used for further exploration of properties of CNF-based SAT algorithms.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SATSolverVerification-AFP,
author = {Filip Marić},
title = {Formal Verification of Modern SAT Solvers},
journal = {Archive of Formal Proofs},
month = jul,
year = 2008,
note = {\url{http://isa-afp.org/entries/SATSolverVerification.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SATSolverVerification/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SATSolverVerification/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SATSolverVerification/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SATSolverVerification-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SATSolverVerification-2019-06-11.tar.gz">
afp-SATSolverVerification-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SATSolverVerification-2018-08-16.tar.gz">
afp-SATSolverVerification-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SATSolverVerification-2017-10-10.tar.gz">
afp-SATSolverVerification-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SATSolverVerification-2016-12-17.tar.gz">
afp-SATSolverVerification-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SATSolverVerification-2016-02-22.tar.gz">
afp-SATSolverVerification-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-SATSolverVerification-2015-05-27.tar.gz">
afp-SATSolverVerification-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-SATSolverVerification-2014-08-28.tar.gz">
afp-SATSolverVerification-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-SATSolverVerification-2013-12-11.tar.gz">
afp-SATSolverVerification-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-SATSolverVerification-2013-11-17.tar.gz">
afp-SATSolverVerification-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-SATSolverVerification-2013-03-02.tar.gz">
afp-SATSolverVerification-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-SATSolverVerification-2013-02-16.tar.gz">
afp-SATSolverVerification-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-SATSolverVerification-2012-05-24.tar.gz">
afp-SATSolverVerification-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-SATSolverVerification-2011-10-11.tar.gz">
afp-SATSolverVerification-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-SATSolverVerification-2011-02-11.tar.gz">
afp-SATSolverVerification-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-SATSolverVerification-2010-07-01.tar.gz">
afp-SATSolverVerification-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-SATSolverVerification-2009-12-12.tar.gz">
afp-SATSolverVerification-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-SATSolverVerification-2009-04-29.tar.gz">
afp-SATSolverVerification-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-SATSolverVerification-2008-07-27.tar.gz">
afp-SATSolverVerification-2008-07-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SDS_Impossibility.html b/web/entries/SDS_Impossibility.html
--- a/web/entries/SDS_Impossibility.html
+++ b/web/entries/SDS_Impossibility.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Incompatibility of SD-Efficiency and SD-Strategy-Proofness - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">I</font>ncompatibility
of
<font class="first">S</font>D-Efficiency
and
<font class="first">S</font>D-Strategy-Proofness
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Incompatibility of SD-Efficiency and SD-Strategy-Proofness</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-04</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This formalisation contains the proof that there is no anonymous and
neutral Social Decision Scheme for at least four voters and
alternatives that fulfils both SD-Efficiency and SD-Strategy-
Proofness. The proof is a fully structured and quasi-human-readable
one. It was derived from the (unstructured) SMT proof of the case for
exactly four voters and alternatives by Brandl et al. Their proof
relies on an unverified translation of the original problem to SMT,
and the proof that lifts the argument for exactly four voters and
alternatives to the general case is also not machine-checked. In this
Isabelle proof, on the other hand, all of these steps are fully
proven and machine-checked. This is particularly important seeing as a
previously published informal proof of a weaker statement contained a
-mistake in precisely this lifting step.</div></td>
+mistake in precisely this lifting step.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SDS_Impossibility-AFP,
author = {Manuel Eberl},
title = {The Incompatibility of SD-Efficiency and SD-Strategy-Proofness},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/SDS_Impossibility.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Randomised_Social_Choice.html">Randomised_Social_Choice</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SDS_Impossibility/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SDS_Impossibility/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SDS_Impossibility/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SDS_Impossibility-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SDS_Impossibility-2019-06-11.tar.gz">
afp-SDS_Impossibility-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SDS_Impossibility-2018-08-16.tar.gz">
afp-SDS_Impossibility-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SDS_Impossibility-2017-10-10.tar.gz">
afp-SDS_Impossibility-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SDS_Impossibility-2016-12-17.tar.gz">
afp-SDS_Impossibility-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SDS_Impossibility-2016-05-05.tar.gz">
afp-SDS_Impossibility-2016-05-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SIFPL.html b/web/entries/SIFPL.html
--- a/web/entries/SIFPL.html
+++ b/web/entries/SIFPL.html
@@ -1,274 +1,274 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Secure information flow and program logics - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>ecure
information
flow
and
program
logics
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Secure information flow and program logics</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Lennart Beringer and
<a href="http://www.tcs.informatik.uni-muenchen.de/~mhofmann">Martin Hofmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-11-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present interpretations of type systems for secure information flow in Hoare logic, complementing previous encodings in relational program logics. We first treat the imperative language IMP, extended by a simple procedure call mechanism. For this language we consider base-line non-interference in the style of Volpano et al. and the flow-sensitive type system by Hunt and Sands. In both cases, we show how typing derivations may be used to automatically generate proofs in the program logic that certify the absence of illicit flows. We then add instructions for object creation and manipulation, and derive appropriate proof rules for base-line non-interference. As a consequence of our work, standard verification technology may be used for verifying that a concrete program satisfies the non-interference property.<br><br>The present proof development represents an update of the formalisation underlying our paper [CSF 2007] and is intended to resolve any ambiguities that may be present in the paper.</div></td>
+ <td class="abstract mathjax_process">We present interpretations of type systems for secure information flow in Hoare logic, complementing previous encodings in relational program logics. We first treat the imperative language IMP, extended by a simple procedure call mechanism. For this language we consider base-line non-interference in the style of Volpano et al. and the flow-sensitive type system by Hunt and Sands. In both cases, we show how typing derivations may be used to automatically generate proofs in the program logic that certify the absence of illicit flows. We then add instructions for object creation and manipulation, and derive appropriate proof rules for base-line non-interference. As a consequence of our work, standard verification technology may be used for verifying that a concrete program satisfies the non-interference property.<br><br>The present proof development represents an update of the formalisation underlying our paper [CSF 2007] and is intended to resolve any ambiguities that may be present in the paper.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SIFPL-AFP,
author = {Lennart Beringer and Martin Hofmann},
title = {Secure information flow and program logics},
journal = {Archive of Formal Proofs},
month = nov,
year = 2008,
note = {\url{http://isa-afp.org/entries/SIFPL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SIFPL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SIFPL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SIFPL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SIFPL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SIFPL-2019-06-11.tar.gz">
afp-SIFPL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SIFPL-2018-08-16.tar.gz">
afp-SIFPL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SIFPL-2017-10-10.tar.gz">
afp-SIFPL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SIFPL-2016-12-17.tar.gz">
afp-SIFPL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SIFPL-2016-02-22.tar.gz">
afp-SIFPL-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-SIFPL-2015-05-27.tar.gz">
afp-SIFPL-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-SIFPL-2014-08-28.tar.gz">
afp-SIFPL-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-SIFPL-2013-12-11.tar.gz">
afp-SIFPL-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-SIFPL-2013-11-17.tar.gz">
afp-SIFPL-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-SIFPL-2013-02-16.tar.gz">
afp-SIFPL-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-SIFPL-2012-05-24.tar.gz">
afp-SIFPL-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-SIFPL-2011-10-11.tar.gz">
afp-SIFPL-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-SIFPL-2011-02-11.tar.gz">
afp-SIFPL-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-SIFPL-2010-07-01.tar.gz">
afp-SIFPL-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-SIFPL-2009-12-12.tar.gz">
afp-SIFPL-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-SIFPL-2009-04-29.tar.gz">
afp-SIFPL-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-SIFPL-2008-11-13.tar.gz">
afp-SIFPL-2008-11-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SIFUM_Type_Systems.html b/web/entries/SIFUM_Type_Systems.html
--- a/web/entries/SIFUM_Type_Systems.html
+++ b/web/entries/SIFUM_Type_Systems.html
@@ -1,256 +1,256 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Formalization of Assumptions and Guarantees for Compositional Noninterference - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ormalization
of
<font class="first">A</font>ssumptions
and
<font class="first">G</font>uarantees
for
<font class="first">C</font>ompositional
<font class="first">N</font>oninterference
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Formalization of Assumptions and Guarantees for Compositional Noninterference</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Sylvia Grewe (grewe /at/ st /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Heiko Mantel (mantel /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de) and
Daniel Schoepe (daniel /at/ schoepe /dot/ org)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Research in information-flow security aims at developing methods to
+ <td class="abstract mathjax_process">Research in information-flow security aims at developing methods to
identify undesired information leaks within programs from private
(high) sources to public (low) sinks. For a concurrent system, it is
desirable to have compositional analysis methods that allow for
analyzing each thread independently and that nevertheless guarantee
that the parallel composition of successfully analyzed threads
satisfies a global security guarantee. However, such a compositional
analysis should not be overly pessimistic about what an environment
might do with shared resources. Otherwise, the analysis will reject
many intuitively secure programs.
<p>
The paper "Assumptions and Guarantees for Compositional
Noninterference" by Mantel et al. presents one solution for this problem:
an approach for compositionally reasoning about non-interference in
concurrent programs via rely-guarantee-style reasoning. We present an
-Isabelle/HOL formalization of the concepts and proofs of this approach.</div></td>
+Isabelle/HOL formalization of the concepts and proofs of this approach.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SIFUM_Type_Systems-AFP,
author = {Sylvia Grewe and Heiko Mantel and Daniel Schoepe},
title = {A Formalization of Assumptions and Guarantees for Compositional Noninterference},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/SIFUM_Type_Systems.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SIFUM_Type_Systems/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SIFUM_Type_Systems/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SIFUM_Type_Systems/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SIFUM_Type_Systems-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SIFUM_Type_Systems-2019-06-11.tar.gz">
afp-SIFUM_Type_Systems-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SIFUM_Type_Systems-2018-08-16.tar.gz">
afp-SIFUM_Type_Systems-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SIFUM_Type_Systems-2017-10-10.tar.gz">
afp-SIFUM_Type_Systems-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SIFUM_Type_Systems-2016-12-17.tar.gz">
afp-SIFUM_Type_Systems-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SIFUM_Type_Systems-2016-02-22.tar.gz">
afp-SIFUM_Type_Systems-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-SIFUM_Type_Systems-2015-05-27.tar.gz">
afp-SIFUM_Type_Systems-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-SIFUM_Type_Systems-2014-08-28.tar.gz">
afp-SIFUM_Type_Systems-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-SIFUM_Type_Systems-2014-04-25.tar.gz">
afp-SIFUM_Type_Systems-2014-04-25.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-SIFUM_Type_Systems-2014-04-24.tar.gz">
afp-SIFUM_Type_Systems-2014-04-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SPARCv8.html b/web/entries/SPARCv8.html
--- a/web/entries/SPARCv8.html
+++ b/web/entries/SPARCv8.html
@@ -1,245 +1,245 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
formal
model
for
the
<font class="first">S</font>PARCv8
<font class="first">I</font>SA
and
a
proof
of
non-interference
for
the
<font class="first">L</font>EON3
processor
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Zhe Hou (zhe /dot/ hou /at/ ntu /dot/ edu /dot/ sg),
David Sanan (sanan /at/ ntu /dot/ edu /dot/ sg),
Alwen Tiu (ATiu /at/ ntu /dot/ edu /dot/ sg) and
Yang Liu (yangliu /at/ ntu /dot/ edu /dot/ sg)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-10-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalise the SPARCv8 instruction set architecture (ISA) which is
used in processors such as LEON3. Our formalisation can be specialised
to any SPARCv8 CPU; here we use LEON3 as a running example. Our model
covers the operational semantics for all the instructions in the
integer unit of the SPARCv8 architecture and it supports Isabelle code
export, which effectively turns the Isabelle model into a SPARCv8 CPU
simulator. We prove the language-based non-interference property for
the LEON3 processor. Our model is based on a deterministic monad, which
-is a modified version of the non-deterministic monad from NICTA/l4v.</div></td>
+is a modified version of the non-deterministic monad from NICTA/l4v.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SPARCv8-AFP,
author = {Zhe Hou and David Sanan and Alwen Tiu and Yang Liu},
title = {A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor},
journal = {Archive of Formal Proofs},
month = oct,
year = 2016,
note = {\url{http://isa-afp.org/entries/SPARCv8.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SPARCv8/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SPARCv8/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SPARCv8/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SPARCv8-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SPARCv8-2019-06-11.tar.gz">
afp-SPARCv8-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SPARCv8-2018-08-16.tar.gz">
afp-SPARCv8-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SPARCv8-2017-10-10.tar.gz">
afp-SPARCv8-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SPARCv8-2016-12-17.tar.gz">
afp-SPARCv8-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SPARCv8-2016-10-19.tar.gz">
afp-SPARCv8-2016-10-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Safe_OCL.html b/web/entries/Safe_OCL.html
--- a/web/entries/Safe_OCL.html
+++ b/web/entries/Safe_OCL.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Safe OCL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>afe
<font class="first">O</font>CL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Safe OCL</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Denis Nikiforov
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-03-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>The theory is a formalization of the
<a href="https://www.omg.org/spec/OCL/">OCL</a> type system, its abstract
syntax and expression typing rules. The theory does not define a concrete
syntax and a semantics. In contrast to
<a href="https://www.isa-afp.org/entries/Featherweight_OCL.html">Featherweight OCL</a>,
it is based on a deep embedding approach. The type system is defined from scratch;
it is not based on the Isabelle HOL type system.</p>
<p>Safe OCL distinguishes between nullable and non-nullable types. The theory also gives a
formal definition of <a href="http://ceur-ws.org/Vol-1512/paper07.pdf">safe
navigation operations</a>. The Safe OCL typing rules are much stricter than the rules
given in the OCL specification, which allows one to catch more errors at the type
checking phase.</p>
<p>The type theory presented is four-layered: classes, basic types, generic types,
errorable types. We introduce the following new types: non-nullable types (T[1]),
nullable types (T[?]), OclSuper. OclSuper is a supertype of all other types (basic
types, collections, tuples). This type allows us to define a total supremum function,
so types form an upper semilattice. It allows us to define rich expression typing
rules in an elegant manner.</p>
<p>The Preliminaries Chapter of the theory defines a number of helper lemmas for
transitive closures and tuples. It also defines a generic object model independent
-from OCL. It allows one to use the theory as a reference for formalization of analogous languages.</p></div></td>
+from OCL. It allows one to use the theory as a reference for formalization of analogous languages.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Safe_OCL-AFP,
author = {Denis Nikiforov},
title = {Safe OCL},
journal = {Archive of Formal Proofs},
month = mar,
year = 2019,
note = {\url{http://isa-afp.org/entries/Safe_OCL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Safe_OCL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Safe_OCL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Safe_OCL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Safe_OCL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Safe_OCL-2019-06-11.tar.gz">
afp-Safe_OCL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Safe_OCL-2019-03-14.tar.gz">
afp-Safe_OCL-2019-03-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Saturation_Framework.html b/web/entries/Saturation_Framework.html
--- a/web/entries/Saturation_Framework.html
+++ b/web/entries/Saturation_Framework.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Comprehensive Framework for Saturation Theorem Proving - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">C</font>omprehensive
<font class="first">F</font>ramework
for
<font class="first">S</font>aturation
<font class="first">T</font>heorem
<font class="first">P</font>roving
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Comprehensive Framework for Saturation Theorem Proving</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.mpi-inf.mpg.de/departments/automation-of-logic/people/sophie-tourret/">Sophie Tourret</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-04-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This Isabelle/HOL formalization is the companion of the technical
report “A comprehensive framework for saturation theorem proving”,
itself the companion of the eponymous IJCAR 2020 paper, written by Uwe
Waldmann, Sophie Tourret, Simon Robillard and Jasmin Blanchette. It
verifies a framework for formal refutational completeness proofs of
abstract provers that implement saturation calculi, such as ordered
resolution or superposition, and allows one to model entire prover
architectures in such a way that the static refutational completeness
of a calculus immediately implies the dynamic refutational
completeness of a prover implementing the calculus using a variant of
the given clause loop. The technical report “A comprehensive
framework for saturation theorem proving” is available <a
href="http://matryoshka.gforge.inria.fr/pubs/satur_report.pdf">on
the Matryoshka website</a>. The names of the Isabelle lemmas and
theorems corresponding to the results in the report are indicated in
-the margin of the report.</div></td>
+the margin of the report.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Saturation_Framework-AFP,
author = {Sophie Tourret},
title = {A Comprehensive Framework for Saturation Theorem Proving},
journal = {Archive of Formal Proofs},
month = apr,
year = 2020,
note = {\url{http://isa-afp.org/entries/Saturation_Framework.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
- <td class="data"><a href="Ordered_Resolution_Prover.html">Ordered_Resolution_Prover</a>, <a href="Well_Quasi_Orders.html">Well_Quasi_Orders</a> </td></tr>
+ <td class="data"><a href="Lambda_Free_RPOs.html">Lambda_Free_RPOs</a>, <a href="Ordered_Resolution_Prover.html">Ordered_Resolution_Prover</a>, <a href="Well_Quasi_Orders.html">Well_Quasi_Orders</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Saturation_Framework/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Saturation_Framework/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Saturation_Framework/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Saturation_Framework-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Saturation_Framework-2020-04-10.tar.gz">
afp-Saturation_Framework-2020-04-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Secondary_Sylow.html b/web/entries/Secondary_Sylow.html
--- a/web/entries/Secondary_Sylow.html
+++ b/web/entries/Secondary_Sylow.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Secondary Sylow Theorems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>econdary
<font class="first">S</font>ylow
<font class="first">T</font>heorems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Secondary Sylow Theorems</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Jakob von Raumer (psxjv4 /at/ nottingham /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-01-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">These theories extend the existing proof of the first Sylow theorem
+ <td class="abstract mathjax_process">These theories extend the existing proof of the first Sylow theorem
(written by Florian Kammueller and L. C. Paulson) by what are often
called the second, third and fourth Sylow theorems. These theorems
state propositions about the number of Sylow p-subgroups of a group
and the fact that they are conjugate to each other. The proofs make
-use of an implementation of group actions and their properties.</div></td>
+use of an implementation of group actions and their properties.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Secondary_Sylow-AFP,
author = {Jakob von Raumer},
title = {Secondary Sylow Theorems},
journal = {Archive of Formal Proofs},
month = jan,
year = 2014,
note = {\url{http://isa-afp.org/entries/Secondary_Sylow.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Jordan_Hoelder.html">Jordan_Hoelder</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Secondary_Sylow/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Secondary_Sylow/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Secondary_Sylow/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Secondary_Sylow-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Secondary_Sylow-2019-06-11.tar.gz">
afp-Secondary_Sylow-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Secondary_Sylow-2018-08-16.tar.gz">
afp-Secondary_Sylow-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Secondary_Sylow-2017-10-10.tar.gz">
afp-Secondary_Sylow-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Secondary_Sylow-2016-12-17.tar.gz">
afp-Secondary_Sylow-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Secondary_Sylow-2016-02-22.tar.gz">
afp-Secondary_Sylow-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Secondary_Sylow-2015-05-27.tar.gz">
afp-Secondary_Sylow-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Secondary_Sylow-2014-08-28.tar.gz">
afp-Secondary_Sylow-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Secondary_Sylow-2014-01-29.tar.gz">
afp-Secondary_Sylow-2014-01-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Security_Protocol_Refinement.html b/web/entries/Security_Protocol_Refinement.html
--- a/web/entries/Security_Protocol_Refinement.html
+++ b/web/entries/Security_Protocol_Refinement.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Developing Security Protocols by Refinement - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">D</font>eveloping
<font class="first">S</font>ecurity
<font class="first">P</font>rotocols
by
<font class="first">R</font>efinement
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Developing Security Protocols by Refinement</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christoph Sprenger (sprenger /at/ inf /dot/ ethz /dot/ ch) and
Ivano Somaini
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We propose a development method for security protocols based on
stepwise refinement. Our refinement strategy transforms abstract
security goals into protocols that are secure when operating over an
insecure channel controlled by a Dolev-Yao-style intruder. As
intermediate levels of abstraction, we employ messageless guard
protocols and channel protocols communicating over channels with
security properties. These abstractions provide insights on why
protocols are secure and foster the development of families of
protocols sharing common structure and properties. We have implemented
our method in Isabelle/HOL and used it to develop different entity
authentication and key establishment protocols, including realistic
features such as key confirmation, replay caches, and encrypted
tickets. Our development highlights that guard protocols and channel
protocols provide fundamental abstractions for bridging the gap
between security properties and standard protocol descriptions based
on cryptographic messages. It also shows that our refinement approach
-scales to protocols of nontrivial size and complexity.</div></td>
+scales to protocols of nontrivial size and complexity.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Security_Protocol_Refinement-AFP,
author = {Christoph Sprenger and Ivano Somaini},
title = {Developing Security Protocols by Refinement},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Security_Protocol_Refinement.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Security_Protocol_Refinement/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Security_Protocol_Refinement/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Security_Protocol_Refinement/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Security_Protocol_Refinement-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Security_Protocol_Refinement-2019-06-11.tar.gz">
afp-Security_Protocol_Refinement-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Security_Protocol_Refinement-2018-08-16.tar.gz">
afp-Security_Protocol_Refinement-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Security_Protocol_Refinement-2017-10-10.tar.gz">
afp-Security_Protocol_Refinement-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Security_Protocol_Refinement-2017-05-25.tar.gz">
afp-Security_Protocol_Refinement-2017-05-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Selection_Heap_Sort.html b/web/entries/Selection_Heap_Sort.html
--- a/web/entries/Selection_Heap_Sort.html
+++ b/web/entries/Selection_Heap_Sort.html
@@ -1,249 +1,249 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verification of Selection and Heap Sort Using Locales - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erification
of
<font class="first">S</font>election
and
<font class="first">H</font>eap
<font class="first">S</font>ort
<font class="first">U</font>sing
<font class="first">L</font>ocales
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verification of Selection and Heap Sort Using Locales</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.matf.bg.ac.rs/~danijela">Danijela Petrovic</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-02-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Stepwise program refinement techniques can be used to simplify
program verification. Programs are better understood since their
main properties are clearly stated, and verification of rather
complex algorithms is reduced to proving simple statements
connecting successive program specifications. Additionally, it is
easy to analyze similar algorithms and to compare their properties
within a single formalization. Usually, formal analysis is not done
in an educational setting due to the complexity of verification and a lack
of tools and procedures to make comparison easy. Verification of an
algorithm should not only give a correctness proof, but also a better
understanding of the algorithm. If the verification is based on small
step program refinement, it can become simple enough to be
demonstrated within the university-level computer science
curriculum. In this paper we demonstrate this and give a formal
analysis of two well-known algorithms (Selection Sort and Heap Sort)
using the proof assistant Isabelle/HOL and program refinement
-techniques.</div></td>
+techniques.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Selection_Heap_Sort-AFP,
author = {Danijela Petrovic},
title = {Verification of Selection and Heap Sort Using Locales},
journal = {Archive of Formal Proofs},
month = feb,
year = 2014,
note = {\url{http://isa-afp.org/entries/Selection_Heap_Sort.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Selection_Heap_Sort/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Selection_Heap_Sort/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Selection_Heap_Sort/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Selection_Heap_Sort-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Selection_Heap_Sort-2019-06-11.tar.gz">
afp-Selection_Heap_Sort-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Selection_Heap_Sort-2018-08-16.tar.gz">
afp-Selection_Heap_Sort-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Selection_Heap_Sort-2017-10-10.tar.gz">
afp-Selection_Heap_Sort-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Selection_Heap_Sort-2016-12-17.tar.gz">
afp-Selection_Heap_Sort-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Selection_Heap_Sort-2016-02-22.tar.gz">
afp-Selection_Heap_Sort-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Selection_Heap_Sort-2015-05-27.tar.gz">
afp-Selection_Heap_Sort-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Selection_Heap_Sort-2014-08-28.tar.gz">
afp-Selection_Heap_Sort-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Selection_Heap_Sort-2014-02-18.tar.gz">
afp-Selection_Heap_Sort-2014-02-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SenSocialChoice.html b/web/entries/SenSocialChoice.html
--- a/web/entries/SenSocialChoice.html
+++ b/web/entries/SenSocialChoice.html
@@ -1,280 +1,280 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Some classical results in Social Choice Theory - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>ome
classical
results
in
<font class="first">S</font>ocial
<font class="first">C</font>hoice
<font class="first">T</font>heory
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Some classical results in Social Choice Theory</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-11-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Drawing on Sen's landmark work "Collective Choice and Social Welfare" (1970), this development proves Arrow's General Possibility Theorem, Sen's Liberal Paradox and May's Theorem in a general setting. The goal was to make precise the classical statements and proofs of these results, and to provide a foundation for more recent results such as the Gibbard-Satterthwaite and Duggan-Schwartz theorems.</div></td>
+ <td class="abstract mathjax_process">Drawing on Sen's landmark work "Collective Choice and Social Welfare" (1970), this development proves Arrow's General Possibility Theorem, Sen's Liberal Paradox and May's Theorem in a general setting. The goal was to make precise the classical statements and proofs of these results, and to provide a foundation for more recent results such as the Gibbard-Satterthwaite and Duggan-Schwartz theorems.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SenSocialChoice-AFP,
author = {Peter Gammie},
title = {Some classical results in Social Choice Theory},
journal = {Archive of Formal Proofs},
month = nov,
year = 2008,
note = {\url{http://isa-afp.org/entries/SenSocialChoice.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SenSocialChoice/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SenSocialChoice/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SenSocialChoice/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SenSocialChoice-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SenSocialChoice-2019-06-11.tar.gz">
afp-SenSocialChoice-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SenSocialChoice-2018-08-16.tar.gz">
afp-SenSocialChoice-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SenSocialChoice-2017-10-10.tar.gz">
afp-SenSocialChoice-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SenSocialChoice-2016-12-17.tar.gz">
afp-SenSocialChoice-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SenSocialChoice-2016-02-22.tar.gz">
afp-SenSocialChoice-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-SenSocialChoice-2015-05-27.tar.gz">
afp-SenSocialChoice-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-SenSocialChoice-2014-08-28.tar.gz">
afp-SenSocialChoice-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-SenSocialChoice-2013-12-11.tar.gz">
afp-SenSocialChoice-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-SenSocialChoice-2013-11-17.tar.gz">
afp-SenSocialChoice-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-SenSocialChoice-2013-02-16.tar.gz">
afp-SenSocialChoice-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-SenSocialChoice-2012-05-24.tar.gz">
afp-SenSocialChoice-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-SenSocialChoice-2012-03-15.tar.gz">
afp-SenSocialChoice-2012-03-15.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-SenSocialChoice-2011-10-11.tar.gz">
afp-SenSocialChoice-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-SenSocialChoice-2011-02-11.tar.gz">
afp-SenSocialChoice-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-SenSocialChoice-2010-07-01.tar.gz">
afp-SenSocialChoice-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-SenSocialChoice-2009-12-12.tar.gz">
afp-SenSocialChoice-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-SenSocialChoice-2009-04-29.tar.gz">
afp-SenSocialChoice-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-SenSocialChoice-2008-11-17.tar.gz">
afp-SenSocialChoice-2008-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Separata.html b/web/entries/Separata.html
--- a/web/entries/Separata.html
+++ b/web/entries/Separata.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Separata: Isabelle tactics for Separation Algebra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>eparata:
<font class="first">I</font>sabelle
tactics
for
<font class="first">S</font>eparation
<font class="first">A</font>lgebra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Separata: Isabelle tactics for Separation Algebra</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Zhe Hou (zhe /dot/ hou /at/ ntu /dot/ edu /dot/ sg),
David Sanan (sanan /at/ ntu /dot/ edu /dot/ sg),
Alwen Tiu (ATiu /at/ ntu /dot/ edu /dot/ sg),
Rajeev Gore (rajeev /dot/ gore /at/ anu /dot/ edu /dot/ au) and
Ranald Clouston (ranald /dot/ clouston /at/ cs /dot/ au /dot/ dk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-11-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We bring the labelled sequent calculus $LS_{PASL}$ for propositional
abstract separation logic to Isabelle. The tactics given here are
directly applied to an extension of the Separation Algebra in the AFP.
In addition to the cancellative separation algebra, we further
consider some useful properties in the heap model of separation logic,
such as indivisible unit, disjointness, and cross-split. The tactics
are essentially a proof search procedure for the calculus $LS_{PASL}$.
We wrap the tactics in an Isabelle method called separata, and give a
few examples of separation logic formulae which are provable by
-separata.</div></td>
+separata.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Separata-AFP,
author = {Zhe Hou and David Sanan and Alwen Tiu and Rajeev Gore and Ranald Clouston},
title = {Separata: Isabelle tactics for Separation Algebra},
journal = {Archive of Formal Proofs},
month = nov,
year = 2016,
note = {\url{http://isa-afp.org/entries/Separata.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Separation_Algebra.html">Separation_Algebra</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Separata/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Separata/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Separata/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Separata-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Separata-2019-06-11.tar.gz">
afp-Separata-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Separata-2018-08-16.tar.gz">
afp-Separata-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Separata-2017-10-10.tar.gz">
afp-Separata-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Separata-2016-12-17.tar.gz">
afp-Separata-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Separata-2016-11-17.tar.gz">
afp-Separata-2016-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Separation_Algebra.html b/web/entries/Separation_Algebra.html
--- a/web/entries/Separation_Algebra.html
+++ b/web/entries/Separation_Algebra.html
@@ -1,244 +1,244 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Separation Algebra - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>eparation
<font class="first">A</font>lgebra
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Separation Algebra</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.cse.unsw.edu.au/~kleing/">Gerwin Klein</a>,
Rafal Kolanski and
Andrew Boyton (andrew /dot/ boyton /at/ nicta /dot/ com /dot/ au)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-05-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present a generic type class implementation of separation algebra for Isabelle/HOL as well as lemmas and generic tactics which can be used directly for any instantiation of the type class. <P> The ex directory contains example instantiations that include structures such as a heap or virtual memory. <P> The abstract separation algebra is based upon "Abstract Separation Logic" by Calcagno et al. These theories are also the basis of the ITP 2012 rough diamond "Mechanised Separation Algebra" by the authors. <P> The aim of this work is to support and significantly reduce the effort for future separation logic developments in Isabelle/HOL by factoring out the part of separation logic that can be treated abstractly once and for all. This includes developing typical default rule sets for reasoning as well as automated tactic support for separation logic.</div></td>
+ <td class="abstract mathjax_process">We present a generic type class implementation of separation algebra for Isabelle/HOL as well as lemmas and generic tactics which can be used directly for any instantiation of the type class. <P> The ex directory contains example instantiations that include structures such as a heap or virtual memory. <P> The abstract separation algebra is based upon "Abstract Separation Logic" by Calcagno et al. These theories are also the basis of the ITP 2012 rough diamond "Mechanised Separation Algebra" by the authors. <P> The aim of this work is to support and significantly reduce the effort for future separation logic developments in Isabelle/HOL by factoring out the part of separation logic that can be treated abstractly once and for all. This includes developing typical default rule sets for reasoning as well as automated tactic support for separation logic.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Separation_Algebra-AFP,
author = {Gerwin Klein and Rafal Kolanski and Andrew Boyton},
title = {Separation Algebra},
journal = {Archive of Formal Proofs},
month = may,
year = 2012,
note = {\url{http://isa-afp.org/entries/Separation_Algebra.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Hoare_Time.html">Hoare_Time</a>, <a href="Separata.html">Separata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Separation_Algebra/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Separation_Algebra/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Separation_Algebra/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Separation_Algebra-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Separation_Algebra-2019-06-11.tar.gz">
afp-Separation_Algebra-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Separation_Algebra-2018-08-16.tar.gz">
afp-Separation_Algebra-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Separation_Algebra-2017-10-10.tar.gz">
afp-Separation_Algebra-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Separation_Algebra-2016-12-17.tar.gz">
afp-Separation_Algebra-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Separation_Algebra-2016-02-22.tar.gz">
afp-Separation_Algebra-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Separation_Algebra-2015-05-27.tar.gz">
afp-Separation_Algebra-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Separation_Algebra-2014-08-28.tar.gz">
afp-Separation_Algebra-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Separation_Algebra-2013-12-11.tar.gz">
afp-Separation_Algebra-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Separation_Algebra-2013-11-17.tar.gz">
afp-Separation_Algebra-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Separation_Algebra-2013-02-16.tar.gz">
afp-Separation_Algebra-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Separation_Algebra-2012-05-24.tar.gz">
afp-Separation_Algebra-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Separation_Algebra-2012-05-11.tar.gz">
afp-Separation_Algebra-2012-05-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Separation_Logic_Imperative_HOL.html b/web/entries/Separation_Logic_Imperative_HOL.html
--- a/web/entries/Separation_Logic_Imperative_HOL.html
+++ b/web/entries/Separation_Logic_Imperative_HOL.html
@@ -1,267 +1,267 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Separation Logic Framework for Imperative HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">S</font>eparation
<font class="first">L</font>ogic
<font class="first">F</font>ramework
for
<font class="first">I</font>mperative
<font class="first">H</font>OL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Separation Logic Framework for Imperative HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
Rene Meis (rene /dot/ meis /at/ uni-due /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-11-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We provide a framework for separation-logic based correctness proofs of
Imperative HOL programs. Our framework comes with a set of proof methods to
automate canonical tasks such as verification condition generation and
frame inference. Moreover, we provide a set of examples that show the
applicability of our framework. The examples include algorithms on lists,
hash-tables, and union-find trees. We also provide abstract interfaces for
lists, maps, and sets that allow one to develop generic imperative algorithms
and use data-refinement techniques.
<br>
As we target Imperative HOL, our programs can be translated to
efficiently executable code in various target languages, including
-ML, OCaml, Haskell, and Scala.</div></td>
+ML, OCaml, Haskell, and Scala.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Separation_Logic_Imperative_HOL-AFP,
author = {Peter Lammich and Rene Meis},
title = {A Separation Logic Framework for Imperative HOL},
journal = {Archive of Formal Proofs},
month = nov,
year = 2012,
note = {\url{http://isa-afp.org/entries/Separation_Logic_Imperative_HOL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a>, <a href="Collections.html">Collections</a>, <a href="Native_Word.html">Native_Word</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="UpDown_Scheme.html">UpDown_Scheme</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Separation_Logic_Imperative_HOL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Separation_Logic_Imperative_HOL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Separation_Logic_Imperative_HOL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Separation_Logic_Imperative_HOL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2019-06-11.tar.gz">
afp-Separation_Logic_Imperative_HOL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2018-08-16.tar.gz">
afp-Separation_Logic_Imperative_HOL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2017-10-10.tar.gz">
afp-Separation_Logic_Imperative_HOL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2016-12-17.tar.gz">
afp-Separation_Logic_Imperative_HOL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2016-02-22.tar.gz">
afp-Separation_Logic_Imperative_HOL-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2015-05-27.tar.gz">
afp-Separation_Logic_Imperative_HOL-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2014-08-28.tar.gz">
afp-Separation_Logic_Imperative_HOL-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2013-12-11.tar.gz">
afp-Separation_Logic_Imperative_HOL-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2013-11-17.tar.gz">
afp-Separation_Logic_Imperative_HOL-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2013-03-02.tar.gz">
afp-Separation_Logic_Imperative_HOL-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2013-02-16.tar.gz">
afp-Separation_Logic_Imperative_HOL-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Separation_Logic_Imperative_HOL-2012-11-15.tar.gz">
afp-Separation_Logic_Imperative_HOL-2012-11-15.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SequentInvertibility.html b/web/entries/SequentInvertibility.html
--- a/web/entries/SequentInvertibility.html
+++ b/web/entries/SequentInvertibility.html
@@ -1,264 +1,264 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Invertibility in Sequent Calculi - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>nvertibility
in
<font class="first">S</font>equent
<font class="first">C</font>alculi
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Invertibility in Sequent Calculi</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Chapman
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-08-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The invertibility of the rules of a sequent calculus is important for guiding proof search and can be used in some formalised proofs of Cut admissibility. We present sufficient conditions for when a rule is invertible with respect to a calculus. We illustrate the conditions with examples. It must be noted we give purely syntactic criteria; no guarantees are given as to the suitability of the rules.</div></td>
+ <td class="abstract mathjax_process">The invertibility of the rules of a sequent calculus is important for guiding proof search and can be used in some formalised proofs of Cut admissibility. We present sufficient conditions for when a rule is invertible with respect to a calculus. We illustrate the conditions with examples. It must be noted we give purely syntactic criteria; no guarantees are given as to the suitability of the rules.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SequentInvertibility-AFP,
author = {Peter Chapman},
title = {Invertibility in Sequent Calculi},
journal = {Archive of Formal Proofs},
month = aug,
year = 2009,
note = {\url{http://isa-afp.org/entries/SequentInvertibility.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SequentInvertibility/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SequentInvertibility/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SequentInvertibility/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SequentInvertibility-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SequentInvertibility-2019-06-11.tar.gz">
afp-SequentInvertibility-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SequentInvertibility-2018-08-16.tar.gz">
afp-SequentInvertibility-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SequentInvertibility-2017-10-10.tar.gz">
afp-SequentInvertibility-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SequentInvertibility-2016-12-17.tar.gz">
afp-SequentInvertibility-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SequentInvertibility-2016-02-22.tar.gz">
afp-SequentInvertibility-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-SequentInvertibility-2015-05-27.tar.gz">
afp-SequentInvertibility-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-SequentInvertibility-2014-08-28.tar.gz">
afp-SequentInvertibility-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-SequentInvertibility-2013-12-11.tar.gz">
afp-SequentInvertibility-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-SequentInvertibility-2013-11-17.tar.gz">
afp-SequentInvertibility-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-SequentInvertibility-2013-02-16.tar.gz">
afp-SequentInvertibility-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-SequentInvertibility-2012-05-24.tar.gz">
afp-SequentInvertibility-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-SequentInvertibility-2011-10-11.tar.gz">
afp-SequentInvertibility-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-SequentInvertibility-2011-02-11.tar.gz">
afp-SequentInvertibility-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-SequentInvertibility-2010-07-01.tar.gz">
afp-SequentInvertibility-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-SequentInvertibility-2009-12-12.tar.gz">
afp-SequentInvertibility-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-SequentInvertibility-2009-09-01.tar.gz">
afp-SequentInvertibility-2009-09-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Shivers-CFA.html b/web/entries/Shivers-CFA.html
--- a/web/entries/Shivers-CFA.html
+++ b/web/entries/Shivers-CFA.html
@@ -1,264 +1,264 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Shivers' Control Flow Analysis - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>hivers'
<font class="first">C</font>ontrol
<font class="first">F</font>low
<font class="first">A</font>nalysis
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Shivers' Control Flow Analysis</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Joachim Breitner (joachim /at/ cis /dot/ upenn /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-11-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In his dissertation, Olin Shivers introduces a concept of control flow graphs
for functional languages, provides an algorithm to statically derive a safe
approximation of the control flow graph and proves this algorithm correct. In
this research project, Shivers' algorithms and proofs are formalized
-in the HOLCF extension of HOL.</div></td>
+in the HOLCF extension of HOL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Shivers-CFA-AFP,
author = {Joachim Breitner},
title = {Shivers' Control Flow Analysis},
journal = {Archive of Formal Proofs},
month = nov,
year = 2010,
note = {\url{http://isa-afp.org/entries/Shivers-CFA.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Shivers-CFA/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Shivers-CFA/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Shivers-CFA/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Shivers-CFA-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Shivers-CFA-2019-06-11.tar.gz">
afp-Shivers-CFA-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Shivers-CFA-2018-08-16.tar.gz">
afp-Shivers-CFA-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Shivers-CFA-2017-10-10.tar.gz">
afp-Shivers-CFA-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Shivers-CFA-2016-12-17.tar.gz">
afp-Shivers-CFA-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Shivers-CFA-2016-02-22.tar.gz">
afp-Shivers-CFA-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Shivers-CFA-2015-05-27.tar.gz">
afp-Shivers-CFA-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Shivers-CFA-2014-08-28.tar.gz">
afp-Shivers-CFA-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Shivers-CFA-2013-12-11.tar.gz">
afp-Shivers-CFA-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Shivers-CFA-2013-11-17.tar.gz">
afp-Shivers-CFA-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Shivers-CFA-2013-02-16.tar.gz">
afp-Shivers-CFA-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Shivers-CFA-2012-05-24.tar.gz">
afp-Shivers-CFA-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Shivers-CFA-2011-10-11.tar.gz">
afp-Shivers-CFA-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Shivers-CFA-2011-02-11.tar.gz">
afp-Shivers-CFA-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Shivers-CFA-2010-11-18.tar.gz">
afp-Shivers-CFA-2010-11-18.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Shivers-CFA-2010-11-17.tar.gz">
afp-Shivers-CFA-2010-11-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/ShortestPath.html b/web/entries/ShortestPath.html
--- a/web/entries/ShortestPath.html
+++ b/web/entries/ShortestPath.html
@@ -1,246 +1,246 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An Axiomatic Characterization of the Single-Source Shortest Path Problem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>n
<font class="first">A</font>xiomatic
<font class="first">C</font>haracterization
of
the
<font class="first">S</font>ingle-Source
<font class="first">S</font>hortest
<font class="first">P</font>ath
<font class="first">P</font>roblem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">An Axiomatic Characterization of the Single-Source Shortest Path Problem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Christine Rizkallah
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-05-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This theory is split into two sections. In the first section, we give a formal proof that a well-known axiomatic characterization of the single-source shortest path problem is correct. Namely, we prove that in a directed graph with a non-negative cost function on the edges the single-source shortest path function is the only function that satisfies a set of four axioms. In the second section, we give a formal proof of the correctness of an axiomatic characterization of the single-source shortest path problem for directed graphs with general cost functions. The axioms here are more involved because we have to account for potential negative cycles in the graph. The axioms are summarized in three Isabelle locales.</div></td>
+ <td class="abstract mathjax_process">This theory is split into two sections. In the first section, we give a formal proof that a well-known axiomatic characterization of the single-source shortest path problem is correct. Namely, we prove that in a directed graph with a non-negative cost function on the edges the single-source shortest path function is the only function that satisfies a set of four axioms. In the second section, we give a formal proof of the correctness of an axiomatic characterization of the single-source shortest path problem for directed graphs with general cost functions. The axioms here are more involved because we have to account for potential negative cycles in the graph. The axioms are summarized in three Isabelle locales.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{ShortestPath-AFP,
author = {Christine Rizkallah},
title = {An Axiomatic Characterization of the Single-Source Shortest Path Problem},
journal = {Archive of Formal Proofs},
month = may,
year = 2013,
note = {\url{http://isa-afp.org/entries/ShortestPath.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Graph_Theory.html">Graph_Theory</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ShortestPath/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/ShortestPath/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ShortestPath/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-ShortestPath-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-ShortestPath-2019-06-11.tar.gz">
afp-ShortestPath-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-ShortestPath-2018-08-16.tar.gz">
afp-ShortestPath-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-ShortestPath-2017-10-10.tar.gz">
afp-ShortestPath-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-ShortestPath-2016-12-17.tar.gz">
afp-ShortestPath-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-ShortestPath-2016-02-22.tar.gz">
afp-ShortestPath-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-ShortestPath-2015-05-27.tar.gz">
afp-ShortestPath-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-ShortestPath-2014-08-28.tar.gz">
afp-ShortestPath-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-ShortestPath-2013-12-11.tar.gz">
afp-ShortestPath-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-ShortestPath-2013-11-17.tar.gz">
afp-ShortestPath-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-ShortestPath-2013-05-30.tar.gz">
afp-ShortestPath-2013-05-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Show.html b/web/entries/Show.html
--- a/web/entries/Show.html
+++ b/web/entries/Show.html
@@ -1,242 +1,242 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Haskell's Show Class in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">H</font>askell's
<font class="first">S</font>how
<font class="first">C</font>lass
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Haskell's Show Class in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-07-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We implemented a type class for "to-string" functions, similar to
Haskell's Show class. Moreover, we provide instantiations for Isabelle/HOL's
standard types like bool, prod, sum, nats, ints, and rats. It is further
possible to automatically derive show functions for arbitrary user-defined
-datatypes similar to Haskell's "deriving Show".</div></td>
+datatypes similar to Haskell's "deriving Show".</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2015-03-11]: Adapted development to new-style (BNF-based) datatypes.<br>
[2015-04-10]: Moved development for old-style datatypes into subdirectory
"Old_Datatype".<br></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Show-AFP,
author = {Christian Sternagel and René Thiemann},
title = {Haskell's Show Class in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = jul,
year = 2014,
note = {\url{http://isa-afp.org/entries/Show.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Deriving.html">Deriving</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Affine_Arithmetic.html">Affine_Arithmetic</a>, <a href="CakeML.html">CakeML</a>, <a href="CakeML_Codegen.html">CakeML_Codegen</a>, <a href="Certification_Monads.html">Certification_Monads</a>, <a href="Dict_Construction.html">Dict_Construction</a>, <a href="Monad_Memo_DP.html">Monad_Memo_DP</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a>, <a href="Polynomials.html">Polynomials</a>, <a href="Real_Impl.html">Real_Impl</a>, <a href="XML.html">XML</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Show/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Show/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Show/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Show-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Show-2019-06-11.tar.gz">
afp-Show-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Show-2018-08-16.tar.gz">
afp-Show-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Show-2017-10-10.tar.gz">
afp-Show-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Show-2016-12-17.tar.gz">
afp-Show-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Show-2016-02-22.tar.gz">
afp-Show-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Show-2015-05-27.tar.gz">
afp-Show-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Show-2014-08-29.tar.gz">
afp-Show-2014-08-29.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Show-2014-08-28.tar.gz">
afp-Show-2014-08-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Sigma_Commit_Crypto.html b/web/entries/Sigma_Commit_Crypto.html
--- a/web/entries/Sigma_Commit_Crypto.html
+++ b/web/entries/Sigma_Commit_Crypto.html
@@ -1,208 +1,208 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Sigma Protocols and Commitment Schemes - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>igma
<font class="first">P</font>rotocols
and
<font class="first">C</font>ommitment
<font class="first">S</font>chemes
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Sigma Protocols and Commitment Schemes</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.turing.ac.uk/people/doctoral-students/david-butler">David Butler</a> and
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-10-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We use CryptHOL to formalise commitment schemes and Sigma-protocols.
Both are widely used, fundamental two-party cryptographic primitives.
Security for commitment schemes is considered using game-based
definitions whereas the security of Sigma-protocols is considered
using both the game-based and simulation-based security paradigms. In
this work, we first define security for both primitives and then prove
secure multiple case studies: the Schnorr, Chaum-Pedersen and
Okamoto Sigma-protocols as well as a construction that allows for
compound (AND and OR statements) Sigma-protocols and the Pedersen and
Rivest commitment schemes. We also prove that commitment schemes can
be constructed from Sigma-protocols. We formalise this proof at an
abstract level, only assuming the existence of a Sigma-protocol;
consequently, the instantiations of this result for the concrete
-Sigma-protocols we consider come for free.</div></td>
+Sigma-protocols we consider come for free.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Sigma_Commit_Crypto-AFP,
author = {David Butler and Andreas Lochbihler},
title = {Sigma Protocols and Commitment Schemes},
journal = {Archive of Formal Proofs},
month = oct,
year = 2019,
note = {\url{http://isa-afp.org/entries/Sigma_Commit_Crypto.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="CryptHOL.html">CryptHOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sigma_Commit_Crypto/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Sigma_Commit_Crypto/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sigma_Commit_Crypto/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Sigma_Commit_Crypto-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Sigma_Commit_Crypto-2019-10-08.tar.gz">
afp-Sigma_Commit_Crypto-2019-10-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Signature_Groebner.html b/web/entries/Signature_Groebner.html
--- a/web/entries/Signature_Groebner.html
+++ b/web/entries/Signature_Groebner.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Signature-Based Gröbner Basis Algorithms - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>ignature-Based
<font class="first">G</font>röbner
<font class="first">B</font>asis
<font class="first">A</font>lgorithms
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Signature-Based Gröbner Basis Algorithms</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-09-20</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article formalizes signature-based algorithms for computing
Gr&ouml;bner bases. Such algorithms are, in general, superior to
other algorithms in terms of efficiency, and have not been formalized
in any proof assistant so far. The present development is both
generic, in the sense that most known variants of signature-based
algorithms are covered by it, and effectively executable on concrete
input thanks to Isabelle's code generator. Sample computations of
benchmark problems show that the verified implementation of
signature-based algorithms indeed outperforms the existing
implementation of Buchberger's algorithm in Isabelle/HOL.</p>
<p>Besides total correctness of the algorithms, the article also proves
that under certain conditions they a-priori detect and avoid all
useless zero-reductions, and always return 'minimal' (in
some sense) Gr&ouml;bner bases if an input parameter is chosen in
the right way.</p><p>The formalization follows the recent survey article by
-Eder and Faug&egrave;re.</p></div></td>
+Eder and Faug&egrave;re.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Signature_Groebner-AFP,
author = {Alexander Maletzky},
title = {Signature-Based Gröbner Basis Algorithms},
journal = {Archive of Formal Proofs},
month = sep,
year = 2018,
note = {\url{http://isa-afp.org/entries/Signature_Groebner.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Groebner_Bases.html">Groebner_Bases</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Signature_Groebner/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Signature_Groebner/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Signature_Groebner/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Signature_Groebner-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Signature_Groebner-2019-06-11.tar.gz">
afp-Signature_Groebner-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Signature_Groebner-2018-09-20.tar.gz">
afp-Signature_Groebner-2018-09-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Simpl.html b/web/entries/Simpl.html
--- a/web/entries/Simpl.html
+++ b/web/entries/Simpl.html
@@ -1,297 +1,297 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">S</font>equential
<font class="first">I</font>mperative
<font class="first">P</font>rogramming
<font class="first">L</font>anguage
<font class="first">S</font>yntax,
<font class="first">S</font>emantics,
<font class="first">H</font>oare
<font class="first">L</font>ogics
and
<font class="first">V</font>erification
<font class="first">E</font>nvironment
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Norbert Schirmer (norbert /dot/ schirmer /at/ web /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-02-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We present the theory of Simpl, a sequential imperative programming language. We introduce its syntax, its semantics (big and small-step operational semantics) and Hoare logics for both partial as well as total correctness. We prove soundness and completeness of the Hoare logic. We integrate and automate the Hoare logic in Isabelle/HOL to obtain a practically usable verification environment for imperative programs. Simpl is independent of a concrete programming language but expressive enough to cover all common language features: mutually recursive procedures, abrupt termination and exceptions, runtime faults, local and global variables, pointers and heap, expressions with side effects, pointers to procedures, partial application and closures, dynamic method invocation and also unbounded nondeterminism.</div></td>
+ <td class="abstract mathjax_process">We present the theory of Simpl, a sequential imperative programming language. We introduce its syntax, its semantics (big and small-step operational semantics) and Hoare logics for both partial as well as total correctness. We prove soundness and completeness of the Hoare logic. We integrate and automate the Hoare logic in Isabelle/HOL to obtain a practically usable verification environment for imperative programs. Simpl is independent of a concrete programming language but expressive enough to cover all common language features: mutually recursive procedures, abrupt termination and exceptions, runtime faults, local and global variables, pointers and heap, expressions with side effects, pointers to procedures, partial application and closures, dynamic method invocation and also unbounded nondeterminism.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Simpl-AFP,
author = {Norbert Schirmer},
title = {A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment},
journal = {Archive of Formal Proofs},
month = feb,
year = 2008,
note = {\url{http://isa-afp.org/entries/Simpl.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="BDD.html">BDD</a>, <a href="Planarity_Certificates.html">Planarity_Certificates</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Simpl/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Simpl/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Simpl/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Simpl-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Simpl-2019-06-11.tar.gz">
afp-Simpl-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Simpl-2018-08-16.tar.gz">
afp-Simpl-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Simpl-2017-10-10.tar.gz">
afp-Simpl-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Simpl-2016-12-17.tar.gz">
afp-Simpl-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Simpl-2016-02-22.tar.gz">
afp-Simpl-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Simpl-2015-05-27.tar.gz">
afp-Simpl-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Simpl-2014-08-28.tar.gz">
afp-Simpl-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Simpl-2013-12-11.tar.gz">
afp-Simpl-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Simpl-2013-11-17.tar.gz">
afp-Simpl-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Simpl-2013-02-16.tar.gz">
afp-Simpl-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Simpl-2012-05-24.tar.gz">
afp-Simpl-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Simpl-2011-10-11.tar.gz">
afp-Simpl-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Simpl-2011-02-11.tar.gz">
afp-Simpl-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Simpl-2010-07-01.tar.gz">
afp-Simpl-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Simpl-2009-12-12.tar.gz">
afp-Simpl-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Simpl-2009-09-12.tar.gz">
afp-Simpl-2009-09-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Simpl-2009-04-29.tar.gz">
afp-Simpl-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Simpl-2008-06-10.tar.gz">
afp-Simpl-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Simpl-2008-03-07.tar.gz">
afp-Simpl-2008-03-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Simple_Firewall.html b/web/entries/Simple_Firewall.html
--- a/web/entries/Simple_Firewall.html
+++ b/web/entries/Simple_Firewall.html
@@ -1,226 +1,226 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Simple Firewall - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>imple
<font class="first">F</font>irewall
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Simple Firewall</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>,
<a href="http://liftm.de">Julius Michaelis</a> and
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Maximilian Haslbeck</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-08-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a simple model of a firewall. The firewall can accept or
drop a packet and can match on interfaces, IP addresses, protocol, and
ports. It was designed to feature nice mathematical properties: The
type of match expressions was carefully crafted such that the
conjunction of two match expressions is only one match expression.
This model is too simplistic to mirror all aspects of the real world.
In the upcoming entry "Iptables Semantics", we will translate the
Linux firewall iptables to this model. For a fixed service (e.g. ssh,
http), we provide an algorithm to compute an overview of the
firewall's filtering behavior. The algorithm computes minimal service
matrices, i.e. graphs which partition the complete IPv4 and IPv6
address space and visualize the allowed accesses between partitions.
For a detailed description, see
<a href="http://dl.ifip.org/db/conf/networking/networking2016/1570232858.pdf">Verified iptables Firewall
-Analysis</a>, IFIP Networking 2016.</div></td>
+Analysis</a>, IFIP Networking 2016.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Simple_Firewall-AFP,
author = {Cornelius Diekmann and Julius Michaelis and Maximilian Haslbeck},
title = {Simple Firewall},
journal = {Archive of Formal Proofs},
month = aug,
year = 2016,
note = {\url{http://isa-afp.org/entries/Simple_Firewall.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="IP_Addresses.html">IP_Addresses</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Routing.html">Routing</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Simple_Firewall/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Simple_Firewall/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Simple_Firewall/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Simple_Firewall-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Simple_Firewall-2019-06-11.tar.gz">
afp-Simple_Firewall-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Simple_Firewall-2018-08-16.tar.gz">
afp-Simple_Firewall-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Simple_Firewall-2017-10-10.tar.gz">
afp-Simple_Firewall-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Simple_Firewall-2016-12-17.tar.gz">
afp-Simple_Firewall-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Simple_Firewall-2016-08-24.tar.gz">
afp-Simple_Firewall-2016-08-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Simplex.html b/web/entries/Simplex.html
--- a/web/entries/Simplex.html
+++ b/web/entries/Simplex.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An Incremental Simplex Algorithm with Unsatisfiable Core Generation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>n
<font class="first">I</font>ncremental
<font class="first">S</font>implex
<font class="first">A</font>lgorithm
with
<font class="first">U</font>nsatisfiable
<font class="first">C</font>ore
<font class="first">G</font>eneration
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">An Incremental Simplex Algorithm with Unsatisfiable Core Generation</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Filip Marić (filip /at/ matf /dot/ bg /dot/ ac /dot/ rs),
Mirko Spasić (mirko /at/ matf /dot/ bg /dot/ ac /dot/ rs) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-08-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present an Isabelle/HOL formalization and total correctness proof
for the incremental version of the Simplex algorithm which is used in
most state-of-the-art SMT solvers. It supports extraction of
satisfying assignments, extraction of minimal unsatisfiable cores, incremental
assertion of constraints and backtracking. The formalization relies on
stepwise program refinement, starting from a simple specification,
going through a number of refinement steps, and ending up in a fully
executable functional implementation. Symmetries present in the
-algorithm are handled with special care.</div></td>
+algorithm are handled with special care.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Simplex-AFP,
author = {Filip Marić and Mirko Spasić and René Thiemann},
title = {An Incremental Simplex Algorithm with Unsatisfiable Core Generation},
journal = {Archive of Formal Proofs},
month = aug,
year = 2018,
note = {\url{http://isa-afp.org/entries/Simplex.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Farkas.html">Farkas</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Simplex/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Simplex/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Simplex/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Simplex-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Simplex-2020-01-14.tar.gz">
afp-Simplex-2020-01-14.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Simplex-2019-06-11.tar.gz">
afp-Simplex-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Simplex-2018-08-27.tar.gz">
afp-Simplex-2018-08-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Skew_Heap.html b/web/entries/Skew_Heap.html
--- a/web/entries/Skew_Heap.html
+++ b/web/entries/Skew_Heap.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Skew Heap - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>kew
<font class="first">H</font>eap
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Skew Heap</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-08-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Skew heaps are an amazingly simple and lightweight implementation of
priority queues. They were invented by Sleator and Tarjan [SIAM 1986]
and have logarithmic amortized complexity. This entry provides executable
and verified functional skew heaps.
<p>
The amortized complexity of skew heaps is analyzed in the AFP entry
-<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.</div></td>
+<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Skew_Heap-AFP,
author = {Tobias Nipkow},
title = {Skew Heap},
journal = {Archive of Formal Proofs},
month = aug,
year = 2014,
note = {\url{http://isa-afp.org/entries/Skew_Heap.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Amortized_Complexity.html">Amortized_Complexity</a> </td></tr>
</tbody>
</table>
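<p>
For readers who have not seen the data structure, the Python sketch below (not taken from the entry,
whose development is in Isabelle/HOL) shows how little code a skew heap needs: merge compares the two
roots and unconditionally swaps children on the way down, and every other operation is a merge.
</p>
<pre>
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def merge(a, b):
    """Skew-heap merge: the smaller root wins, children are swapped at every step."""
    if a is None: return b
    if b is None: return a
    if b.key < a.key:
        a, b = b, a
    # the old right subtree absorbs the other heap and becomes the new left
    a.left, a.right = merge(b, a.right), a.left
    return a

def insert(h, x):
    return merge(h, Node(x))

def delete_min(h):
    return h.key, merge(h.left, h.right)

h = None
for x in [5, 1, 4, 2, 3]:
    h = insert(h, x)
print(delete_min(h)[0])    # 1
</pre>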
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Skew_Heap/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Skew_Heap/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Skew_Heap/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Skew_Heap-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Skew_Heap-2019-06-11.tar.gz">
afp-Skew_Heap-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Skew_Heap-2018-08-16.tar.gz">
afp-Skew_Heap-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Skew_Heap-2017-10-10.tar.gz">
afp-Skew_Heap-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Skew_Heap-2016-12-17.tar.gz">
afp-Skew_Heap-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Skew_Heap-2016-02-22.tar.gz">
afp-Skew_Heap-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Skew_Heap-2015-05-27.tar.gz">
afp-Skew_Heap-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Skew_Heap-2014-08-29.tar.gz">
afp-Skew_Heap-2014-08-29.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Skew_Heap-2014-08-28.tar.gz">
afp-Skew_Heap-2014-08-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Skip_Lists.html b/web/entries/Skip_Lists.html
--- a/web/entries/Skip_Lists.html
+++ b/web/entries/Skip_Lists.html
@@ -1,197 +1,197 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Skip Lists - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>kip
<font class="first">L</font>ists
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Skip Lists</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Max W. Haslbeck</a> and
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-01-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p> Skip lists are sorted linked lists enhanced with shortcuts
and are an alternative to binary search trees. A skip list consists
of multiple levels of sorted linked lists where a list on level n is a
subsequence of the list on level n − 1. In the ideal case, elements
are skipped in such a way that a lookup in a skip list takes O(log n)
time. In a randomised skip list the skipped elements are chosen
randomly. </p> <p> This entry contains formalized proofs
of the textbook results about the expected height and the expected
-length of a search path in a randomised skip list. </p></div></td>
+length of a search path in a randomised skip list. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Skip_Lists-AFP,
author = {Max W. Haslbeck and Manuel Eberl},
title = {Skip Lists},
journal = {Archive of Formal Proofs},
month = jan,
year = 2020,
note = {\url{http://isa-afp.org/entries/Skip_Lists.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Monad_Normalisation.html">Monad_Normalisation</a> </td></tr>
</tbody>
</table>
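<p>
The probabilistic flavour of the entry's main results can be illustrated with a few lines of Python:
in a randomised skip list each element receives a geometrically distributed level, and the height of
the structure is the maximum of those levels, which stays within a small constant of log<sub>2</sub> n.
The simulation below is only a numerical sanity check of that textbook statement, not a rendering of
the formal proof.
</p>
<pre>
import math, random

def random_level(p=0.5):
    """Level of one element: keep promoting it with probability p."""
    level = 1
    while random.random() < p:
        level += 1
    return level

def height(n):
    """Height of a random skip list with n elements = max of the n levels."""
    return max(random_level() for _ in range(n))

random.seed(0)
n = 10_000
samples = [height(n) for _ in range(200)]
# the average height is log2(n) plus a small constant
print(sum(samples) / len(samples), math.log2(n))
</pre>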
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Skip_Lists/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Skip_Lists/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Skip_Lists/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Skip_Lists-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Skip_Lists-2020-01-10.tar.gz">
afp-Skip_Lists-2020-01-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Slicing.html b/web/entries/Slicing.html
--- a/web/entries/Slicing.html
+++ b/web/entries/Slicing.html
@@ -1,276 +1,276 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Towards Certified Slicing - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>owards
<font class="first">C</font>ertified
<font class="first">S</font>licing
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Towards Certified Slicing</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-09-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Slicing is a widely-used technique with applications in e.g. compiler technology and software security. Thus verification of algorithms in these areas is often based on the correctness of slicing, which should ideally be proven independent of concrete programming languages and with the help of well-known verifying techniques such as proof assistants. As a first step in this direction, this contribution presents a framework for dynamic and static intraprocedural slicing based on control flow and program dependence graphs. Abstracting from concrete syntax we base the framework on a graph representation of the program fulfilling certain structural and well-formedness properties.<br><br>The formalization consists of the basic framework (in subdirectory Basic/), the correctness proof for dynamic slicing (in subdirectory Dynamic/), the correctness proof for static intraprocedural slicing (in subdirectory StaticIntra/) and instantiations of the framework with a simple While language (in subdirectory While/) and the sophisticated object-oriented bytecode language of Jinja (in subdirectory JinjaVM/). For more information on the framework, see the TPHOLS 2008 paper by Wasserrab and Lochbihler and the PLAS 2009 paper by Wasserrab et al.</div></td>
+ <td class="abstract mathjax_process">Slicing is a widely-used technique with applications in e.g. compiler technology and software security. Thus verification of algorithms in these areas is often based on the correctness of slicing, which should ideally be proven independent of concrete programming languages and with the help of well-known verifying techniques such as proof assistants. As a first step in this direction, this contribution presents a framework for dynamic and static intraprocedural slicing based on control flow and program dependence graphs. Abstracting from concrete syntax we base the framework on a graph representation of the program fulfilling certain structural and well-formedness properties.<br><br>The formalization consists of the basic framework (in subdirectory Basic/), the correctness proof for dynamic slicing (in subdirectory Dynamic/), the correctness proof for static intraprocedural slicing (in subdirectory StaticIntra/) and instantiations of the framework with a simple While language (in subdirectory While/) and the sophisticated object-oriented bytecode language of Jinja (in subdirectory JinjaVM/). For more information on the framework, see the TPHOLS 2008 paper by Wasserrab and Lochbihler and the PLAS 2009 paper by Wasserrab et al.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Slicing-AFP,
author = {Daniel Wasserrab},
title = {Towards Certified Slicing},
journal = {Archive of Formal Proofs},
month = sep,
year = 2008,
note = {\url{http://isa-afp.org/entries/Slicing.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Jinja.html">Jinja</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Formal_SSA.html">Formal_SSA</a>, <a href="InformationFlowSlicing.html">InformationFlowSlicing</a> </td></tr>
</tbody>
</table>
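<p>
As a very small illustration of the kind of object the framework talks about (and nothing more), a
backward slice over a program dependence graph is just reachability along dependence edges. The graph
and statement labels in the Python snippet below are invented for the example.
</p>
<pre>
# node -> nodes it depends on (data or control dependences)
deps = {
    's5': ['s3', 's4'],
    's4': ['s2'],
    's3': ['s1'],
    's2': [],
    's1': [],
}

def backward_slice(criterion):
    """All statements the slicing criterion transitively depends on."""
    seen, todo = set(), [criterion]
    while todo:
        n = todo.pop()
        if n not in seen:
            seen.add(n)
            todo.extend(deps.get(n, []))
    return seen

print(sorted(backward_slice('s4')))   # ['s2', 's4']: s1, s3, s5 can be dropped
print(sorted(backward_slice('s5')))   # every statement is relevant here
</pre>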
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Slicing/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Slicing/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Slicing/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Slicing-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Slicing-2019-06-11.tar.gz">
afp-Slicing-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Slicing-2018-08-16.tar.gz">
afp-Slicing-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Slicing-2017-10-10.tar.gz">
afp-Slicing-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Slicing-2016-12-17.tar.gz">
afp-Slicing-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Slicing-2016-02-22.tar.gz">
afp-Slicing-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Slicing-2015-05-27.tar.gz">
afp-Slicing-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Slicing-2014-08-28.tar.gz">
afp-Slicing-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Slicing-2013-12-11.tar.gz">
afp-Slicing-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Slicing-2013-11-17.tar.gz">
afp-Slicing-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Slicing-2013-02-16.tar.gz">
afp-Slicing-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Slicing-2012-05-24.tar.gz">
afp-Slicing-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Slicing-2011-10-11.tar.gz">
afp-Slicing-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Slicing-2011-02-11.tar.gz">
afp-Slicing-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Slicing-2010-07-01.tar.gz">
afp-Slicing-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Slicing-2009-12-12.tar.gz">
afp-Slicing-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Slicing-2009-04-30.tar.gz">
afp-Slicing-2009-04-30.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Slicing-2009-04-29.tar.gz">
afp-Slicing-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Slicing-2008-09-22.tar.gz">
afp-Slicing-2008-09-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Sliding_Window_Algorithm.html b/web/entries/Sliding_Window_Algorithm.html
--- a/web/entries/Sliding_Window_Algorithm.html
+++ b/web/entries/Sliding_Window_Algorithm.html
@@ -1,216 +1,216 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalization
of
an
<font class="first">A</font>lgorithm
for
<font class="first">G</font>reedily
<font class="first">C</font>omputing
<font class="first">A</font>ssociative
<font class="first">A</font>ggregations
on
<font class="first">S</font>liding
<font class="first">W</font>indows
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Lukas Heimes,
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a> and
Joshua Schneider
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-04-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Basin et al.'s <a
href="https://doi.org/10.1016/j.ipl.2014.09.009">sliding
window algorithm (SWA)</a> is an algorithm for combining the
elements of subsequences of a sequence with an associative operator.
It is greedy and minimizes the number of operator applications. We
formalize the algorithm and verify its functional correctness. We
extend the algorithm with additional operations and provide an
alternative interface to the slide operation that does not require the
-entire input sequence.</div></td>
+entire input sequence.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Sliding_Window_Algorithm-AFP,
author = {Lukas Heimes and Dmitriy Traytel and Joshua Schneider},
title = {Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows},
journal = {Archive of Formal Proofs},
month = apr,
year = 2020,
note = {\url{http://isa-afp.org/entries/Sliding_Window_Algorithm.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
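<p>
To fix what the algorithm computes: given a sequence, a window width and an associative operator, it
returns the combination of every window. The Python baseline below simply recomputes each window and
therefore wastes operator applications; Basin et al.'s SWA (and the formalization above) is precisely
about reusing partial results to avoid that. Only the specification is shown here, not the algorithm
itself.
</p>
<pre>
from functools import reduce

def sliding_windows_spec(xs, w, op):
    """Specification: combine each length-w window with the associative op.
    Recomputing every window costs (w - 1) applications per window; the SWA's
    point is to share work between overlapping windows."""
    return [reduce(op, xs[i:i + w]) for i in range(len(xs) - w + 1)]

print(sliding_windows_spec([3, 1, 4, 1, 5, 9, 2, 6], 3, min))
# [1, 1, 1, 1, 2, 2]
</pre>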
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sliding_Window_Algorithm/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Sliding_Window_Algorithm/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sliding_Window_Algorithm/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Sliding_Window_Algorithm-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Sliding_Window_Algorithm-2020-04-12.tar.gz">
afp-Sliding_Window_Algorithm-2020-04-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Smooth_Manifolds.html b/web/entries/Smooth_Manifolds.html
--- a/web/entries/Smooth_Manifolds.html
+++ b/web/entries/Smooth_Manifolds.html
@@ -1,198 +1,198 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Smooth Manifolds - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>mooth
<font class="first">M</font>anifolds
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Smooth Manifolds</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a> and
<a href="http://lcs.ios.ac.cn/~bzhan/">Bohua Zhan</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-10-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the definition and basic properties of smooth manifolds
in Isabelle/HOL. Concepts covered include partition of unity, tangent
and cotangent spaces, and the fundamental theorem of path integrals.
We also examine some concrete manifolds such as spheres and projective
spaces. The formalization makes extensive use of the analysis and
linear algebra libraries in Isabelle/HOL, in particular its
-“types-to-sets” mechanism.</div></td>
+“types-to-sets” mechanism.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Smooth_Manifolds-AFP,
author = {Fabian Immler and Bohua Zhan},
title = {Smooth Manifolds},
journal = {Archive of Formal Proofs},
month = oct,
year = 2018,
note = {\url{http://isa-afp.org/entries/Smooth_Manifolds.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Smooth_Manifolds/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Smooth_Manifolds/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Smooth_Manifolds/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Smooth_Manifolds-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Smooth_Manifolds-2019-06-11.tar.gz">
afp-Smooth_Manifolds-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Smooth_Manifolds-2018-10-23.tar.gz">
afp-Smooth_Manifolds-2018-10-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Sort_Encodings.html b/web/entries/Sort_Encodings.html
--- a/web/entries/Sort_Encodings.html
+++ b/web/entries/Sort_Encodings.html
@@ -1,261 +1,261 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Sound and Complete Sort Encodings for First-Order Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>ound
and
<font class="first">C</font>omplete
<font class="first">S</font>ort
<font class="first">E</font>ncodings
for
<font class="first">F</font>irst-Order
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Sound and Complete Sort Encodings for First-Order Logic</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Jasmin Christian Blanchette (j /dot/ c /dot/ blanchette /at/ vu /dot/ nl) and
Andrei Popescu (a /dot/ popescu /at/ mdx /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-06-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This is a formalization of the soundness and completeness properties
for various efficient encodings of sorts in unsorted first-order logic
used by Isabelle's Sledgehammer tool.
<p>
Essentially, the encodings proceed as follows:
a many-sorted problem is decorated with (as few as possible) tags or
guards that make the problem monotonic; then sorts can be soundly
erased.
<p>
The development employs a formalization of many-sorted first-order logic
in clausal form (clauses, structures and the basic properties
of the satisfaction relation), which could be of interest as the starting
-point for other formalizations of first-order logic metatheory.</div></td>
+point for other formalizations of first-order logic metatheory.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Sort_Encodings-AFP,
author = {Jasmin Christian Blanchette and Andrei Popescu},
title = {Sound and Complete Sort Encodings for First-Order Logic},
journal = {Archive of Formal Proofs},
month = jun,
year = 2013,
note = {\url{http://isa-afp.org/entries/Sort_Encodings.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
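<p>
A rough picture of the "tag" style of encoding mentioned in the abstract, in Python and with an entirely
made-up term representation: every subterm is wrapped in a sort-specific tag function, after which the
sort annotations can be dropped. The entry's actual encodings, and their soundness and completeness
proofs, are considerably more refined; in particular they tag as few positions as possible.
</p>
<pre>
# A sorted term is (function_symbol, result_sort, arguments);
# an unsorted term is (function_symbol, arguments).
def tag_encode(term):
    """Wrap every subterm in a sort-specific tag function and forget the sorts."""
    f, sort, args = term
    return ('tag_' + sort, [(f, [tag_encode(a) for a in args])])

# succ(zero) of sort nat becomes tag_nat(succ(tag_nat(zero)))
print(tag_encode(('succ', 'nat', [('zero', 'nat', [])])))
</pre>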
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sort_Encodings/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Sort_Encodings/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sort_Encodings/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Sort_Encodings-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Sort_Encodings-2019-06-11.tar.gz">
afp-Sort_Encodings-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Sort_Encodings-2018-08-16.tar.gz">
afp-Sort_Encodings-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Sort_Encodings-2017-10-10.tar.gz">
afp-Sort_Encodings-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Sort_Encodings-2016-12-17.tar.gz">
afp-Sort_Encodings-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Sort_Encodings-2016-02-22.tar.gz">
afp-Sort_Encodings-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Sort_Encodings-2015-05-27.tar.gz">
afp-Sort_Encodings-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Sort_Encodings-2014-08-28.tar.gz">
afp-Sort_Encodings-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Sort_Encodings-2013-12-11.tar.gz">
afp-Sort_Encodings-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Sort_Encodings-2013-11-17.tar.gz">
afp-Sort_Encodings-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Sort_Encodings-2013-07-04.tar.gz">
afp-Sort_Encodings-2013-07-04.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Sort_Encodings-2013-07-01.tar.gz">
afp-Sort_Encodings-2013-07-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Source_Coding_Theorem.html b/web/entries/Source_Coding_Theorem.html
--- a/web/entries/Source_Coding_Theorem.html
+++ b/web/entries/Source_Coding_Theorem.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Source Coding Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>ource
<font class="first">C</font>oding
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Source Coding Theorem</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Quentin Hibon (qh225 /at/ cl /dot/ cam /dot/ ac /dot/ uk) and
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-10-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This document contains a proof of the necessary condition on the code
rate of a source code, namely that this code rate is bounded by the
entropy of the source. This represents one half of Shannon's source
-coding theorem, which is itself an equivalence.</div></td>
+coding theorem, which is itself an equivalence.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Source_Coding_Theorem-AFP,
author = {Quentin Hibon and Lawrence C. Paulson},
title = {Source Coding Theorem},
journal = {Archive of Formal Proofs},
month = oct,
year = 2016,
note = {\url{http://isa-afp.org/entries/Source_Coding_Theorem.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
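<p>
The inequality being proved can be checked on a toy source in a few lines of Python. This is a numerical
illustration of the statement, not of the proof: the expected length of any prefix code is bounded below
by the entropy of the source.
</p>
<pre>
from math import log2

p = {'a': 0.5, 'b': 0.25, 'c': 0.25}        # a memoryless source
code = {'a': '0', 'b': '10', 'c': '11'}     # a prefix code for it

entropy = -sum(q * log2(q) for q in p.values())
rate = sum(p[s] * len(code[s]) for s in p)  # expected code length per symbol

print(entropy, rate)                        # 1.5 1.5: this code meets the bound exactly
assert rate >= entropy                      # the code rate never drops below the entropy
</pre>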
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Source_Coding_Theorem/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Source_Coding_Theorem/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Source_Coding_Theorem/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Source_Coding_Theorem-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Source_Coding_Theorem-2019-06-11.tar.gz">
afp-Source_Coding_Theorem-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Source_Coding_Theorem-2018-08-16.tar.gz">
afp-Source_Coding_Theorem-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Source_Coding_Theorem-2017-10-10.tar.gz">
afp-Source_Coding_Theorem-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Source_Coding_Theorem-2016-12-17.tar.gz">
afp-Source_Coding_Theorem-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Source_Coding_Theorem-2016-10-19.tar.gz">
afp-Source_Coding_Theorem-2016-10-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Special_Function_Bounds.html b/web/entries/Special_Function_Bounds.html
--- a/web/entries/Special_Function_Bounds.html
+++ b/web/entries/Special_Function_Bounds.html
@@ -1,232 +1,232 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Real-Valued Special Functions: Upper and Lower Bounds - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">R</font>eal-Valued
<font class="first">S</font>pecial
<font class="first">F</font>unctions:
<font class="first">U</font>pper
and
<font class="first">L</font>ower
<font class="first">B</font>ounds
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Real-Valued Special Functions: Upper and Lower Bounds</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-08-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This development proves upper and lower bounds for several familiar real-valued functions. For sin, cos, exp and sqrt, it defines and verifies infinite families of upper and lower bounds, mostly based on Taylor series expansions. For arctan, ln and exp, it verifies a finite collection of upper and lower bounds, originally obtained from the functions' continued fraction expansions using the computer algebra system Maple. A common theme in these proofs is to take the difference between a function and its approximation, which should be zero at one point, and then consider the sign of the derivative. The immediate purpose of this development is to verify axioms used by MetiTarski, an automatic theorem prover for real-valued special functions. Crucial to MetiTarski's operation is the provision of upper and lower bounds for each function of interest.</div></td>
+ <td class="abstract mathjax_process">This development proves upper and lower bounds for several familiar real-valued functions. For sin, cos, exp and sqrt, it defines and verifies infinite families of upper and lower bounds, mostly based on Taylor series expansions. For arctan, ln and exp, it verifies a finite collection of upper and lower bounds, originally obtained from the functions' continued fraction expansions using the computer algebra system Maple. A common theme in these proofs is to take the difference between a function and its approximation, which should be zero at one point, and then consider the sign of the derivative. The immediate purpose of this development is to verify axioms used by MetiTarski, an automatic theorem prover for real-valued special functions. Crucial to MetiTarski's operation is the provision of upper and lower bounds for each function of interest.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Special_Function_Bounds-AFP,
author = {Lawrence C. Paulson},
title = {Real-Valued Special Functions: Upper and Lower Bounds},
journal = {Archive of Formal Proofs},
month = aug,
year = 2014,
note = {\url{http://isa-afp.org/entries/Special_Function_Bounds.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Sturm_Sequences.html">Sturm_Sequences</a> </td></tr>
</tbody>
</table>
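<p>
As a flavour of the inequalities involved, here is one of the simplest Taylor-style bound pairs, sampled
on a grid in Python. This is a numerical spot check only, not a proof, and it is not claimed to be one of
MetiTarski's actual axioms.
</p>
<pre>
import math

# For x >= 0 the Taylor truncations give   x - x**3/6 <= sin(x) <= x.
for i in range(1001):
    x = i * 0.01                      # grid over [0, 10]
    assert x - x**3 / 6 <= math.sin(x) <= x + 1e-12
print("lower and upper bounds hold on the sampled grid")
</pre>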
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Special_Function_Bounds/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Special_Function_Bounds/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Special_Function_Bounds/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Special_Function_Bounds-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Special_Function_Bounds-2019-06-11.tar.gz">
afp-Special_Function_Bounds-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Special_Function_Bounds-2018-08-16.tar.gz">
afp-Special_Function_Bounds-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Special_Function_Bounds-2017-10-10.tar.gz">
afp-Special_Function_Bounds-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Special_Function_Bounds-2016-12-17.tar.gz">
afp-Special_Function_Bounds-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Special_Function_Bounds-2016-02-22.tar.gz">
afp-Special_Function_Bounds-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Special_Function_Bounds-2015-05-27.tar.gz">
afp-Special_Function_Bounds-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Special_Function_Bounds-2014-09-05.tar.gz">
afp-Special_Function_Bounds-2014-09-05.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Special_Function_Bounds-2014-08-29.tar.gz">
afp-Special_Function_Bounds-2014-08-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Splay_Tree.html b/web/entries/Splay_Tree.html
--- a/web/entries/Splay_Tree.html
+++ b/web/entries/Splay_Tree.html
@@ -1,227 +1,227 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Splay Tree - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>play
<font class="first">T</font>ree
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Splay Tree</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-08-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Splay trees are self-adjusting binary search trees which were invented by Sleator and Tarjan [JACM 1985].
This entry provides executable and verified functional splay trees
as well as the related splay heaps (due to Okasaki).
<p>
The amortized complexity of splay trees and heaps is analyzed in the AFP entry
-<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.</div></td>
+<a href="http://isa-afp.org/entries/Amortized_Complexity.html">Amortized Complexity</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2016-07-12]: Moved splay heaps here from Amortized_Complexity</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Splay_Tree-AFP,
author = {Tobias Nipkow},
title = {Splay Tree},
journal = {Archive of Formal Proofs},
month = aug,
year = 2014,
note = {\url{http://isa-afp.org/entries/Splay_Tree.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Amortized_Complexity.html">Amortized_Complexity</a> </td></tr>
</tbody>
</table>
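<p>
The splay heaps mentioned in the abstract fit in a few lines; the sketch below follows Okasaki's
presentation, rendered here as plain Python over nested tuples rather than the entry's Isabelle code.
The partition function restructures the tree around a pivot on the way down, and insertion is a single
partition.
</p>
<pre>
# A tree is (value, left, right); None is the empty heap.
def partition(pivot, t):
    """Split t into (elements <= pivot, elements > pivot), rotating on the way down."""
    if t is None:
        return None, None
    x, a, b = t
    if x <= pivot:
        if b is None:
            return t, None
        y, b1, b2 = b
        if y <= pivot:
            small, big = partition(pivot, b2)
            return (y, (x, a, b1), small), big
        small, big = partition(pivot, b1)
        return (x, a, small), (y, big, b2)
    if a is None:
        return None, t
    y, a1, a2 = a
    if y <= pivot:
        small, big = partition(pivot, a2)
        return (y, a1, small), (x, big, b)
    small, big = partition(pivot, a1)
    return small, (y, big, (x, a2, b))

def insert(x, t):
    small, big = partition(x, t)
    return (x, small, big)

h = None
for v in [5, 1, 4, 2, 3]:
    h = insert(v, h)
print(h[0])    # the root is always the most recently inserted key, here 3
</pre>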
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Splay_Tree/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Splay_Tree/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Splay_Tree/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Splay_Tree-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Splay_Tree-2019-06-11.tar.gz">
afp-Splay_Tree-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Splay_Tree-2018-08-16.tar.gz">
afp-Splay_Tree-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Splay_Tree-2017-10-10.tar.gz">
afp-Splay_Tree-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Splay_Tree-2016-12-17.tar.gz">
afp-Splay_Tree-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Splay_Tree-2016-02-22.tar.gz">
afp-Splay_Tree-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Splay_Tree-2015-05-27.tar.gz">
afp-Splay_Tree-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Splay_Tree-2014-08-28.tar.gz">
afp-Splay_Tree-2014-08-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Sqrt_Babylonian.html b/web/entries/Sqrt_Babylonian.html
--- a/web/entries/Sqrt_Babylonian.html
+++ b/web/entries/Sqrt_Babylonian.html
@@ -1,258 +1,258 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Computing N-th Roots using the Babylonian Method - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">C</font>omputing
<font class="first">N</font>-th
<font class="first">R</font>oots
using
the
<font class="first">B</font>abylonian
<font class="first">M</font>ethod
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Computing N-th Roots using the Babylonian Method</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-01-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We implement the Babylonian method to compute n-th roots of numbers.
We provide precise algorithms for naturals, integers and rationals, and
offer an approximation algorithm for square roots over linear ordered fields. Moreover, there
-are precise algorithms to compute the floor and the ceiling of n-th roots.</div></td>
+are precise algorithms to compute the floor and the ceiling of n-th roots.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2013-10-16]: Added algorithms to compute floor and ceiling of sqrt of integers.
[2014-07-11]: Moved NthRoot_Impl from Real-Impl to this entry.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Sqrt_Babylonian-AFP,
author = {René Thiemann},
title = {Computing N-th Roots using the Babylonian Method},
journal = {Archive of Formal Proofs},
month = jan,
year = 2013,
note = {\url{http://isa-afp.org/entries/Sqrt_Babylonian.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Cauchy.html">Cauchy</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Polynomial_Factorization.html">Polynomial_Factorization</a>, <a href="Polynomial_Interpolation.html">Polynomial_Interpolation</a>, <a href="QR_Decomposition.html">QR_Decomposition</a>, <a href="Real_Impl.html">Real_Impl</a> </td></tr>
</tbody>
</table>
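<p>
The integer case of the method is short enough to sketch in Python. This is an illustration only; the
entry itself covers naturals, integers, rationals and linear ordered fields, and also proves the ceiling
variant. Starting from an overestimate, the Newton/Babylonian step is iterated until it stops decreasing,
which yields the floor of the k-th root.
</p>
<pre>
def floor_root(n, k):
    """Floor of the k-th root of a natural number n (k >= 1), by the
    integer Newton (Babylonian) iteration from an initial overestimate."""
    if n < 2:
        return n
    x = 1 << (n.bit_length() // k + 1)          # guaranteed to be at least n**(1/k)
    while True:
        y = ((k - 1) * x + n // x ** (k - 1)) // k
        if y >= x:                              # the iteration stopped improving
            return x
        x = y

print(floor_root(99, 2), floor_root(100, 2), floor_root(7**3, 3))   # 9 10 7
</pre>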
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sqrt_Babylonian/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Sqrt_Babylonian/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sqrt_Babylonian/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Sqrt_Babylonian-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Sqrt_Babylonian-2019-06-11.tar.gz">
afp-Sqrt_Babylonian-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Sqrt_Babylonian-2018-08-16.tar.gz">
afp-Sqrt_Babylonian-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Sqrt_Babylonian-2017-10-10.tar.gz">
afp-Sqrt_Babylonian-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Sqrt_Babylonian-2016-12-17.tar.gz">
afp-Sqrt_Babylonian-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Sqrt_Babylonian-2016-02-22.tar.gz">
afp-Sqrt_Babylonian-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Sqrt_Babylonian-2015-05-27.tar.gz">
afp-Sqrt_Babylonian-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Sqrt_Babylonian-2014-08-28.tar.gz">
afp-Sqrt_Babylonian-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Sqrt_Babylonian-2013-12-11.tar.gz">
afp-Sqrt_Babylonian-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Sqrt_Babylonian-2013-11-17.tar.gz">
afp-Sqrt_Babylonian-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Sqrt_Babylonian-2013-02-16.tar.gz">
afp-Sqrt_Babylonian-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Sqrt_Babylonian-2013-01-04.tar.gz">
afp-Sqrt_Babylonian-2013-01-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stable_Matching.html b/web/entries/Stable_Matching.html
--- a/web/entries/Stable_Matching.html
+++ b/web/entries/Stable_Matching.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stable Matching - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>table
<font class="first">M</font>atching
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stable Matching</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-10-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We mechanize proofs of several results from the matching with
contracts literature, which generalize those of the classical
two-sided matching scenarios that go by the name of stable marriage.
Our focus is on game theoretic issues. Along the way we develop
-executable algorithms for computing optimal stable matches.</div></td>
+executable algorithms for computing optimal stable matches.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stable_Matching-AFP,
author = {Peter Gammie},
title = {Stable Matching},
journal = {Archive of Formal Proofs},
month = oct,
year = 2016,
note = {\url{http://isa-afp.org/entries/Stable_Matching.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stable_Matching/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stable_Matching/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stable_Matching/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stable_Matching-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stable_Matching-2019-06-11.tar.gz">
afp-Stable_Matching-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stable_Matching-2018-08-16.tar.gz">
afp-Stable_Matching-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stable_Matching-2017-10-10.tar.gz">
afp-Stable_Matching-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stable_Matching-2016-12-17.tar.gz">
afp-Stable_Matching-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Stable_Matching-2016-10-24.tar.gz">
afp-Stable_Matching-2016-10-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Statecharts.html b/web/entries/Statecharts.html
--- a/web/entries/Statecharts.html
+++ b/web/entries/Statecharts.html
@@ -1,268 +1,268 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formalizing Statecharts using Hierarchical Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormalizing
<font class="first">S</font>tatecharts
using
<font class="first">H</font>ierarchical
<font class="first">A</font>utomata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formalizing Statecharts using Hierarchical Automata</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Steffen Helke (helke /at/ cs /dot/ tu-berlin /dot/ de) and
Florian Kammüller (flokam /at/ cs /dot/ tu-berlin /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2010-08-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize in Isabelle/HOL the abtract syntax and a synchronous
+ <td class="abstract mathjax_process">We formalize in Isabelle/HOL the abtract syntax and a synchronous
step semantics for the specification language Statecharts. The formalization
is based on Hierarchical Automata which allow a structural decomposition of
Statecharts into Sequential Automata. To support the composition of
Statecharts, we introduce calculating operators to construct a Hierarchical
Automaton in a stepwise manner. Furthermore, we present a complete semantics
of Statecharts including a theory of data spaces, which enables the modelling
of racing effects. We also adapt CTL for
Statecharts to build a bridge for future combinations with model
checking. However the main motivation of this work is to provide a sound and
complete basis for reasoning on Statecharts. As a central meta theorem we
-prove that the well-formedness of a Statechart is preserved by the semantics.</div></td>
+prove that the well-formedness of a Statechart is preserved by the semantics.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Statecharts-AFP,
author = {Steffen Helke and Florian Kammüller},
title = {Formalizing Statecharts using Hierarchical Automata},
journal = {Archive of Formal Proofs},
month = aug,
year = 2010,
note = {\url{http://isa-afp.org/entries/Statecharts.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Statecharts/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Statecharts/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Statecharts/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Statecharts-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Statecharts-2019-06-11.tar.gz">
afp-Statecharts-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Statecharts-2018-08-16.tar.gz">
afp-Statecharts-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Statecharts-2017-10-10.tar.gz">
afp-Statecharts-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Statecharts-2016-12-17.tar.gz">
afp-Statecharts-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Statecharts-2016-02-22.tar.gz">
afp-Statecharts-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Statecharts-2015-05-27.tar.gz">
afp-Statecharts-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Statecharts-2014-08-28.tar.gz">
afp-Statecharts-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Statecharts-2013-12-11.tar.gz">
afp-Statecharts-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Statecharts-2013-11-17.tar.gz">
afp-Statecharts-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Statecharts-2013-02-16.tar.gz">
afp-Statecharts-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Statecharts-2012-05-24.tar.gz">
afp-Statecharts-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Statecharts-2011-10-11.tar.gz">
afp-Statecharts-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Statecharts-2011-02-11.tar.gz">
afp-Statecharts-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Statecharts-2010-08-18.tar.gz">
afp-Statecharts-2010-08-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stellar_Quorums.html b/web/entries/Stellar_Quorums.html
--- a/web/entries/Stellar_Quorums.html
+++ b/web/entries/Stellar_Quorums.html
@@ -1,190 +1,190 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stellar Quorum Systems - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tellar
<font class="first">Q</font>uorum
<font class="first">S</font>ystems
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stellar Quorum Systems</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Giuliano Losa (giuliano /at/ galois /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-08-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the static properties of personal Byzantine quorum
systems (PBQSs) and Stellar quorum systems, as described in the paper
-``Stellar Consensus by Reduction'' (to appear at DISC 2019).</div></td>
+"Stellar Consensus by Reduction" (to appear at DISC 2019).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stellar_Quorums-AFP,
author = {Giuliano Losa},
title = {Stellar Quorum Systems},
journal = {Archive of Formal Proofs},
month = aug,
year = 2019,
note = {\url{http://isa-afp.org/entries/Stellar_Quorums.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stellar_Quorums/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stellar_Quorums/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stellar_Quorums/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stellar_Quorums-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stellar_Quorums-2019-08-03.tar.gz">
afp-Stellar_Quorums-2019-08-03.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stern_Brocot.html b/web/entries/Stern_Brocot.html
--- a/web/entries/Stern_Brocot.html
+++ b/web/entries/Stern_Brocot.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Stern-Brocot Tree - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">S</font>tern-Brocot
<font class="first">T</font>ree
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Stern-Brocot Tree</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a> and
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The Stern-Brocot tree contains all rational numbers exactly once and in their lowest terms. We formalise the Stern-Brocot tree as a coinductive tree using recursive and iterative specifications, which we have proven equivalent, and show that it indeed contains all the numbers as stated. Following Hinze, we prove that the Stern-Brocot tree can be linearised looplessly into Stern's diatonic sequence (also known as Dijkstra's fusc function) and that it is a permutation of the Bird tree.
+ <td class="abstract mathjax_process">The Stern-Brocot tree contains all rational numbers exactly once and in their lowest terms. We formalise the Stern-Brocot tree as a coinductive tree using recursive and iterative specifications, which we have proven equivalent, and show that it indeed contains all the numbers as stated. Following Hinze, we prove that the Stern-Brocot tree can be linearised looplessly into Stern's diatonic sequence (also known as Dijkstra's fusc function) and that it is a permutation of the Bird tree.
</p><p>
The reasoning stays at an abstract level by appealing to the uniqueness of solutions of guarded recursive equations and lifting algebraic laws point-wise to trees and streams using applicative functors.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stern_Brocot-AFP,
author = {Peter Gammie and Andreas Lochbihler},
title = {The Stern-Brocot Tree},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Stern_Brocot.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Applicative_Lifting.html">Applicative_Lifting</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stern_Brocot/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stern_Brocot/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stern_Brocot/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stern_Brocot-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stern_Brocot-2019-06-11.tar.gz">
afp-Stern_Brocot-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stern_Brocot-2018-08-16.tar.gz">
afp-Stern_Brocot-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stern_Brocot-2017-10-10.tar.gz">
afp-Stern_Brocot-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stern_Brocot-2016-12-17.tar.gz">
afp-Stern_Brocot-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Stern_Brocot-2016-02-22.tar.gz">
afp-Stern_Brocot-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Stern_Brocot-2015-12-22.tar.gz">
afp-Stern_Brocot-2015-12-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stewart_Apollonius.html b/web/entries/Stewart_Apollonius.html
--- a/web/entries/Stewart_Apollonius.html
+++ b/web/entries/Stewart_Apollonius.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stewart's Theorem and Apollonius' Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tewart's
<font class="first">T</font>heorem
and
<font class="first">A</font>pollonius'
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stewart's Theorem and Apollonius' Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-07-31</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry formalizes the two geometric theorems, Stewart's and
Apollonius' theorem. Stewart's Theorem relates the length of
a triangle's cevian to the lengths of the triangle's two
sides. Apollonius' Theorem is a specialisation of Stewart's
theorem, restricting the cevian to be the median. The proof applies
the law of cosines, some basic geometric facts about triangles and
then simply transforms the terms algebraically to yield the
conjectured relation. The formalization in Isabelle can closely follow
the informal proofs described in the Wikipedia articles of those two
-theorems.</div></td>
+theorems.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stewart_Apollonius-AFP,
author = {Lukas Bulwahn},
title = {Stewart's Theorem and Apollonius' Theorem},
journal = {Archive of Formal Proofs},
month = jul,
year = 2017,
note = {\url{http://isa-afp.org/entries/Stewart_Apollonius.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Triangle.html">Triangle</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stewart_Apollonius/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stewart_Apollonius/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stewart_Apollonius/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stewart_Apollonius-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stewart_Apollonius-2019-06-11.tar.gz">
afp-Stewart_Apollonius-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stewart_Apollonius-2018-08-16.tar.gz">
afp-Stewart_Apollonius-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stewart_Apollonius-2017-10-10.tar.gz">
afp-Stewart_Apollonius-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stewart_Apollonius-2017-08-01.tar.gz">
afp-Stewart_Apollonius-2017-08-01.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stirling_Formula.html b/web/entries/Stirling_Formula.html
--- a/web/entries/Stirling_Formula.html
+++ b/web/entries/Stirling_Formula.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stirling's formula - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tirling's
formula
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stirling's formula</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-09-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
-This work contains a proof of Stirling's formula both for the
-factorial n! &sim; &radic;<span style="text-decoration:
-overline">2&pi;n</span> (n/e)<sup>n</sup> on natural numbers and the
-real Gamma function &Gamma;(x) &sim; &radic;<span
-style="text-decoration: overline">2&pi;/x</span> (x/e)<sup>x</sup>.
-The proof is based on work by <a
-href="http://www.maths.lancs.ac.uk/~jameson/stirlgamma.pdf">Graham
-Jameson</a>.</div></td>
+ <td class="abstract mathjax_process">
+<p>This work contains a proof of Stirling's formula both for the factorial $n! \sim \sqrt{2\pi n} (n/e)^n$ on natural numbers and the real
+Gamma function $\Gamma(x)\sim \sqrt{2\pi/x} (x/e)^x$. The proof is based on work by <a
+href="http://www.maths.lancs.ac.uk/~jameson/stirlgamma.pdf">Graham Jameson</a>.</p>
+<p>This is then extended to the full asymptotic expansion
+$$\log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi) + \sum_{k=1}^{n-1} \frac{B_{k+1}}{k(k+1)} z^{-k}\\
+{} - \frac{1}{n} \int_0^\infty B_n([t])(t + z)^{-n}\,\text{d}t$$
+uniformly for all complex $z\neq 0$ in the cone $|\text{arg}(z)|\leq \alpha$ for any $\alpha\in(0,\pi)$, with which the above asymptotic
+relation for &Gamma; is also extended to complex arguments.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stirling_Formula-AFP,
author = {Manuel Eberl},
title = {Stirling's formula},
journal = {Archive of Formal Proofs},
month = sep,
year = 2016,
note = {\url{http://isa-afp.org/entries/Stirling_Formula.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Bernoulli.html">Bernoulli</a>, <a href="Landau_Symbols.html">Landau_Symbols</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Comparison_Sort_Lower_Bound.html">Comparison_Sort_Lower_Bound</a>, <a href="Prime_Number_Theorem.html">Prime_Number_Theorem</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stirling_Formula/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stirling_Formula/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stirling_Formula/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stirling_Formula-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stirling_Formula-2019-06-11.tar.gz">
afp-Stirling_Formula-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stirling_Formula-2018-08-16.tar.gz">
afp-Stirling_Formula-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stirling_Formula-2017-10-10.tar.gz">
afp-Stirling_Formula-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stirling_Formula-2016-12-17.tar.gz">
afp-Stirling_Formula-2016-12-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stochastic_Matrices.html b/web/entries/Stochastic_Matrices.html
--- a/web/entries/Stochastic_Matrices.html
+++ b/web/entries/Stochastic_Matrices.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stochastic Matrices and the Perron-Frobenius Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tochastic
<font class="first">M</font>atrices
and
the
<font class="first">P</font>erron-Frobenius
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stochastic Matrices and the Perron-Frobenius Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-11-22</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Stochastic matrices are a convenient way to model discrete-time and
finite state Markov chains. The Perron&ndash;Frobenius theorem
tells us something about the existence and uniqueness of non-negative
eigenvectors of a stochastic matrix. In this entry, we formalize
stochastic matrices, link the formalization to the existing AFP-entry
on Markov chains, and apply the Perron&ndash;Frobenius theorem to
prove that stationary distributions always exist, and they are unique
-if the stochastic matrix is irreducible.</div></td>
+if the stochastic matrix is irreducible.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stochastic_Matrices-AFP,
author = {René Thiemann},
title = {Stochastic Matrices and the Perron-Frobenius Theorem},
journal = {Archive of Formal Proofs},
month = nov,
year = 2017,
note = {\url{http://isa-afp.org/entries/Stochastic_Matrices.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a>, <a href="Markov_Models.html">Markov_Models</a>, <a href="Perron_Frobenius.html">Perron_Frobenius</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stochastic_Matrices/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stochastic_Matrices/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stochastic_Matrices/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stochastic_Matrices-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stochastic_Matrices-2019-06-11.tar.gz">
afp-Stochastic_Matrices-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stochastic_Matrices-2018-08-16.tar.gz">
afp-Stochastic_Matrices-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stochastic_Matrices-2017-11-23.tar.gz">
afp-Stochastic_Matrices-2017-11-23.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stone_Algebras.html b/web/entries/Stone_Algebras.html
--- a/web/entries/Stone_Algebras.html
+++ b/web/entries/Stone_Algebras.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stone Algebras - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tone
<font class="first">A</font>lgebras
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stone Algebras</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-09-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A range of algebras between lattices and Boolean algebras generalise
the notion of a complement. We develop a hierarchy of these
pseudo-complemented algebras that includes Stone algebras.
Independently of this theory we study filters based on partial orders.
Both theories are combined to prove Chen and Grätzer's construction
theorem for Stone algebras. The latter involves extensive reasoning
about algebraic structures in addition to reasoning in algebraic
-structures.</div></td>
+structures.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stone_Algebras-AFP,
author = {Walter Guttmann},
title = {Stone Algebras},
journal = {Archive of Formal Proofs},
month = sep,
year = 2016,
note = {\url{http://isa-afp.org/entries/Stone_Algebras.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Stone_Relation_Algebras.html">Stone_Relation_Algebras</a>, <a href="Subset_Boolean_Algebras.html">Subset_Boolean_Algebras</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stone_Algebras/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stone_Algebras/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stone_Algebras/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stone_Algebras-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stone_Algebras-2019-06-28.tar.gz">
afp-Stone_Algebras-2019-06-28.tar.gz
</a>
</li>
<li>Isabelle 2019:
<a href="../release/afp-Stone_Algebras-2019-06-11.tar.gz">
afp-Stone_Algebras-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stone_Algebras-2018-08-16.tar.gz">
afp-Stone_Algebras-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stone_Algebras-2017-10-10.tar.gz">
afp-Stone_Algebras-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stone_Algebras-2016-12-17.tar.gz">
afp-Stone_Algebras-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Stone_Algebras-2016-09-06.tar.gz">
afp-Stone_Algebras-2016-09-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stone_Kleene_Relation_Algebras.html b/web/entries/Stone_Kleene_Relation_Algebras.html
--- a/web/entries/Stone_Kleene_Relation_Algebras.html
+++ b/web/entries/Stone_Kleene_Relation_Algebras.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stone-Kleene Relation Algebras - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tone-Kleene
<font class="first">R</font>elation
<font class="first">A</font>lgebras
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stone-Kleene Relation Algebras</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-07-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We develop Stone-Kleene relation algebras, which expand Stone relation
algebras with a Kleene star operation to describe reachability in
weighted graphs. Many properties of the Kleene star arise as a special
case of a more general theory of iteration based on Conway semirings
extended by simulation axioms. This includes several theorems
representing complex program transformations. We formally prove the
correctness of Conway's automata-based construction of the Kleene
star of a matrix. We prove numerous results useful for reasoning about
-weighted graphs.</div></td>
+weighted graphs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stone_Kleene_Relation_Algebras-AFP,
author = {Walter Guttmann},
title = {Stone-Kleene Relation Algebras},
journal = {Archive of Formal Proofs},
month = jul,
year = 2017,
note = {\url{http://isa-afp.org/entries/Stone_Kleene_Relation_Algebras.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Stone_Relation_Algebras.html">Stone_Relation_Algebras</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Aggregation_Algebras.html">Aggregation_Algebras</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stone_Kleene_Relation_Algebras/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stone_Kleene_Relation_Algebras/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stone_Kleene_Relation_Algebras/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stone_Kleene_Relation_Algebras-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stone_Kleene_Relation_Algebras-2019-06-11.tar.gz">
afp-Stone_Kleene_Relation_Algebras-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stone_Kleene_Relation_Algebras-2018-08-16.tar.gz">
afp-Stone_Kleene_Relation_Algebras-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stone_Kleene_Relation_Algebras-2017-10-10.tar.gz">
afp-Stone_Kleene_Relation_Algebras-2017-10-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stone_Relation_Algebras.html b/web/entries/Stone_Relation_Algebras.html
--- a/web/entries/Stone_Relation_Algebras.html
+++ b/web/entries/Stone_Relation_Algebras.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stone Relation Algebras - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tone
<font class="first">R</font>elation
<font class="first">A</font>lgebras
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stone Relation Algebras</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-02-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We develop Stone relation algebras, which generalise relation algebras
by replacing the underlying Boolean algebra structure with a Stone
algebra. We show that finite matrices over extended real numbers form
an instance. As a consequence, relation-algebraic concepts and methods
can be used for reasoning about weighted graphs. We also develop a
fixpoint calculus and apply it to compare different definitions of
-reflexive-transitive closures in semirings.</div></td>
+reflexive-transitive closures in semirings.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stone_Relation_Algebras-AFP,
author = {Walter Guttmann},
title = {Stone Relation Algebras},
journal = {Archive of Formal Proofs},
month = feb,
year = 2017,
note = {\url{http://isa-afp.org/entries/Stone_Relation_Algebras.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Stone_Algebras.html">Stone_Algebras</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Stone_Kleene_Relation_Algebras.html">Stone_Kleene_Relation_Algebras</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stone_Relation_Algebras/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stone_Relation_Algebras/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stone_Relation_Algebras/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stone_Relation_Algebras-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stone_Relation_Algebras-2019-06-11.tar.gz">
afp-Stone_Relation_Algebras-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stone_Relation_Algebras-2018-08-16.tar.gz">
afp-Stone_Relation_Algebras-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stone_Relation_Algebras-2017-10-10.tar.gz">
afp-Stone_Relation_Algebras-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stone_Relation_Algebras-2017-02-09.tar.gz">
afp-Stone_Relation_Algebras-2017-02-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Store_Buffer_Reduction.html b/web/entries/Store_Buffer_Reduction.html
--- a/web/entries/Store_Buffer_Reduction.html
+++ b/web/entries/Store_Buffer_Reduction.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Reduction Theorem for Store Buffers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">R</font>eduction
<font class="first">T</font>heorem
for
<font class="first">S</font>tore
<font class="first">B</font>uffers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Reduction Theorem for Store Buffers</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Ernie Cohen (ecohen /at/ amazon /dot/ com) and
Norbert Schirmer (norbert /dot/ schirmer /at/ web /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-01-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
When verifying a concurrent program, it is usual to assume that memory
is sequentially consistent. However, most modern multiprocessors
depend on store buffering for efficiency, and provide native
sequential consistency only at a substantial performance penalty. To
regain sequential consistency, a programmer has to follow an
appropriate programming discipline. However, na&iuml;ve disciplines,
such as protecting all shared accesses with locks, are not flexible
enough for building high-performance multiprocessor software. We
present a new discipline for concurrent programming under TSO (total
store order, with store buffer forwarding). It does not depend on
concurrency primitives, such as locks. Instead, threads use ghost
operations to acquire and release ownership of memory addresses. A
thread can write to an address only if no other thread owns it, and
can read from an address only if it owns it or it is shared and the
thread has flushed its store buffer since it last wrote to an address
it did not own. This discipline covers both coarse-grained concurrency
(where data is protected by locks) as well as fine-grained concurrency
(where atomic operations race to memory). We formalize this
discipline in Isabelle/HOL, and prove that if every execution of a
program in a system without store buffers follows the discipline, then
every execution of the program with store buffers is sequentially
consistent. Thus, we can show sequential consistency under TSO by
ordinary assertional reasoning about the program, without having to
-consider store buffers at all.</div></td>
+consider store buffers at all.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Store_Buffer_Reduction-AFP,
author = {Ernie Cohen and Norbert Schirmer},
title = {A Reduction Theorem for Store Buffers},
journal = {Archive of Formal Proofs},
month = jan,
year = 2019,
note = {\url{http://isa-afp.org/entries/Store_Buffer_Reduction.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Store_Buffer_Reduction/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Store_Buffer_Reduction/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Store_Buffer_Reduction/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Store_Buffer_Reduction-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Store_Buffer_Reduction-2019-06-11.tar.gz">
afp-Store_Buffer_Reduction-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Store_Buffer_Reduction-2019-01-11.tar.gz">
afp-Store_Buffer_Reduction-2019-01-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stream-Fusion.html b/web/entries/Stream-Fusion.html
--- a/web/entries/Stream-Fusion.html
+++ b/web/entries/Stream-Fusion.html
@@ -1,265 +1,265 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stream Fusion - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tream
<font class="first">F</font>usion
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stream Fusion</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Brian Huffman
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-04-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Stream Fusion is a system for removing intermediate list structures from Haskell programs; it consists of a Haskell library along with several compiler rewrite rules. (The library is available <a href="http://hackage.haskell.org/package/stream-fusion">online</a>.)<br><br>These theories contain a formalization of much of the Stream Fusion library in HOLCF. Lazy list and stream types are defined, along with coercions between the two types, as well as an equivalence relation for streams that generate the same list. List and stream versions of map, filter, foldr, enumFromTo, append, zipWith, and concatMap are defined, and the stream versions are shown to respect stream equivalence.</div></td>
+ <td class="abstract mathjax_process">Stream Fusion is a system for removing intermediate list structures from Haskell programs; it consists of a Haskell library along with several compiler rewrite rules. (The library is available <a href="http://hackage.haskell.org/package/stream-fusion">online</a>.)<br><br>These theories contain a formalization of much of the Stream Fusion library in HOLCF. Lazy list and stream types are defined, along with coercions between the two types, as well as an equivalence relation for streams that generate the same list. List and stream versions of map, filter, foldr, enumFromTo, append, zipWith, and concatMap are defined, and the stream versions are shown to respect stream equivalence.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stream-Fusion-AFP,
author = {Brian Huffman},
title = {Stream Fusion},
journal = {Archive of Formal Proofs},
month = apr,
year = 2009,
note = {\url{http://isa-afp.org/entries/Stream-Fusion.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stream-Fusion/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stream-Fusion/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stream-Fusion/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stream-Fusion-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stream-Fusion-2019-06-11.tar.gz">
afp-Stream-Fusion-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stream-Fusion-2018-08-16.tar.gz">
afp-Stream-Fusion-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stream-Fusion-2017-10-10.tar.gz">
afp-Stream-Fusion-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stream-Fusion-2016-12-17.tar.gz">
afp-Stream-Fusion-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Stream-Fusion-2016-02-22.tar.gz">
afp-Stream-Fusion-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Stream-Fusion-2015-05-27.tar.gz">
afp-Stream-Fusion-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Stream-Fusion-2014-08-28.tar.gz">
afp-Stream-Fusion-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Stream-Fusion-2013-12-11.tar.gz">
afp-Stream-Fusion-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Stream-Fusion-2013-11-17.tar.gz">
afp-Stream-Fusion-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Stream-Fusion-2013-02-16.tar.gz">
afp-Stream-Fusion-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Stream-Fusion-2012-05-24.tar.gz">
afp-Stream-Fusion-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Stream-Fusion-2011-10-11.tar.gz">
afp-Stream-Fusion-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Stream-Fusion-2011-02-11.tar.gz">
afp-Stream-Fusion-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Stream-Fusion-2010-07-01.tar.gz">
afp-Stream-Fusion-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Stream-Fusion-2009-12-12.tar.gz">
afp-Stream-Fusion-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Stream-Fusion-2009-05-13.tar.gz">
afp-Stream-Fusion-2009-05-13.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Stream-Fusion-2009-05-11.tar.gz">
afp-Stream-Fusion-2009-05-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stream_Fusion_Code.html b/web/entries/Stream_Fusion_Code.html
--- a/web/entries/Stream_Fusion_Code.html
+++ b/web/entries/Stream_Fusion_Code.html
@@ -1,233 +1,233 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stream Fusion in HOL with Code Generation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tream
<font class="first">F</font>usion
in
<font class="first">H</font>OL
with
<font class="first">C</font>ode
<font class="first">G</font>eneration
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stream Fusion in HOL with Code Generation</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
Alexandra Maximova (amaximov /at/ student /dot/ ethz /dot/ ch)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-10-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Stream Fusion is a system for removing intermediate list data structures from functional programs, in particular Haskell. This entry adapts stream fusion to Isabelle/HOL and its code generator. We define stream types for finite and possibly infinite lists and stream versions for most of the fusible list functions in the theories List and Coinductive_List, and prove them correct with respect to the conversion functions between lists and streams. The Stream Fusion transformation itself is implemented as a simproc in the preprocessor of the code generator. [Brian Huffman's <a href="http://isa-afp.org/entries/Stream-Fusion.html">AFP entry</a> formalises stream fusion in HOLCF for the domain of lazy lists to prove the GHC compiler rewrite rules correct. In contrast, this work enables Isabelle's code generator to perform stream fusion itself. To that end, it covers both finite and coinductive lists from the HOL library and the Coinductive entry. The fusible list functions require specification and proof principles different from Huffman's.]</div></td>
+ <td class="abstract mathjax_process">Stream Fusion is a system for removing intermediate list data structures from functional programs, in particular Haskell. This entry adapts stream fusion to Isabelle/HOL and its code generator. We define stream types for finite and possibly infinite lists and stream versions for most of the fusible list functions in the theories List and Coinductive_List, and prove them correct with respect to the conversion functions between lists and streams. The Stream Fusion transformation itself is implemented as a simproc in the preprocessor of the code generator. [Brian Huffman's <a href="http://isa-afp.org/entries/Stream-Fusion.html">AFP entry</a> formalises stream fusion in HOLCF for the domain of lazy lists to prove the GHC compiler rewrite rules correct. In contrast, this work enables Isabelle's code generator to perform stream fusion itself. To that end, it covers both finite and coinductive lists from the HOL library and the Coinductive entry. The fusible list functions require specification and proof principles different from Huffman's.]</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stream_Fusion_Code-AFP,
author = {Andreas Lochbihler and Alexandra Maximova},
title = {Stream Fusion in HOL with Code Generation},
journal = {Archive of Formal Proofs},
month = oct,
year = 2014,
note = {\url{http://isa-afp.org/entries/Stream_Fusion_Code.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stream_Fusion_Code/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stream_Fusion_Code/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stream_Fusion_Code/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stream_Fusion_Code-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stream_Fusion_Code-2019-06-11.tar.gz">
afp-Stream_Fusion_Code-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stream_Fusion_Code-2018-08-16.tar.gz">
afp-Stream_Fusion_Code-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stream_Fusion_Code-2017-10-10.tar.gz">
afp-Stream_Fusion_Code-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stream_Fusion_Code-2016-12-17.tar.gz">
afp-Stream_Fusion_Code-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Stream_Fusion_Code-2016-02-22.tar.gz">
afp-Stream_Fusion_Code-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Stream_Fusion_Code-2015-05-27.tar.gz">
afp-Stream_Fusion_Code-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Stream_Fusion_Code-2014-10-13.tar.gz">
afp-Stream_Fusion_Code-2014-10-13.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Stream_Fusion_Code-2014-10-10.tar.gz">
afp-Stream_Fusion_Code-2014-10-10.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Strong_Security.html b/web/entries/Strong_Security.html
--- a/web/entries/Strong_Security.html
+++ b/web/entries/Strong_Security.html
@@ -1,250 +1,250 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Formalization of Strong Security - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ormalization
of
<font class="first">S</font>trong
<font class="first">S</font>ecurity
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Formalization of Strong Security</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Sylvia Grewe (grewe /at/ st /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Alexander Lux (lux /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Heiko Mantel (mantel /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de) and
Jens Sauer (sauer /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Research in information-flow security aims at developing methods to
+ <td class="abstract mathjax_process">Research in information-flow security aims at developing methods to
identify undesired information leaks within programs from private
sources to public sinks. Noninterference captures this
intuition. Strong security from Sabelfeld and Sands
formalizes noninterference for concurrent systems.
<p>
We present an Isabelle/HOL formalization of strong security for
arbitrary security lattices (Sabelfeld and Sands use
a two-element security lattice in the original publication).
The formalization includes
compositionality proofs for strong security and a soundness proof
for a security type system that checks strong security for programs
in a simple while language with dynamic thread creation.
<p>
Our formalization of the security type system is abstract in the
language for expressions and in the semantic side conditions for
expressions. It can easily be instantiated with different syntactic
approximations for these side conditions. The soundness proof of
such an instantiation boils down to showing that these syntactic
-approximations imply the semantic side conditions.</div></td>
+approximations imply the semantic side conditions.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Strong_Security-AFP,
author = {Sylvia Grewe and Alexander Lux and Heiko Mantel and Jens Sauer},
title = {A Formalization of Strong Security},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/Strong_Security.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="WHATandWHERE_Security.html">WHATandWHERE_Security</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Strong_Security/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Strong_Security/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Strong_Security/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Strong_Security-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Strong_Security-2019-06-11.tar.gz">
afp-Strong_Security-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Strong_Security-2018-08-16.tar.gz">
afp-Strong_Security-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Strong_Security-2017-10-10.tar.gz">
afp-Strong_Security-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Strong_Security-2016-12-17.tar.gz">
afp-Strong_Security-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Strong_Security-2016-02-22.tar.gz">
afp-Strong_Security-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Strong_Security-2015-05-27.tar.gz">
afp-Strong_Security-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Strong_Security-2014-08-28.tar.gz">
afp-Strong_Security-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Strong_Security-2014-04-24.tar.gz">
afp-Strong_Security-2014-04-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Sturm_Sequences.html b/web/entries/Sturm_Sequences.html
--- a/web/entries/Sturm_Sequences.html
+++ b/web/entries/Sturm_Sequences.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Sturm's Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>turm's
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Sturm's Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-01-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Sturm's Theorem states that polynomial sequences with certain
+ <td class="abstract mathjax_process">Sturm's Theorem states that polynomial sequences with certain
properties, so-called Sturm sequences, can be used to count the number
of real roots of a real polynomial. This work contains a proof of
Sturm's Theorem and code for constructing Sturm sequences efficiently.
It also provides the “sturm” proof method, which can decide certain
statements about the roots of real polynomials, such as “the polynomial
P has exactly n roots in the interval I” or “P(x) > Q(x) for all x
-&#8712; &#8477;”.</div></td>
+&#8712; &#8477;”.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Sturm_Sequences-AFP,
author = {Manuel Eberl},
title = {Sturm's Theorem},
journal = {Archive of Formal Proofs},
month = jan,
year = 2014,
note = {\url{http://isa-afp.org/entries/Sturm_Sequences.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Algebraic_Numbers.html">Algebraic_Numbers</a>, <a href="Perron_Frobenius.html">Perron_Frobenius</a>, <a href="Special_Function_Bounds.html">Special_Function_Bounds</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sturm_Sequences/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Sturm_Sequences/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sturm_Sequences/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Sturm_Sequences-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Sturm_Sequences-2019-06-11.tar.gz">
afp-Sturm_Sequences-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Sturm_Sequences-2018-08-16.tar.gz">
afp-Sturm_Sequences-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Sturm_Sequences-2017-10-10.tar.gz">
afp-Sturm_Sequences-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Sturm_Sequences-2016-12-17.tar.gz">
afp-Sturm_Sequences-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Sturm_Sequences-2016-02-22.tar.gz">
afp-Sturm_Sequences-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Sturm_Sequences-2015-05-27.tar.gz">
afp-Sturm_Sequences-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Sturm_Sequences-2014-08-28.tar.gz">
afp-Sturm_Sequences-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Sturm_Sequences-2014-01-12.tar.gz">
afp-Sturm_Sequences-2014-01-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Sturm_Tarski.html b/web/entries/Sturm_Tarski.html
--- a/web/entries/Sturm_Tarski.html
+++ b/web/entries/Sturm_Tarski.html
@@ -1,224 +1,224 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Sturm-Tarski Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">S</font>turm-Tarski
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Sturm-Tarski Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-09-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We have formalized the Sturm-Tarski theorem (also referred as the Tarski theorem), which generalizes Sturm's theorem. Sturm's theorem is usually used as a way to count distinct real roots, while the Sturm-Tarksi theorem forms the basis for Tarski's classic quantifier elimination for real closed field.</div></td>
+ <td class="abstract mathjax_process">We have formalized the Sturm-Tarski theorem (also referred to as the Tarski theorem), which generalizes Sturm's theorem. Sturm's theorem is usually used as a way to count distinct real roots, while the Sturm-Tarski theorem forms the basis for Tarski's classic quantifier elimination for real closed fields.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Sturm_Tarski-AFP,
author = {Wenda Li},
title = {The Sturm-Tarski Theorem},
journal = {Archive of Formal Proofs},
month = sep,
year = 2014,
note = {\url{http://isa-afp.org/entries/Sturm_Tarski.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Budan_Fourier.html">Budan_Fourier</a>, <a href="Count_Complex_Roots.html">Count_Complex_Roots</a>, <a href="Winding_Number_Eval.html">Winding_Number_Eval</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sturm_Tarski/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Sturm_Tarski/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Sturm_Tarski/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Sturm_Tarski-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Sturm_Tarski-2019-06-11.tar.gz">
afp-Sturm_Tarski-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Sturm_Tarski-2018-08-16.tar.gz">
afp-Sturm_Tarski-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Sturm_Tarski-2017-10-10.tar.gz">
afp-Sturm_Tarski-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Sturm_Tarski-2016-12-17.tar.gz">
afp-Sturm_Tarski-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Sturm_Tarski-2016-02-22.tar.gz">
afp-Sturm_Tarski-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Sturm_Tarski-2015-05-27.tar.gz">
afp-Sturm_Tarski-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Sturm_Tarski-2014-12-05.tar.gz">
afp-Sturm_Tarski-2014-12-05.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Sturm_Tarski-2014-09-20.tar.gz">
afp-Sturm_Tarski-2014-09-20.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Stuttering_Equivalence.html b/web/entries/Stuttering_Equivalence.html
--- a/web/entries/Stuttering_Equivalence.html
+++ b/web/entries/Stuttering_Equivalence.html
@@ -1,253 +1,253 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Stuttering Equivalence - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>tuttering
<font class="first">E</font>quivalence
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Stuttering Equivalence</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.loria.fr/~merz">Stephan Merz</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-05-07</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process"><p>Two omega-sequences are stuttering equivalent if they differ only by finite repetitions of elements. Stuttering equivalence is a fundamental concept in the theory of concurrent and distributed systems. Notably, Lamport argues that refinement notions for such systems should be insensitive to finite stuttering. Peled and Wilke showed that all PLTL (propositional linear-time temporal logic) properties that are insensitive to stuttering equivalence can be expressed without the next-time operator. Stuttering equivalence is also important for certain verification techniques such as partial-order reduction for model checking.</p> <p>We formalize stuttering equivalence in Isabelle/HOL. Our development relies on the notion of stuttering sampling functions that may skip blocks of identical sequence elements. We also encode PLTL and prove the theorem due to Peled and Wilke.</p></div></td>
+ <td class="abstract mathjax_process"><p>Two omega-sequences are stuttering equivalent if they differ only by finite repetitions of elements. Stuttering equivalence is a fundamental concept in the theory of concurrent and distributed systems. Notably, Lamport argues that refinement notions for such systems should be insensitive to finite stuttering. Peled and Wilke showed that all PLTL (propositional linear-time temporal logic) properties that are insensitive to stuttering equivalence can be expressed without the next-time operator. Stuttering equivalence is also important for certain verification techniques such as partial-order reduction for model checking.</p> <p>We formalize stuttering equivalence in Isabelle/HOL. Our development relies on the notion of stuttering sampling functions that may skip blocks of identical sequence elements. We also encode PLTL and prove the theorem due to Peled and Wilke.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2013-01-31]: Added encoding of PLTL and proved Peled and Wilke's theorem. Adjusted abstract accordingly.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Stuttering_Equivalence-AFP,
author = {Stephan Merz},
title = {Stuttering Equivalence},
journal = {Archive of Formal Proofs},
month = may,
year = 2012,
note = {\url{http://isa-afp.org/entries/Stuttering_Equivalence.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="LTL.html">LTL</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Consensus_Refined.html">Consensus_Refined</a>, <a href="Heard_Of.html">Heard_Of</a>, <a href="LTL_to_GBA.html">LTL_to_GBA</a>, <a href="Partial_Order_Reduction.html">Partial_Order_Reduction</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stuttering_Equivalence/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Stuttering_Equivalence/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Stuttering_Equivalence/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Stuttering_Equivalence-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Stuttering_Equivalence-2019-06-11.tar.gz">
afp-Stuttering_Equivalence-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Stuttering_Equivalence-2018-08-16.tar.gz">
afp-Stuttering_Equivalence-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Stuttering_Equivalence-2017-10-10.tar.gz">
afp-Stuttering_Equivalence-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Stuttering_Equivalence-2016-12-17.tar.gz">
afp-Stuttering_Equivalence-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Stuttering_Equivalence-2016-02-22.tar.gz">
afp-Stuttering_Equivalence-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Stuttering_Equivalence-2015-05-27.tar.gz">
afp-Stuttering_Equivalence-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Stuttering_Equivalence-2014-08-28.tar.gz">
afp-Stuttering_Equivalence-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Stuttering_Equivalence-2013-12-11.tar.gz">
afp-Stuttering_Equivalence-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Stuttering_Equivalence-2013-11-17.tar.gz">
afp-Stuttering_Equivalence-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Stuttering_Equivalence-2013-02-16.tar.gz">
afp-Stuttering_Equivalence-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Stuttering_Equivalence-2013-02-02.tar.gz">
afp-Stuttering_Equivalence-2013-02-02.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Stuttering_Equivalence-2012-05-24.tar.gz">
afp-Stuttering_Equivalence-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Stuttering_Equivalence-2012-05-08.tar.gz">
afp-Stuttering_Equivalence-2012-05-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Subresultants.html b/web/entries/Subresultants.html
--- a/web/entries/Subresultants.html
+++ b/web/entries/Subresultants.html
@@ -1,206 +1,206 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Subresultants - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>ubresultants
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Subresultants</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a> and
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-04-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the theory of subresultants and the subresultant
polynomial remainder sequence as described by Brown and Traub. As a
result, we obtain efficient certified algorithms for computing the
-resultant and the greatest common divisor of polynomials.</div></td>
+resultant and the greatest common divisor of polynomials.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Subresultants-AFP,
author = {Sebastiaan Joosten and René Thiemann and Akihisa Yamada},
title = {Subresultants},
journal = {Archive of Formal Proofs},
month = apr,
year = 2017,
note = {\url{http://isa-afp.org/entries/Subresultants.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a>, <a href="Polynomial_Factorization.html">Polynomial_Factorization</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Subresultants/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Subresultants/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Subresultants/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Subresultants-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Subresultants-2019-06-11.tar.gz">
afp-Subresultants-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Subresultants-2018-08-16.tar.gz">
afp-Subresultants-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Subresultants-2017-10-10.tar.gz">
afp-Subresultants-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Subresultants-2017-04-07.tar.gz">
afp-Subresultants-2017-04-07.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Subset_Boolean_Algebras.html b/web/entries/Subset_Boolean_Algebras.html
--- a/web/entries/Subset_Boolean_Algebras.html
+++ b/web/entries/Subset_Boolean_Algebras.html
@@ -1,205 +1,205 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Hierarchy of Algebras for Boolean Subsets - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">H</font>ierarchy
of
<font class="first">A</font>lgebras
for
<font class="first">B</font>oolean
<font class="first">S</font>ubsets
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Hierarchy of Algebras for Boolean Subsets</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a> and
<a href="https://www.informatik.uni-augsburg.de/en/chairs/dbis/pmi/staff/moeller/">Bernhard Möller</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-01-31</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a collection of axiom systems for the construction of
Boolean subalgebras of larger overall algebras. The subalgebras are
defined as the range of a complement-like operation on a semilattice.
This technique has been used, for example, with the antidomain
operation, dynamic negation and Stone algebras. We present a common
ground for these constructions based on a new equational
-axiomatisation of Boolean algebras.</div></td>
+axiomatisation of Boolean algebras.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Subset_Boolean_Algebras-AFP,
author = {Walter Guttmann and Bernhard Möller},
title = {A Hierarchy of Algebras for Boolean Subsets},
journal = {Archive of Formal Proofs},
month = jan,
year = 2020,
note = {\url{http://isa-afp.org/entries/Subset_Boolean_Algebras.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Stone_Algebras.html">Stone_Algebras</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Subset_Boolean_Algebras/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Subset_Boolean_Algebras/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Subset_Boolean_Algebras/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Subset_Boolean_Algebras-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Subset_Boolean_Algebras-2020-01-31.tar.gz">
afp-Subset_Boolean_Algebras-2020-01-31.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SumSquares.html b/web/entries/SumSquares.html
--- a/web/entries/SumSquares.html
+++ b/web/entries/SumSquares.html
@@ -1,278 +1,278 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Sums of Two and Four Squares - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>ums
of
<font class="first">T</font>wo
and
<font class="first">F</font>our
<font class="first">S</font>quares
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Sums of Two and Four Squares</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Roelof Oosterhuis
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2007-08-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This document presents the mechanised proofs of the following results:<ul><li>any prime number of the form 4m+1 can be written as the sum of two squares;</li><li>any natural number can be written as the sum of four squares</li></ul></div></td>
+ <td class="abstract mathjax_process">This document presents the mechanised proofs of the following results:<ul><li>any prime number of the form 4m+1 can be written as the sum of two squares;</li><li>any natural number can be written as the sum of four squares</li></ul></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SumSquares-AFP,
author = {Roelof Oosterhuis},
title = {Sums of Two and Four Squares},
journal = {Archive of Formal Proofs},
month = aug,
year = 2007,
note = {\url{http://isa-afp.org/entries/SumSquares.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SumSquares/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SumSquares/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SumSquares/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SumSquares-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SumSquares-2019-06-11.tar.gz">
afp-SumSquares-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SumSquares-2018-08-16.tar.gz">
afp-SumSquares-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SumSquares-2017-10-10.tar.gz">
afp-SumSquares-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SumSquares-2016-12-17.tar.gz">
afp-SumSquares-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SumSquares-2016-02-22.tar.gz">
afp-SumSquares-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-SumSquares-2015-05-27.tar.gz">
afp-SumSquares-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-SumSquares-2014-08-28.tar.gz">
afp-SumSquares-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-SumSquares-2013-12-11.tar.gz">
afp-SumSquares-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-SumSquares-2013-11-17.tar.gz">
afp-SumSquares-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-SumSquares-2013-02-16.tar.gz">
afp-SumSquares-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-SumSquares-2012-05-24.tar.gz">
afp-SumSquares-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-SumSquares-2011-10-11.tar.gz">
afp-SumSquares-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-SumSquares-2011-02-11.tar.gz">
afp-SumSquares-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-SumSquares-2010-07-01.tar.gz">
afp-SumSquares-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-SumSquares-2009-12-12.tar.gz">
afp-SumSquares-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-SumSquares-2009-04-29.tar.gz">
afp-SumSquares-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-SumSquares-2008-06-10.tar.gz">
afp-SumSquares-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-SumSquares-2007-11-27.tar.gz">
afp-SumSquares-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/SuperCalc.html b/web/entries/SuperCalc.html
--- a/web/entries/SuperCalc.html
+++ b/web/entries/SuperCalc.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Variant of the Superposition Calculus - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">V</font>ariant
of
the
<font class="first">S</font>uperposition
<font class="first">C</font>alculus
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Variant of the Superposition Calculus</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://membres-lig.imag.fr/peltier/">Nicolas Peltier</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-09-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We provide a formalization of a variant of the superposition
calculus, together with formal proofs of soundness and refutational
completeness (w.r.t. the usual redundancy criteria based on clause
ordering). This version of the calculus uses all the standard
restrictions of the superposition rules, together with the following
refinement, inspired by the basic superposition calculus: each clause
is associated with a set of terms which are assumed to be in normal
form -- thus any application of the replacement rule on these terms is
blocked. The set is initially empty and terms may be added or removed
at each inference step. The set of terms that are assumed to be in
normal form includes any term introduced by previous unifiers as well
as any term occurring in the parent clauses at a position that is
smaller (according to some given ordering on positions) than a
previously replaced term. The standard superposition calculus
corresponds to the case where the set of irreducible terms is always
-empty.</div></td>
+empty.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{SuperCalc-AFP,
author = {Nicolas Peltier},
title = {A Variant of the Superposition Calculus},
journal = {Archive of Formal Proofs},
month = sep,
year = 2016,
note = {\url{http://isa-afp.org/entries/SuperCalc.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SuperCalc/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/SuperCalc/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/SuperCalc/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-SuperCalc-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-SuperCalc-2019-06-11.tar.gz">
afp-SuperCalc-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-SuperCalc-2018-08-16.tar.gz">
afp-SuperCalc-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-SuperCalc-2017-10-10.tar.gz">
afp-SuperCalc-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-SuperCalc-2016-12-17.tar.gz">
afp-SuperCalc-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-SuperCalc-2016-09-06.tar.gz">
afp-SuperCalc-2016-09-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Surprise_Paradox.html b/web/entries/Surprise_Paradox.html
--- a/web/entries/Surprise_Paradox.html
+++ b/web/entries/Surprise_Paradox.html
@@ -1,212 +1,212 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Surprise Paradox - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>urprise
<font class="first">P</font>aradox
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Surprise Paradox</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Joachim Breitner (joachim /at/ cis /dot/ upenn /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-07-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In 1964, Fitch showed that the paradox of the surprise hanging can be
resolved by showing that the judge’s verdict is inconsistent. His
formalization builds on Gödel’s coding of provability. In this
theory, we reproduce his proof in Isabelle, building on Paulson’s
-formalisation of Gödel’s incompleteness theorems.</div></td>
+formalisation of Gödel’s incompleteness theorems.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Surprise_Paradox-AFP,
author = {Joachim Breitner},
title = {Surprise Paradox},
journal = {Archive of Formal Proofs},
month = jul,
year = 2016,
note = {\url{http://isa-afp.org/entries/Surprise_Paradox.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Incompleteness.html">Incompleteness</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Surprise_Paradox/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Surprise_Paradox/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Surprise_Paradox/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Surprise_Paradox-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Surprise_Paradox-2019-06-11.tar.gz">
afp-Surprise_Paradox-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Surprise_Paradox-2018-08-16.tar.gz">
afp-Surprise_Paradox-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Surprise_Paradox-2017-10-10.tar.gz">
afp-Surprise_Paradox-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Surprise_Paradox-2016-12-17.tar.gz">
afp-Surprise_Paradox-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Surprise_Paradox-2016-07-17.tar.gz">
afp-Surprise_Paradox-2016-07-17.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Symmetric_Polynomials.html b/web/entries/Symmetric_Polynomials.html
--- a/web/entries/Symmetric_Polynomials.html
+++ b/web/entries/Symmetric_Polynomials.html
@@ -1,219 +1,219 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Symmetric Polynomials - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>ymmetric
<font class="first">P</font>olynomials
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Symmetric Polynomials</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-09-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>A symmetric polynomial is a polynomial in variables
<em>X</em><sub>1</sub>,&hellip;,<em>X</em><sub>n</sub>
that does not discriminate between its variables, i.&thinsp;e. it
is invariant under any permutation of them. These polynomials are
important in the study of the relationship between the coefficients of
a univariate polynomial and its roots in its algebraic
closure.</p> <p>This article provides a definition of
symmetric polynomials and the elementary symmetric polynomials
e<sub>1</sub>,&hellip;,e<sub>n</sub> and
proofs of their basic properties, including three notable
ones:</p> <ul> <li> Vieta's formula, which
gives an explicit expression for the <em>k</em>-th
coefficient of a univariate monic polynomial in terms of its roots
<em>x</em><sub>1</sub>,&hellip;,<em>x</em><sub>n</sub>,
namely
<em>c</em><sub><em>k</em></sub> = (-1)<sup><em>n</em>-<em>k</em></sup>&thinsp;e<sub><em>n</em>-<em>k</em></sub>(<em>x</em><sub>1</sub>,&hellip;,<em>x</em><sub>n</sub>).</li>
<li>Second, the Fundamental Theorem of Symmetric Polynomials,
which states that any symmetric polynomial is itself a uniquely
determined polynomial combination of the elementary symmetric
polynomials.</li> <li>Third, as a corollary of the
previous two, that given a polynomial over some ring
<em>R</em>, any symmetric polynomial combination of its
roots is also in <em>R</em> even when the roots are not.
</ul> <p> Both the symmetry property itself and the
-witness for the Fundamental Theorem are executable. </p></div></td>
+witness for the Fundamental Theorem are executable. </p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Symmetric_Polynomials-AFP,
author = {Manuel Eberl},
title = {Symmetric Polynomials},
journal = {Archive of Formal Proofs},
month = sep,
year = 2018,
note = {\url{http://isa-afp.org/entries/Symmetric_Polynomials.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Polynomials.html">Polynomials</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Pi_Transcendental.html">Pi_Transcendental</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Symmetric_Polynomials/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Symmetric_Polynomials/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Symmetric_Polynomials/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Symmetric_Polynomials-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Symmetric_Polynomials-2019-06-11.tar.gz">
afp-Symmetric_Polynomials-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Symmetric_Polynomials-2018-09-26.tar.gz">
afp-Symmetric_Polynomials-2018-09-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Szpilrajn.html b/web/entries/Szpilrajn.html
--- a/web/entries/Szpilrajn.html
+++ b/web/entries/Szpilrajn.html
@@ -1,190 +1,190 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Szpilrajn Extension Theorem - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>zpilrajn
<font class="first">E</font>xtension
<font class="first">T</font>heorem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Szpilrajn Extension Theorem</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Zeller (p_zeller /at/ cs /dot/ uni-kl /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-07-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the Szpilrajn extension theorem, also known as
order-extension principle: Every strict partial order can be extended
-to a strict linear order.</div></td>
+to a strict linear order.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Szpilrajn-AFP,
author = {Peter Zeller},
title = {Szpilrajn Extension Theorem},
journal = {Archive of Formal Proofs},
month = jul,
year = 2019,
note = {\url{http://isa-afp.org/entries/Szpilrajn.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Szpilrajn/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Szpilrajn/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Szpilrajn/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Szpilrajn-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Szpilrajn-2019-07-28.tar.gz">
afp-Szpilrajn-2019-07-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/TESL_Language.html b/web/entries/TESL_Language.html
--- a/web/entries/TESL_Language.html
+++ b/web/entries/TESL_Language.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Formal Development of a Polychronous Polytimed Coordination Language - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ormal
<font class="first">D</font>evelopment
of
a
<font class="first">P</font>olychronous
<font class="first">P</font>olytimed
<font class="first">C</font>oordination
<font class="first">L</font>anguage
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Formal Development of a Polychronous Polytimed Coordination Language</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Hai Nguyen Van (hai /dot/ nguyenvan /dot/ phie /at/ gmail /dot/ com),
Frédéric Boulanger (frederic /dot/ boulanger /at/ centralesupelec /dot/ fr) and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-07-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The design of complex systems involves different formalisms for
modeling their different parts or aspects. The global model of a
system may therefore consist of a coordination of concurrent
sub-models that use different paradigms. We develop here a theory for
a language used to specify the timed coordination of such
heterogeneous subsystems by addressing the following issues:
<ul><li>the
behavior of the sub-systems is observed only at a series of discrete
instants,</li><li>events may occur in different sub-systems at unrelated
times, leading to polychronous systems, which do not necessarily have
a common base clock,</li><li>coordination between subsystems involves
causality, so the occurrence of an event may enforce the occurrence of
other events, possibly after a certain duration has elapsed or an
event has occurred a given number of times,</li><li>the domain of time
(discrete, rational, continuous...) may be different in the
subsystems, leading to polytimed systems,</li><li>the time frames of
different sub-systems may be related (for instance, time in a GPS
satellite and in a GPS receiver on Earth are related although they are
not the same).</li></ul>
Firstly, a denotational semantics of the language is
defined. Then, in order to be able to incrementally check the behavior
of systems, an operational semantics is given, with proofs of
progress, soundness and completeness with regard to the denotational
semantics. These proofs are made according to a setup that can scale
up when new operators are added to the language. In order for
specifications to be composed in a clean way, the language should be
invariant by stuttering (i.e., adding observation instants at which
-nothing happens). The proof of this invariance is also given.</div></td>
+nothing happens). The proof of this invariance is also given.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{TESL_Language-AFP,
author = {Hai Nguyen Van and Frédéric Boulanger and Burkhart Wolff},
title = {A Formal Development of a Polychronous Polytimed Coordination Language},
journal = {Archive of Formal Proofs},
month = jul,
year = 2019,
note = {\url{http://isa-afp.org/entries/TESL_Language.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/TESL_Language/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/TESL_Language/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/TESL_Language/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-TESL_Language-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-TESL_Language-2019-07-31.tar.gz">
afp-TESL_Language-2019-07-31.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/TLA.html b/web/entries/TLA.html
--- a/web/entries/TLA.html
+++ b/web/entries/TLA.html
@@ -1,274 +1,274 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Definitional Encoding of TLA* in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">D</font>efinitional
<font class="first">E</font>ncoding
of
<font class="first">T</font>LA*
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Definitional Encoding of TLA* in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://homepages.inf.ed.ac.uk/ggrov">Gudmund Grov</a> and
<a href="http://www.loria.fr/~merz">Stephan Merz</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-11-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We mechanise the logic TLA*
+ <td class="abstract mathjax_process">We mechanise the logic TLA*
<a href="http://www.springerlink.com/content/ax3qk557qkdyt7n6/">[Merz 1999]</a>,
an extension of Lamport's Temporal Logic of Actions (TLA)
<a href="http://dl.acm.org/citation.cfm?doid=177492.177726">[Lamport 1994]</a>
for specifying and reasoning
about concurrent and reactive systems. Aiming at a framework for mechanising the verification of TLA (or TLA*) specifications, this contribution reuses
some elements from a previous axiomatic encoding of TLA in Isabelle/HOL
by the second author [Merz 1998], which has been part of the Isabelle
distribution. In contrast to that previous work, we give here a shallow,
definitional embedding, with the following highlights:
<ul>
<li>a theory of infinite sequences, including a formalisation of the concepts of stuttering invariance central to TLA and TLA*;
<li>a definition of the semantics of TLA*, which extends TLA by a mutually-recursive definition of formulas and pre-formulas, generalising TLA action formulas;
<li>a substantial set of derived proof rules, including the TLA* axioms and Lamport's proof rules for system verification;
<li>a set of examples illustrating the usage of Isabelle/TLA* for reasoning about systems.
</ul>
Note that this work is unrelated to the ongoing development of a proof system
for the specification language TLA+, which includes an encoding of TLA+ as a
-new Isabelle object logic <a href="http://www.springerlink.com/content/354026160p14j175/">[Chaudhuri et al 2010]</a>.</div></td>
+new Isabelle object logic <a href="http://www.springerlink.com/content/354026160p14j175/">[Chaudhuri et al 2010]</a>.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{TLA-AFP,
author = {Gudmund Grov and Stephan Merz},
title = {A Definitional Encoding of TLA* in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = nov,
year = 2011,
note = {\url{http://isa-afp.org/entries/TLA.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/TLA/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/TLA/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/TLA/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-TLA-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-TLA-2019-06-11.tar.gz">
afp-TLA-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-TLA-2018-08-16.tar.gz">
afp-TLA-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-TLA-2017-10-10.tar.gz">
afp-TLA-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-TLA-2016-12-17.tar.gz">
afp-TLA-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-TLA-2016-02-22.tar.gz">
afp-TLA-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-TLA-2015-05-27.tar.gz">
afp-TLA-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-TLA-2014-08-28.tar.gz">
afp-TLA-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-TLA-2013-12-11.tar.gz">
afp-TLA-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-TLA-2013-11-17.tar.gz">
afp-TLA-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-TLA-2013-03-02.tar.gz">
afp-TLA-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-TLA-2013-02-16.tar.gz">
afp-TLA-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-TLA-2012-05-24.tar.gz">
afp-TLA-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-TLA-2011-11-27.tar.gz">
afp-TLA-2011-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Tail_Recursive_Functions.html b/web/entries/Tail_Recursive_Functions.html
--- a/web/entries/Tail_Recursive_Functions.html
+++ b/web/entries/Tail_Recursive_Functions.html
@@ -1,257 +1,257 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A General Method for the Proof of Theorems on Tail-recursive Functions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">G</font>eneral
<font class="first">M</font>ethod
for
the
<font class="first">P</font>roof
of
<font class="first">T</font>heorems
on
<font class="first">T</font>ail-recursive
<font class="first">F</font>unctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A General Method for the Proof of Theorems on Tail-recursive Functions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Pasquale Noce (pasquale /dot/ noce /dot/ lavoro /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2013-12-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
Tail-recursive function definitions are sometimes more straightforward than
alternatives, but proving theorems on them may be roundabout because of the
peculiar form of the resulting recursion induction rules.
</p><p>
This paper describes a proof method that provides a general solution to
this problem by means of suitable invariants over inductive sets, and
illustrates the application of this method by examining two case studies.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Tail_Recursive_Functions-AFP,
author = {Pasquale Noce},
title = {A General Method for the Proof of Theorems on Tail-recursive Functions},
journal = {Archive of Formal Proofs},
month = dec,
year = 2013,
note = {\url{http://isa-afp.org/entries/Tail_Recursive_Functions.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tail_Recursive_Functions/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Tail_Recursive_Functions/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tail_Recursive_Functions/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Tail_Recursive_Functions-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Tail_Recursive_Functions-2019-06-11.tar.gz">
afp-Tail_Recursive_Functions-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Tail_Recursive_Functions-2018-08-16.tar.gz">
afp-Tail_Recursive_Functions-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Tail_Recursive_Functions-2017-10-10.tar.gz">
afp-Tail_Recursive_Functions-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Tail_Recursive_Functions-2016-12-17.tar.gz">
afp-Tail_Recursive_Functions-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Tail_Recursive_Functions-2016-02-22.tar.gz">
afp-Tail_Recursive_Functions-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Tail_Recursive_Functions-2015-06-13.tar.gz">
afp-Tail_Recursive_Functions-2015-06-13.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Tail_Recursive_Functions-2015-05-27.tar.gz">
afp-Tail_Recursive_Functions-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Tail_Recursive_Functions-2014-08-28.tar.gz">
afp-Tail_Recursive_Functions-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Tail_Recursive_Functions-2013-12-11.tar.gz">
afp-Tail_Recursive_Functions-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Tail_Recursive_Functions-2013-12-02.tar.gz">
afp-Tail_Recursive_Functions-2013-12-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Tarskis_Geometry.html b/web/entries/Tarskis_Geometry.html
--- a/web/entries/Tarskis_Geometry.html
+++ b/web/entries/Tarskis_Geometry.html
@@ -1,260 +1,260 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The independence of Tarski's Euclidean axiom - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
independence
of
<font class="first">T</font>arski's
<font class="first">E</font>uclidean
axiom
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The independence of Tarski's Euclidean axiom</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
T. J. M. Makarios (tjm1983 /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-10-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Tarski's axioms of plane geometry are formalized and, using the standard
real Cartesian model, shown to be consistent. A substantial theory of
the projective plane is developed. Building on this theory, the
Klein-Beltrami model of the hyperbolic plane is defined and shown to
satisfy all of Tarski's axioms except his Euclidean axiom; thus Tarski's
Euclidean axiom is shown to be independent of his other axioms of plane
geometry.
<p>
An earlier version of this work was the subject of the author's
<a href="http://researcharchive.vuw.ac.nz/handle/10063/2315">MSc thesis</a>,
which contains natural-language explanations of some of the
-more interesting proofs.</div></td>
+more interesting proofs.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Tarskis_Geometry-AFP,
author = {T. J. M. Makarios},
title = {The independence of Tarski's Euclidean axiom},
journal = {Archive of Formal Proofs},
month = oct,
year = 2012,
note = {\url{http://isa-afp.org/entries/Tarskis_Geometry.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tarskis_Geometry/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Tarskis_Geometry/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tarskis_Geometry/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Tarskis_Geometry-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Tarskis_Geometry-2019-06-11.tar.gz">
afp-Tarskis_Geometry-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Tarskis_Geometry-2018-08-16.tar.gz">
afp-Tarskis_Geometry-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Tarskis_Geometry-2017-10-10.tar.gz">
afp-Tarskis_Geometry-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Tarskis_Geometry-2016-12-17.tar.gz">
afp-Tarskis_Geometry-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Tarskis_Geometry-2016-02-22.tar.gz">
afp-Tarskis_Geometry-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Tarskis_Geometry-2015-05-27.tar.gz">
afp-Tarskis_Geometry-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Tarskis_Geometry-2014-08-28.tar.gz">
afp-Tarskis_Geometry-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Tarskis_Geometry-2013-12-11.tar.gz">
afp-Tarskis_Geometry-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Tarskis_Geometry-2013-11-17.tar.gz">
afp-Tarskis_Geometry-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Tarskis_Geometry-2013-02-16.tar.gz">
afp-Tarskis_Geometry-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Tarskis_Geometry-2012-11-09.tar.gz">
afp-Tarskis_Geometry-2012-11-09.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Tarskis_Geometry-2012-11-08.tar.gz">
afp-Tarskis_Geometry-2012-11-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Taylor_Models.html b/web/entries/Taylor_Models.html
--- a/web/entries/Taylor_Models.html
+++ b/web/entries/Taylor_Models.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Taylor Models - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>aylor
<font class="first">M</font>odels
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Taylor Models</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christoph Traut and
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-01-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formally verified implementation of multivariate Taylor
models. Taylor models are a form of rigorous polynomial approximation,
consisting of an approximation polynomial based on Taylor expansions,
combined with a rigorous bound on the approximation error. Taylor
models were introduced as a tool to mitigate the dependency problem of
interval arithmetic. Our implementation automatically computes Taylor
models for the class of elementary functions, expressed by composition
of arithmetic operations and basic functions like exp, sin, or square
-root.</div></td>
+root.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Taylor_Models-AFP,
author = {Christoph Traut and Fabian Immler},
title = {Taylor Models},
journal = {Archive of Formal Proofs},
month = jan,
year = 2018,
note = {\url{http://isa-afp.org/entries/Taylor_Models.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Affine_Arithmetic.html">Affine_Arithmetic</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Taylor_Models/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Taylor_Models/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Taylor_Models/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Taylor_Models-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Taylor_Models-2019-06-11.tar.gz">
afp-Taylor_Models-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Taylor_Models-2018-08-16.tar.gz">
afp-Taylor_Models-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Taylor_Models-2018-01-08.tar.gz">
afp-Taylor_Models-2018-01-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Timed_Automata.html b/web/entries/Timed_Automata.html
--- a/web/entries/Timed_Automata.html
+++ b/web/entries/Timed_Automata.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Timed Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>imed
<font class="first">A</font>utomata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Timed Automata</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-03-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Timed automata are a widely used formalism for modeling real-time
systems, which is employed in a class of successful model checkers
such as UPPAAL [LPY97], HyTech [HHWt97] or Kronos [Yov97]. This work
formalizes the theory for the subclass of diagonal-free timed
automata, which is sufficient to model many interesting problems. We
first define the basic concepts and semantics of diagonal-free timed
automata. Based on this, we prove two types of decidability results
for the language emptiness problem. The first is the classic result
of Alur and Dill [AD90, AD94], which uses a finite partitioning of
the state space into so-called `regions`. Our second result focuses
on an approach based on `Difference Bound Matrices (DBMs)`, which is
practically used by model checkers. We prove the correctness of the
basic forward analysis operations on DBMs. One of these operations is
the Floyd-Warshall algorithm for the all-pairs shortest paths problem.
To obtain a finite search space, a widening operation has to be used
for this kind of analysis. We use Patricia Bouyer's [Bou04] approach
to prove that this widening operation is correct in the sense that
DBM-based forward analysis in combination with the widening operation
also decides language emptiness. The interesting property of this
proof is that the first decidability result is reused to obtain the
-second one.</div></td>
+second one.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Timed_Automata-AFP,
author = {Simon Wimmer},
title = {Timed Automata},
journal = {Archive of Formal Proofs},
month = mar,
year = 2016,
note = {\url{http://isa-afp.org/entries/Timed_Automata.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Probabilistic_Timed_Automata.html">Probabilistic_Timed_Automata</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Timed_Automata/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Timed_Automata/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Timed_Automata/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Timed_Automata-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Timed_Automata-2019-06-11.tar.gz">
afp-Timed_Automata-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Timed_Automata-2018-08-16.tar.gz">
afp-Timed_Automata-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Timed_Automata-2017-10-10.tar.gz">
afp-Timed_Automata-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Timed_Automata-2016-12-17.tar.gz">
afp-Timed_Automata-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Timed_Automata-2016-03-11.tar.gz">
afp-Timed_Automata-2016-03-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
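
The Timed Automata abstract above notes that one of the forward-analysis operations on DBMs is the Floyd-Warshall all-pairs shortest-paths algorithm. A minimal Python sketch of that operation (illustrative only; the function name and the plain weight-matrix encoding are assumptions, not taken from the entry's Isabelle sources):

    import math

    def floyd_warshall(m):
        """Canonicalise a square weight matrix in place.

        m[i][j] is the weight of the edge i -> j (math.inf if absent).
        Afterwards m[i][j] holds the shortest-path weight from i to j,
        which is how a DBM is brought into canonical form.
        """
        n = len(m)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if m[i][k] + m[k][j] < m[i][j]:
                        m[i][j] = m[i][k] + m[k][j]
        return m

    # Example: a 3x3 constraint matrix
    dbm = [[0, 3, math.inf],
           [math.inf, 0, 1],
           [2, math.inf, 0]]
    floyd_warshall(dbm)   # dbm[0][2] becomes 4 via 0 -> 1 -> 2
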
diff --git a/web/entries/Topology.html b/web/entries/Topology.html
--- a/web/entries/Topology.html
+++ b/web/entries/Topology.html
@@ -1,290 +1,290 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Topology - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>opology
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Topology</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Stefan Friedrich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-04-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This entry contains two theories. The first, <tt>Topology</tt>, develops the basic notions of general topology. The second, which can be viewed as a demonstration of the first, is called <tt>LList_Topology</tt>. It develops the topology of lazy lists.</div></td>
+ <td class="abstract mathjax_process">This entry contains two theories. The first, <tt>Topology</tt>, develops the basic notions of general topology. The second, which can be viewed as a demonstration of the first, is called <tt>LList_Topology</tt>. It develops the topology of lazy lists.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Topology-AFP,
author = {Stefan Friedrich},
title = {Topology},
journal = {Archive of Formal Proofs},
month = apr,
year = 2004,
note = {\url{http://isa-afp.org/entries/Topology.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Coinductive.html">Coinductive</a>, <a href="Lazy-Lists-II.html">Lazy-Lists-II</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Topology/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Topology/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Topology/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Topology-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Topology-2019-06-11.tar.gz">
afp-Topology-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Topology-2018-08-16.tar.gz">
afp-Topology-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Topology-2017-10-10.tar.gz">
afp-Topology-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Topology-2016-12-17.tar.gz">
afp-Topology-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Topology-2016-02-22.tar.gz">
afp-Topology-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Topology-2015-05-27.tar.gz">
afp-Topology-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Topology-2014-08-28.tar.gz">
afp-Topology-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Topology-2013-12-11.tar.gz">
afp-Topology-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Topology-2013-11-17.tar.gz">
afp-Topology-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Topology-2013-03-02.tar.gz">
afp-Topology-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Topology-2013-02-16.tar.gz">
afp-Topology-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Topology-2012-05-24.tar.gz">
afp-Topology-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Topology-2011-10-11.tar.gz">
afp-Topology-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Topology-2011-02-11.tar.gz">
afp-Topology-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Topology-2010-07-01.tar.gz">
afp-Topology-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Topology-2009-12-12.tar.gz">
afp-Topology-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Topology-2009-04-29.tar.gz">
afp-Topology-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Topology-2008-06-10.tar.gz">
afp-Topology-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Topology-2007-11-27.tar.gz">
afp-Topology-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Topology-2005-10-14.tar.gz">
afp-Topology-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Topology-2004-05-21.tar.gz">
afp-Topology-2004-05-21.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Topology-2004-04-27.tar.gz">
afp-Topology-2004-04-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/TortoiseHare.html b/web/entries/TortoiseHare.html
--- a/web/entries/TortoiseHare.html
+++ b/web/entries/TortoiseHare.html
@@ -1,211 +1,211 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Tortoise and Hare Algorithm - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">T</font>ortoise
and
<font class="first">H</font>are
<font class="first">A</font>lgorithm
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Tortoise and Hare Algorithm</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-11-18</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We formalize the Tortoise and Hare cycle-finding algorithm ascribed to Floyd by Knuth, and an improved version due to Brent.</div></td>
+ <td class="abstract mathjax_process">We formalize the Tortoise and Hare cycle-finding algorithm ascribed to Floyd by Knuth, and an improved version due to Brent.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{TortoiseHare-AFP,
author = {Peter Gammie},
title = {The Tortoise and Hare Algorithm},
journal = {Archive of Formal Proofs},
month = nov,
year = 2015,
note = {\url{http://isa-afp.org/entries/TortoiseHare.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/TortoiseHare/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/TortoiseHare/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/TortoiseHare/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-TortoiseHare-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-TortoiseHare-2019-06-11.tar.gz">
afp-TortoiseHare-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-TortoiseHare-2018-08-16.tar.gz">
afp-TortoiseHare-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-TortoiseHare-2017-10-10.tar.gz">
afp-TortoiseHare-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-TortoiseHare-2016-12-17.tar.gz">
afp-TortoiseHare-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-TortoiseHare-2016-02-22.tar.gz">
afp-TortoiseHare-2016-02-22.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
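
The TortoiseHare abstract above concerns Floyd's cycle-finding algorithm and Brent's variant. A compact Python sketch of Floyd's version (illustrative only, not the entry's Isabelle development; it assumes the iterated sequence is eventually periodic, e.g. a function on a finite domain):

    def find_cycle(f, x0):
        """Floyd's tortoise-and-hare cycle detection.

        Returns (mu, lam): mu is the index at which the cycle of the
        sequence x0, f(x0), f(f(x0)), ... starts, lam is its length.
        """
        tortoise, hare = f(x0), f(f(x0))
        while tortoise != hare:                 # phase 1: find a meeting point
            tortoise, hare = f(tortoise), f(f(hare))
        mu, tortoise = 0, x0                    # phase 2: locate the cycle start
        while tortoise != hare:
            tortoise, hare = f(tortoise), f(hare)
            mu += 1
        lam, hare = 1, f(tortoise)              # phase 3: measure the cycle length
        while tortoise != hare:
            hare = f(hare)
            lam += 1
        return mu, lam

    # Example: iterate f(x) = (x*x + 1) mod 255 starting from 3
    print(find_cycle(lambda x: (x * x + 1) % 255, 3))
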
diff --git a/web/entries/Transcendence_Series_Hancl_Rucki.html b/web/entries/Transcendence_Series_Hancl_Rucki.html
--- a/web/entries/Transcendence_Series_Hancl_Rucki.html
+++ b/web/entries/Transcendence_Series_Hancl_Rucki.html
@@ -1,207 +1,207 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Transcendence of Certain Infinite Series - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">T</font>ranscendence
of
<font class="first">C</font>ertain
<font class="first">I</font>nfinite
<font class="first">S</font>eries
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Transcendence of Certain Infinite Series</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~ak2110/">Angeliki Koutsoukou-Argyraki</a> and
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-03-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize the proofs of two transcendence criteria by J. Hančl
and P. Rucki that assert the transcendence of the sums of certain
infinite series built up by sequences that fulfil certain properties.
Both proofs make use of Roth's celebrated theorem on diophantine
approximations to algebraic numbers from 1955 which we implement as
-an assumption without having formalised its proof.</div></td>
+an assumption without having formalised its proof.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Transcendence_Series_Hancl_Rucki-AFP,
author = {Angeliki Koutsoukou-Argyraki and Wenda Li},
title = {The Transcendence of Certain Infinite Series},
journal = {Archive of Formal Proofs},
month = mar,
year = 2019,
note = {\url{http://isa-afp.org/entries/Transcendence_Series_Hancl_Rucki.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Prime_Number_Theorem.html">Prime_Number_Theorem</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transcendence_Series_Hancl_Rucki/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Transcendence_Series_Hancl_Rucki/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transcendence_Series_Hancl_Rucki/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Transcendence_Series_Hancl_Rucki-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Transcendence_Series_Hancl_Rucki-2019-06-11.tar.gz">
afp-Transcendence_Series_Hancl_Rucki-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Transcendence_Series_Hancl_Rucki-2019-03-28.tar.gz">
afp-Transcendence_Series_Hancl_Rucki-2019-03-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Transformer_Semantics.html b/web/entries/Transformer_Semantics.html
--- a/web/entries/Transformer_Semantics.html
+++ b/web/entries/Transformer_Semantics.html
@@ -1,204 +1,204 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Transformer Semantics - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>ransformer
<font class="first">S</font>emantics
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Transformer Semantics</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-12-11</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
These mathematical components formalise predicate transformer
semantics for programs, yet currently only for partial correctness and
in the absence of faults. A first part for isotone (or monotone),
Sup-preserving and Inf-preserving transformers follows Back and von
Wright's approach, with additional emphasis on the quantalic
structure of algebras of transformers. The second part develops
Sup-preserving and Inf-preserving predicate transformers from the
powerset monad, via its Kleisli category and Eilenberg-Moore algebras,
with emphasis on adjunctions and dualities, as well as isomorphisms
-between relations, state transformers and predicate transformers.</div></td>
+between relations, state transformers and predicate transformers.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Transformer_Semantics-AFP,
author = {Georg Struth},
title = {Transformer Semantics},
journal = {Archive of Formal Proofs},
month = dec,
year = 2018,
note = {\url{http://isa-afp.org/entries/Transformer_Semantics.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Order_Lattice_Props.html">Order_Lattice_Props</a>, <a href="Quantales.html">Quantales</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Hybrid_Systems_VCs.html">Hybrid_Systems_VCs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transformer_Semantics/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Transformer_Semantics/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transformer_Semantics/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Transformer_Semantics-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Transformer_Semantics-2019-06-11.tar.gz">
afp-Transformer_Semantics-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Transformer_Semantics-2018-12-19.tar.gz">
afp-Transformer_Semantics-2018-12-19.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Transition_Systems_and_Automata.html b/web/entries/Transition_Systems_and_Automata.html
--- a/web/entries/Transition_Systems_and_Automata.html
+++ b/web/entries/Transition_Systems_and_Automata.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Transition Systems and Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>ransition
<font class="first">S</font>ystems
and
<font class="first">A</font>utomata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Transition Systems and Automata</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~brunnerj/">Julian Brunner</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-19</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides a very abstract theory of transition systems that
can be instantiated to express various types of automata. A transition
system is typically instantiated by providing a set of initial states,
a predicate for enabled transitions, and a transition execution
function. From this, it defines the concepts of finite and infinite
paths as well as the set of reachable states, among other things. Many
useful theorems, from basic path manipulation rules to coinduction and
run construction rules, are proven in this abstract transition system
context. The library comes with instantiations for DFAs, NFAs, and
-Büchi automata.</div></td>
+Büchi automata.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Transition_Systems_and_Automata-AFP,
author = {Julian Brunner},
title = {Transition Systems and Automata},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Transition_Systems_and_Automata.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Collections.html">Collections</a>, <a href="DFS_Framework.html">DFS_Framework</a>, <a href="Gabow_SCC.html">Gabow_SCC</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Adaptive_State_Counting.html">Adaptive_State_Counting</a>, <a href="Buchi_Complementation.html">Buchi_Complementation</a>, <a href="LTL_Master_Theorem.html">LTL_Master_Theorem</a>, <a href="Partial_Order_Reduction.html">Partial_Order_Reduction</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transition_Systems_and_Automata/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Transition_Systems_and_Automata/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transition_Systems_and_Automata/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Transition_Systems_and_Automata-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Transition_Systems_and_Automata-2019-06-11.tar.gz">
afp-Transition_Systems_and_Automata-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Transition_Systems_and_Automata-2018-08-16.tar.gz">
afp-Transition_Systems_and_Automata-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Transition_Systems_and_Automata-2017-10-27.tar.gz">
afp-Transition_Systems_and_Automata-2017-10-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Transitive-Closure-II.html b/web/entries/Transitive-Closure-II.html
--- a/web/entries/Transitive-Closure-II.html
+++ b/web/entries/Transitive-Closure-II.html
@@ -1,263 +1,263 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Executable Transitive Closures - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>xecutable
<font class="first">T</font>ransitive
<font class="first">C</font>losures
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Executable Transitive Closures</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-02-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
We provide a generic work-list algorithm to compute the
(reflexive-)transitive closure of relations where only successors of newly
detected states are generated.
In contrast to our previous work, the relations do not have to be finite,
but each element must only have finitely many (indirect) successors.
Moreover, a subsumption relation can be used instead of pure equality.
An executable variant of the algorithm is available where the generic operations
are instantiated with list operations.
</p><p>
This formalization was performed as part of the IsaFoR/CeTA project,
and it has been used to certify size-change
termination proofs where large transitive closures have to be computed.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Transitive-Closure-II-AFP,
author = {René Thiemann},
title = {Executable Transitive Closures},
journal = {Archive of Formal Proofs},
month = feb,
year = 2012,
note = {\url{http://isa-afp.org/entries/Transitive-Closure-II.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Regular-Sets.html">Regular-Sets</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transitive-Closure-II/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Transitive-Closure-II/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transitive-Closure-II/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Transitive-Closure-II-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Transitive-Closure-II-2019-06-11.tar.gz">
afp-Transitive-Closure-II-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Transitive-Closure-II-2018-08-16.tar.gz">
afp-Transitive-Closure-II-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Transitive-Closure-II-2017-10-10.tar.gz">
afp-Transitive-Closure-II-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Transitive-Closure-II-2016-12-17.tar.gz">
afp-Transitive-Closure-II-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Transitive-Closure-II-2016-02-22.tar.gz">
afp-Transitive-Closure-II-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Transitive-Closure-II-2015-05-27.tar.gz">
afp-Transitive-Closure-II-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Transitive-Closure-II-2014-08-28.tar.gz">
afp-Transitive-Closure-II-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Transitive-Closure-II-2013-12-11.tar.gz">
afp-Transitive-Closure-II-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Transitive-Closure-II-2013-11-17.tar.gz">
afp-Transitive-Closure-II-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Transitive-Closure-II-2013-02-16.tar.gz">
afp-Transitive-Closure-II-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Transitive-Closure-II-2012-05-24.tar.gz">
afp-Transitive-Closure-II-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Transitive-Closure-II-2012-03-15.tar.gz">
afp-Transitive-Closure-II-2012-03-15.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Transitive-Closure-II-2012-02-29.tar.gz">
afp-Transitive-Closure-II-2012-02-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Transitive-Closure.html b/web/entries/Transitive-Closure.html
--- a/web/entries/Transitive-Closure.html
+++ b/web/entries/Transitive-Closure.html
@@ -1,267 +1,267 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Executable Transitive Closures of Finite Relations - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>xecutable
<font class="first">T</font>ransitive
<font class="first">C</font>losures
of
<font class="first">F</font>inite
<font class="first">R</font>elations
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Executable Transitive Closures of Finite Relations</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2011-03-14</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">We provide a generic work-list algorithm to compute the transitive closure of finite relations where only successors of newly detected states are generated. This algorithm is then instantiated for lists over arbitrary carriers and red black trees (which are faster but require a linear order on the carrier), respectively. Our formalization was performed as part of the IsaFoR/CeTA project where reflexive transitive closures of large tree automata have to be computed.</div></td>
+ <td class="abstract mathjax_process">We provide a generic work-list algorithm to compute the transitive closure of finite relations where only successors of newly detected states are generated. This algorithm is then instantiated for lists over arbitrary carriers and red black trees (which are faster but require a linear order on the carrier), respectively. Our formalization was performed as part of the IsaFoR/CeTA project where reflexive transitive closures of large tree automata have to be computed.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2014-09-04] added example simprocs in Finite_Transitive_Closure_Simprocs</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Transitive-Closure-AFP,
author = {Christian Sternagel and René Thiemann},
title = {Executable Transitive Closures of Finite Relations},
journal = {Archive of Formal Proofs},
month = mar,
year = 2011,
note = {\url{http://isa-afp.org/entries/Transitive-Closure.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE.LGPL">GNU Lesser General Public License (LGPL)</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Collections.html">Collections</a>, <a href="Matrix.html">Matrix</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="KBPs.html">KBPs</a>, <a href="Network_Security_Policy_Verification.html">Network_Security_Policy_Verification</a>, <a href="Planarity_Certificates.html">Planarity_Certificates</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transitive-Closure/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Transitive-Closure/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Transitive-Closure/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Transitive-Closure-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Transitive-Closure-2019-06-11.tar.gz">
afp-Transitive-Closure-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Transitive-Closure-2018-08-16.tar.gz">
afp-Transitive-Closure-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Transitive-Closure-2017-10-10.tar.gz">
afp-Transitive-Closure-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Transitive-Closure-2016-12-17.tar.gz">
afp-Transitive-Closure-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Transitive-Closure-2016-02-22.tar.gz">
afp-Transitive-Closure-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Transitive-Closure-2015-05-27.tar.gz">
afp-Transitive-Closure-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Transitive-Closure-2014-08-28.tar.gz">
afp-Transitive-Closure-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Transitive-Closure-2013-12-11.tar.gz">
afp-Transitive-Closure-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Transitive-Closure-2013-11-17.tar.gz">
afp-Transitive-Closure-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Transitive-Closure-2013-02-16.tar.gz">
afp-Transitive-Closure-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Transitive-Closure-2012-05-24.tar.gz">
afp-Transitive-Closure-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Transitive-Closure-2011-10-12.tar.gz">
afp-Transitive-Closure-2011-10-12.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Transitive-Closure-2011-10-11.tar.gz">
afp-Transitive-Closure-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Transitive-Closure-2011-03-14.tar.gz">
afp-Transitive-Closure-2011-03-14.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
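
Both Transitive-Closure entries above describe a generic work-list algorithm that only expands successors of newly detected states. A rough Python rendering of that idea (illustrative; the successor-function interface is an assumption, and subsumption is simplified to plain equality):

    def reachable(succs, starts):
        """Work-list computation of all states reachable from `starts`,
        where succs(x) yields the direct successors of x."""
        seen = set(starts)
        work = list(starts)
        while work:
            x = work.pop()
            for y in succs(x):
                if y not in seen:       # expand only newly detected states
                    seen.add(y)
                    work.append(y)
        return seen

    # Example: the relation {(1,2), (2,3), (3,1), (4,5)} from start state 1
    rel = {1: [2], 2: [3], 3: [1], 4: [5]}
    print(reachable(lambda x: rel.get(x, []), {1}))   # -> {1, 2, 3}
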
diff --git a/web/entries/Treaps.html b/web/entries/Treaps.html
--- a/web/entries/Treaps.html
+++ b/web/entries/Treaps.html
@@ -1,218 +1,218 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Treaps - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>reaps
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Treaps</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Maximilian Haslbeck</a>,
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-02-06</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p> A Treap is a binary tree whose nodes contain pairs
consisting of some payload and an associated priority. It must have
the search-tree property w.r.t. the payloads and the heap property
w.r.t. the priorities. Treaps are an interesting data structure that
is related to binary search trees (BSTs) in the following way: if one
forgets all the priorities of a treap, the resulting BST is exactly
the same as if one had inserted the elements into an empty BST in
order of ascending priority. This means that a treap behaves like a
BST where we can pretend the elements were inserted in a different
order from the one in which they were actually inserted. </p>
<p> In particular, by choosing these priorities at random upon
insertion of an element, we can pretend that we inserted the elements
in <em>random order</em>, so that the shape of the
resulting tree is that of a random BST no matter in what order we
insert the elements. This is the main result of this
-formalisation.</p></div></td>
+formalisation.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Treaps-AFP,
author = {Maximilian Haslbeck and Manuel Eberl and Tobias Nipkow},
title = {Treaps},
journal = {Archive of Formal Proofs},
month = feb,
year = 2018,
note = {\url{http://isa-afp.org/entries/Treaps.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Comparison_Sort_Lower_Bound.html">Comparison_Sort_Lower_Bound</a>, <a href="Random_BSTs.html">Random_BSTs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Treaps/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Treaps/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Treaps/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Treaps-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Treaps-2019-06-11.tar.gz">
afp-Treaps-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Treaps-2018-08-16.tar.gz">
afp-Treaps-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Treaps-2018-02-07.tar.gz">
afp-Treaps-2018-02-07.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Treaps-2018-02-06.tar.gz">
afp-Treaps-2018-02-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
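
The Treaps abstract above explains the data structure itself: a binary tree that is a search tree on the keys and a heap on the priorities, with random priorities making the shape that of a random BST. A small illustrative insertion routine in Python (names and the min-heap orientation are assumptions for this sketch, unrelated to the Isabelle theory):

    import random

    class Node:
        def __init__(self, key, prio):
            self.key, self.prio = key, prio
            self.left = self.right = None

    def rotate_right(t):
        l = t.left
        t.left, l.right = l.right, t
        return l

    def rotate_left(t):
        r = t.right
        t.right, r.left = r.left, t
        return r

    def insert(t, key, prio=None):
        """Insert key with a random priority, restoring the BST property
        on keys and the min-heap property on priorities via rotations."""
        if prio is None:
            prio = random.random()
        if t is None:
            return Node(key, prio)
        if key < t.key:
            t.left = insert(t.left, key, prio)
            if t.left.prio < t.prio:
                t = rotate_right(t)
        elif key > t.key:
            t.right = insert(t.right, key, prio)
            if t.right.prio < t.prio:
                t = rotate_left(t)
        return t

    # Inserting keys in any order yields the same distribution of tree
    # shapes as building a random BST -- the entry's main result.
    root = None
    for k in [5, 1, 4, 2, 3]:
        root = insert(root, k)
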
diff --git a/web/entries/Tree-Automata.html b/web/entries/Tree-Automata.html
--- a/web/entries/Tree-Automata.html
+++ b/web/entries/Tree-Automata.html
@@ -1,282 +1,282 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Tree Automata - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>ree
<font class="first">A</font>utomata
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Tree Automata</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Peter Lammich
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-11-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This work presents a machine-checked tree automata library for Standard-ML, OCaml and Haskell. The algorithms are efficient by using appropriate data structures like RB-trees. The available algorithms for non-deterministic automata include membership query, reduction, intersection, union, and emptiness check with computation of a witness for non-emptiness. The executable algorithms are derived from less-concrete, non-executable algorithms using data-refinement techniques. The concrete data structures are from the Isabelle Collections Framework. Moreover, this work contains a formalization of the class of tree-regular languages and its closure properties under set operations.</div></td>
+ <td class="abstract mathjax_process">This work presents a machine-checked tree automata library for Standard-ML, OCaml and Haskell. The algorithms are efficient by using appropriate data structures like RB-trees. The available algorithms for non-deterministic automata include membership query, reduction, intersection, union, and emptiness check with computation of a witness for non-emptiness. The executable algorithms are derived from less-concrete, non-executable algorithms using data-refinement techniques. The concrete data structures are from the Isabelle Collections Framework. Moreover, this work contains a formalization of the class of tree-regular languages and its closure properties under set operations.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Tree-Automata-AFP,
author = {Peter Lammich},
title = {Tree Automata},
journal = {Archive of Formal Proofs},
month = nov,
year = 2009,
note = {\url{http://isa-afp.org/entries/Tree-Automata.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Collections.html">Collections</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tree-Automata/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Tree-Automata/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tree-Automata/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Tree-Automata-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Tree-Automata-2019-06-11.tar.gz">
afp-Tree-Automata-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Tree-Automata-2018-08-16.tar.gz">
afp-Tree-Automata-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Tree-Automata-2017-10-10.tar.gz">
afp-Tree-Automata-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Tree-Automata-2016-12-17.tar.gz">
afp-Tree-Automata-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Tree-Automata-2016-02-22.tar.gz">
afp-Tree-Automata-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Tree-Automata-2015-05-27.tar.gz">
afp-Tree-Automata-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Tree-Automata-2014-08-28.tar.gz">
afp-Tree-Automata-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Tree-Automata-2013-12-11.tar.gz">
afp-Tree-Automata-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Tree-Automata-2013-11-17.tar.gz">
afp-Tree-Automata-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Tree-Automata-2013-03-02.tar.gz">
afp-Tree-Automata-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Tree-Automata-2013-02-16.tar.gz">
afp-Tree-Automata-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Tree-Automata-2012-05-24.tar.gz">
afp-Tree-Automata-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Tree-Automata-2012-03-15.tar.gz">
afp-Tree-Automata-2012-03-15.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Tree-Automata-2011-10-12.tar.gz">
afp-Tree-Automata-2011-10-12.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Tree-Automata-2011-10-11.tar.gz">
afp-Tree-Automata-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Tree-Automata-2011-02-11.tar.gz">
afp-Tree-Automata-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Tree-Automata-2010-07-01.tar.gz">
afp-Tree-Automata-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Tree-Automata-2009-12-13.tar.gz">
afp-Tree-Automata-2009-12-13.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Tree-Automata-2009-12-12.tar.gz">
afp-Tree-Automata-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Tree-Automata-2009-11-29.tar.gz">
afp-Tree-Automata-2009-11-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Tree_Decomposition.html b/web/entries/Tree_Decomposition.html
--- a/web/entries/Tree_Decomposition.html
+++ b/web/entries/Tree_Decomposition.html
@@ -1,215 +1,215 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Tree Decomposition - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>ree
<font class="first">D</font>ecomposition
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Tree Decomposition</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://logic.las.tu-berlin.de/Members/Dittmann/">Christoph Dittmann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-05-31</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalize tree decompositions and tree width in Isabelle/HOL,
proving that trees have treewidth 1. We also show that every edge of
a tree decomposition is a separation of the underlying graph. As an
application of this theorem we prove that complete graphs of size n
-have treewidth n-1.</div></td>
+have treewidth n-1.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Tree_Decomposition-AFP,
author = {Christoph Dittmann},
title = {Tree Decomposition},
journal = {Archive of Formal Proofs},
month = may,
year = 2016,
note = {\url{http://isa-afp.org/entries/Tree_Decomposition.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tree_Decomposition/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Tree_Decomposition/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tree_Decomposition/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Tree_Decomposition-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Tree_Decomposition-2019-06-11.tar.gz">
afp-Tree_Decomposition-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Tree_Decomposition-2018-08-16.tar.gz">
afp-Tree_Decomposition-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Tree_Decomposition-2017-10-10.tar.gz">
afp-Tree_Decomposition-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Tree_Decomposition-2016-12-17.tar.gz">
afp-Tree_Decomposition-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Tree_Decomposition-2016-06-01.tar.gz">
afp-Tree_Decomposition-2016-06-01.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Tree_Decomposition-2016-05-31.tar.gz">
afp-Tree_Decomposition-2016-05-31.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Triangle.html b/web/entries/Triangle.html
--- a/web/entries/Triangle.html
+++ b/web/entries/Triangle.html
@@ -1,229 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Basic Geometric Properties of Triangles - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">B</font>asic
<font class="first">G</font>eometric
<font class="first">P</font>roperties
of
<font class="first">T</font>riangles
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Basic Geometric Properties of Triangles</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-12-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>
This entry contains a definition of angles between vectors and between three
points. Building on this, we prove basic geometric properties of triangles, such
as the Isosceles Triangle Theorem, the Law of Sines and the Law of Cosines, that
the sum of the angles of a triangle is π, and the congruence theorems for
triangles.
</p><p>
The definitions and proofs were developed following those by John Harrison in
HOL Light. However, due to Isabelle's type class system, all definitions and
theorems in the Isabelle formalisation hold for all real inner product spaces.
-</p></div></td>
+</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Triangle-AFP,
author = {Manuel Eberl},
title = {Basic Geometric Properties of Triangles},
journal = {Archive of Formal Proofs},
month = dec,
year = 2015,
note = {\url{http://isa-afp.org/entries/Triangle.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Chord_Segments.html">Chord_Segments</a>, <a href="Ordinary_Differential_Equations.html">Ordinary_Differential_Equations</a>, <a href="Stewart_Apollonius.html">Stewart_Apollonius</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Triangle/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Triangle/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Triangle/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Triangle-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Triangle-2019-06-11.tar.gz">
afp-Triangle-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Triangle-2018-08-16.tar.gz">
afp-Triangle-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Triangle-2017-10-10.tar.gz">
afp-Triangle-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Triangle-2016-12-17.tar.gz">
afp-Triangle-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Triangle-2016-02-22.tar.gz">
afp-Triangle-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Triangle-2016-01-05.tar.gz">
afp-Triangle-2016-01-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Trie.html b/web/entries/Trie.html
--- a/web/entries/Trie.html
+++ b/web/entries/Trie.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Trie - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>rie
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Trie</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a> and
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-03-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This article formalizes the ``trie'' data structure invented by
Fredkin [CACM 1960]. It also provides a specialization where the entries
-in the trie are lists.</div></td>
+in the trie are lists.</td>
</tr>
<tr>
<td class="datahead" valign="top">Origin:</td>
<td class="abstract">This article was extracted from existing articles by the authors.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Trie-AFP,
author = {Andreas Lochbihler and Tobias Nipkow},
title = {Trie},
journal = {Archive of Formal Proofs},
month = mar,
year = 2015,
note = {\url{http://isa-afp.org/entries/Trie.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Collections.html">Collections</a>, <a href="Flyspeck-Tame.html">Flyspeck-Tame</a>, <a href="JinjaThreads.html">JinjaThreads</a>, <a href="KBPs.html">KBPs</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Trie/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Trie/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Trie/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Trie-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Trie-2019-06-11.tar.gz">
afp-Trie-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Trie-2018-08-16.tar.gz">
afp-Trie-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Trie-2017-10-10.tar.gz">
afp-Trie-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Trie-2016-12-17.tar.gz">
afp-Trie-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Trie-2016-02-22.tar.gz">
afp-Trie-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Trie-2015-05-27.tar.gz">
afp-Trie-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Trie-2015-03-30.tar.gz">
afp-Trie-2015-03-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Twelvefold_Way.html b/web/entries/Twelvefold_Way.html
--- a/web/entries/Twelvefold_Way.html
+++ b/web/entries/Twelvefold_Way.html
@@ -1,213 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Twelvefold Way - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">T</font>welvefold
<font class="first">W</font>ay
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Twelvefold Way</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-12-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides all cardinality theorems of the Twelvefold Way.
The Twelvefold Way systematically classifies twelve related
combinatorial problems concerning two finite sets, which include
counting permutations, combinations, multisets, set partitions and
number partitions. This development builds upon the existing formal
developments with cardinality theorems for those structures. It
provides twelve bijections from the various structures to different
equivalence classes on finite functions, and hence, proves cardinality
-formulae for these equivalence classes on finite functions.</div></td>
+formulae for these equivalence classes on finite functions.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Twelvefold_Way-AFP,
author = {Lukas Bulwahn},
title = {The Twelvefold Way},
journal = {Archive of Formal Proofs},
month = dec,
year = 2016,
note = {\url{http://isa-afp.org/entries/Twelvefold_Way.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Bell_Numbers_Spivey.html">Bell_Numbers_Spivey</a>, <a href="Card_Multisets.html">Card_Multisets</a>, <a href="Card_Number_Partitions.html">Card_Number_Partitions</a>, <a href="Card_Partitions.html">Card_Partitions</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Twelvefold_Way/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Twelvefold_Way/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Twelvefold_Way/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Twelvefold_Way-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Twelvefold_Way-2019-06-11.tar.gz">
afp-Twelvefold_Way-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Twelvefold_Way-2018-08-16.tar.gz">
afp-Twelvefold_Way-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Twelvefold_Way-2017-10-10.tar.gz">
afp-Twelvefold_Way-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Twelvefold_Way-2016-12-30.tar.gz">
afp-Twelvefold_Way-2016-12-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Tycon.html b/web/entries/Tycon.html
--- a/web/entries/Tycon.html
+++ b/web/entries/Tycon.html
@@ -1,255 +1,255 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Type Constructor Classes and Monad Transformers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>ype
<font class="first">C</font>onstructor
<font class="first">C</font>lasses
and
<font class="first">M</font>onad
<font class="first">T</font>ransformers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Type Constructor Classes and Monad Transformers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Brian Huffman
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-06-26</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
These theories contain a formalization of first class type constructors
and axiomatic constructor classes for HOLCF. This work is described
in detail in the ICFP 2012 paper <i>Formal Verification of Monad
Transformers</i> by the author. The formalization is a revised and
updated version of earlier joint work with Matthews and White.
<P>
Based on the hierarchy of type classes in Haskell, we define classes
for functors, monads, monad-plus, etc. Each one includes all the
standard laws as axioms. We also provide a new user command,
tycondef, for defining new type constructors in HOLCF. Using tycondef,
we instantiate the type class hierarchy with various monads and monad
-transformers.</div></td>
+transformers.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Tycon-AFP,
author = {Brian Huffman},
title = {Type Constructor Classes and Monad Transformers},
journal = {Archive of Formal Proofs},
month = jun,
year = 2012,
note = {\url{http://isa-afp.org/entries/Tycon.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tycon/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Tycon/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Tycon/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Tycon-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Tycon-2019-06-11.tar.gz">
afp-Tycon-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Tycon-2018-08-16.tar.gz">
afp-Tycon-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Tycon-2017-10-10.tar.gz">
afp-Tycon-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Tycon-2016-12-17.tar.gz">
afp-Tycon-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Tycon-2016-02-22.tar.gz">
afp-Tycon-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Tycon-2015-05-27.tar.gz">
afp-Tycon-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Tycon-2014-08-28.tar.gz">
afp-Tycon-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Tycon-2013-12-11.tar.gz">
afp-Tycon-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Tycon-2013-11-17.tar.gz">
afp-Tycon-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Tycon-2013-02-16.tar.gz">
afp-Tycon-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Tycon-2012-06-28.tar.gz">
afp-Tycon-2012-06-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Types_Tableaus_and_Goedels_God.html b/web/entries/Types_Tableaus_and_Goedels_God.html
--- a/web/entries/Types_Tableaus_and_Goedels_God.html
+++ b/web/entries/Types_Tableaus_and_Goedels_God.html
@@ -1,222 +1,222 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Types, Tableaus and Gödel’s God in Isabelle/HOL - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>ypes,
<font class="first">T</font>ableaus
and
<font class="first">G</font>ödel’s
<font class="first">G</font>od
in
<font class="first">I</font>sabelle/HOL
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Types, Tableaus and Gödel’s God in Isabelle/HOL</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
David Fuenmayor (davfuenmayor /at/ gmail /dot/ com) and
<a href="http://christoph-benzmueller.de">Christoph Benzmüller</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-05-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A computer-formalisation of the essential parts of Fitting's
textbook "Types, Tableaus and Gödel's God" in
Isabelle/HOL is presented. In particular, Fitting's (and
Anderson's) variant of the ontological argument is verified and
confirmed. This variant avoids the modal collapse, which has been
criticised as an undesirable side-effect of Kurt Gödel's (and
Dana Scott's) versions of the ontological argument.
Fitting's work employs an intensional higher-order modal
logic, which we shallowly embed here in classical higher-order logic.
We then utilize the embedded logic for the formalisation of
-Fitting's argument. (See also the earlier AFP entry ``Gödel's God in Isabelle/HOL''.)</div></td>
+Fitting's argument. (See also the earlier AFP entry ``Gödel's God in Isabelle/HOL''.)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Types_Tableaus_and_Goedels_God-AFP,
author = {David Fuenmayor and Christoph Benzmüller},
title = {Types, Tableaus and Gödel’s God in Isabelle/HOL},
journal = {Archive of Formal Proofs},
month = may,
year = 2017,
note = {\url{http://isa-afp.org/entries/Types_Tableaus_and_Goedels_God.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Types_Tableaus_and_Goedels_God/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Types_Tableaus_and_Goedels_God/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Types_Tableaus_and_Goedels_God/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Types_Tableaus_and_Goedels_God-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Types_Tableaus_and_Goedels_God-2019-06-11.tar.gz">
afp-Types_Tableaus_and_Goedels_God-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Types_Tableaus_and_Goedels_God-2018-08-16.tar.gz">
afp-Types_Tableaus_and_Goedels_God-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Types_Tableaus_and_Goedels_God-2017-10-10.tar.gz">
afp-Types_Tableaus_and_Goedels_God-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Types_Tableaus_and_Goedels_God-2017-05-02.tar.gz">
afp-Types_Tableaus_and_Goedels_God-2017-05-02.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/UPF.html b/web/entries/UPF.html
--- a/web/entries/UPF.html
+++ b/web/entries/UPF.html
@@ -1,241 +1,241 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Unified Policy Framework (UPF) - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">U</font>nified
<font class="first">P</font>olicy
<font class="first">F</font>ramework
<font class="first">(</font>UPF)
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Unified Policy Framework (UPF)</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.brucker.ch/">Achim D. Brucker</a>,
Lukas Brügger and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-11-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present the Unified Policy Framework (UPF), a generic framework
for modelling security (access-control) policies. UPF emphasizes
the view that a policy is a policy decision function that grants or
denies access to resources, permissions, etc. In other words,
instead of modelling the relations of permitted or prohibited
requests directly, we model the concrete function that implements
the policy decision point in a system. In more detail, UPF is
based on the following four principles: 1) Functional representation
of policies, 2) No conflicts are possible, 3) Three-valued decision
type (allow, deny, undefined), 4) Output type not containing the
-decision only.</div></td>
+decision only.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{UPF-AFP,
author = {Achim D. Brucker and Lukas Brügger and Burkhart Wolff},
title = {The Unified Policy Framework (UPF)},
journal = {Archive of Formal Proofs},
month = nov,
year = 2014,
note = {\url{http://isa-afp.org/entries/UPF.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="UPF_Firewall.html">UPF_Firewall</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/UPF/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/UPF/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/UPF/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-UPF-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-UPF-2019-06-11.tar.gz">
afp-UPF-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-UPF-2018-08-16.tar.gz">
afp-UPF-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-UPF-2017-10-10.tar.gz">
afp-UPF-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-UPF-2016-12-17.tar.gz">
afp-UPF-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-UPF-2016-02-22.tar.gz">
afp-UPF-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-UPF-2015-05-27.tar.gz">
afp-UPF-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-UPF-2015-01-28.tar.gz">
afp-UPF-2015-01-28.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-UPF-2014-11-30.tar.gz">
afp-UPF-2014-11-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/UPF_Firewall.html b/web/entries/UPF_Firewall.html
--- a/web/entries/UPF_Firewall.html
+++ b/web/entries/UPF_Firewall.html
@@ -1,225 +1,225 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Formal Network Models and Their Application to Firewall Policies - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>ormal
<font class="first">N</font>etwork
<font class="first">M</font>odels
and
<font class="first">T</font>heir
<font class="first">A</font>pplication
to
<font class="first">F</font>irewall
<font class="first">P</font>olicies
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Formal Network Models and Their Application to Firewall Policies</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www.brucker.ch/">Achim D. Brucker</a>,
Lukas Brügger and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-01-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We present a formal model of network protocols and their application
to modeling firewall policies. The formalization is based on the
Unified Policy Framework (UPF). The formalization was originally
developed for generating test cases for testing the security
configuration of actual firewalls and routers (middle-boxes) using
HOL-TestGen. Our work focuses on modeling application level protocols
-on top of tcp/ip.</div></td>
+on top of tcp/ip.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{UPF_Firewall-AFP,
author = {Achim D. Brucker and Lukas Brügger and Burkhart Wolff},
title = {Formal Network Models and Their Application to Firewall Policies},
journal = {Archive of Formal Proofs},
month = jan,
year = 2017,
note = {\url{http://isa-afp.org/entries/UPF_Firewall.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="UPF.html">UPF</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/UPF_Firewall/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/UPF_Firewall/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/UPF_Firewall/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-UPF_Firewall-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-UPF_Firewall-2019-06-11.tar.gz">
afp-UPF_Firewall-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-UPF_Firewall-2018-08-16.tar.gz">
afp-UPF_Firewall-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-UPF_Firewall-2017-10-10.tar.gz">
afp-UPF_Firewall-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-UPF_Firewall-2017-01-11.tar.gz">
afp-UPF_Firewall-2017-01-11.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/UTP.html b/web/entries/UTP.html
--- a/web/entries/UTP.html
+++ b/web/entries/UTP.html
@@ -1,220 +1,220 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">I</font>sabelle/UTP:
<font class="first">M</font>echanised
<font class="first">T</font>heory
<font class="first">E</font>ngineering
for
<font class="first">U</font>nifying
<font class="first">T</font>heories
of
<font class="first">P</font>rogramming
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://www-users.cs.york.ac.uk/~simonf/">Simon Foster</a>,
Frank Zeyda,
Yakoub Nemouchi (yakoub /dot/ nemouchi /at/ york /dot/ ac /dot/ uk),
Pedro Ribeiro and
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-02-01</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Isabelle/UTP is a mechanised theory engineering toolkit based on Hoare
and He’s Unifying Theories of Programming (UTP). UTP enables the
creation of denotational, algebraic, and operational semantics for
different programming languages using an alphabetised relational
calculus. We provide a semantic embedding of the alphabetised
relational calculus in Isabelle/HOL, including new type definitions,
relational constructors, automated proof tactics, and accompanying
algebraic laws. Isabelle/UTP can be used to both capture laws of
programming for different languages, and put these fundamental
theorems to work in the creation of associated verification tools,
using calculi like Hoare logics. This document describes the
-relational core of the UTP in Isabelle/HOL.</div></td>
+relational core of the UTP in Isabelle/HOL.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{UTP-AFP,
author = {Simon Foster and Frank Zeyda and Yakoub Nemouchi and Pedro Ribeiro and Burkhart Wolff},
title = {Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming},
journal = {Archive of Formal Proofs},
month = feb,
year = 2019,
note = {\url{http://isa-afp.org/entries/UTP.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/UTP/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/UTP/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/UTP/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-UTP-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-UTP-2019-06-11.tar.gz">
afp-UTP-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-UTP-2019-02-06.tar.gz">
afp-UTP-2019-02-06.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Universal_Turing_Machine.html b/web/entries/Universal_Turing_Machine.html
--- a/web/entries/Universal_Turing_Machine.html
+++ b/web/entries/Universal_Turing_Machine.html
@@ -1,200 +1,200 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Universal Turing Machine - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">U</font>niversal
<font class="first">T</font>uring
<font class="first">M</font>achine
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Universal Turing Machine</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Jian Xu,
Xingyuan Zhang,
<a href="http://www.inf.kcl.ac.uk/staff/urbanc/">Christian Urban</a> and
Sebastiaan J. C. Joosten
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-02-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
We formalise results from computability theory: recursive functions,
undecidability of the halting problem, and the existence of a
universal Turing machine. This formalisation is the AFP entry
corresponding to the paper Mechanising Turing Machines and Computability Theory
-in Isabelle/HOL, ITP 2013.</div></td>
+in Isabelle/HOL, ITP 2013.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Universal_Turing_Machine-AFP,
author = {Jian Xu and Xingyuan Zhang and Christian Urban and Sebastiaan J. C. Joosten},
title = {Universal Turing Machine},
journal = {Archive of Formal Proofs},
month = feb,
year = 2019,
note = {\url{http://isa-afp.org/entries/Universal_Turing_Machine.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Universal_Turing_Machine/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Universal_Turing_Machine/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Universal_Turing_Machine/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Universal_Turing_Machine-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Universal_Turing_Machine-2019-06-11.tar.gz">
afp-Universal_Turing_Machine-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Universal_Turing_Machine-2019-02-12.tar.gz">
afp-Universal_Turing_Machine-2019-02-12.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/UpDown_Scheme.html b/web/entries/UpDown_Scheme.html
--- a/web/entries/UpDown_Scheme.html
+++ b/web/entries/UpDown_Scheme.html
@@ -1,235 +1,235 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Verification of the UpDown Scheme - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erification
of
the
<font class="first">U</font>pDown
<font class="first">S</font>cheme
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Verification of the UpDown Scheme</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-01-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
The UpDown scheme is a recursive scheme used to compute the stiffness matrix
on a special form of sparse grids. Usually, when discretizing a Euclidean
space of dimension d we need O(n^d) points, for n points along each dimension.
Sparse grids are a hierarchical representation where the number of points is
reduced to O(n * log(n)^d). One disadvantage of such sparse grids is that the
algorithm now operates recursively in the dimensions and levels of the sparse grid.
<p>
The UpDown scheme allows us to compute the stiffness matrix on such a sparse
grid. The stiffness matrix represents the influence of each representation
function on the L^2 scalar product. For a detailed description see
Dirk Pflüger's PhD thesis. This formalization was developed as an
-interdisciplinary project (IDP) at the Technische Universität München.</div></td>
+interdisciplinary project (IDP) at the Technische Universität München.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{UpDown_Scheme-AFP,
author = {Johannes Hölzl},
title = {Verification of the UpDown Scheme},
journal = {Archive of Formal Proofs},
month = jan,
year = 2015,
note = {\url{http://isa-afp.org/entries/UpDown_Scheme.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Automatic_Refinement.html">Automatic_Refinement</a>, <a href="Separation_Logic_Imperative_HOL.html">Separation_Logic_Imperative_HOL</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/UpDown_Scheme/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/UpDown_Scheme/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/UpDown_Scheme/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-UpDown_Scheme-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-UpDown_Scheme-2019-06-11.tar.gz">
afp-UpDown_Scheme-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-UpDown_Scheme-2018-08-16.tar.gz">
afp-UpDown_Scheme-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-UpDown_Scheme-2017-10-10.tar.gz">
afp-UpDown_Scheme-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-UpDown_Scheme-2016-12-17.tar.gz">
afp-UpDown_Scheme-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-UpDown_Scheme-2016-02-22.tar.gz">
afp-UpDown_Scheme-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-UpDown_Scheme-2015-05-27.tar.gz">
afp-UpDown_Scheme-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-UpDown_Scheme-2015-01-30.tar.gz">
afp-UpDown_Scheme-2015-01-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Valuation.html b/web/entries/Valuation.html
--- a/web/entries/Valuation.html
+++ b/web/entries/Valuation.html
@@ -1,294 +1,294 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Fundamental Properties of Valuation Theory and Hensel's Lemma - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>undamental
<font class="first">P</font>roperties
of
<font class="first">V</font>aluation
<font class="first">T</font>heory
and
<font class="first">H</font>ensel's
<font class="first">L</font>emma
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Fundamental Properties of Valuation Theory and Hensel's Lemma</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Hidetsune Kobayashi
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2007-08-08</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Convergence with respect to a valuation is discussed as convergence of a Cauchy sequence. Cauchy sequences of polynomials are defined. They are used to formalize Hensel's lemma.</div></td>
+ <td class="abstract mathjax_process">Convergence with respect to a valuation is discussed as convergence of a Cauchy sequence. Cauchy sequences of polynomials are defined. They are used to formalize Hensel's lemma.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Valuation-AFP,
author = {Hidetsune Kobayashi},
title = {Fundamental Properties of Valuation Theory and Hensel's Lemma},
journal = {Archive of Formal Proofs},
month = aug,
year = 2007,
note = {\url{http://isa-afp.org/entries/Valuation.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Group-Ring-Module.html">Group-Ring-Module</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Valuation/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Valuation/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Valuation/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Valuation-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Valuation-2019-06-11.tar.gz">
afp-Valuation-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Valuation-2018-08-16.tar.gz">
afp-Valuation-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Valuation-2017-10-10.tar.gz">
afp-Valuation-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Valuation-2016-12-17.tar.gz">
afp-Valuation-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Valuation-2016-02-22.tar.gz">
afp-Valuation-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Valuation-2015-05-27.tar.gz">
afp-Valuation-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Valuation-2014-08-28.tar.gz">
afp-Valuation-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Valuation-2013-12-11.tar.gz">
afp-Valuation-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Valuation-2013-11-17.tar.gz">
afp-Valuation-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Valuation-2013-03-08.tar.gz">
afp-Valuation-2013-03-08.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Valuation-2013-02-16.tar.gz">
afp-Valuation-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Valuation-2012-05-24.tar.gz">
afp-Valuation-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Valuation-2011-10-11.tar.gz">
afp-Valuation-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Valuation-2011-02-11.tar.gz">
afp-Valuation-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Valuation-2010-07-01.tar.gz">
afp-Valuation-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Valuation-2009-12-12.tar.gz">
afp-Valuation-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Valuation-2009-04-30.tar.gz">
afp-Valuation-2009-04-30.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Valuation-2009-04-29.tar.gz">
afp-Valuation-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Valuation-2008-06-10.tar.gz">
afp-Valuation-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Valuation-2007-11-27.tar.gz">
afp-Valuation-2007-11-27.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/VectorSpace.html b/web/entries/VectorSpace.html
--- a/web/entries/VectorSpace.html
+++ b/web/entries/VectorSpace.html
@@ -1,227 +1,227 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Vector Spaces - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>ector
<font class="first">S</font>paces
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Vector Spaces</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Holden Lee (holdenl /at/ princeton /dot/ edu)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-08-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">This formalisation of basic linear algebra is based completely on locales, building off HOL-Algebra. It includes basic definitions: linear combinations, span, linear independence; linear transformations; interpretation of function spaces as vector spaces; the direct sum of vector spaces, sum of subspaces; the replacement theorem; existence of bases in finite-dimensional; vector spaces, definition of dimension; the rank-nullity theorem. Some concepts are actually defined and proved for modules as they also apply there. Infinite-dimensional vector spaces are supported, but dimension is only supported for finite-dimensional vector spaces. The proofs are standard; the proofs of the replacement theorem and rank-nullity theorem roughly follow the presentation in Linear Algebra by Friedberg, Insel, and Spence. The rank-nullity theorem generalises the existing development in the Archive of Formal Proof (originally using type classes, now using a mix of type classes and locales).</div></td>
+ <td class="abstract mathjax_process">This formalisation of basic linear algebra is based completely on locales, building off HOL-Algebra. It includes basic definitions: linear combinations, span, linear independence; linear transformations; interpretation of function spaces as vector spaces; the direct sum of vector spaces, sum of subspaces; the replacement theorem; existence of bases in finite-dimensional vector spaces; definition of dimension; the rank-nullity theorem. Some concepts are actually defined and proved for modules as they also apply there. Infinite-dimensional vector spaces are supported, but dimension is only supported for finite-dimensional vector spaces. The proofs are standard; the proofs of the replacement theorem and rank-nullity theorem roughly follow the presentation in Linear Algebra by Friedberg, Insel, and Spence. The rank-nullity theorem generalises the existing development in the Archive of Formal Proofs (originally using type classes, now using a mix of type classes and locales).</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{VectorSpace-AFP,
author = {Holden Lee},
title = {Vector Spaces},
journal = {Archive of Formal Proofs},
month = aug,
year = 2014,
note = {\url{http://isa-afp.org/entries/VectorSpace.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Deep_Learning.html">Deep_Learning</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VectorSpace/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/VectorSpace/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VectorSpace/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-VectorSpace-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-VectorSpace-2019-06-11.tar.gz">
afp-VectorSpace-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-VectorSpace-2018-08-16.tar.gz">
afp-VectorSpace-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-VectorSpace-2017-10-10.tar.gz">
afp-VectorSpace-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-VectorSpace-2016-12-17.tar.gz">
afp-VectorSpace-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-VectorSpace-2016-02-22.tar.gz">
afp-VectorSpace-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-VectorSpace-2015-05-27.tar.gz">
afp-VectorSpace-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-VectorSpace-2014-09-07.tar.gz">
afp-VectorSpace-2014-09-07.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-VectorSpace-2014-08-31.tar.gz">
afp-VectorSpace-2014-08-31.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-VectorSpace-2014-08-29.tar.gz">
afp-VectorSpace-2014-08-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/VeriComp.html b/web/entries/VeriComp.html
--- a/web/entries/VeriComp.html
+++ b/web/entries/VeriComp.html
@@ -1,200 +1,200 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Generic Framework for Verified Compilers - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">G</font>eneric
<font class="first">F</font>ramework
for
<font class="first">V</font>erified
<font class="first">C</font>ompilers
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Generic Framework for Verified Compilers</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://martin.desharnais.me">Martin Desharnais</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-02-10</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This is a generic framework for formalizing compiler transformations.
It leverages Isabelle/HOL’s locales to abstract over concrete
languages and transformations. It states common definitions for
language semantics, program behaviours, forward and backward
simulations, and compilers. We provide generic operations, such as
simulation and compiler composition, and prove general (partial)
-correctness theorems, resulting in reusable proof components.</div></td>
+correctness theorems, resulting in reusable proof components.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{VeriComp-AFP,
author = {Martin Desharnais},
title = {A Generic Framework for Verified Compilers},
journal = {Archive of Formal Proofs},
month = feb,
year = 2020,
note = {\url{http://isa-afp.org/entries/VeriComp.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VeriComp/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/VeriComp/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VeriComp/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-VeriComp-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-VeriComp-2020-02-25.tar.gz">
afp-VeriComp-2020-02-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Verified-Prover.html b/web/entries/Verified-Prover.html
--- a/web/entries/Verified-Prover.html
+++ b/web/entries/Verified-Prover.html
@@ -1,307 +1,307 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">M</font>echanically
<font class="first">V</font>erified,
<font class="first">E</font>fficient,
<font class="first">S</font>ound
and
<font class="first">C</font>omplete
<font class="first">T</font>heorem
<font class="first">P</font>rover
<font class="first">F</font>or
<font class="first">F</font>irst
<font class="first">O</font>rder
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Tom Ridge
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2004-09-28</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Soundness and completeness for a system of first order logic are formally proved, building on James Margetson's formalization of work by Wainer and Wallen. The completeness proofs naturally suggest an algorithm to derive proofs. This algorithm, which can be implemented tail recursively, is formalized in Isabelle/HOL. The algorithm can be executed via the rewriting tactics of Isabelle. Alternatively, the definitions can be exported to OCaml, yielding a directly executable program.</div></td>
+ <td class="abstract mathjax_process">Soundness and completeness for a system of first order logic are formally proved, building on James Margetson's formalization of work by Wainer and Wallen. The completeness proofs naturally suggest an algorithm to derive proofs. This algorithm, which can be implemented tail recursively, is formalized in Isabelle/HOL. The algorithm can be executed via the rewriting tactics of Isabelle. Alternatively, the definitions can be exported to OCaml, yielding a directly executable program.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Verified-Prover-AFP,
author = {Tom Ridge},
title = {A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic},
journal = {Archive of Formal Proofs},
month = sep,
year = 2004,
note = {\url{http://isa-afp.org/entries/Verified-Prover.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Verified-Prover/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Verified-Prover/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Verified-Prover/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Verified-Prover-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Verified-Prover-2019-06-11.tar.gz">
afp-Verified-Prover-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Verified-Prover-2018-08-16.tar.gz">
afp-Verified-Prover-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Verified-Prover-2017-10-10.tar.gz">
afp-Verified-Prover-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Verified-Prover-2016-12-17.tar.gz">
afp-Verified-Prover-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Verified-Prover-2016-02-22.tar.gz">
afp-Verified-Prover-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Verified-Prover-2015-05-27.tar.gz">
afp-Verified-Prover-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Verified-Prover-2014-08-28.tar.gz">
afp-Verified-Prover-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Verified-Prover-2013-12-11.tar.gz">
afp-Verified-Prover-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Verified-Prover-2013-11-17.tar.gz">
afp-Verified-Prover-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Verified-Prover-2013-03-02.tar.gz">
afp-Verified-Prover-2013-03-02.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Verified-Prover-2013-02-16.tar.gz">
afp-Verified-Prover-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Verified-Prover-2012-05-24.tar.gz">
afp-Verified-Prover-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-Verified-Prover-2011-10-11.tar.gz">
afp-Verified-Prover-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-Verified-Prover-2011-02-11.tar.gz">
afp-Verified-Prover-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-Verified-Prover-2010-07-01.tar.gz">
afp-Verified-Prover-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-Verified-Prover-2009-12-12.tar.gz">
afp-Verified-Prover-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-Verified-Prover-2009-04-29.tar.gz">
afp-Verified-Prover-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-Verified-Prover-2008-06-10.tar.gz">
afp-Verified-Prover-2008-06-10.tar.gz
</a>
</li>
<li>Isabelle 2007:
<a href="../release/afp-Verified-Prover-2007-11-27.tar.gz">
afp-Verified-Prover-2007-11-27.tar.gz
</a>
</li>
<li>Isabelle 2005:
<a href="../release/afp-Verified-Prover-2005-10-14.tar.gz">
afp-Verified-Prover-2005-10-14.tar.gz
</a>
</li>
<li>Isabelle 2004:
<a href="../release/afp-Verified-Prover-2004-09-28.tar.gz">
afp-Verified-Prover-2004-09-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/VerifyThis2018.html b/web/entries/VerifyThis2018.html
--- a/web/entries/VerifyThis2018.html
+++ b/web/entries/VerifyThis2018.html
@@ -1,210 +1,210 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>VerifyThis 2018 - Polished Isabelle Solutions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erifyThis
<font class="first">2</font>018
<font class="first">-</font>
<font class="first">P</font>olished
<font class="first">I</font>sabelle
<font class="first">S</font>olutions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">VerifyThis 2018 - Polished Isabelle Solutions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-04-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<a
href="http://www.pm.inf.ethz.ch/research/verifythis.html">VerifyThis
2018</a> was a program verification competition associated with
ETAPS 2018. It was the 7th event in the VerifyThis competition series.
In this entry, we present polished and completed versions of our
-solutions that we created during the competition.</div></td>
+solutions that we created during the competition.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{VerifyThis2018-AFP,
author = {Peter Lammich and Simon Wimmer},
title = {VerifyThis 2018 - Polished Isabelle Solutions},
journal = {Archive of Formal Proofs},
month = apr,
year = 2018,
note = {\url{http://isa-afp.org/entries/VerifyThis2018.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VerifyThis2018/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/VerifyThis2018/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VerifyThis2018/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-VerifyThis2018-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-VerifyThis2018-2019-06-11.tar.gz">
afp-VerifyThis2018-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-VerifyThis2018-2018-08-16.tar.gz">
afp-VerifyThis2018-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-VerifyThis2018-2018-04-30.tar.gz">
afp-VerifyThis2018-2018-04-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/VerifyThis2019.html b/web/entries/VerifyThis2019.html
--- a/web/entries/VerifyThis2019.html
+++ b/web/entries/VerifyThis2019.html
@@ -1,199 +1,199 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>VerifyThis 2019 -- Polished Isabelle Solutions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>erifyThis
<font class="first">2</font>019
<font class="first">-</font>-
<font class="first">P</font>olished
<font class="first">I</font>sabelle
<font class="first">S</font>olutions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">VerifyThis 2019 -- Polished Isabelle Solutions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Peter Lammich and
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-10-16</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
VerifyThis 2019 (http://www.pm.inf.ethz.ch/research/verifythis.html)
was a program verification competition associated with ETAPS 2019. It
was the 8th event in the VerifyThis competition series. In this entry,
we present polished and completed versions of our solutions that we
-created during the competition.</div></td>
+created during the competition.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{VerifyThis2019-AFP,
author = {Peter Lammich and Simon Wimmer},
title = {VerifyThis 2019 -- Polished Isabelle Solutions},
journal = {Archive of Formal Proofs},
month = oct,
year = 2019,
note = {\url{http://isa-afp.org/entries/VerifyThis2019.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VerifyThis2019/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/VerifyThis2019/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VerifyThis2019/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-VerifyThis2019-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-VerifyThis2019-2019-10-25.tar.gz">
afp-VerifyThis2019-2019-10-25.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Vickrey_Clarke_Groves.html b/web/entries/Vickrey_Clarke_Groves.html
--- a/web/entries/Vickrey_Clarke_Groves.html
+++ b/web/entries/Vickrey_Clarke_Groves.html
@@ -1,240 +1,240 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>VCG - Combinatorial Vickrey-Clarke-Groves Auctions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">V</font>CG
<font class="first">-</font>
<font class="first">C</font>ombinatorial
<font class="first">V</font>ickrey-Clarke-Groves
<font class="first">A</font>uctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">VCG - Combinatorial Vickrey-Clarke-Groves Auctions</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Marco B. Caminati,
<a href="http://www.cs.bham.ac.uk/~mmk">Manfred Kerber</a>,
Christoph Lange (math /dot/ semantic /dot/ web /at/ gmail /dot/ com) and
Colin Rowat (c /dot/ rowat /at/ bham /dot/ ac /dot/ uk)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2015-04-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
A VCG auction (named after its inventors Vickrey, Clarke, and
Groves) is a generalization of the single-good, second price Vickrey
auction to the case of a combinatorial auction (multiple goods, from
which any participant can bid on each possible combination). We
formalize in this entry VCG auctions, including tie-breaking, and prove
that the functions for the allocation and the price determination are
well-defined. Furthermore we show that the allocation function
allocates goods only to participants, only goods in the auction are
allocated, and no good is allocated twice. We also show that the price
function is non-negative. These properties also hold for the
-automatically extracted Scala code.</div></td>
+automatically extracted Scala code.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Vickrey_Clarke_Groves-AFP,
author = {Marco B. Caminati and Manfred Kerber and Christoph Lange and Colin Rowat},
title = {VCG - Combinatorial Vickrey-Clarke-Groves Auctions},
journal = {Archive of Formal Proofs},
month = apr,
year = 2015,
note = {\url{http://isa-afp.org/entries/Vickrey_Clarke_Groves.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Vickrey_Clarke_Groves/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Vickrey_Clarke_Groves/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Vickrey_Clarke_Groves/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Vickrey_Clarke_Groves-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Vickrey_Clarke_Groves-2019-06-11.tar.gz">
afp-Vickrey_Clarke_Groves-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Vickrey_Clarke_Groves-2018-08-16.tar.gz">
afp-Vickrey_Clarke_Groves-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Vickrey_Clarke_Groves-2017-10-10.tar.gz">
afp-Vickrey_Clarke_Groves-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Vickrey_Clarke_Groves-2016-12-17.tar.gz">
afp-Vickrey_Clarke_Groves-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Vickrey_Clarke_Groves-2016-02-22.tar.gz">
afp-Vickrey_Clarke_Groves-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Vickrey_Clarke_Groves-2015-05-27.tar.gz">
afp-Vickrey_Clarke_Groves-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Vickrey_Clarke_Groves-2015-05-09.tar.gz">
afp-Vickrey_Clarke_Groves-2015-05-09.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Vickrey_Clarke_Groves-2015-04-30.tar.gz">
afp-Vickrey_Clarke_Groves-2015-04-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/VolpanoSmith.html b/web/entries/VolpanoSmith.html
--- a/web/entries/VolpanoSmith.html
+++ b/web/entries/VolpanoSmith.html
@@ -1,280 +1,280 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Correctness Proof for the Volpano/Smith Security Typing System - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">C</font>orrectness
<font class="first">P</font>roof
for
the
<font class="first">V</font>olpano/Smith
<font class="first">S</font>ecurity
<font class="first">T</font>yping
<font class="first">S</font>ystem
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Correctness Proof for the Volpano/Smith Security Typing System</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://pp.info.uni-karlsruhe.de/personhp/gregor_snelting.php">Gregor Snelting</a> and
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2008-09-02</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">The Volpano/Smith/Irvine security type systems requires that variables are annotated as high (secret) or low (public), and provides typing rules which guarantee that secret values cannot leak to public output ports. This property of a program is called confidentiality. For a simple while-language without threads, our proof shows that typeability in the Volpano/Smith system guarantees noninterference. Noninterference means that if two initial states for program execution are low-equivalent, then the final states are low-equivalent as well. This indeed implies that secret values cannot leak to public ports. The proof defines an abstract syntax and operational semantics for programs, formalizes noninterference, and then proceeds by rule induction on the operational semantics. The mathematically most intricate part is the treatment of implicit flows. Note that the Volpano/Smith system is not flow-sensitive and thus quite unprecise, resulting in false alarms. However, due to the correctness property, all potential breaks of confidentiality are discovered.</div></td>
+ <td class="abstract mathjax_process">The Volpano/Smith/Irvine security type system requires that variables are annotated as high (secret) or low (public), and provides typing rules which guarantee that secret values cannot leak to public output ports. This property of a program is called confidentiality. For a simple while-language without threads, our proof shows that typeability in the Volpano/Smith system guarantees noninterference. Noninterference means that if two initial states for program execution are low-equivalent, then the final states are low-equivalent as well. This indeed implies that secret values cannot leak to public ports. The proof defines an abstract syntax and operational semantics for programs, formalizes noninterference, and then proceeds by rule induction on the operational semantics. The mathematically most intricate part is the treatment of implicit flows. Note that the Volpano/Smith system is not flow-sensitive and thus quite imprecise, resulting in false alarms. However, due to the correctness property, all potential breaks of confidentiality are discovered.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{VolpanoSmith-AFP,
author = {Gregor Snelting and Daniel Wasserrab},
title = {A Correctness Proof for the Volpano/Smith Security Typing System},
journal = {Archive of Formal Proofs},
month = sep,
year = 2008,
note = {\url{http://isa-afp.org/entries/VolpanoSmith.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VolpanoSmith/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/VolpanoSmith/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/VolpanoSmith/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-VolpanoSmith-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-VolpanoSmith-2019-06-11.tar.gz">
afp-VolpanoSmith-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-VolpanoSmith-2018-08-16.tar.gz">
afp-VolpanoSmith-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-VolpanoSmith-2017-10-10.tar.gz">
afp-VolpanoSmith-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-VolpanoSmith-2016-12-17.tar.gz">
afp-VolpanoSmith-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-VolpanoSmith-2016-02-22.tar.gz">
afp-VolpanoSmith-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-VolpanoSmith-2015-05-27.tar.gz">
afp-VolpanoSmith-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-VolpanoSmith-2014-08-28.tar.gz">
afp-VolpanoSmith-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-VolpanoSmith-2013-12-11.tar.gz">
afp-VolpanoSmith-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-VolpanoSmith-2013-11-17.tar.gz">
afp-VolpanoSmith-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-VolpanoSmith-2013-02-16.tar.gz">
afp-VolpanoSmith-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-VolpanoSmith-2012-05-24.tar.gz">
afp-VolpanoSmith-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-VolpanoSmith-2011-10-11.tar.gz">
afp-VolpanoSmith-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-VolpanoSmith-2011-02-11.tar.gz">
afp-VolpanoSmith-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-VolpanoSmith-2010-07-01.tar.gz">
afp-VolpanoSmith-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-VolpanoSmith-2009-12-12.tar.gz">
afp-VolpanoSmith-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-VolpanoSmith-2009-04-29.tar.gz">
afp-VolpanoSmith-2009-04-29.tar.gz
</a>
</li>
<li>Isabelle 2008:
<a href="../release/afp-VolpanoSmith-2008-09-05.tar.gz">
afp-VolpanoSmith-2008-09-05.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/WHATandWHERE_Security.html b/web/entries/WHATandWHERE_Security.html
--- a/web/entries/WHATandWHERE_Security.html
+++ b/web/entries/WHATandWHERE_Security.html
@@ -1,259 +1,259 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Formalization of Declassification with WHAT-and-WHERE-Security - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">A</font>
<font class="first">F</font>ormalization
of
<font class="first">D</font>eclassification
with
<font class="first">W</font>HAT-and-WHERE-Security
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">A Formalization of Declassification with WHAT-and-WHERE-Security</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Sylvia Grewe (grewe /at/ st /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Alexander Lux (lux /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de),
Heiko Mantel (mantel /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de) and
Jens Sauer (sauer /at/ mais /dot/ informatik /dot/ tu-darmstadt /dot/ de)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-04-23</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Research in information-flow security aims at developing methods to
+ <td class="abstract mathjax_process">Research in information-flow security aims at developing methods to
identify undesired information leaks within programs from private
sources to public sinks. Noninterference captures this intuition by
requiring that no information whatsoever flows from private sources
to public sinks. However, in practice this definition is often too
strict: Depending on the intuitively desired security policy, the
controlled declassification of certain private information (WHAT) at
certain points in the program (WHERE) might not result in an
undesired information leak.
<p>
We present an Isabelle/HOL formalization of such a security property
for controlled declassification, namely WHAT&WHERE-security from
"Scheduler-Independent Declassification" by Lux, Mantel, and Perner.
The formalization includes
compositionality proofs for this property and a soundness proof for a security
type system that checks programs in a simple while language with
dynamic thread creation.
<p>
Our formalization of the security type system is abstract in the
language for expressions and in the semantic side conditions for
expressions. It can easily be instantiated with different syntactic
approximations for these side conditions. The soundness proof of
such an instantiation boils down to showing that these syntactic
approximations imply the semantic side conditions.
<p>
This Isabelle/HOL formalization uses theories from the entry
-Strong Security.</div></td>
+Strong Security.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{WHATandWHERE_Security-AFP,
author = {Sylvia Grewe and Alexander Lux and Heiko Mantel and Jens Sauer},
title = {A Formalization of Declassification with WHAT-and-WHERE-Security},
journal = {Archive of Formal Proofs},
month = apr,
year = 2014,
note = {\url{http://isa-afp.org/entries/WHATandWHERE_Security.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Strong_Security.html">Strong_Security</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/WHATandWHERE_Security/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/WHATandWHERE_Security/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/WHATandWHERE_Security/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-WHATandWHERE_Security-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-WHATandWHERE_Security-2019-06-11.tar.gz">
afp-WHATandWHERE_Security-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-WHATandWHERE_Security-2018-08-16.tar.gz">
afp-WHATandWHERE_Security-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-WHATandWHERE_Security-2017-10-10.tar.gz">
afp-WHATandWHERE_Security-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-WHATandWHERE_Security-2016-12-17.tar.gz">
afp-WHATandWHERE_Security-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-WHATandWHERE_Security-2016-02-22.tar.gz">
afp-WHATandWHERE_Security-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-WHATandWHERE_Security-2015-05-27.tar.gz">
afp-WHATandWHERE_Security-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-WHATandWHERE_Security-2014-08-28.tar.gz">
afp-WHATandWHERE_Security-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-WHATandWHERE_Security-2014-04-24.tar.gz">
afp-WHATandWHERE_Security-2014-04-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/WOOT_Strong_Eventual_Consistency.html b/web/entries/WOOT_Strong_Eventual_Consistency.html
--- a/web/entries/WOOT_Strong_Eventual_Consistency.html
+++ b/web/entries/WOOT_Strong_Eventual_Consistency.html
@@ -1,209 +1,209 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Strong Eventual Consistency of the Collaborative Editing Framework WOOT - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">S</font>trong
<font class="first">E</font>ventual
<font class="first">C</font>onsistency
of
the
<font class="first">C</font>ollaborative
<font class="first">E</font>diting
<font class="first">F</font>ramework
<font class="first">W</font>OOT
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Strong Eventual Consistency of the Collaborative Editing Framework WOOT</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="https://orcid.org/0000-0003-3290-5034">Emin Karayel</a> and
Edgar Gonzàlez (edgargip /at/ google /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2020-03-25</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
Commutative Replicated Data Types (CRDTs) are a promising new class of
data structures for large-scale shared mutable content in applications
that only require eventual consistency. The WithOut Operational
Transforms (WOOT) framework is a CRDT for collaborative text editing
introduced by Oster et al. (CSCW 2006) for which the eventual
consistency property was verified only for a bounded model to date. We
-contribute a formal proof for WOOT's strong eventual consistency.</div></td>
+contribute a formal proof for WOOT's strong eventual consistency.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{WOOT_Strong_Eventual_Consistency-AFP,
author = {Emin Karayel and Edgar Gonzàlez},
title = {Strong Eventual Consistency of the Collaborative Editing Framework WOOT},
journal = {Archive of Formal Proofs},
month = mar,
year = 2020,
note = {\url{http://isa-afp.org/entries/WOOT_Strong_Eventual_Consistency.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Certification_Monads.html">Certification_Monads</a>, <a href="Datatype_Order_Generator.html">Datatype_Order_Generator</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/WOOT_Strong_Eventual_Consistency/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/WOOT_Strong_Eventual_Consistency/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/WOOT_Strong_Eventual_Consistency/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-WOOT_Strong_Eventual_Consistency-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-WOOT_Strong_Eventual_Consistency-2020-03-26.tar.gz">
afp-WOOT_Strong_Eventual_Consistency-2020-03-26.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/WebAssembly.html b/web/entries/WebAssembly.html
--- a/web/entries/WebAssembly.html
+++ b/web/entries/WebAssembly.html
@@ -1,206 +1,206 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>WebAssembly - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">W</font>ebAssembly
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">WebAssembly</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://www.cl.cam.ac.uk/~caw77/">Conrad Watt</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-04-29</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This is a mechanised specification of the WebAssembly language, drawn
mainly from the previously published paper formalisation of Haas et
al. Also included is a full proof of soundness of the type system,
together with a verified type checker and interpreter. We include only
a partial procedure for the extraction of the type checker and
-interpreter here. For more details, please see our paper in CPP 2018.</div></td>
+interpreter here. For more details, please see our paper in CPP 2018.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{WebAssembly-AFP,
author = {Conrad Watt},
title = {WebAssembly},
journal = {Archive of Formal Proofs},
month = apr,
year = 2018,
note = {\url{http://isa-afp.org/entries/WebAssembly.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Native_Word.html">Native_Word</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/WebAssembly/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/WebAssembly/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/WebAssembly/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-WebAssembly-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-WebAssembly-2019-06-11.tar.gz">
afp-WebAssembly-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-WebAssembly-2018-08-16.tar.gz">
afp-WebAssembly-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-WebAssembly-2018-04-30.tar.gz">
afp-WebAssembly-2018-04-30.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-WebAssembly-2018-04-29.tar.gz">
afp-WebAssembly-2018-04-29.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Weight_Balanced_Trees.html b/web/entries/Weight_Balanced_Trees.html
--- a/web/entries/Weight_Balanced_Trees.html
+++ b/web/entries/Weight_Balanced_Trees.html
@@ -1,204 +1,204 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Weight-Balanced Trees - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">W</font>eight-Balanced
<font class="first">T</font>rees
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Weight-Balanced Trees</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a> and
Stefan Dirix
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2018-03-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This theory provides a verified implementation of weight-balanced
trees following the work of <a
href="https://doi.org/10.1017/S0956796811000104">Hirai
and Yamamoto</a> who proved that all parameters in a certain
range are valid, i.e. guarantee that insertion and deletion preserve
weight-balance. Instead of a general theorem we provide parameterized
proofs of preservation of the invariant that work for many (all?)
-valid parameters.</div></td>
+valid parameters.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Weight_Balanced_Trees-AFP,
author = {Tobias Nipkow and Stefan Dirix},
title = {Weight-Balanced Trees},
journal = {Archive of Formal Proofs},
month = mar,
year = 2018,
note = {\url{http://isa-afp.org/entries/Weight_Balanced_Trees.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Weight_Balanced_Trees/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Weight_Balanced_Trees/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Weight_Balanced_Trees/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Weight_Balanced_Trees-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Weight_Balanced_Trees-2019-06-11.tar.gz">
afp-Weight_Balanced_Trees-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Weight_Balanced_Trees-2018-08-16.tar.gz">
afp-Weight_Balanced_Trees-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Weight_Balanced_Trees-2018-03-13.tar.gz">
afp-Weight_Balanced_Trees-2018-03-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Well_Quasi_Orders.html b/web/entries/Well_Quasi_Orders.html
--- a/web/entries/Well_Quasi_Orders.html
+++ b/web/entries/Well_Quasi_Orders.html
@@ -1,264 +1,264 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Well-Quasi-Orders - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">W</font>ell-Quasi-Orders
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Well-Quasi-Orders</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2012-04-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Based on Isabelle/HOL's type class for preorders,
+ <td class="abstract mathjax_process">Based on Isabelle/HOL's type class for preorders,
we introduce a type class for well-quasi-orders (wqo)
which is characterized by the absence of "bad" sequences
(our proofs are along the lines of the proof of Nash-Williams,
from which we also borrow terminology). Our main results are
instantiations for the product type, the list type, and a type of finite trees,
which (almost) directly follow from our proofs of (1) Dickson's Lemma, (2)
Higman's Lemma, and (3) Kruskal's Tree Theorem. More concretely:
<ul>
<li>If the sets A and B are wqo then their Cartesian product is wqo.</li>
<li>If the set A is wqo then the set of finite lists over A is wqo.</li>
<li>If the set A is wqo then the set of finite trees over A is wqo.</li>
</ul>
-The research was funded by the Austrian Science Fund (FWF): J3202.</div></td>
+The research was funded by the Austrian Science Fund (FWF): J3202.</td>
</tr>
<tr>
<td class="datahead" valign="top">Change history:</td>
<td class="abstract">[2012-06-11]: Added Kruskal's Tree Theorem.<br>
[2012-12-19]: New variant of Kruskal's tree theorem for terms (as opposed to
variadic terms, i.e., trees), plus finite version of the tree theorem as
corollary.<br>
[2013-05-16]: Simplified construction of minimal bad sequences.<br>
[2014-07-09]: Simplified proofs of Higman's lemma and Kruskal's tree theorem,
based on homogeneous sequences.<br>
[2016-01-03]: An alternative proof of Higman's lemma by open induction.<br>
[2017-06-08]: Proved (classical) equivalence to inductive definition of
almost-full relations according to the ITP 2012 paper "Stop When You Are
Almost-Full" by Vytiniotis, Coquand, and Wahlstedt.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Well_Quasi_Orders-AFP,
author = {Christian Sternagel},
title = {Well-Quasi-Orders},
journal = {Archive of Formal Proofs},
month = apr,
year = 2012,
note = {\url{http://isa-afp.org/entries/Well_Quasi_Orders.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Abstract-Rewriting.html">Abstract-Rewriting</a>, <a href="Open_Induction.html">Open_Induction</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Decreasing-Diagrams-II.html">Decreasing-Diagrams-II</a>, <a href="Myhill-Nerode.html">Myhill-Nerode</a>, <a href="Polynomials.html">Polynomials</a>, <a href="Saturation_Framework.html">Saturation_Framework</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Well_Quasi_Orders/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Well_Quasi_Orders/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Well_Quasi_Orders/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Well_Quasi_Orders-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Well_Quasi_Orders-2019-06-11.tar.gz">
afp-Well_Quasi_Orders-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Well_Quasi_Orders-2018-08-16.tar.gz">
afp-Well_Quasi_Orders-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Well_Quasi_Orders-2017-10-10.tar.gz">
afp-Well_Quasi_Orders-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Well_Quasi_Orders-2016-12-17.tar.gz">
afp-Well_Quasi_Orders-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Well_Quasi_Orders-2016-02-22.tar.gz">
afp-Well_Quasi_Orders-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-Well_Quasi_Orders-2015-05-27.tar.gz">
afp-Well_Quasi_Orders-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-Well_Quasi_Orders-2014-08-28.tar.gz">
afp-Well_Quasi_Orders-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-Well_Quasi_Orders-2013-12-11.tar.gz">
afp-Well_Quasi_Orders-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-Well_Quasi_Orders-2013-11-17.tar.gz">
afp-Well_Quasi_Orders-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-Well_Quasi_Orders-2013-02-16.tar.gz">
afp-Well_Quasi_Orders-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-Well_Quasi_Orders-2012-05-24.tar.gz">
afp-Well_Quasi_Orders-2012-05-24.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Winding_Number_Eval.html b/web/entries/Winding_Number_Eval.html
--- a/web/entries/Winding_Number_Eval.html
+++ b/web/entries/Winding_Number_Eval.html
@@ -1,214 +1,214 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Evaluate Winding Numbers through Cauchy Indices - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">E</font>valuate
<font class="first">W</font>inding
<font class="first">N</font>umbers
through
<font class="first">C</font>auchy
<font class="first">I</font>ndices
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Evaluate Winding Numbers through Cauchy Indices</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-17</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
In complex analysis, the winding number measures the number of times a
path (counterclockwise) winds around a point, while the Cauchy index
can approximate how the path winds. This entry provides a
formalisation of the Cauchy index, which is then shown to be related
to the winding number. In addition, this entry also offers a tactic
that enables users to evaluate the winding number by calculating
-Cauchy indices.</div></td>
+Cauchy indices.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Winding_Number_Eval-AFP,
author = {Wenda Li},
title = {Evaluate Winding Numbers through Cauchy Indices},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Winding_Number_Eval.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Budan_Fourier.html">Budan_Fourier</a>, <a href="Sturm_Tarski.html">Sturm_Tarski</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Count_Complex_Roots.html">Count_Complex_Roots</a>, <a href="Zeta_Function.html">Zeta_Function</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Winding_Number_Eval/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Winding_Number_Eval/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Winding_Number_Eval/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Winding_Number_Eval-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Winding_Number_Eval-2019-06-11.tar.gz">
afp-Winding_Number_Eval-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Winding_Number_Eval-2018-08-16.tar.gz">
afp-Winding_Number_Eval-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Winding_Number_Eval-2017-10-18.tar.gz">
afp-Winding_Number_Eval-2017-10-18.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Word_Lib.html b/web/entries/Word_Lib.html
--- a/web/entries/Word_Lib.html
+++ b/web/entries/Word_Lib.html
@@ -1,226 +1,226 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Finite Machine Word Library - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">F</font>inite
<font class="first">M</font>achine
<font class="first">W</font>ord
<font class="first">L</font>ibrary
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Finite Machine Word Library</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Joel Beeren,
Matthew Fernandez,
Xin Gao,
<a href="http://www.cse.unsw.edu.au/~kleing/">Gerwin Klein</a>,
Rafal Kolanski,
Japheth Lim,
Corey Lewis,
Daniel Matichuk and
Thomas Sewell
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2016-06-09</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry contains an extension to the Isabelle library for
fixed-width machine words. In particular, the entry adds quickcheck setup
for words, printing as hexadecimals, additional operations, reasoning
about alignment, signed words, enumerations of words, normalisation of
word numerals, and an extensive library of properties about generic
fixed-width words, as well as an instantiation of many of these to the
-commonly used 32 and 64-bit bases.</div></td>
+commonly used 32 and 64-bit bases.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Word_Lib-AFP,
author = {Joel Beeren and Matthew Fernandez and Xin Gao and Gerwin Klein and Rafal Kolanski and Japheth Lim and Corey Lewis and Daniel Matichuk and Thomas Sewell},
title = {Finite Machine Word Library},
journal = {Archive of Formal Proofs},
month = jun,
year = 2016,
note = {\url{http://isa-afp.org/entries/Word_Lib.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Complx.html">Complx</a>, <a href="IEEE_Floating_Point.html">IEEE_Floating_Point</a>, <a href="Interval_Arithmetic_Word32.html">Interval_Arithmetic_Word32</a>, <a href="IP_Addresses.html">IP_Addresses</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Word_Lib/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Word_Lib/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Word_Lib/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Word_Lib-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Word_Lib-2019-06-11.tar.gz">
afp-Word_Lib-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Word_Lib-2018-08-16.tar.gz">
afp-Word_Lib-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Word_Lib-2017-10-10.tar.gz">
afp-Word_Lib-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-Word_Lib-2016-12-17.tar.gz">
afp-Word_Lib-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-Word_Lib-2016-06-09.tar.gz">
afp-Word_Lib-2016-06-09.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/WorkerWrapper.html b/web/entries/WorkerWrapper.html
--- a/web/entries/WorkerWrapper.html
+++ b/web/entries/WorkerWrapper.html
@@ -1,267 +1,267 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Worker/Wrapper Transformation - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">W</font>orker/Wrapper
<font class="first">T</font>ransformation
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Worker/Wrapper Transformation</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2009-10-30</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">Gill and Hutton formalise the worker/wrapper transformation, building on the work of Launchbury and Peyton-Jones who developed it as a way of changing the type at which a recursive function operates. This development establishes the soundness of the technique and several examples of its use.</div></td>
+ <td class="abstract mathjax_process">Gill and Hutton formalise the worker/wrapper transformation, building on the work of Launchbury and Peyton-Jones who developed it as a way of changing the type at which a recursive function operates. This development establishes the soundness of the technique and several examples of its use.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{WorkerWrapper-AFP,
author = {Peter Gammie},
title = {The Worker/Wrapper Transformation},
journal = {Archive of Formal Proofs},
month = oct,
year = 2009,
note = {\url{http://isa-afp.org/entries/WorkerWrapper.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/WorkerWrapper/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/WorkerWrapper/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/WorkerWrapper/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-WorkerWrapper-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-WorkerWrapper-2019-06-11.tar.gz">
afp-WorkerWrapper-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-WorkerWrapper-2018-08-16.tar.gz">
afp-WorkerWrapper-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-WorkerWrapper-2017-10-10.tar.gz">
afp-WorkerWrapper-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-WorkerWrapper-2016-12-17.tar.gz">
afp-WorkerWrapper-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-WorkerWrapper-2016-02-22.tar.gz">
afp-WorkerWrapper-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-WorkerWrapper-2015-05-27.tar.gz">
afp-WorkerWrapper-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-WorkerWrapper-2014-08-28.tar.gz">
afp-WorkerWrapper-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-WorkerWrapper-2013-12-11.tar.gz">
afp-WorkerWrapper-2013-12-11.tar.gz
</a>
</li>
<li>Isabelle 2013-1:
<a href="../release/afp-WorkerWrapper-2013-11-17.tar.gz">
afp-WorkerWrapper-2013-11-17.tar.gz
</a>
</li>
<li>Isabelle 2013:
<a href="../release/afp-WorkerWrapper-2013-02-16.tar.gz">
afp-WorkerWrapper-2013-02-16.tar.gz
</a>
</li>
<li>Isabelle 2012:
<a href="../release/afp-WorkerWrapper-2012-05-24.tar.gz">
afp-WorkerWrapper-2012-05-24.tar.gz
</a>
</li>
<li>Isabelle 2011-1:
<a href="../release/afp-WorkerWrapper-2011-10-11.tar.gz">
afp-WorkerWrapper-2011-10-11.tar.gz
</a>
</li>
<li>Isabelle 2011:
<a href="../release/afp-WorkerWrapper-2011-02-11.tar.gz">
afp-WorkerWrapper-2011-02-11.tar.gz
</a>
</li>
<li>Isabelle 2009-2:
<a href="../release/afp-WorkerWrapper-2010-07-01.tar.gz">
afp-WorkerWrapper-2010-07-01.tar.gz
</a>
</li>
<li>Isabelle 2009-1:
<a href="../release/afp-WorkerWrapper-2009-12-12.tar.gz">
afp-WorkerWrapper-2009-12-12.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-WorkerWrapper-2009-10-31.tar.gz">
afp-WorkerWrapper-2009-10-31.tar.gz
</a>
</li>
<li>Isabelle 2009:
<a href="../release/afp-WorkerWrapper-2009-10-30.tar.gz">
afp-WorkerWrapper-2009-10-30.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/XML.html b/web/entries/XML.html
--- a/web/entries/XML.html
+++ b/web/entries/XML.html
@@ -1,223 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>XML - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">X</font>ML
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">XML</td>
</tr>
<tr>
<td class="datahead">
Authors:
</td>
<td class="data">
Christian Sternagel (c /dot/ sternagel /at/ gmail /dot/ com) and
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-10-03</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
This entry provides an XML library for Isabelle/HOL. This includes parsing
and pretty printing of XML trees as well as combinators for transforming XML
trees into arbitrary user-defined data. The main contribution of this entry is
an interface (fit for code generation) that allows for communication between
verified programs formalized in Isabelle/HOL and the outside world via XML.
This library was developed as part of the IsaFoR/CeTA project
-to which we refer for examples of its usage.</div></td>
+to which we refer for examples of its usage.</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{XML-AFP,
author = {Christian Sternagel and René Thiemann},
title = {XML},
journal = {Archive of Formal Proofs},
month = oct,
year = 2014,
note = {\url{http://isa-afp.org/entries/XML.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Certification_Monads.html">Certification_Monads</a>, <a href="Show.html">Show</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/XML/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/XML/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/XML/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-XML-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-XML-2019-06-11.tar.gz">
afp-XML-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-XML-2018-08-16.tar.gz">
afp-XML-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-XML-2017-10-10.tar.gz">
afp-XML-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-XML-2016-12-17.tar.gz">
afp-XML-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-XML-2016-02-22.tar.gz">
afp-XML-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-XML-2015-05-27.tar.gz">
afp-XML-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-XML-2014-10-08.tar.gz">
afp-XML-2014-10-08.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/ZFC_in_HOL.html b/web/entries/ZFC_in_HOL.html
--- a/web/entries/ZFC_in_HOL.html
+++ b/web/entries/ZFC_in_HOL.html
@@ -1,221 +1,221 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Zermelo Fraenkel Set Theory in Higher-Order Logic - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">Z</font>ermelo
<font class="first">F</font>raenkel
<font class="first">S</font>et
<font class="first">T</font>heory
in
<font class="first">H</font>igher-Order
<font class="first">L</font>ogic
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">Zermelo Fraenkel Set Theory in Higher-Order Logic</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-10-24</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry is a new formalisation of ZFC set theory in Isabelle/HOL. It is
logically equivalent to Obua's HOLZF; the point is to have the closest
possible integration with the rest of Isabelle/HOL, minimising the amount of
new notations and exploiting type classes.</p>
<p>There is a type <em>V</em> of sets and a function <em>elts :: V =&gt; V
set</em> mapping a set to its elements. Classes simply have type <em>V
set</em>, and a predicate identifies the small classes: those that correspond
to actual sets. Type classes connected with orders and lattices are used to
minimise the amount of new notation for concepts such as the subset relation,
union and intersection. Basic concepts — Cartesian products, disjoint sums,
natural numbers, functions, etc. — are formalised.</p>
<p>More advanced set-theoretic concepts, such as transfinite induction,
ordinals, cardinals and the transitive closure of a set, are also provided.
The definition of addition and multiplication for general sets (not just
ordinals) follows Kirby.</p>
<p>The theory provides two type classes with the aim of facilitating
developments that combine <em>V</em> with other Isabelle/HOL types:
<em>embeddable</em>, the class of types that can be injected into <em>V</em>
(including <em>V</em> itself as well as <em>V*V</em>, etc.), and
<em>small</em>, the class of types that correspond to some ZF set.</p>
extra-history =
Change history:
[2020-01-28]: Generalisation of the "small" predicate and order types to arbitrary sets;
ordinal exponentiation;
introduction of the coercion ord_of_nat :: "nat => V";
-numerous new lemmas. (revision 6081d5be8d08)</div></td>
+numerous new lemmas. (revision 6081d5be8d08)</td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{ZFC_in_HOL-AFP,
author = {Lawrence C. Paulson},
title = {Zermelo Fraenkel Set Theory in Higher-Order Logic},
journal = {Archive of Formal Proofs},
month = oct,
year = 2019,
note = {\url{http://isa-afp.org/entries/ZFC_in_HOL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ZFC_in_HOL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/ZFC_in_HOL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/ZFC_in_HOL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-ZFC_in_HOL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-ZFC_in_HOL-2019-11-04.tar.gz">
afp-ZFC_in_HOL-2019-11-04.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Zeta_3_Irrational.html b/web/entries/Zeta_3_Irrational.html
--- a/web/entries/Zeta_3_Irrational.html
+++ b/web/entries/Zeta_3_Irrational.html
@@ -1,198 +1,198 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Irrationality of ζ(3) - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">I</font>rrationality
of
ζ(3)
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Irrationality of ζ(3)</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2019-12-27</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This article provides a formalisation of Beukers's
straightforward analytic proof that ζ(3) is irrational. This was first
proven by Apéry (which is why this result is also often called
‘Apéry's Theorem’) using a more algebraic approach. This
formalisation follows <a
href="http://people.math.sc.edu/filaseta/gradcourses/Math785/Math785Notes4.pdf">Filaseta's
-presentation</a> of Beukers's proof.</p></div></td>
+presentation</a> of Beukers's proof.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Zeta_3_Irrational-AFP,
author = {Manuel Eberl},
title = {The Irrationality of ζ(3)},
journal = {Archive of Formal Proofs},
month = dec,
year = 2019,
note = {\url{http://isa-afp.org/entries/Zeta_3_Irrational.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="E_Transcendental.html">E_Transcendental</a>, <a href="Prime_Distribution_Elementary.html">Prime_Distribution_Elementary</a>, <a href="Prime_Number_Theorem.html">Prime_Number_Theorem</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Zeta_3_Irrational/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Zeta_3_Irrational/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Zeta_3_Irrational/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Zeta_3_Irrational-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Zeta_3_Irrational-2019-12-28.tar.gz">
afp-Zeta_3_Irrational-2019-12-28.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/entries/Zeta_Function.html b/web/entries/Zeta_Function.html
--- a/web/entries/Zeta_Function.html
+++ b/web/entries/Zeta_Function.html
@@ -1,228 +1,228 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>The Hurwitz and Riemann ζ Functions - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']]
},
processEscapes: true,
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> <font class="first">T</font>he
<font class="first">H</font>urwitz
and
<font class="first">R</font>iemann
ζ
<font class="first">F</font>unctions
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">The Hurwitz and Riemann ζ Functions</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2017-10-12</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>This entry builds upon the results about formal and analytic Dirichlet
series to define the Hurwitz &zeta; function &zeta;(<em>a</em>,<em>s</em>) and,
based on that, the Riemann &zeta; function &zeta;(<em>s</em>).
This is done by first defining them for &real;(<em>s</em>) > 1
and then successively extending the domain to the left using the
Euler&ndash;MacLaurin formula.</p>
<p>Apart from the most basic facts such as analyticity, the following
results are provided:</p>
<ul>
<li>the Stieltjes constants and the Laurent expansion of
&zeta;(<em>s</em>) at <em>s</em> = 1</li>
<li>the non-vanishing of &zeta;(<em>s</em>)
for &real;(<em>s</em>) &ge; 1</li>
<li>the relationship between &zeta;(<em>a</em>,<em>s</em>) and &Gamma;</li>
<li>the special values at negative integers and positive even integers</li>
<li>Hurwitz's formula and the reflection formula for &zeta;(<em>s</em>)</li>
<li>the <a href="https://arxiv.org/abs/math/0405478">
Hadjicostas&ndash;Chapman formula</a></li>
</ul>
<p>The entry also contains Euler's analytic proof of the infinitude of primes,
-based on the fact that &zeta;(<i>s</i>) has a pole at <i>s</i> = 1.</p></div></td>
+based on the fact that &zeta;(<i>s</i>) has a pole at <i>s</i> = 1.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{Zeta_Function-AFP,
author = {Manuel Eberl},
title = {The Hurwitz and Riemann ζ Functions},
journal = {Archive of Formal Proofs},
month = oct,
year = 2017,
note = {\url{http://isa-afp.org/entries/Zeta_Function.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
<tr><td class="datahead">Depends on:</td>
<td class="data"><a href="Bernoulli.html">Bernoulli</a>, <a href="Dirichlet_Series.html">Dirichlet_Series</a>, <a href="Euler_MacLaurin.html">Euler_MacLaurin</a>, <a href="Winding_Number_Eval.html">Winding_Number_Eval</a> </td></tr>
<tr><td class="datahead">Used by:</td>
<td class="data"><a href="Dirichlet_L.html">Dirichlet_L</a>, <a href="Prime_Distribution_Elementary.html">Prime_Distribution_Elementary</a>, <a href="Prime_Number_Theorem.html">Prime_Number_Theorem</a> </td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Zeta_Function/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/Zeta_Function/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/Zeta_Function/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-Zeta_Function-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-Zeta_Function-2019-06-11.tar.gz">
afp-Zeta_Function-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-Zeta_Function-2018-08-16.tar.gz">
afp-Zeta_Function-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-Zeta_Function-2017-10-16.tar.gz">
afp-Zeta_Function-2017-10-16.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
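Aside on the Zeta_Function abstract above: it sketches how the Hurwitz ζ function is first defined as a Dirichlet series and then continued to the left with the Euler–MacLaurin formula. The following is a minimal textbook-style rendering of those two ingredients in LaTeX, using the abstract's own argument order ζ(a, s); it is not taken from the entry's Isabelle sources, whose exact definitions may differ in form.

\[
  \zeta(a, s) \;=\; \sum_{n=0}^{\infty} \frac{1}{(n+a)^{s}}
  \qquad (\operatorname{Re}(s) > 1,\ a > 0),
  \qquad\qquad
  \zeta(s) \;=\; \zeta(1, s).
\]

\[
  \sum_{n=M}^{N} f(n)
  \;=\; \int_{M}^{N} f(x)\,dx
  \;+\; \frac{f(M) + f(N)}{2}
  \;+\; \sum_{k=1}^{m} \frac{B_{2k}}{(2k)!}\,
        \bigl(f^{(2k-1)}(N) - f^{(2k-1)}(M)\bigr)
  \;+\; R_m .
\]

Applying the second formula to f(x) = (x + a)^{-s} (with B_{2k} the Bernoulli numbers) turns the partial sums of the series into an expression that converges on a larger half-plane for each additional term, which is the "successively extending the domain to the left" step the abstract describes.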
diff --git a/web/entries/pGCL.html b/web/entries/pGCL.html
--- a/web/entries/pGCL.html
+++ b/web/entries/pGCL.html
@@ -1,230 +1,230 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>pGCL for Isabelle - Archive of Formal Proofs
</title>
<link rel="stylesheet" type="text/css" href="../front.css">
<link rel="icon" href="../images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="../rss.xml">
<!-- MathJax for LaTeX support in abstracts -->
<script>
MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
processEscapes: true
},
svg: {
fontCache: 'global'
}
};
</script>
<script id="MathJax-script" async src="../components/mathjax/es5/tex-mml-chtml.js"></script>
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="../images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="../index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="../about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="../submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="../updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="../search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="../statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="../topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="../download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1> pGCL
for
<font class="first">I</font>sabelle
</h1>
<p>&nbsp;</p>
<table width="80%" class="data">
<tbody>
<tr>
<td class="datahead" width="20%">Title:</td>
<td class="data" width="80%">pGCL for Isabelle</td>
</tr>
<tr>
<td class="datahead">
Author:
</td>
<td class="data">
David Cock (david /dot/ cock /at/ nicta /dot/ com /dot/ au)
</td>
</tr>
<tr>
<td class="datahead">Submission date:</td>
<td class="data">2014-07-13</td>
</tr>
<tr>
<td class="datahead" valign="top">Abstract:</td>
- <td class="abstract"><div class="mathjax_process">
+ <td class="abstract mathjax_process">
<p>pGCL is both a programming language and a specification language that
incorporates both probabilistic and nondeterministic choice, in a unified
manner. Program verification is by refinement or annotation (or both), using
either Hoare triples, or weakest-precondition entailment, in the style of
GCL.</p>
<p> This package provides both a shallow embedding of the language
primitives, and an annotation and refinement framework. The generated
-document includes a brief tutorial.</p></div></td>
+document includes a brief tutorial.</p></td>
</tr>
<tr>
<td class="datahead" valign="top">BibTeX:</td>
<td class="formatted">
<pre>@article{pGCL-AFP,
author = {David Cock},
title = {pGCL for Isabelle},
journal = {Archive of Formal Proofs},
month = jul,
year = 2014,
note = {\url{http://isa-afp.org/entries/pGCL.html},
Formal proof development},
ISSN = {2150-914x},
}</pre>
</td>
</tr>
<tr><td class="datahead">License:</td>
<td class="data"><a href="http://isa-afp.org/LICENSE">BSD License</a></td></tr>
</tbody>
</table>
<p></p>
<table class="links">
<tbody>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/pGCL/outline.pdf">Proof outline</a><br>
<a href="../browser_info/current/AFP/pGCL/document.pdf">Proof document</a>
</td>
</tr>
<tr>
<td class="links">
<a href="../browser_info/current/AFP/pGCL/index.html">Browse theories</a>
</td></tr>
<tr>
<td class="links">
<a href="../release/afp-pGCL-current.tar.gz">Download this entry</a>
</td>
</tr>
<tr><td class="links">Older releases:
<ul>
<li>Isabelle 2019:
<a href="../release/afp-pGCL-2019-06-11.tar.gz">
afp-pGCL-2019-06-11.tar.gz
</a>
</li>
<li>Isabelle 2018:
<a href="../release/afp-pGCL-2018-08-16.tar.gz">
afp-pGCL-2018-08-16.tar.gz
</a>
</li>
<li>Isabelle 2017:
<a href="../release/afp-pGCL-2017-10-10.tar.gz">
afp-pGCL-2017-10-10.tar.gz
</a>
</li>
<li>Isabelle 2016-1:
<a href="../release/afp-pGCL-2016-12-17.tar.gz">
afp-pGCL-2016-12-17.tar.gz
</a>
</li>
<li>Isabelle 2016:
<a href="../release/afp-pGCL-2016-02-22.tar.gz">
afp-pGCL-2016-02-22.tar.gz
</a>
</li>
<li>Isabelle 2015:
<a href="../release/afp-pGCL-2015-05-27.tar.gz">
afp-pGCL-2015-05-27.tar.gz
</a>
</li>
<li>Isabelle 2014:
<a href="../release/afp-pGCL-2014-08-28.tar.gz">
afp-pGCL-2014-08-28.tar.gz
</a>
</li>
<li>Isabelle 2013-2:
<a href="../release/afp-pGCL-2014-07-13.tar.gz">
afp-pGCL-2014-07-13.tar.gz
</a>
</li>
</ul>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="../jquery.min.js"></script>
<script src="../script.js"></script>
</body>
</html>
\ No newline at end of file
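Aside on the pGCL abstract above: it states that verification proceeds via Hoare triples or weakest-precondition entailment in the style of GCL. As a hedged sketch of the standard expectation-transformer reading of pGCL's two choice operators (following McIver and Morgan, not taken from the entry's Isabelle sources, which may name things differently):

\[
  wp(c_1 \oplus_p c_2)(E) \;=\; p \cdot wp(c_1)(E) \;+\; (1-p)\cdot wp(c_2)(E),
  \qquad
  wp(c_1 \sqcap c_2)(E) \;=\; \min\bigl(wp(c_1)(E),\ wp(c_2)(E)\bigr),
\]

where \(E\) is an expectation (a real-valued function on states) rather than a Boolean predicate, \(\oplus_p\) is probabilistic choice with probability \(p\), and \(\sqcap\) is demonic nondeterministic choice. A total-correctness triple \(\{P\}\ c\ \{Q\}\) then corresponds to the pointwise entailment \(P \le wp(c)(Q)\), which is the "weakest-precondition entailment" the abstract refers to.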
diff --git a/web/index.html b/web/index.html
--- a/web/index.html
+++ b/web/index.html
@@ -1,4851 +1,4876 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Archive of Formal Proofs</title>
<link rel="stylesheet" type="text/css" href="front.css">
<link rel="icon" href="images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="rss.xml">
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1><font class="first">A</font>rchive of
<font class="first">F</font>ormal
<font class="first">P</font>roofs</h1>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td>
The Archive of Formal Proofs is a collection of proof libraries, examples, and larger scientific developments,
mechanically checked in the theorem prover <a href="http://isabelle.in.tum.de/">Isabelle</a>. It is organized like
a scientific journal, is indexed by <a href="http://dblp.uni-trier.de/db/journals/afp/">dblp</a>, and has the ISSN
2150-914x. Submissions are refereed. The preferred citation style is available <a href="citing.html">[here]</a>. We encourage AFP submissions as companions to conference and journal publications.
<br><br>A <a href="http://devel.isa-afp.org">development version</a> of the archive is available as well. </td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2020</td>
</tr>
<tr>
<td class="entry">
+ 2020-04-27: <a href="entries/Attack_Trees.html">Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems</a>
+ <br>
+ Author:
+ <a href="http://www.cs.mdx.ac.uk/people/florian-kammueller/">Florian Kammueller</a>
+ </td>
+ </tr>
+ <tr>
+ <td class="entry">
+ 2020-04-16: <a href="entries/ADS_Functor.html">Authenticated Data Structures As Functors</a>
+ <br>
+ Authors:
+ <a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
+ and Ognjen Marić
+ </td>
+ </tr>
+ <tr>
+ <td class="entry">
2020-04-10: <a href="entries/Sliding_Window_Algorithm.html">Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows</a>
<br>
Authors:
Lukas Heimes,
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
and Joshua Schneider
</td>
</tr>
<tr>
<td class="entry">
2020-04-09: <a href="entries/Saturation_Framework.html">A Comprehensive Framework for Saturation Theorem Proving</a>
<br>
Author:
<a href="https://www.mpi-inf.mpg.de/departments/automation-of-logic/people/sophie-tourret/">Sophie Tourret</a>
</td>
</tr>
<tr>
<td class="entry">
2020-04-09: <a href="entries/MFODL_Monitor_Optimized.html">Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations</a>
<br>
Authors:
Thibault Dardinier,
Lukas Heimes,
Martin Raszyk,
Joshua Schneider
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
+ 2020-04-07: <a href="entries/Lucas_Theorem.html">Lucas's Theorem</a>
+ <br>
+ Author:
+ Chelsea Edmonds
+ </td>
+ </tr>
+ <tr>
+ <td class="entry">
2020-03-25: <a href="entries/WOOT_Strong_Eventual_Consistency.html">Strong Eventual Consistency of the Collaborative Editing Framework WOOT</a>
<br>
Authors:
<a href="https://orcid.org/0000-0003-3290-5034">Emin Karayel</a>
and Edgar Gonzàlez
</td>
</tr>
<tr>
<td class="entry">
2020-03-22: <a href="entries/Furstenberg_Topology.html">Furstenberg's topology and his proof of the infinitude of primes</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2020-03-12: <a href="entries/Relational-Incorrectness-Logic.html">An Under-Approximate Relational Logic</a>
<br>
Author:
<a href="https://people.eng.unimelb.edu.au/tobym/">Toby Murray</a>
</td>
</tr>
<tr>
<td class="entry">
2020-03-07: <a href="entries/Hello_World.html">Hello World</a>
<br>
Authors:
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>
and <a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2020-02-21: <a href="entries/Goodstein_Lambda.html">Implementing the Goodstein Function in &lambda;-Calculus</a>
<br>
Author:
Bertram Felgenhauer
</td>
</tr>
<tr>
<td class="entry">
2020-02-10: <a href="entries/VeriComp.html">A Generic Framework for Verified Compilers</a>
<br>
Author:
<a href="https://martin.desharnais.me">Martin Desharnais</a>
</td>
</tr>
<tr>
<td class="entry">
2020-02-01: <a href="entries/Arith_Prog_Rel_Primes.html">Arithmetic progressions and relative primes</a>
<br>
Author:
<a href="https://josephcmac.github.io/">José Manuel Rodríguez Caballero</a>
</td>
</tr>
<tr>
<td class="entry">
2020-01-31: <a href="entries/Subset_Boolean_Algebras.html">A Hierarchy of Algebras for Boolean Subsets</a>
<br>
Authors:
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
and <a href="https://www.informatik.uni-augsburg.de/en/chairs/dbis/pmi/staff/moeller/">Bernhard Möller</a>
</td>
</tr>
<tr>
<td class="entry">
2020-01-17: <a href="entries/Mersenne_Primes.html">Mersenne primes and the Lucas–Lehmer test</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2020-01-16: <a href="entries/Approximation_Algorithms.html">Verified Approximation Algorithms</a>
<br>
Authors:
Robin Eßmann,
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
and <a href="https://simon-robillard.net/">Simon Robillard</a>
</td>
</tr>
<tr>
<td class="entry">
2020-01-13: <a href="entries/Closest_Pair_Points.html">Closest Pair of Points Algorithms</a>
<br>
Authors:
Martin Rau
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2020-01-09: <a href="entries/Skip_Lists.html">Skip Lists</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Max W. Haslbeck</a>
and <a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2020-01-06: <a href="entries/Bicategory.html">Bicategories</a>
<br>
Author:
Eugene W. Stark
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2019</td>
</tr>
<tr>
<td class="entry">
2019-12-27: <a href="entries/Zeta_3_Irrational.html">The Irrationality of ζ(3)</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2019-12-20: <a href="entries/Hybrid_Logic.html">Formalizing a Seligman-Style Tableau System for Hybrid Logic</a>
<br>
Author:
<a href="https://people.compute.dtu.dk/ahfrom/">Asta Halkjær From</a>
</td>
</tr>
<tr>
<td class="entry">
2019-12-18: <a href="entries/Poincare_Bendixson.html">The Poincaré-Bendixson Theorem</a>
<br>
Authors:
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
and <a href="https://www.cs.cmu.edu/~yongkiat/">Yong Kiam Tan</a>
</td>
</tr>
<tr>
<td class="entry">
2019-12-16: <a href="entries/Poincare_Disc.html">Poincaré Disc Model</a>
<br>
Authors:
<a href="http://poincare.matf.bg.ac.rs/~danijela">Danijela Simić</a>,
Filip Marić
and Pierre Boutry
</td>
</tr>
<tr>
<td class="entry">
2019-12-16: <a href="entries/Complex_Geometry.html">Complex Geometry</a>
<br>
Authors:
Filip Marić
and <a href="http://poincare.matf.bg.ac.rs/~danijela">Danijela Simić</a>
</td>
</tr>
<tr>
<td class="entry">
2019-12-10: <a href="entries/Gauss_Sums.html">Gauss Sums and the Pólya–Vinogradov Inequality</a>
<br>
Authors:
<a href="https://people.epfl.ch/rodrigo.raya">Rodrigo Raya</a>
and <a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2019-12-04: <a href="entries/Generalized_Counting_Sort.html">An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2019-11-27: <a href="entries/Interval_Arithmetic_Word32.html">Interval Arithmetic on 32-bit Words</a>
<br>
Author:
Brandon Bohrer
</td>
</tr>
<tr>
<td class="entry">
2019-10-24: <a href="entries/ZFC_in_HOL.html">Zermelo Fraenkel Set Theory in Higher-Order Logic</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2019-10-22: <a href="entries/Isabelle_C.html">Isabelle/C</a>
<br>
Authors:
<a href="https://www.lri.fr/~ftuong/">Frédéric Tuong</a>
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2019-10-16: <a href="entries/VerifyThis2019.html">VerifyThis 2019 -- Polished Isabelle Solutions</a>
<br>
Authors:
Peter Lammich
and <a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="entry">
2019-10-08: <a href="entries/Aristotles_Assertoric_Syllogistic.html">Aristotle's Assertoric Syllogistic</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~ak2110/">Angeliki Koutsoukou-Argyraki</a>
</td>
</tr>
<tr>
<td class="entry">
2019-10-07: <a href="entries/Sigma_Commit_Crypto.html">Sigma Protocols and Commitment Schemes</a>
<br>
Authors:
<a href="https://www.turing.ac.uk/people/doctoral-students/david-butler">David Butler</a>
and <a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2019-10-04: <a href="entries/Clean.html">Clean - An Abstract Imperative Programming Language and its Theory</a>
<br>
Authors:
<a href="https://www.lri.fr/~ftuong/">Frédéric Tuong</a>
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2019-09-16: <a href="entries/Generic_Join.html">Formalization of Multiway-Join Algorithms</a>
<br>
Author:
Thibault Dardinier
</td>
</tr>
<tr>
<td class="entry">
2019-09-10: <a href="entries/Hybrid_Systems_VCs.html">Verification Components for Hybrid Systems</a>
<br>
Author:
Jonathan Julian Huerta y Munive
</td>
</tr>
<tr>
<td class="entry">
2019-09-06: <a href="entries/Fourier.html">Fourier Series</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2019-08-30: <a href="entries/Jacobson_Basic_Algebra.html">A Case Study in Basic Algebra</a>
<br>
Author:
<a href="http://www21.in.tum.de/~ballarin/">Clemens Ballarin</a>
</td>
</tr>
<tr>
<td class="entry">
2019-08-16: <a href="entries/Adaptive_State_Counting.html">Formalisation of an Adaptive State Counting Algorithm</a>
<br>
Author:
Robert Sachtleben
</td>
</tr>
<tr>
<td class="entry">
2019-08-14: <a href="entries/Laplace_Transform.html">Laplace Transform</a>
<br>
Author:
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
</td>
</tr>
<tr>
<td class="entry">
2019-08-06: <a href="entries/Linear_Programming.html">Linear Programming</a>
<br>
Authors:
<a href="http://www.parsert.com/">Julian Parsert</a>
and <a href="http://cl-informatik.uibk.ac.at/cek/">Cezary Kaliszyk</a>
</td>
</tr>
<tr>
<td class="entry">
2019-08-06: <a href="entries/C2KA_DistributedSystems.html">Communicating Concurrent Kleene Algebra for Distributed Systems Specification</a>
<br>
Authors:
Maxime Buyse
and <a href="https://carleton.ca/jaskolka/">Jason Jaskolka</a>
</td>
</tr>
<tr>
<td class="entry">
2019-08-05: <a href="entries/IMO2019.html">Selected Problems from the International Mathematical Olympiad 2019</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2019-08-01: <a href="entries/Stellar_Quorums.html">Stellar Quorum Systems</a>
<br>
Author:
Giuliano Losa
</td>
</tr>
<tr>
<td class="entry">
2019-07-30: <a href="entries/TESL_Language.html">A Formal Development of a Polychronous Polytimed Coordination Language</a>
<br>
Authors:
Hai Nguyen Van,
Frédéric Boulanger
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2019-07-27: <a href="entries/Szpilrajn.html">Szpilrajn Extension Theorem</a>
<br>
Author:
Peter Zeller
</td>
</tr>
<tr>
<td class="entry">
2019-07-18: <a href="entries/FOL_Seq_Calc1.html">A Sequent Calculus for First-Order Logic</a>
<br>
Author:
<a href="https://people.compute.dtu.dk/ahfrom/">Asta Halkjær From</a>
</td>
</tr>
<tr>
<td class="entry">
2019-07-08: <a href="entries/CakeML_Codegen.html">A Verified Code Generator from Isabelle/HOL to CakeML</a>
<br>
Author:
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2019-07-04: <a href="entries/MFOTL_Monitor.html">Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic</a>
<br>
Authors:
Joshua Schneider
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2019-06-27: <a href="entries/Complete_Non_Orders.html">Complete Non-Orders and Fixed Points</a>
<br>
Authors:
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
and <a href="http://group-mmm.org/~dubut/">Jérémy Dubut</a>
</td>
</tr>
<tr>
<td class="entry">
2019-06-25: <a href="entries/Priority_Search_Trees.html">Priority Search Trees</a>
<br>
Authors:
Peter Lammich
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2019-06-25: <a href="entries/Prim_Dijkstra_Simple.html">Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra</a>
<br>
Authors:
Peter Lammich
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2019-06-21: <a href="entries/Linear_Inequalities.html">Linear Inequalities</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/users/bottesch/">Ralph Bottesch</a>,
Alban Reynaud
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2019-06-16: <a href="entries/Nullstellensatz.html">Hilbert's Nullstellensatz</a>
<br>
Author:
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>
</td>
</tr>
<tr>
<td class="entry">
2019-06-15: <a href="entries/Groebner_Macaulay.html">Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds</a>
<br>
Author:
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>
</td>
</tr>
<tr>
<td class="entry">
2019-06-13: <a href="entries/IMP2_Binary_Heap.html">Binary Heaps for IMP2</a>
<br>
Author:
Simon Griebel
</td>
</tr>
<tr>
<td class="entry">
2019-06-03: <a href="entries/Differential_Game_Logic.html">Differential Game Logic</a>
<br>
Author:
<a href="http://www.cs.cmu.edu/~aplatzer/">André Platzer</a>
</td>
</tr>
<tr>
<td class="entry">
2019-05-30: <a href="entries/KD_Tree.html">Multidimensional Binary Search Trees</a>
<br>
Author:
Martin Rau
</td>
</tr>
<tr>
<td class="entry">
2019-05-14: <a href="entries/LambdaAuth.html">Formalization of Generic Authenticated Data Structures</a>
<br>
Authors:
Matthias Brun
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2019-05-09: <a href="entries/Multi_Party_Computation.html">Multi-Party Computation</a>
<br>
Authors:
<a href="http://homepages.inf.ed.ac.uk/da/">David Aspinall</a>
and <a href="https://www.turing.ac.uk/people/doctoral-students/david-butler">David Butler</a>
</td>
</tr>
<tr>
<td class="entry">
2019-04-26: <a href="entries/HOL-CSP.html">HOL-CSP Version 2.0</a>
<br>
Authors:
Safouan Taha,
Lina Ye
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2019-04-16: <a href="entries/LTL_Master_Theorem.html">A Compositional and Unified Translation of LTL into ω-Automata</a>
<br>
Authors:
Benedikt Seidl
and Salomon Sickert
</td>
</tr>
<tr>
<td class="entry">
2019-04-06: <a href="entries/Binding_Syntax_Theory.html">A General Theory of Syntax with Bindings</a>
<br>
Authors:
Lorenzo Gheri
and Andrei Popescu
</td>
</tr>
<tr>
<td class="entry">
2019-03-27: <a href="entries/Transcendence_Series_Hancl_Rucki.html">The Transcendence of Certain Infinite Series</a>
<br>
Authors:
<a href="https://www.cl.cam.ac.uk/~ak2110/">Angeliki Koutsoukou-Argyraki</a>
and <a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="entry">
2019-03-24: <a href="entries/QHLProver.html">Quantum Hoare Logic</a>
<br>
Authors:
Junyi Liu,
<a href="http://lcs.ios.ac.cn/~bzhan/">Bohua Zhan</a>,
Shuling Wang,
Shenggang Ying,
Tao Liu,
Yangjia Li,
Mingsheng Ying
and Naijun Zhan
</td>
</tr>
<tr>
<td class="entry">
2019-03-09: <a href="entries/Safe_OCL.html">Safe OCL</a>
<br>
Author:
Denis Nikiforov
</td>
</tr>
<tr>
<td class="entry">
2019-02-21: <a href="entries/Prime_Distribution_Elementary.html">Elementary Facts About the Distribution of Primes</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2019-02-14: <a href="entries/Kruskal.html">Kruskal's Algorithm for Minimum Spanning Forest</a>
<br>
Authors:
<a href="http://in.tum.de/~haslbema/">Maximilian P.L. Haslbeck</a>,
Peter Lammich
and Julian Biendarra
</td>
</tr>
<tr>
<td class="entry">
2019-02-11: <a href="entries/Probabilistic_Prime_Tests.html">Probabilistic Primality Testing</a>
<br>
Authors:
Daniel Stüwe
and <a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2019-02-08: <a href="entries/Universal_Turing_Machine.html">Universal Turing Machine</a>
<br>
Authors:
Jian Xu,
Xingyuan Zhang,
<a href="http://www.inf.kcl.ac.uk/staff/urbanc/">Christian Urban</a>
and Sebastiaan J. C. Joosten
</td>
</tr>
<tr>
<td class="entry">
2019-02-01: <a href="entries/UTP.html">Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming</a>
<br>
Authors:
<a href="https://www-users.cs.york.ac.uk/~simonf/">Simon Foster</a>,
Frank Zeyda,
Yakoub Nemouchi,
Pedro Ribeiro
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2019-02-01: <a href="entries/List_Inversions.html">The Inversions of a List</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2019-01-17: <a href="entries/Farkas.html">Farkas' Lemma and Motzkin's Transposition Theorem</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/users/bottesch/">Ralph Bottesch</a>,
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Max W. Haslbeck</a>
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2019-01-15: <a href="entries/IMP2.html">IMP2 – Simple Program Verification in Isabelle/HOL</a>
<br>
Authors:
Peter Lammich
and <a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="entry">
2019-01-15: <a href="entries/Higher_Order_Terms.html">An Algebra for Higher-Order Terms</a>
<br>
Author:
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2019-01-07: <a href="entries/Store_Buffer_Reduction.html">A Reduction Theorem for Store Buffers</a>
<br>
Authors:
Ernie Cohen
and Norbert Schirmer
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2018</td>
</tr>
<tr>
<td class="entry">
2018-12-26: <a href="entries/Core_DOM.html">A Formal Model of the Document Object Model</a>
<br>
Authors:
<a href="https://www.brucker.ch/">Achim D. Brucker</a>
and <a href="http://www.dcs.shef.ac.uk/cgi-bin/makeperson?M.Herzberg">Michael Herzberg</a>
</td>
</tr>
<tr>
<td class="entry">
2018-12-25: <a href="entries/Concurrent_Revisions.html">Formalization of Concurrent Revisions</a>
<br>
Author:
Roy Overbeek
</td>
</tr>
<tr>
<td class="entry">
2018-12-21: <a href="entries/Auto2_Imperative_HOL.html">Verifying Imperative Programs using Auto2</a>
<br>
Author:
<a href="http://lcs.ios.ac.cn/~bzhan/">Bohua Zhan</a>
</td>
</tr>
<tr>
<td class="entry">
2018-12-17: <a href="entries/Constructive_Cryptography.html">Constructive Cryptography in HOL</a>
<br>
Authors:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
and S. Reza Sefidgar
</td>
</tr>
<tr>
<td class="entry">
2018-12-11: <a href="entries/Transformer_Semantics.html">Transformer Semantics</a>
<br>
Author:
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2018-12-11: <a href="entries/Quantales.html">Quantales</a>
<br>
Author:
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2018-12-11: <a href="entries/Order_Lattice_Props.html">Properties of Orderings and Lattices</a>
<br>
Author:
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2018-11-23: <a href="entries/Graph_Saturation.html">Graph Saturation</a>
<br>
Author:
Sebastiaan J. C. Joosten
</td>
</tr>
<tr>
<td class="entry">
2018-11-23: <a href="entries/Functional_Ordered_Resolution_Prover.html">A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover</a>
<br>
Authors:
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a>,
Jasmin Christian Blanchette
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2018-11-20: <a href="entries/Auto2_HOL.html">Auto2 Prover</a>
<br>
Author:
<a href="http://lcs.ios.ac.cn/~bzhan/">Bohua Zhan</a>
</td>
</tr>
<tr>
<td class="entry">
2018-11-16: <a href="entries/Matroids.html">Matroids</a>
<br>
Author:
Jonas Keinholz
</td>
</tr>
<tr>
<td class="entry">
2018-11-06: <a href="entries/Generic_Deriving.html">Deriving generic class instances for datatypes</a>
<br>
Authors:
Jonas Rädle
and <a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2018-10-30: <a href="entries/GewirthPGCProof.html">Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL</a>
<br>
Authors:
David Fuenmayor
and <a href="http://christoph-benzmueller.de">Christoph Benzmüller</a>
</td>
</tr>
<tr>
<td class="entry">
2018-10-29: <a href="entries/Epistemic_Logic.html">Epistemic Logic</a>
<br>
Author:
<a href="https://people.compute.dtu.dk/ahfrom/">Asta Halkjær From</a>
</td>
</tr>
<tr>
<td class="entry">
2018-10-22: <a href="entries/Smooth_Manifolds.html">Smooth Manifolds</a>
<br>
Authors:
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
and <a href="http://lcs.ios.ac.cn/~bzhan/">Bohua Zhan</a>
</td>
</tr>
<tr>
<td class="entry">
2018-10-19: <a href="entries/Randomised_BSTs.html">Randomised Binary Search Trees</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2018-10-19: <a href="entries/Lambda_Free_EPO.html">Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms</a>
<br>
Author:
Alexander Bentkamp
</td>
</tr>
<tr>
<td class="entry">
2018-10-12: <a href="entries/Factored_Transition_System_Bounding.html">Upper Bounding Diameters of State Spaces of Factored Transition Systems</a>
<br>
Authors:
Friedrich Kurz
and <a href="http://home.in.tum.de/~mansour/">Mohammad Abdulaziz</a>
</td>
</tr>
<tr>
<td class="entry">
2018-09-28: <a href="entries/Pi_Transcendental.html">The Transcendence of π</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2018-09-25: <a href="entries/Symmetric_Polynomials.html">Symmetric Polynomials</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2018-09-20: <a href="entries/Signature_Groebner.html">Signature-Based Gröbner Basis Algorithms</a>
<br>
Author:
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>
</td>
</tr>
<tr>
<td class="entry">
2018-09-19: <a href="entries/Prime_Number_Theorem.html">The Prime Number Theorem</a>
<br>
Authors:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
and <a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2018-09-15: <a href="entries/Aggregation_Algebras.html">Aggregation Algebras</a>
<br>
Author:
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
</td>
</tr>
<tr>
<td class="entry">
2018-09-14: <a href="entries/Octonions.html">Octonions</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~ak2110/">Angeliki Koutsoukou-Argyraki</a>
</td>
</tr>
<tr>
<td class="entry">
2018-09-05: <a href="entries/Quaternions.html">Quaternions</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2018-09-02: <a href="entries/Budan_Fourier.html">The Budan-Fourier Theorem and Counting Real Roots with Multiplicity</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="entry">
2018-08-24: <a href="entries/Simplex.html">An Incremental Simplex Algorithm with Unsatisfiable Core Generation</a>
<br>
Authors:
Filip Marić,
Mirko Spasić
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2018-08-14: <a href="entries/Minsky_Machines.html">Minsky Machines</a>
<br>
Author:
Bertram Felgenhauer
</td>
</tr>
<tr>
<td class="entry">
2018-07-16: <a href="entries/DiscretePricing.html">Pricing in discrete financial models</a>
<br>
Author:
<a href="http://lig-membres.imag.fr/mechenim/">Mnacho Echenim</a>
</td>
</tr>
<tr>
<td class="entry">
2018-07-04: <a href="entries/Neumann_Morgenstern_Utility.html">Von-Neumann-Morgenstern Utility Theorem</a>
<br>
Authors:
<a href="http://www.parsert.com/">Julian Parsert</a>
and <a href="http://cl-informatik.uibk.ac.at/cek/">Cezary Kaliszyk</a>
</td>
</tr>
<tr>
<td class="entry">
2018-06-23: <a href="entries/Pell.html">Pell's Equation</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2018-06-14: <a href="entries/Projective_Geometry.html">Projective Geometry</a>
<br>
Author:
<a href="https://sites.google.com/site/anthonybordg/">Anthony Bordg</a>
</td>
</tr>
<tr>
<td class="entry">
2018-06-14: <a href="entries/Localization_Ring.html">The Localization of a Commutative Ring</a>
<br>
Author:
<a href="https://sites.google.com/site/anthonybordg/">Anthony Bordg</a>
</td>
</tr>
<tr>
<td class="entry">
2018-06-05: <a href="entries/Partial_Order_Reduction.html">Partial Order Reduction</a>
<br>
Author:
<a href="http://www21.in.tum.de/~brunnerj/">Julian Brunner</a>
</td>
</tr>
<tr>
<td class="entry">
2018-05-27: <a href="entries/Optimal_BST.html">Optimal Binary Search Trees</a>
<br>
Authors:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
and Dániel Somogyi
</td>
</tr>
<tr>
<td class="entry">
2018-05-25: <a href="entries/Hidden_Markov_Models.html">Hidden Markov Models</a>
<br>
Author:
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="entry">
2018-05-24: <a href="entries/Probabilistic_Timed_Automata.html">Probabilistic Timed Automata</a>
<br>
Authors:
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
and <a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="entry">
2018-05-23: <a href="entries/Irrationality_J_Hancl.html">Irrational Rapidly Convergent Series</a>
<br>
Authors:
<a href="https://www.cl.cam.ac.uk/~ak2110/">Angeliki Koutsoukou-Argyraki</a>
and <a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="entry">
2018-05-23: <a href="entries/AxiomaticCategoryTheory.html">Axiom Systems for Category Theory in Free Logic</a>
<br>
Authors:
<a href="http://christoph-benzmueller.de">Christoph Benzmüller</a>
and <a href="http://www.cs.cmu.edu/~scott/">Dana Scott</a>
</td>
</tr>
<tr>
<td class="entry">
2018-05-22: <a href="entries/Monad_Memo_DP.html">Monadification, Memoization and Dynamic Programming</a>
<br>
Authors:
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>,
Shuwei Hu
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2018-05-10: <a href="entries/OpSets.html">OpSets: Sequential Specifications for Replicated Datatypes</a>
<br>
Authors:
Martin Kleppmann,
Victor B. F. Gomes,
Dominic P. Mulligan
and Alastair R. Beresford
</td>
</tr>
<tr>
<td class="entry">
2018-05-07: <a href="entries/Modular_Assembly_Kit_Security.html">An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties</a>
<br>
Authors:
Oliver Bračevac,
Richard Gay,
Sylvia Grewe,
Heiko Mantel,
Henning Sudbrock
and Markus Tasch
</td>
</tr>
<tr>
<td class="entry">
2018-04-29: <a href="entries/WebAssembly.html">WebAssembly</a>
<br>
Author:
<a href="http://www.cl.cam.ac.uk/~caw77/">Conrad Watt</a>
</td>
</tr>
<tr>
<td class="entry">
2018-04-27: <a href="entries/VerifyThis2018.html">VerifyThis 2018 - Polished Isabelle Solutions</a>
<br>
Authors:
Peter Lammich
and <a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="entry">
2018-04-24: <a href="entries/BNF_CC.html">Bounded Natural Functors with Covariance and Contravariance</a>
<br>
Authors:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
and Joshua Schneider
</td>
</tr>
<tr>
<td class="entry">
2018-03-22: <a href="entries/Fishburn_Impossibility.html">The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency</a>
<br>
Authors:
<a href="http://dss.in.tum.de/staff/brandt.html">Felix Brandt</a>,
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>,
<a href="http://dss.in.tum.de/staff/christian-saile.html">Christian Saile</a>
and <a href="http://dss.in.tum.de/staff/christian-stricker.html">Christian Stricker</a>
</td>
</tr>
<tr>
<td class="entry">
2018-03-13: <a href="entries/Weight_Balanced_Trees.html">Weight-Balanced Trees</a>
<br>
Authors:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
and Stefan Dirix
</td>
</tr>
<tr>
<td class="entry">
2018-03-12: <a href="entries/CakeML.html">CakeML</a>
<br>
Authors:
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
and Yu Zhang
</td>
</tr>
<tr>
<td class="entry">
2018-03-01: <a href="entries/Architectural_Design_Patterns.html">A Theory of Architectural Design Patterns</a>
<br>
Author:
<a href="http://marmsoler.com">Diego Marmsoler</a>
</td>
</tr>
<tr>
<td class="entry">
2018-02-26: <a href="entries/Hoare_Time.html">Hoare Logics for Time Bounds</a>
<br>
Authors:
<a href="http://www.in.tum.de/~haslbema">Maximilian P. L. Haslbeck</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2018-02-06: <a href="entries/Treaps.html">Treaps</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Maximilian Haslbeck</a>,
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2018-02-06: <a href="entries/LLL_Factorization.html">A verified factorization algorithm for integer polynomials with polynomial complexity</a>
<br>
Authors:
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>,
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
and <a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="entry">
2018-02-06: <a href="entries/First_Order_Terms.html">First-Order Terms</a>
<br>
Authors:
Christian Sternagel
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2018-02-06: <a href="entries/Error_Function.html">The Error Function</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2018-02-02: <a href="entries/LLL_Basis_Reduction.html">A verified LLL algorithm</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/users/bottesch/">Ralph Bottesch</a>,
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>,
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Maximilian Haslbeck</a>,
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
and <a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="entry">
2018-01-18: <a href="entries/Ordered_Resolution_Prover.html">Formalization of Bachmair and Ganzinger's Ordered Resolution Prover</a>
<br>
Authors:
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a>,
Jasmin Christian Blanchette,
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
and Uwe Waldmann
</td>
</tr>
<tr>
<td class="entry">
2018-01-16: <a href="entries/Gromov_Hyperbolicity.html">Gromov Hyperbolicity</a>
<br>
Author:
Sebastien Gouezel
</td>
</tr>
<tr>
<td class="entry">
2018-01-11: <a href="entries/Green.html">An Isabelle/HOL formalisation of Green's Theorem</a>
<br>
Authors:
<a href="http://home.in.tum.de/~mansour/">Mohammad Abdulaziz</a>
and <a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2018-01-08: <a href="entries/Taylor_Models.html">Taylor Models</a>
<br>
Authors:
Christoph Traut
and <a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2017</td>
</tr>
<tr>
<td class="entry">
2017-12-22: <a href="entries/Falling_Factorial_Sum.html">The Falling Factorial of a Sum</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2017-12-21: <a href="entries/Median_Of_Medians_Selection.html">The Median-of-Medians Selection Algorithm</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-12-21: <a href="entries/Mason_Stothers.html">The Mason–Stothers Theorem</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-12-21: <a href="entries/Dirichlet_L.html">Dirichlet L-Functions and Dirichlet's Theorem</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-12-19: <a href="entries/BNF_Operations.html">Operations on Bounded Natural Functors</a>
<br>
Authors:
Jasmin Christian Blanchette,
Andrei Popescu
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2017-12-18: <a href="entries/Knuth_Morris_Pratt.html">The string search algorithm by Knuth, Morris and Pratt</a>
<br>
Authors:
Fabian Hellauer
and Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2017-11-22: <a href="entries/Stochastic_Matrices.html">Stochastic Matrices and the Perron-Frobenius Theorem</a>
<br>
Author:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2017-11-09: <a href="entries/IMAP-CRDT.html">The IMAP CmRDT</a>
<br>
Authors:
Tim Jungnickel,
Lennart Oldenburg
and Matthias Loibl
</td>
</tr>
<tr>
<td class="entry">
2017-11-06: <a href="entries/Hybrid_Multi_Lane_Spatial_Logic.html">Hybrid Multi-Lane Spatial Logic</a>
<br>
Author:
Sven Linker
</td>
</tr>
<tr>
<td class="entry">
2017-10-26: <a href="entries/Kuratowski_Closure_Complement.html">The Kuratowski Closure-Complement Theorem</a>
<br>
Authors:
<a href="http://peteg.org">Peter Gammie</a>
and Gianpaolo Gioiosa
</td>
</tr>
<tr>
<td class="entry">
2017-10-19: <a href="entries/Transition_Systems_and_Automata.html">Transition Systems and Automata</a>
<br>
Author:
<a href="http://www21.in.tum.de/~brunnerj/">Julian Brunner</a>
</td>
</tr>
<tr>
<td class="entry">
2017-10-19: <a href="entries/Buchi_Complementation.html">Büchi Complementation</a>
<br>
Author:
<a href="http://www21.in.tum.de/~brunnerj/">Julian Brunner</a>
</td>
</tr>
<tr>
<td class="entry">
2017-10-17: <a href="entries/Winding_Number_Eval.html">Evaluate Winding Numbers through Cauchy Indices</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="entry">
2017-10-17: <a href="entries/Count_Complex_Roots.html">Count the Number of Complex Roots</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="entry">
2017-10-14: <a href="entries/Diophantine_Eqns_Lin_Hom.html">Homogeneous Linear Diophantine Equations</a>
<br>
Authors:
Florian Messner,
<a href="http://www.parsert.com/">Julian Parsert</a>,
Jonas Schöpf
and Christian Sternagel
</td>
</tr>
<tr>
<td class="entry">
2017-10-12: <a href="entries/Zeta_Function.html">The Hurwitz and Riemann ζ Functions</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-10-12: <a href="entries/Linear_Recurrences.html">Linear Recurrences</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-10-12: <a href="entries/Dirichlet_Series.html">Dirichlet Series</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-09-21: <a href="entries/Lowe_Ontological_Argument.html">Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument</a>
<br>
Authors:
David Fuenmayor
and <a href="http://christoph-benzmueller.de">Christoph Benzmüller</a>
</td>
</tr>
<tr>
<td class="entry">
2017-09-17: <a href="entries/PLM.html">Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL</a>
<br>
Author:
Daniel Kirchner
</td>
</tr>
<tr>
<td class="entry">
2017-09-06: <a href="entries/AnselmGod.html">Anselm's God in Isabelle/HOL</a>
<br>
Author:
<a href="https://philpapers.org/profile/805">Ben Blumson</a>
</td>
</tr>
<tr>
<td class="entry">
2017-09-01: <a href="entries/First_Welfare_Theorem.html">Microeconomics and the First Welfare Theorem</a>
<br>
Authors:
<a href="http://www.parsert.com/">Julian Parsert</a>
and <a href="http://cl-informatik.uibk.ac.at/cek/">Cezary Kaliszyk</a>
</td>
</tr>
<tr>
<td class="entry">
2017-08-20: <a href="entries/Root_Balanced_Tree.html">Root-Balanced Tree</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2017-08-20: <a href="entries/Orbit_Stabiliser.html">Orbit-Stabiliser Theorem with Application to Rotational Symmetries</a>
<br>
Author:
Jonas Rädle
</td>
</tr>
<tr>
<td class="entry">
2017-08-16: <a href="entries/LambdaMu.html">The LambdaMu-calculus</a>
<br>
Authors:
Cristina Matache,
Victor B. F. Gomes
and Dominic P. Mulligan
</td>
</tr>
<tr>
<td class="entry">
2017-07-31: <a href="entries/Stewart_Apollonius.html">Stewart's Theorem and Apollonius' Theorem</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2017-07-28: <a href="entries/DynamicArchitectures.html">Dynamic Architectures</a>
<br>
Author:
<a href="http://marmsoler.com">Diego Marmsoler</a>
</td>
</tr>
<tr>
<td class="entry">
2017-07-21: <a href="entries/Decl_Sem_Fun_PL.html">Declarative Semantics for Functional Languages</a>
<br>
Author:
<a href="http://homes.soic.indiana.edu/jsiek/">Jeremy Siek</a>
</td>
</tr>
<tr>
<td class="entry">
2017-07-15: <a href="entries/HOLCF-Prelude.html">HOLCF-Prelude</a>
<br>
Authors:
Joachim Breitner,
Brian Huffman,
Neil Mitchell
and Christian Sternagel
</td>
</tr>
<tr>
<td class="entry">
2017-07-13: <a href="entries/Minkowskis_Theorem.html">Minkowski's Theorem</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-07-09: <a href="entries/Name_Carrying_Type_Inference.html">Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus</a>
<br>
Author:
Michael Rawson
</td>
</tr>
<tr>
<td class="entry">
2017-07-07: <a href="entries/CRDT.html">A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes</a>
<br>
Authors:
Victor B. F. Gomes,
Martin Kleppmann,
Dominic P. Mulligan
and Alastair R. Beresford
</td>
</tr>
<tr>
<td class="entry">
2017-07-06: <a href="entries/Stone_Kleene_Relation_Algebras.html">Stone-Kleene Relation Algebras</a>
<br>
Author:
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
</td>
</tr>
<tr>
<td class="entry">
2017-06-21: <a href="entries/Propositional_Proof_Systems.html">Propositional Proof Systems</a>
<br>
Authors:
<a href="http://liftm.de">Julius Michaelis</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2017-06-13: <a href="entries/PSemigroupsConvolution.html">Partial Semigroups and Convolution Algebras</a>
<br>
Authors:
Brijesh Dongol,
Victor B. F. Gomes,
Ian J. Hayes
and <a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2017-06-06: <a href="entries/Buffons_Needle.html">Buffon's Needle Problem</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-06-01: <a href="entries/Prpu_Maxflow.html">Formalizing Push-Relabel Algorithms</a>
<br>
Authors:
Peter Lammich
and S. Reza Sefidgar
</td>
</tr>
<tr>
<td class="entry">
2017-06-01: <a href="entries/Flow_Networks.html">Flow Networks and the Min-Cut-Max-Flow Theorem</a>
<br>
Authors:
Peter Lammich
and S. Reza Sefidgar
</td>
</tr>
<tr>
<td class="entry">
2017-05-25: <a href="entries/Optics.html">Optics</a>
<br>
Authors:
<a href="https://www-users.cs.york.ac.uk/~simonf/">Simon Foster</a>
and Frank Zeyda
</td>
</tr>
<tr>
<td class="entry">
2017-05-24: <a href="entries/Security_Protocol_Refinement.html">Developing Security Protocols by Refinement</a>
<br>
Authors:
Christoph Sprenger
and Ivano Somaini
</td>
</tr>
<tr>
<td class="entry">
2017-05-24: <a href="entries/Dict_Construction.html">Dictionary Construction</a>
<br>
Author:
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2017-05-08: <a href="entries/Floyd_Warshall.html">The Floyd-Warshall Algorithm for Shortest Paths</a>
<br>
Authors:
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
and Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2017-05-05: <a href="entries/Probabilistic_While.html">Probabilistic while loop</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2017-05-05: <a href="entries/Monomorphic_Monad.html">Effect polymorphism in higher-order logic</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2017-05-05: <a href="entries/Monad_Normalisation.html">Monad normalisation</a>
<br>
Authors:
Joshua Schneider,
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
and <a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2017-05-05: <a href="entries/Game_Based_Crypto.html">Game-based cryptography in HOL</a>
<br>
Authors:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>,
S. Reza Sefidgar
and Bhargav Bhatt
</td>
</tr>
<tr>
<td class="entry">
2017-05-05: <a href="entries/CryptHOL.html">CryptHOL</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2017-05-04: <a href="entries/MonoidalCategory.html">Monoidal Categories</a>
<br>
Author:
Eugene W. Stark
</td>
</tr>
<tr>
<td class="entry">
2017-05-01: <a href="entries/Types_Tableaus_and_Goedels_God.html">Types, Tableaus and Gödel’s God in Isabelle/HOL</a>
<br>
Authors:
David Fuenmayor
and <a href="http://christoph-benzmueller.de">Christoph Benzmüller</a>
</td>
</tr>
<tr>
<td class="entry">
2017-04-28: <a href="entries/LocalLexing.html">Local Lexing</a>
<br>
Author:
Steven Obua
</td>
</tr>
<tr>
<td class="entry">
2017-04-19: <a href="entries/Constructor_Funs.html">Constructor Functions</a>
<br>
Author:
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2017-04-18: <a href="entries/Lazy_Case.html">Lazifying case constants</a>
<br>
Author:
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2017-04-06: <a href="entries/Subresultants.html">Subresultants</a>
<br>
Authors:
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
and <a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="entry">
2017-04-04: <a href="entries/Random_BSTs.html">Expected Shape of Random Binary Search Trees</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-03-15: <a href="entries/Quick_Sort_Cost.html">The number of comparisons in QuickSort</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-03-15: <a href="entries/Comparison_Sort_Lower_Bound.html">Lower bound on comparison-based sorting algorithms</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-03-10: <a href="entries/Euler_MacLaurin.html">The Euler–MacLaurin Formula</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-02-28: <a href="entries/Elliptic_Curves_Group_Law.html">The Group Law for Elliptic Curves</a>
<br>
Author:
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a>
</td>
</tr>
<tr>
<td class="entry">
2017-02-26: <a href="entries/Menger.html">Menger's Theorem</a>
<br>
Author:
<a href="http://logic.las.tu-berlin.de/Members/Dittmann/">Christoph Dittmann</a>
</td>
</tr>
<tr>
<td class="entry">
2017-02-13: <a href="entries/Differential_Dynamic_Logic.html">Differential Dynamic Logic</a>
<br>
Author:
Brandon Bohrer
</td>
</tr>
<tr>
<td class="entry">
2017-02-10: <a href="entries/Abstract_Soundness.html">Abstract Soundness</a>
<br>
Authors:
Jasmin Christian Blanchette,
Andrei Popescu
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2017-02-07: <a href="entries/Stone_Relation_Algebras.html">Stone Relation Algebras</a>
<br>
Author:
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
</td>
</tr>
<tr>
<td class="entry">
2017-01-31: <a href="entries/Key_Agreement_Strong_Adversaries.html">Refining Authenticated Key Agreement with Strong Adversaries</a>
<br>
Authors:
Joseph Lallemand
and Christoph Sprenger
</td>
</tr>
<tr>
<td class="entry">
2017-01-24: <a href="entries/Bernoulli.html">Bernoulli Numbers</a>
<br>
Authors:
Lukas Bulwahn
and <a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-01-17: <a href="entries/Minimal_SSA.html">Minimal Static Single Assignment Form</a>
<br>
Authors:
Max Wagner
and <a href="http://pp.ipd.kit.edu/person.php?id=88">Denis Lohner</a>
</td>
</tr>
<tr>
<td class="entry">
2017-01-17: <a href="entries/Bertrands_Postulate.html">Bertrand's postulate</a>
<br>
Authors:
Julian Biendarra
and <a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-01-12: <a href="entries/E_Transcendental.html">The Transcendence of e</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2017-01-08: <a href="entries/UPF_Firewall.html">Formal Network Models and Their Application to Firewall Policies</a>
<br>
Authors:
<a href="https://www.brucker.ch/">Achim D. Brucker</a>,
Lukas Brügger
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2017-01-03: <a href="entries/Password_Authentication_Protocol.html">Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2017-01-01: <a href="entries/FOL_Harrison.html">First-Order Logic According to Harrison</a>
<br>
Authors:
<a href="https://people.compute.dtu.dk/aleje/">Alexander Birch Jensen</a>,
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a>
and <a href="https://people.compute.dtu.dk/jovi/">Jørgen Villadsen</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2016</td>
</tr>
<tr>
<td class="entry">
2016-12-30: <a href="entries/Concurrent_Ref_Alg.html">Concurrent Refinement Algebra and Rely Quotients</a>
<br>
Authors:
Julian Fell,
Ian J. Hayes
and <a href="http://andrius.velykis.lt">Andrius Velykis</a>
</td>
</tr>
<tr>
<td class="entry">
2016-12-29: <a href="entries/Twelvefold_Way.html">The Twelvefold Way</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2016-12-20: <a href="entries/Proof_Strategy_Language.html">Proof Strategy Language</a>
<br>
Author:
Yutaka Nagashima
</td>
</tr>
<tr>
<td class="entry">
2016-12-07: <a href="entries/Paraconsistency.html">Paraconsistency</a>
<br>
Authors:
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a>
and <a href="https://people.compute.dtu.dk/jovi/">Jørgen Villadsen</a>
</td>
</tr>
<tr>
<td class="entry">
2016-11-29: <a href="entries/Complx.html">COMPLX: A Verification Framework for Concurrent Imperative Programs</a>
<br>
Authors:
Sidney Amani,
June Andronick,
Maksym Bortin,
Corey Lewis,
Christine Rizkallah
and Joseph Tuong
</td>
</tr>
<tr>
<td class="entry">
2016-11-23: <a href="entries/Abs_Int_ITP2012.html">Abstract Interpretation of Annotated Commands</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2016-11-16: <a href="entries/Separata.html">Separata: Isabelle tactics for Separation Algebra</a>
<br>
Authors:
Zhe Hou,
David Sanan,
Alwen Tiu,
Rajeev Gore
and Ranald Clouston
</td>
</tr>
<tr>
<td class="entry">
2016-11-12: <a href="entries/Nested_Multisets_Ordinals.html">Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals</a>
<br>
Authors:
Jasmin Christian Blanchette,
Mathias Fleury
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2016-11-12: <a href="entries/Lambda_Free_KBOs.html">Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms</a>
<br>
Authors:
Heiko Becker,
Jasmin Christian Blanchette,
Uwe Waldmann
and Daniel Wand
</td>
</tr>
<tr>
<td class="entry">
2016-11-10: <a href="entries/Deep_Learning.html">Expressiveness of Deep Learning</a>
<br>
Author:
Alexander Bentkamp
</td>
</tr>
<tr>
<td class="entry">
2016-10-25: <a href="entries/Modal_Logics_for_NTS.html">Modal Logics for Nominal Transition Systems</a>
<br>
Authors:
Tjark Weber,
Lars-Henrik Eriksson,
Joachim Parrow,
Johannes Borgström
and Ramunas Gutkovas
</td>
</tr>
<tr>
<td class="entry">
2016-10-24: <a href="entries/Stable_Matching.html">Stable Matching</a>
<br>
Author:
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="entry">
2016-10-21: <a href="entries/LOFT.html">LOFT — Verified Migration of Linux Firewalls to SDN</a>
<br>
Authors:
<a href="http://liftm.de">Julius Michaelis</a>
and <a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>
</td>
</tr>
<tr>
<td class="entry">
2016-10-19: <a href="entries/Source_Coding_Theorem.html">Source Coding Theorem</a>
<br>
Authors:
Quentin Hibon
and <a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2016-10-19: <a href="entries/SPARCv8.html">A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor</a>
<br>
Authors:
Zhe Hou,
David Sanan,
Alwen Tiu
and Yang Liu
</td>
</tr>
<tr>
<td class="entry">
2016-10-14: <a href="entries/Berlekamp_Zassenhaus.html">The Factorization Algorithm of Berlekamp and Zassenhaus</a>
<br>
Authors:
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>,
<a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
and <a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="entry">
2016-10-11: <a href="entries/Chord_Segments.html">Intersecting Chords Theorem</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2016-10-05: <a href="entries/Lp.html">Lp spaces</a>
<br>
Author:
Sebastien Gouezel
</td>
</tr>
<tr>
<td class="entry">
2016-09-30: <a href="entries/Fisher_Yates.html">Fisher–Yates shuffle</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2016-09-29: <a href="entries/Allen_Calculus.html">Allen's Interval Calculus</a>
<br>
Author:
Fadoua Ghourabi
</td>
</tr>
<tr>
<td class="entry">
2016-09-23: <a href="entries/Lambda_Free_RPOs.html">Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms</a>
<br>
Authors:
Jasmin Christian Blanchette,
Uwe Waldmann
and Daniel Wand
</td>
</tr>
<tr>
<td class="entry">
2016-09-09: <a href="entries/Iptables_Semantics.html">Iptables Semantics</a>
<br>
Authors:
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>
and <a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2016-09-06: <a href="entries/SuperCalc.html">A Variant of the Superposition Calculus</a>
<br>
Author:
<a href="http://membres-lig.imag.fr/peltier/">Nicolas Peltier</a>
</td>
</tr>
<tr>
<td class="entry">
2016-09-06: <a href="entries/Stone_Algebras.html">Stone Algebras</a>
<br>
Author:
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>
</td>
</tr>
<tr>
<td class="entry">
2016-09-01: <a href="entries/Stirling_Formula.html">Stirling's formula</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2016-08-31: <a href="entries/Routing.html">Routing</a>
<br>
Authors:
<a href="http://liftm.de">Julius Michaelis</a>
and <a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>
</td>
</tr>
<tr>
<td class="entry">
2016-08-24: <a href="entries/Simple_Firewall.html">Simple Firewall</a>
<br>
Authors:
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>,
<a href="http://liftm.de">Julius Michaelis</a>
and <a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Maximilian Haslbeck</a>
</td>
</tr>
<tr>
<td class="entry">
2016-08-18: <a href="entries/InfPathElimination.html">Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths</a>
<br>
Authors:
Romain Aissat,
Frederic Voisin
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2016-08-12: <a href="entries/EdmondsKarp_Maxflow.html">Formalizing the Edmonds-Karp Algorithm</a>
<br>
Authors:
Peter Lammich
and S. Reza Sefidgar
</td>
</tr>
<tr>
<td class="entry">
2016-08-08: <a href="entries/Refine_Imperative_HOL.html">The Imperative Refinement Framework</a>
<br>
Author:
Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2016-08-07: <a href="entries/Ptolemys_Theorem.html">Ptolemy's Theorem</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2016-07-17: <a href="entries/Surprise_Paradox.html">Surprise Paradox</a>
<br>
Author:
Joachim Breitner
</td>
</tr>
<tr>
<td class="entry">
2016-07-14: <a href="entries/Pairing_Heap.html">Pairing Heap</a>
<br>
Authors:
Hauke Brinkop
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2016-07-05: <a href="entries/DFS_Framework.html">A Framework for Verifying Depth-First Search Algorithms</a>
<br>
Authors:
Peter Lammich
and René Neumann
</td>
</tr>
<tr>
<td class="entry">
2016-07-01: <a href="entries/Buildings.html">Chamber Complexes, Coxeter Systems, and Buildings</a>
<br>
Author:
<a href="http://ualberta.ca/~jsylvest/">Jeremy Sylvestre</a>
</td>
</tr>
<tr>
<td class="entry">
2016-06-30: <a href="entries/Rewriting_Z.html">The Z Property</a>
<br>
Authors:
Bertram Felgenhauer,
Julian Nagele,
Vincent van Oostrom
and Christian Sternagel
</td>
</tr>
<tr>
<td class="entry">
2016-06-30: <a href="entries/Resolution_FOL.html">The Resolution Calculus for First-Order Logic</a>
<br>
Author:
<a href="https://people.compute.dtu.dk/andschl/">Anders Schlichtkrull</a>
</td>
</tr>
<tr>
<td class="entry">
2016-06-28: <a href="entries/IP_Addresses.html">IP Addresses</a>
<br>
Authors:
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>,
<a href="http://liftm.de">Julius Michaelis</a>
and <a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2016-06-28: <a href="entries/Dependent_SIFUM_Refinement.html">Compositional Security-Preserving Refinement for Concurrent Imperative Programs</a>
<br>
Authors:
<a href="https://people.eng.unimelb.edu.au/tobym/">Toby Murray</a>,
Robert Sison,
Edward Pierzchalski
and Christine Rizkallah
</td>
</tr>
<tr>
<td class="entry">
2016-06-26: <a href="entries/Category3.html">Category Theory with Adjunctions and Limits</a>
<br>
Author:
Eugene W. Stark
</td>
</tr>
<tr>
<td class="entry">
2016-06-26: <a href="entries/Card_Multisets.html">Cardinality of Multisets</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2016-06-25: <a href="entries/Dependent_SIFUM_Type_Systems.html">A Dependent Security Type System for Concurrent Imperative Programs</a>
<br>
Authors:
<a href="https://people.eng.unimelb.edu.au/tobym/">Toby Murray</a>,
Robert Sison,
Edward Pierzchalski
and Christine Rizkallah
</td>
</tr>
<tr>
<td class="entry">
2016-06-21: <a href="entries/Catalan_Numbers.html">Catalan Numbers</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2016-06-18: <a href="entries/Algebraic_VCs.html">Program Construction and Verification Components Based on Kleene Algebra</a>
<br>
Authors:
Victor B. F. Gomes
and <a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2016-06-13: <a href="entries/Noninterference_Concurrent_Composition.html">Conservation of CSP Noninterference Security under Concurrent Composition</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2016-06-09: <a href="entries/Word_Lib.html">Finite Machine Word Library</a>
<br>
Authors:
Joel Beeren,
Matthew Fernandez,
Xin Gao,
<a href="http://www.cse.unsw.edu.au/~kleing/">Gerwin Klein</a>,
Rafal Kolanski,
Japheth Lim,
Corey Lewis,
Daniel Matichuk
and Thomas Sewell
</td>
</tr>
<tr>
<td class="entry">
2016-05-31: <a href="entries/Tree_Decomposition.html">Tree Decomposition</a>
<br>
Author:
<a href="http://logic.las.tu-berlin.de/Members/Dittmann/">Christoph Dittmann</a>
</td>
</tr>
<tr>
<td class="entry">
2016-05-24: <a href="entries/Posix-Lexing.html">POSIX Lexing with Derivatives of Regular Expressions</a>
<br>
Authors:
<a href="http://kcl.academia.edu/FahadAusaf">Fahad Ausaf</a>,
<a href="https://rd.host.cs.st-andrews.ac.uk">Roy Dyckhoff</a>
and <a href="http://www.inf.kcl.ac.uk/staff/urbanc/">Christian Urban</a>
</td>
</tr>
<tr>
<td class="entry">
2016-05-24: <a href="entries/Card_Equiv_Relations.html">Cardinality of Equivalence Relations</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2016-05-20: <a href="entries/Perron_Frobenius.html">Perron-Frobenius Theorem for Spectral Radius Analysis</a>
<br>
Authors:
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>,
<a href="http://www21.in.tum.de/~kuncar/">Ondřej Kunčar</a>,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
and <a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="entry">
2016-05-20: <a href="entries/Incredible_Proof_Machine.html">The meta theory of the Incredible Proof Machine</a>
<br>
Authors:
Joachim Breitner
and <a href="http://pp.ipd.kit.edu/person.php?id=88">Denis Lohner</a>
</td>
</tr>
<tr>
<td class="entry">
2016-05-18: <a href="entries/FLP.html">A Constructive Proof for FLP</a>
<br>
Authors:
Benjamin Bisping,
Paul-David Brodmann,
Tim Jungnickel,
Christina Rickmann,
Henning Seidler,
Anke Stüber,
Arno Wilhelm-Weidner,
Kirstin Peters
and <a href="https://www.mtv.tu-berlin.de/nestmann/">Uwe Nestmann</a>
</td>
</tr>
<tr>
<td class="entry">
2016-05-09: <a href="entries/MFMC_Countable.html">A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2016-05-05: <a href="entries/Randomised_Social_Choice.html">Randomised Social Choice Theory</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2016-05-04: <a href="entries/SDS_Impossibility.html">The Incompatibility of SD-Efficiency and SD-Strategy-Proofness</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2016-05-04: <a href="entries/Bell_Numbers_Spivey.html">Spivey's Generalized Recurrence for Bell Numbers</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2016-05-02: <a href="entries/Groebner_Bases.html">Gröbner Bases Theory</a>
<br>
Authors:
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
and <a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>
</td>
</tr>
<tr>
<td class="entry">
2016-04-28: <a href="entries/No_FTL_observers.html">No Faster-Than-Light Observers</a>
<br>
Authors:
Mike Stannett
and <a href="http://www.renyi.hu/~nemeti/">István Németi</a>
</td>
</tr>
<tr>
<td class="entry">
2016-04-27: <a href="entries/ROBDD.html">Algorithms for Reduced Ordered Binary Decision Diagrams</a>
<br>
Authors:
<a href="http://liftm.de">Julius Michaelis</a>,
<a href="http://cl-informatik.uibk.ac.at/users/mhaslbeck/">Maximilian Haslbeck</a>,
Peter Lammich
and <a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2016-04-27: <a href="entries/CYK.html">A formalisation of the Cocke-Younger-Kasami algorithm</a>
<br>
Author:
Maksym Bortin
</td>
</tr>
<tr>
<td class="entry">
2016-04-26: <a href="entries/Noninterference_Sequential_Composition.html">Conservation of CSP Noninterference Security under Sequential Composition</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2016-04-12: <a href="entries/KAD.html">Kleene Algebras with Domain</a>
<br>
Authors:
Victor B. F. Gomes,
<a href="http://www.cosc.canterbury.ac.nz/walter.guttmann/">Walter Guttmann</a>,
<a href="http://www.hoefner-online.de/">Peter Höfner</a>,
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
and Tjark Weber
</td>
</tr>
<tr>
<td class="entry">
2016-03-11: <a href="entries/PropResPI.html">Propositional Resolution and Prime Implicates Generation</a>
<br>
Author:
<a href="http://membres-lig.imag.fr/peltier/">Nicolas Peltier</a>
</td>
</tr>
<tr>
<td class="entry">
2016-03-08: <a href="entries/Timed_Automata.html">Timed Automata</a>
<br>
Author:
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
</td>
</tr>
<tr>
<td class="entry">
2016-03-08: <a href="entries/Cartan_FP.html">The Cartan Fixed Point Theorems</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2016-03-01: <a href="entries/LTL.html">Linear Temporal Logic</a>
<br>
Author:
Salomon Sickert
</td>
</tr>
<tr>
<td class="entry">
2016-02-17: <a href="entries/List_Update.html">Analysis of List Update Algorithms</a>
<br>
Authors:
<a href="http://in.tum.de/~haslbema/">Maximilian P.L. Haslbeck</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2016-02-05: <a href="entries/Formal_SSA.html">Verified Construction of Static Single Assignment Form</a>
<br>
Authors:
Sebastian Ullrich
and <a href="http://pp.ipd.kit.edu/person.php?id=88">Denis Lohner</a>
</td>
</tr>
<tr>
<td class="entry">
2016-01-29: <a href="entries/Polynomial_Interpolation.html">Polynomial Interpolation</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
and <a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="entry">
2016-01-29: <a href="entries/Polynomial_Factorization.html">Polynomial Factorization</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
and <a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="entry">
2016-01-20: <a href="entries/Knot_Theory.html">Knot Theory</a>
<br>
Author:
T.V.H. Prathamesh
</td>
</tr>
<tr>
<td class="entry">
2016-01-18: <a href="entries/Matrix_Tensor.html">Tensor Product of Matrices</a>
<br>
Author:
T.V.H. Prathamesh
</td>
</tr>
<tr>
<td class="entry">
2016-01-14: <a href="entries/Card_Number_Partitions.html">Cardinality of Number Partitions</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2015</td>
</tr>
<tr>
<td class="entry">
2015-12-28: <a href="entries/Triangle.html">Basic Geometric Properties of Triangles</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2015-12-28: <a href="entries/Prime_Harmonic_Series.html">The Divergence of the Prime Harmonic Series</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2015-12-28: <a href="entries/Liouville_Numbers.html">Liouville numbers</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2015-12-28: <a href="entries/Descartes_Sign_Rule.html">Descartes' Rule of Signs</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2015-12-22: <a href="entries/Stern_Brocot.html">The Stern-Brocot Tree</a>
<br>
Authors:
<a href="http://peteg.org">Peter Gammie</a>
and <a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2015-12-22: <a href="entries/Applicative_Lifting.html">Applicative Lifting</a>
<br>
Authors:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
and Joshua Schneider
</td>
</tr>
<tr>
<td class="entry">
2015-12-22: <a href="entries/Algebraic_Numbers.html">Algebraic Numbers in Isabelle/HOL</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>,
<a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
and <a href="http://sjcjoosten.nl/">Sebastiaan Joosten</a>
</td>
</tr>
<tr>
<td class="entry">
2015-12-12: <a href="entries/Card_Partitions.html">Cardinality of Set Partitions</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2015-12-02: <a href="entries/Latin_Square.html">Latin Square</a>
<br>
Author:
Alexander Bentkamp
</td>
</tr>
<tr>
<td class="entry">
2015-12-01: <a href="entries/Ergodic_Theory.html">Ergodic Theory</a>
<br>
Author:
Sebastien Gouezel
</td>
</tr>
<tr>
<td class="entry">
2015-11-19: <a href="entries/Euler_Partition.html">Euler's Partition Theorem</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2015-11-18: <a href="entries/TortoiseHare.html">The Tortoise and Hare Algorithm</a>
<br>
Author:
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="entry">
2015-11-11: <a href="entries/Planarity_Certificates.html">Planarity Certificates</a>
<br>
Author:
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="entry">
2015-11-02: <a href="entries/Parity_Game.html">Positional Determinacy of Parity Games</a>
<br>
Author:
<a href="http://logic.las.tu-berlin.de/Members/Dittmann/">Christoph Dittmann</a>
</td>
</tr>
<tr>
<td class="entry">
2015-09-16: <a href="entries/Isabelle_Meta_Model.html">A Meta-Model for the Isabelle API</a>
<br>
Authors:
<a href="https://www.lri.fr/~ftuong/">Frédéric Tuong</a>
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2015-09-04: <a href="entries/LTL_to_DRA.html">Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata</a>
<br>
Author:
Salomon Sickert
</td>
</tr>
<tr>
<td class="entry">
2015-08-21: <a href="entries/Jordan_Normal_Form.html">Matrices, Jordan Normal Forms, and Spectral Radius Theory</a>
<br>
Authors:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
and <a href="http://group-mmm.org/~ayamada/">Akihisa Yamada</a>
</td>
</tr>
<tr>
<td class="entry">
2015-08-20: <a href="entries/Decreasing-Diagrams-II.html">Decreasing Diagrams II</a>
<br>
Author:
Bertram Felgenhauer
</td>
</tr>
<tr>
<td class="entry">
2015-08-18: <a href="entries/Noninterference_Inductive_Unwinding.html">The Inductive Unwinding Theorem for CSP Noninterference Security</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2015-08-12: <a href="entries/Rep_Fin_Groups.html">Representations of Finite Groups</a>
<br>
Author:
<a href="http://ualberta.ca/~jsylvest/">Jeremy Sylvestre</a>
</td>
</tr>
<tr>
<td class="entry">
2015-08-10: <a href="entries/Encodability_Process_Calculi.html">Analysing and Comparing Encodability Criteria for Process Calculi</a>
<br>
Authors:
Kirstin Peters
and <a href="http://theory.stanford.edu/~rvg/">Rob van Glabbeek</a>
</td>
</tr>
<tr>
<td class="entry">
2015-07-21: <a href="entries/Case_Labeling.html">Generating Cases from Labeled Subgoals</a>
<br>
Author:
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="entry">
2015-07-14: <a href="entries/Landau_Symbols.html">Landau Symbols</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2015-07-14: <a href="entries/Akra_Bazzi.html">The Akra-Bazzi theorem and the Master theorem</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2015-07-07: <a href="entries/Hermite.html">Hermite Normal Form</a>
<br>
Authors:
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>
and <a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="entry">
2015-06-27: <a href="entries/Derangements.html">Derangements Formula</a>
<br>
Author:
Lukas Bulwahn
</td>
</tr>
<tr>
<td class="entry">
2015-06-11: <a href="entries/Noninterference_Ipurge_Unwinding.html">The Ipurge Unwinding Theorem for CSP Noninterference Security</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2015-06-11: <a href="entries/Noninterference_Generic_Unwinding.html">The Generic Unwinding Theorem for CSP Noninterference Security</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2015-06-11: <a href="entries/Multirelations.html">Binary Multirelations</a>
<br>
Authors:
<a href="http://www.sci.kagoshima-u.ac.jp/~furusawa/">Hitoshi Furusawa</a>
and <a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2015-06-11: <a href="entries/List_Interleaving.html">Reasoning about Lists via List Interleaving</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2015-06-07: <a href="entries/Dynamic_Tables.html">Parameterized Dynamic Tables</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2015-05-28: <a href="entries/Formula_Derivatives.html">Derivatives of Logical Formulas</a>
<br>
Author:
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2015-05-27: <a href="entries/Probabilistic_System_Zoo.html">A Zoo of Probabilistic Systems</a>
<br>
Authors:
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>,
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2015-04-30: <a href="entries/Vickrey_Clarke_Groves.html">VCG - Combinatorial Vickrey-Clarke-Groves Auctions</a>
<br>
Authors:
Marco B. Caminati,
<a href="http://www.cs.bham.ac.uk/~mmk">Manfred Kerber</a>,
Christoph Lange
and Colin Rowat
</td>
</tr>
<tr>
<td class="entry">
2015-04-15: <a href="entries/Residuated_Lattices.html">Residuated Lattices</a>
<br>
Authors:
Victor B. F. Gomes
and <a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2015-04-13: <a href="entries/ConcurrentIMP.html">Concurrent IMP</a>
<br>
Author:
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="entry">
2015-04-13: <a href="entries/ConcurrentGC.html">Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO</a>
<br>
Authors:
<a href="http://peteg.org">Peter Gammie</a>,
<a href="https://www.cs.purdue.edu/homes/hosking/">Tony Hosking</a>
and Kai Engelhardt
</td>
</tr>
<tr>
<td class="entry">
2015-03-30: <a href="entries/Trie.html">Trie</a>
<br>
Authors:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2015-03-18: <a href="entries/Consensus_Refined.html">Consensus Refined</a>
<br>
Authors:
Ognjen Maric
and Christoph Sprenger
</td>
</tr>
<tr>
<td class="entry">
2015-03-11: <a href="entries/Deriving.html">Deriving class instances for datatypes</a>
<br>
Authors:
Christian Sternagel
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2015-02-20: <a href="entries/Call_Arity.html">The Safety of Call Arity</a>
<br>
Author:
Joachim Breitner
</td>
</tr>
<tr>
<td class="entry">
2015-02-12: <a href="entries/QR_Decomposition.html">QR Decomposition</a>
<br>
Authors:
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>
and <a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="entry">
2015-02-12: <a href="entries/Echelon_Form.html">Echelon Form</a>
<br>
Authors:
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>
and <a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="entry">
2015-02-05: <a href="entries/Finite_Automata_HF.html">Finite Automata in Hereditarily Finite Set Theory</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2015-01-28: <a href="entries/UpDown_Scheme.html">Verification of the UpDown Scheme</a>
<br>
Author:
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2014</td>
</tr>
<tr>
<td class="entry">
2014-11-28: <a href="entries/UPF.html">The Unified Policy Framework (UPF)</a>
<br>
Authors:
<a href="https://www.brucker.ch/">Achim D. Brucker</a>,
Lukas Brügger
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2014-10-23: <a href="entries/AODV.html">Loop freedom of the (untimed) AODV routing protocol</a>
<br>
Authors:
<a href="http://www.tbrk.org">Timothy Bourke</a>
and <a href="http://www.hoefner-online.de/">Peter Höfner</a>
</td>
</tr>
<tr>
<td class="entry">
2014-10-13: <a href="entries/Lifting_Definition_Option.html">Lifting Definition Option</a>
<br>
Author:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2014-10-10: <a href="entries/Stream_Fusion_Code.html">Stream Fusion in HOL with Code Generation</a>
<br>
Authors:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
and Alexandra Maximova
</td>
</tr>
<tr>
<td class="entry">
2014-10-09: <a href="entries/Density_Compiler.html">A Verified Compiler for Probability Density Functions</a>
<br>
Authors:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>,
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2014-10-08: <a href="entries/RefinementReactive.html">Formalization of Refinement Calculus for Reactive Systems</a>
<br>
Author:
Viorel Preoteasa
</td>
</tr>
<tr>
<td class="entry">
2014-10-03: <a href="entries/XML.html">XML</a>
<br>
Authors:
Christian Sternagel
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2014-10-03: <a href="entries/Certification_Monads.html">Certification Monads</a>
<br>
Authors:
Christian Sternagel
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2014-09-25: <a href="entries/Imperative_Insertion_Sort.html">Imperative Insertion Sort</a>
<br>
Author:
Christian Sternagel
</td>
</tr>
<tr>
<td class="entry">
2014-09-19: <a href="entries/Sturm_Tarski.html">The Sturm-Tarski Theorem</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="entry">
2014-09-15: <a href="entries/Cayley_Hamilton.html">The Cayley-Hamilton Theorem</a>
<br>
Authors:
<a href="http://nm.wu.ac.at/nm/sadelsbe">Stephan Adelsberger</a>,
<a href="http://www.logic.at/people/hetzl/">Stefan Hetzl</a>
and Florian Pollak
</td>
</tr>
<tr>
<td class="entry">
2014-09-09: <a href="entries/Jordan_Hoelder.html">The Jordan-Hölder Theorem</a>
<br>
Author:
Jakob von Raumer
</td>
</tr>
<tr>
<td class="entry">
2014-09-04: <a href="entries/Priority_Queue_Braun.html">Priority Queues Based on Braun Trees</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2014-09-03: <a href="entries/Gauss_Jordan.html">Gauss-Jordan Algorithm and Its Applications</a>
<br>
Authors:
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>
and <a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="entry">
2014-08-29: <a href="entries/VectorSpace.html">Vector Spaces</a>
<br>
Author:
Holden Lee
</td>
</tr>
<tr>
<td class="entry">
2014-08-29: <a href="entries/Special_Function_Bounds.html">Real-Valued Special Functions: Upper and Lower Bounds</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2014-08-13: <a href="entries/Skew_Heap.html">Skew Heap</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2014-08-12: <a href="entries/Splay_Tree.html">Splay Tree</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2014-07-29: <a href="entries/Show.html">Haskell's Show Class in Isabelle/HOL</a>
<br>
Authors:
Christian Sternagel
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2014-07-18: <a href="entries/CISC-Kernel.html">Formal Specification of a Generic Separation Kernel</a>
<br>
Authors:
Freek Verbeek,
Sergey Tverdyshev,
Oto Havle,
Holger Blasum,
Bruno Langenstein,
Werner Stephan,
Yakoub Nemouchi,
Abderrahmane Feliachi,
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
and Julien Schmaltz
</td>
</tr>
<tr>
<td class="entry">
2014-07-13: <a href="entries/pGCL.html">pGCL for Isabelle</a>
<br>
Author:
David Cock
</td>
</tr>
<tr>
<td class="entry">
2014-07-07: <a href="entries/Amortized_Complexity.html">Amortized Complexity Verified</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2014-07-04: <a href="entries/Network_Security_Policy_Verification.html">Network Security Policy Verification</a>
<br>
Author:
<a href="http://net.in.tum.de/~diekmann">Cornelius Diekmann</a>
</td>
</tr>
<tr>
<td class="entry">
2014-07-03: <a href="entries/Pop_Refinement.html">Pop-Refinement</a>
<br>
Author:
<a href="http://www.kestrel.edu/~coglio">Alessandro Coglio</a>
</td>
</tr>
<tr>
<td class="entry">
2014-06-12: <a href="entries/MSO_Regex_Equivalence.html">Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions</a>
<br>
Authors:
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2014-06-08: <a href="entries/Boolean_Expression_Checkers.html">Boolean Expression Checkers</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2014-05-28: <a href="entries/Promela.html">Promela Formalization</a>
<br>
Author:
René Neumann
</td>
</tr>
<tr>
<td class="entry">
2014-05-28: <a href="entries/LTL_to_GBA.html">Converting Linear-Time Temporal Logic to Generalized Büchi Automata</a>
<br>
Authors:
Alexander Schimpf
and Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2014-05-28: <a href="entries/Gabow_SCC.html">Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm</a>
<br>
Author:
Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2014-05-28: <a href="entries/CAVA_LTL_Modelchecker.html">A Fully Verified Executable LTL Model Checker</a>
<br>
Authors:
<a href="https://www7.in.tum.de/~esparza/">Javier Esparza</a>,
Peter Lammich,
René Neumann,
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>,
Alexander Schimpf
and <a href="http://www.irit.fr/~Jan-Georg.Smaus">Jan-Georg Smaus</a>
</td>
</tr>
<tr>
<td class="entry">
2014-05-28: <a href="entries/CAVA_Automata.html">The CAVA Automata Library</a>
<br>
Author:
Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2014-05-23: <a href="entries/Roy_Floyd_Warshall.html">Transitive closure according to Roy-Floyd-Warshall</a>
<br>
Author:
Makarius Wenzel
</td>
</tr>
<tr>
<td class="entry">
2014-05-23: <a href="entries/Noninterference_CSP.html">Noninterference Security in Communicating Sequential Processes</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2014-05-21: <a href="entries/Regular_Algebras.html">Regular Algebras</a>
<br>
Authors:
<a href="https://www-users.cs.york.ac.uk/~simonf/">Simon Foster</a>
and <a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2014-04-28: <a href="entries/ComponentDependencies.html">Formalisation and Analysis of Component Dependencies</a>
<br>
Author:
Maria Spichkova
</td>
</tr>
<tr>
<td class="entry">
2014-04-23: <a href="entries/WHATandWHERE_Security.html">A Formalization of Declassification with WHAT-and-WHERE-Security</a>
<br>
Authors:
Sylvia Grewe,
Alexander Lux,
Heiko Mantel
and Jens Sauer
</td>
</tr>
<tr>
<td class="entry">
2014-04-23: <a href="entries/Strong_Security.html">A Formalization of Strong Security</a>
<br>
Authors:
Sylvia Grewe,
Alexander Lux,
Heiko Mantel
and Jens Sauer
</td>
</tr>
<tr>
<td class="entry">
2014-04-23: <a href="entries/SIFUM_Type_Systems.html">A Formalization of Assumptions and Guarantees for Compositional Noninterference</a>
<br>
Authors:
Sylvia Grewe,
Heiko Mantel
and Daniel Schoepe
</td>
</tr>
<tr>
<td class="entry">
2014-04-22: <a href="entries/Bounded_Deducibility_Security.html">Bounded-Deducibility Security</a>
<br>
Authors:
Andrei Popescu
and Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2014-04-16: <a href="entries/HyperCTL.html">A shallow embedding of HyperCTL*</a>
<br>
Authors:
<a href="http://www.react.uni-saarland.de/people/rabe.html">Markus N. Rabe</a>,
Peter Lammich
and Andrei Popescu
</td>
</tr>
<tr>
<td class="entry">
2014-04-16: <a href="entries/Abstract_Completeness.html">Abstract Completeness</a>
<br>
Authors:
Jasmin Christian Blanchette,
Andrei Popescu
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2014-04-13: <a href="entries/Discrete_Summation.html">Discrete Summation</a>
<br>
Author:
<a href="http://isabelle.in.tum.de/~haftmann">Florian Haftmann</a>
</td>
</tr>
<tr>
<td class="entry">
2014-04-03: <a href="entries/GPU_Kernel_PL.html">Syntax and semantics of a GPU kernel programming language</a>
<br>
Author:
John Wickerson
</td>
</tr>
<tr>
<td class="entry">
2014-03-11: <a href="entries/Probabilistic_Noninterference.html">Probabilistic Noninterference</a>
<br>
Authors:
Andrei Popescu
and <a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="entry">
2014-03-08: <a href="entries/AWN.html">Mechanization of the Algebra for Wireless Networks (AWN)</a>
<br>
Author:
<a href="http://www.tbrk.org">Timothy Bourke</a>
</td>
</tr>
<tr>
<td class="entry">
2014-02-18: <a href="entries/Partial_Function_MR.html">Mutually Recursive Partial Functions</a>
<br>
Author:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2014-02-13: <a href="entries/Random_Graph_Subgraph_Threshold.html">Properties of Random Graphs -- Subgraph Containment</a>
<br>
Author:
<a href="https://www21.in.tum.de/~hupel/">Lars Hupel</a>
</td>
</tr>
<tr>
<td class="entry">
2014-02-11: <a href="entries/Selection_Heap_Sort.html">Verification of Selection and Heap Sort Using Locales</a>
<br>
Author:
<a href="http://www.matf.bg.ac.rs/~danijela">Danijela Petrovic</a>
</td>
</tr>
<tr>
<td class="entry">
2014-02-07: <a href="entries/Affine_Arithmetic.html">Affine Arithmetic</a>
<br>
Author:
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
</td>
</tr>
<tr>
<td class="entry">
2014-02-06: <a href="entries/Real_Impl.html">Implementing field extensions of the form Q[sqrt(b)]</a>
<br>
Author:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2014-01-30: <a href="entries/Regex_Equivalence.html">Unified Decision Procedures for Regular Expression Equivalence</a>
<br>
Authors:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
and <a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2014-01-28: <a href="entries/Secondary_Sylow.html">Secondary Sylow Theorems</a>
<br>
Author:
Jakob von Raumer
</td>
</tr>
<tr>
<td class="entry">
2014-01-25: <a href="entries/Relation_Algebra.html">Relation Algebra</a>
<br>
Authors:
Alasdair Armstrong,
<a href="https://www-users.cs.york.ac.uk/~simonf/">Simon Foster</a>,
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
and Tjark Weber
</td>
</tr>
<tr>
<td class="entry">
2014-01-23: <a href="entries/KAT_and_DRA.html">Kleene Algebra with Tests and Demonic Refinement Algebras</a>
<br>
Authors:
Alasdair Armstrong,
Victor B. F. Gomes
and <a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
</td>
</tr>
<tr>
<td class="entry">
2014-01-16: <a href="entries/Featherweight_OCL.html">Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5</a>
<br>
Authors:
<a href="https://www.brucker.ch/">Achim D. Brucker</a>,
<a href="https://www.lri.fr/~ftuong/">Frédéric Tuong</a>
and <a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
</td>
</tr>
<tr>
<td class="entry">
2014-01-11: <a href="entries/Sturm_Sequences.html">Sturm's Theorem</a>
<br>
Author:
<a href="https://www21.in.tum.de/~eberlm">Manuel Eberl</a>
</td>
</tr>
<tr>
<td class="entry">
2014-01-11: <a href="entries/CryptoBasedCompositionalProperties.html">Compositional Properties of Crypto-Based Components</a>
<br>
Author:
Maria Spichkova
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2013</td>
</tr>
<tr>
<td class="entry">
2013-12-01: <a href="entries/Tail_Recursive_Functions.html">A General Method for the Proof of Theorems on Tail-recursive Functions</a>
<br>
Author:
Pasquale Noce
</td>
</tr>
<tr>
<td class="entry">
2013-11-17: <a href="entries/Incompleteness.html">Gödel's Incompleteness Theorems</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2013-11-17: <a href="entries/HereditarilyFinite.html">The Hereditarily Finite Sets</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2013-11-15: <a href="entries/Coinductive_Languages.html">A Codatatype of Formal Languages</a>
<br>
Author:
<a href="http://people.inf.ethz.ch/trayteld/">Dmitriy Traytel</a>
</td>
</tr>
<tr>
<td class="entry">
2013-11-14: <a href="entries/FocusStreamsCaseStudies.html">Stream Processing Components: Isabelle/HOL Formalisation and Case Studies</a>
<br>
Author:
Maria Spichkova
</td>
</tr>
<tr>
<td class="entry">
2013-11-12: <a href="entries/GoedelGod.html">Gödel's God in Isabelle/HOL</a>
<br>
Authors:
<a href="http://christoph-benzmueller.de">Christoph Benzmüller</a>
and <a href="http://www.logic.at/staff/bruno/">Bruno Woltzenlogel Paleo</a>
</td>
</tr>
<tr>
<td class="entry">
2013-11-01: <a href="entries/Decreasing-Diagrams.html">Decreasing Diagrams</a>
<br>
Author:
<a href="http://cl-informatik.uibk.ac.at/users/hzankl">Harald Zankl</a>
</td>
</tr>
<tr>
<td class="entry">
2013-10-02: <a href="entries/Automatic_Refinement.html">Automatic Data Refinement</a>
<br>
Author:
Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2013-09-17: <a href="entries/Native_Word.html">Native Word</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2013-07-27: <a href="entries/IEEE_Floating_Point.html">A Formal Model of IEEE Floating Point Arithmetic</a>
<br>
Author:
Lei Yu
</td>
</tr>
<tr>
<td class="entry">
2013-07-22: <a href="entries/Pratt_Certificate.html">Pratt's Primality Certificates</a>
<br>
Authors:
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
and <a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="entry">
2013-07-22: <a href="entries/Lehmer.html">Lehmer's Theorem</a>
<br>
Authors:
<a href="http://home.in.tum.de/~wimmers/">Simon Wimmer</a>
and <a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="entry">
2013-07-19: <a href="entries/Koenigsberg_Friendship.html">The Königsberg Bridge Problem and the Friendship Theorem</a>
<br>
Author:
<a href="https://www.cl.cam.ac.uk/~wl302/">Wenda Li</a>
</td>
</tr>
<tr>
<td class="entry">
2013-06-27: <a href="entries/Sort_Encodings.html">Sound and Complete Sort Encodings for First-Order Logic</a>
<br>
Authors:
Jasmin Christian Blanchette
and Andrei Popescu
</td>
</tr>
<tr>
<td class="entry">
2013-05-22: <a href="entries/ShortestPath.html">An Axiomatic Characterization of the Single-Source Shortest Path Problem</a>
<br>
Author:
Christine Rizkallah
</td>
</tr>
<tr>
<td class="entry">
2013-04-28: <a href="entries/Graph_Theory.html">Graph Theory</a>
<br>
Author:
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="entry">
2013-04-15: <a href="entries/Containers.html">Light-weight Containers</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2013-02-21: <a href="entries/Nominal2.html">Nominal 2</a>
<br>
Authors:
<a href="http://www.inf.kcl.ac.uk/staff/urbanc/">Christian Urban</a>,
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a>
and <a href="http://cl-informatik.uibk.ac.at/cek/">Cezary Kaliszyk</a>
</td>
</tr>
<tr>
<td class="entry">
2013-01-31: <a href="entries/Launchbury.html">The Correctness of Launchbury's Natural Semantics for Lazy Evaluation</a>
<br>
Author:
Joachim Breitner
</td>
</tr>
<tr>
<td class="entry">
2013-01-19: <a href="entries/Ribbon_Proofs.html">Ribbon Proofs</a>
<br>
Author:
John Wickerson
</td>
</tr>
<tr>
<td class="entry">
2013-01-16: <a href="entries/Rank_Nullity_Theorem.html">Rank-Nullity Theorem in Linear Algebra</a>
<br>
Authors:
<a href="http://www.unirioja.es/cu/jodivaso/">Jose Divasón</a>
and <a href="http://www.unirioja.es/cu/jearansa">Jesús Aransay</a>
</td>
</tr>
<tr>
<td class="entry">
2013-01-15: <a href="entries/Kleene_Algebra.html">Kleene Algebra</a>
<br>
Authors:
Alasdair Armstrong,
<a href="http://staffwww.dcs.shef.ac.uk/people/G.Struth/">Georg Struth</a>
and Tjark Weber
</td>
</tr>
<tr>
<td class="entry">
2013-01-03: <a href="entries/Sqrt_Babylonian.html">Computing N-th Roots using the Babylonian Method</a>
<br>
Author:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2012</td>
</tr>
<tr>
<td class="entry">
2012-11-14: <a href="entries/Separation_Logic_Imperative_HOL.html">A Separation Logic Framework for Imperative HOL</a>
<br>
Authors:
Peter Lammich
and Rene Meis
</td>
</tr>
<tr>
<td class="entry">
2012-11-02: <a href="entries/Open_Induction.html">Open Induction</a>
<br>
Authors:
Mizuhito Ogawa
and Christian Sternagel
</td>
</tr>
<tr>
<td class="entry">
2012-10-30: <a href="entries/Tarskis_Geometry.html">The independence of Tarski's Euclidean axiom</a>
<br>
Author:
T. J. M. Makarios
</td>
</tr>
<tr>
<td class="entry">
2012-10-27: <a href="entries/Bondy.html">Bondy's Theorem</a>
<br>
Authors:
<a href="http://www.andrew.cmu.edu/user/avigad/">Jeremy Avigad</a>
and <a href="http://www.logic.at/people/hetzl/">Stefan Hetzl</a>
</td>
</tr>
<tr>
<td class="entry">
2012-09-10: <a href="entries/Possibilistic_Noninterference.html">Possibilistic Noninterference</a>
<br>
Authors:
Andrei Popescu
and <a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="entry">
2012-08-07: <a href="entries/Datatype_Order_Generator.html">Generating linear orders for datatypes</a>
<br>
Author:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2012-08-05: <a href="entries/Impossible_Geometry.html">Proving the Impossibility of Trisecting an Angle and Doubling the Cube</a>
<br>
Authors:
Ralph Romanos
and <a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2012-07-27: <a href="entries/Heard_Of.html">Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model</a>
<br>
Authors:
Henri Debrat
and <a href="http://www.loria.fr/~merz">Stephan Merz</a>
</td>
</tr>
<tr>
<td class="entry">
2012-07-01: <a href="entries/PCF.html">Logical Relations for PCF</a>
<br>
Author:
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="entry">
2012-06-26: <a href="entries/Tycon.html">Type Constructor Classes and Monad Transformers</a>
<br>
Author:
Brian Huffman
</td>
</tr>
<tr>
<td class="entry">
2012-05-29: <a href="entries/Psi_Calculi.html">Psi-calculi in Isabelle</a>
<br>
Author:
<a href="http://www.itu.dk/people/jebe">Jesper Bengtson</a>
</td>
</tr>
<tr>
<td class="entry">
2012-05-29: <a href="entries/Pi_Calculus.html">The pi-calculus in nominal logic</a>
<br>
Author:
<a href="http://www.itu.dk/people/jebe">Jesper Bengtson</a>
</td>
</tr>
<tr>
<td class="entry">
2012-05-29: <a href="entries/CCS.html">CCS in nominal logic</a>
<br>
Author:
<a href="http://www.itu.dk/people/jebe">Jesper Bengtson</a>
</td>
</tr>
<tr>
<td class="entry">
2012-05-27: <a href="entries/Circus.html">Isabelle/Circus</a>
<br>
Authors:
Abderrahmane Feliachi,
<a href="https://www.lri.fr/~wolff/">Burkhart Wolff</a>
and Marie-Claude Gaudel
</td>
</tr>
<tr>
<td class="entry">
2012-05-11: <a href="entries/Separation_Algebra.html">Separation Algebra</a>
<br>
Authors:
<a href="http://www.cse.unsw.edu.au/~kleing/">Gerwin Klein</a>,
Rafal Kolanski
and Andrew Boyton
</td>
</tr>
<tr>
<td class="entry">
2012-05-07: <a href="entries/Stuttering_Equivalence.html">Stuttering Equivalence</a>
<br>
Author:
<a href="http://www.loria.fr/~merz">Stephan Merz</a>
</td>
</tr>
<tr>
<td class="entry">
2012-05-02: <a href="entries/Inductive_Confidentiality.html">Inductive Study of Confidentiality</a>
<br>
Author:
<a href="http://www.dmi.unict.it/~giamp/">Giampaolo Bella</a>
</td>
</tr>
<tr>
<td class="entry">
2012-04-26: <a href="entries/Ordinary_Differential_Equations.html">Ordinary Differential Equations</a>
<br>
Authors:
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
and <a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
</td>
</tr>
<tr>
<td class="entry">
2012-04-13: <a href="entries/Well_Quasi_Orders.html">Well-Quasi-Orders</a>
<br>
Author:
Christian Sternagel
</td>
</tr>
<tr>
<td class="entry">
2012-03-01: <a href="entries/Abortable_Linearizable_Modules.html">Abortable Linearizable Modules</a>
<br>
Authors:
Rachid Guerraoui,
<a href="http://lara.epfl.ch/~kuncak/">Viktor Kuncak</a>
and Giuliano Losa
</td>
</tr>
<tr>
<td class="entry">
2012-02-29: <a href="entries/Transitive-Closure-II.html">Executable Transitive Closures</a>
<br>
Author:
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2012-02-06: <a href="entries/Girth_Chromatic.html">A Probabilistic Proof of the Girth-Chromatic Number Theorem</a>
<br>
Author:
<a href="http://www21.in.tum.de/~noschinl/">Lars Noschinski</a>
</td>
</tr>
<tr>
<td class="entry">
2012-01-30: <a href="entries/Refine_Monadic.html">Refinement for Monadic Programs</a>
<br>
Author:
Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2012-01-30: <a href="entries/Dijkstra_Shortest_Path.html">Dijkstra's Shortest Path Algorithm</a>
<br>
Authors:
Benedikt Nordhoff
and Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2012-01-03: <a href="entries/Markov_Models.html">Markov Models</a>
<br>
Authors:
<a href="http://in.tum.de/~hoelzl">Johannes Hölzl</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2011</td>
</tr>
<tr>
<td class="entry">
2011-11-19: <a href="entries/TLA.html">A Definitional Encoding of TLA* in Isabelle/HOL</a>
<br>
Authors:
<a href="http://homepages.inf.ed.ac.uk/ggrov">Gudmund Grov</a>
and <a href="http://www.loria.fr/~merz">Stephan Merz</a>
</td>
</tr>
<tr>
<td class="entry">
2011-11-09: <a href="entries/Efficient-Mergesort.html">Efficient Mergesort</a>
<br>
Author:
Christian Sternagel
</td>
</tr>
<tr>
<td class="entry">
2011-09-22: <a href="entries/PseudoHoops.html">Pseudo Hoops</a>
<br>
Authors:
George Georgescu,
Laurentiu Leustean
and Viorel Preoteasa
</td>
</tr>
<tr>
<td class="entry">
2011-09-22: <a href="entries/MonoBoolTranAlgebra.html">Algebra of Monotonic Boolean Transformers</a>
<br>
Author:
Viorel Preoteasa
</td>
</tr>
<tr>
<td class="entry">
2011-09-22: <a href="entries/LatticeProperties.html">Lattice Properties</a>
<br>
Author:
Viorel Preoteasa
</td>
</tr>
<tr>
<td class="entry">
2011-08-26: <a href="entries/Myhill-Nerode.html">The Myhill-Nerode Theorem Based on Regular Expressions</a>
<br>
Authors:
Chunhan Wu,
Xingyuan Zhang
and <a href="http://www.inf.kcl.ac.uk/staff/urbanc/">Christian Urban</a>
</td>
</tr>
<tr>
<td class="entry">
2011-08-19: <a href="entries/Gauss-Jordan-Elim-Fun.html">Gauss-Jordan Elimination for Matrices Represented as Functions</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2011-07-21: <a href="entries/Max-Card-Matching.html">Maximum Cardinality Matching</a>
<br>
Author:
Christine Rizkallah
</td>
</tr>
<tr>
<td class="entry">
2011-05-17: <a href="entries/KBPs.html">Knowledge-based programs</a>
<br>
Author:
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="entry">
2011-04-01: <a href="entries/General-Triangle.html">The General Triangle Is Unique</a>
<br>
Author:
Joachim Breitner
</td>
</tr>
<tr>
<td class="entry">
2011-03-14: <a href="entries/Transitive-Closure.html">Executable Transitive Closures of Finite Relations</a>
<br>
Authors:
Christian Sternagel
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2011-02-23: <a href="entries/Nat-Interval-Logic.html">Interval Temporal Logic on Natural Numbers</a>
<br>
Author:
David Trachtenherz
</td>
</tr>
<tr>
<td class="entry">
2011-02-23: <a href="entries/List-Infinite.html">Infinite Lists</a>
<br>
Author:
David Trachtenherz
</td>
</tr>
<tr>
<td class="entry">
2011-02-23: <a href="entries/AutoFocus-Stream.html">AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics</a>
<br>
Author:
David Trachtenherz
</td>
</tr>
<tr>
<td class="entry">
2011-02-07: <a href="entries/LightweightJava.html">Lightweight Java</a>
<br>
Authors:
<a href="http://rok.strnisa.com/lj/">Rok Strniša</a>
and <a href="http://research.microsoft.com/people/mattpark/">Matthew Parkinson</a>
</td>
</tr>
<tr>
<td class="entry">
2011-01-10: <a href="entries/RIPEMD-160-SPARK.html">RIPEMD-160</a>
<br>
Author:
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>
</td>
</tr>
<tr>
<td class="entry">
2011-01-08: <a href="entries/Lower_Semicontinuous.html">Lower Semicontinuous Functions</a>
<br>
Author:
Bogdan Grechuk
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2010</td>
</tr>
<tr>
<td class="entry">
2010-12-17: <a href="entries/Marriage.html">Hall's Marriage Theorem</a>
<br>
Authors:
Dongchen Jiang
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2010-11-16: <a href="entries/Shivers-CFA.html">Shivers' Control Flow Analysis</a>
<br>
Author:
Joachim Breitner
</td>
</tr>
<tr>
<td class="entry">
2010-10-28: <a href="entries/Finger-Trees.html">Finger Trees</a>
<br>
Authors:
Benedikt Nordhoff,
Stefan Körner
and Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2010-10-28: <a href="entries/Binomial-Queues.html">Functional Binomial Queues</a>
<br>
Author:
René Neumann
</td>
</tr>
<tr>
<td class="entry">
2010-10-28: <a href="entries/Binomial-Heaps.html">Binomial Heaps and Skew Binomial Heaps</a>
<br>
Authors:
Rene Meis,
Finn Nielsen
and Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2010-08-29: <a href="entries/Lam-ml-Normalization.html">Strong Normalization of Moggis's Computational Metalanguage</a>
<br>
Author:
Christian Doczkal
</td>
</tr>
<tr>
<td class="entry">
2010-08-10: <a href="entries/Polynomials.html">Executable Multivariate Polynomials</a>
<br>
Authors:
Christian Sternagel,
<a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>,
<a href="https://risc.jku.at/m/alexander-maletzky/">Alexander Maletzky</a>,
<a href="http://home.in.tum.de/~immler/">Fabian Immler</a>,
<a href="http://isabelle.in.tum.de/~haftmann">Florian Haftmann</a>,
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
and Alexander Bentkamp
</td>
</tr>
<tr>
<td class="entry">
2010-08-08: <a href="entries/Statecharts.html">Formalizing Statecharts using Hierarchical Automata</a>
<br>
Authors:
Steffen Helke
and Florian Kammüller
</td>
</tr>
<tr>
<td class="entry">
2010-06-24: <a href="entries/Free-Groups.html">Free Groups</a>
<br>
Author:
Joachim Breitner
</td>
</tr>
<tr>
<td class="entry">
2010-06-20: <a href="entries/Category2.html">Category Theory</a>
<br>
Author:
Alexander Katovsky
</td>
</tr>
<tr>
<td class="entry">
2010-06-17: <a href="entries/Matrix.html">Executable Matrix Operations on Matrices of Arbitrary Dimensions</a>
<br>
Authors:
Christian Sternagel
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2010-06-14: <a href="entries/Abstract-Rewriting.html">Abstract Rewriting</a>
<br>
Authors:
Christian Sternagel
and <a href="http://cl-informatik.uibk.ac.at/~thiemann/">René Thiemann</a>
</td>
</tr>
<tr>
<td class="entry">
2010-05-28: <a href="entries/GraphMarkingIBP.html">Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement</a>
<br>
Authors:
Viorel Preoteasa
and <a href="http://users.abo.fi/Ralph-Johan.Back/">Ralph-Johan Back</a>
</td>
</tr>
<tr>
<td class="entry">
2010-05-28: <a href="entries/DataRefinementIBP.html">Semantics and Data Refinement of Invariant Based Programs</a>
<br>
Authors:
Viorel Preoteasa
and <a href="http://users.abo.fi/Ralph-Johan.Back/">Ralph-Johan Back</a>
</td>
</tr>
<tr>
<td class="entry">
2010-05-22: <a href="entries/Robbins-Conjecture.html">A Complete Proof of the Robbins Conjecture</a>
<br>
Author:
Matthew Wampler-Doty
</td>
</tr>
<tr>
<td class="entry">
2010-05-12: <a href="entries/Regular-Sets.html">Regular Sets and Expressions</a>
<br>
Authors:
<a href="http://www.in.tum.de/~krauss">Alexander Krauss</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2010-04-30: <a href="entries/Locally-Nameless-Sigma.html">Locally Nameless Sigma Calculus</a>
<br>
Authors:
Ludovic Henrio,
Florian Kammüller,
Bianca Lutz
and Henry Sudhof
</td>
</tr>
<tr>
<td class="entry">
2010-03-29: <a href="entries/Free-Boolean-Algebra.html">Free Boolean Algebra</a>
<br>
Author:
Brian Huffman
</td>
</tr>
<tr>
<td class="entry">
2010-03-23: <a href="entries/InformationFlowSlicing_Inter.html">Inter-Procedural Information Flow Noninterference via Slicing</a>
<br>
Author:
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="entry">
2010-03-23: <a href="entries/InformationFlowSlicing.html">Information Flow Noninterference via Slicing</a>
<br>
Author:
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="entry">
2010-02-20: <a href="entries/List-Index.html">List Index</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2010-02-12: <a href="entries/Coinductive.html">Coinductive</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2009</td>
</tr>
<tr>
<td class="entry">
2009-12-09: <a href="entries/DPT-SAT-Solver.html">A Fast SAT Solver for Isabelle in Standard ML</a>
<br>
Author:
Armin Heller
</td>
</tr>
<tr>
<td class="entry">
2009-12-03: <a href="entries/Presburger-Automata.html">Formalizing the Logic-Automaton Connection</a>
<br>
Authors:
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a>
and Markus Reiter
</td>
</tr>
<tr>
<td class="entry">
2009-11-25: <a href="entries/Tree-Automata.html">Tree Automata</a>
<br>
Author:
Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2009-11-25: <a href="entries/Collections.html">Collections Framework</a>
<br>
Author:
Peter Lammich
</td>
</tr>
<tr>
<td class="entry">
2009-11-22: <a href="entries/Perfect-Number-Thm.html">Perfect Number Theorem</a>
<br>
Author:
Mark Ijbema
</td>
</tr>
<tr>
<td class="entry">
2009-11-13: <a href="entries/HRB-Slicing.html">Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer</a>
<br>
Author:
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="entry">
2009-10-30: <a href="entries/WorkerWrapper.html">The Worker/Wrapper Transformation</a>
<br>
Author:
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="entry">
2009-09-01: <a href="entries/Ordinals_and_Cardinals.html">Ordinals and Cardinals</a>
<br>
Author:
Andrei Popescu
</td>
</tr>
<tr>
<td class="entry">
2009-08-28: <a href="entries/SequentInvertibility.html">Invertibility in Sequent Calculi</a>
<br>
Author:
Peter Chapman
</td>
</tr>
<tr>
<td class="entry">
2009-08-04: <a href="entries/CofGroups.html">An Example of a Cofinitary Group in Isabelle/HOL</a>
<br>
Author:
<a href="http://kasterma.net">Bart Kastermans</a>
</td>
</tr>
<tr>
<td class="entry">
2009-05-06: <a href="entries/FinFun.html">Code Generation for Functions as Data</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2009-04-29: <a href="entries/Stream-Fusion.html">Stream Fusion</a>
<br>
Author:
Brian Huffman
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2008</td>
</tr>
<tr>
<td class="entry">
2008-12-12: <a href="entries/BytecodeLogicJmlTypes.html">A Bytecode Logic for JML and Types</a>
<br>
Authors:
Lennart Beringer
and <a href="http://www.tcs.informatik.uni-muenchen.de/~mhofmann">Martin Hofmann</a>
</td>
</tr>
<tr>
<td class="entry">
2008-11-10: <a href="entries/SIFPL.html">Secure information flow and program logics</a>
<br>
Authors:
Lennart Beringer
and <a href="http://www.tcs.informatik.uni-muenchen.de/~mhofmann">Martin Hofmann</a>
</td>
</tr>
<tr>
<td class="entry">
2008-11-09: <a href="entries/SenSocialChoice.html">Some classical results in Social Choice Theory</a>
<br>
Author:
<a href="http://peteg.org">Peter Gammie</a>
</td>
</tr>
<tr>
<td class="entry">
2008-11-07: <a href="entries/FunWithTilings.html">Fun With Tilings</a>
<br>
Authors:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
and <a href="https://www.cl.cam.ac.uk/~lp15/">Lawrence C. Paulson</a>
</td>
</tr>
<tr>
<td class="entry">
2008-10-15: <a href="entries/Huffman.html">The Textbook Proof of Huffman's Algorithm</a>
<br>
Author:
Jasmin Christian Blanchette
</td>
</tr>
<tr>
<td class="entry">
2008-09-16: <a href="entries/Slicing.html">Towards Certified Slicing</a>
<br>
Author:
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="entry">
2008-09-02: <a href="entries/VolpanoSmith.html">A Correctness Proof for the Volpano/Smith Security Typing System</a>
<br>
Authors:
<a href="http://pp.info.uni-karlsruhe.de/personhp/gregor_snelting.php">Gregor Snelting</a>
and <a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="entry">
2008-09-01: <a href="entries/ArrowImpossibilityGS.html">Arrow and Gibbard-Satterthwaite</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2008-08-26: <a href="entries/FunWithFunctions.html">Fun With Functions</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2008-07-23: <a href="entries/SATSolverVerification.html">Formal Verification of Modern SAT Solvers</a>
<br>
Author:
Filip Marić
</td>
</tr>
<tr>
<td class="entry">
2008-04-05: <a href="entries/Recursion-Theory-I.html">Recursion Theory I</a>
<br>
Author:
Michael Nedzelsky
</td>
</tr>
<tr>
<td class="entry">
2008-02-29: <a href="entries/Simpl.html">A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment</a>
<br>
Author:
Norbert Schirmer
</td>
</tr>
<tr>
<td class="entry">
2008-02-29: <a href="entries/BDD.html">BDD Normalisation</a>
<br>
Authors:
Veronika Ortner
and Norbert Schirmer
</td>
</tr>
<tr>
<td class="entry">
2008-02-18: <a href="entries/NormByEval.html">Normalization by Evaluation</a>
<br>
Authors:
<a href="http://www.linta.de/~aehlig/">Klaus Aehlig</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2008-01-11: <a href="entries/LinearQuantifierElim.html">Quantifier Elimination for Linear Arithmetic</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2007</td>
</tr>
<tr>
<td class="entry">
2007-12-14: <a href="entries/Program-Conflict-Analysis.html">Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors</a>
<br>
Authors:
Peter Lammich
and <a href="http://cs.uni-muenster.de/u/mmo/">Markus Müller-Olm</a>
</td>
</tr>
<tr>
<td class="entry">
2007-12-03: <a href="entries/JinjaThreads.html">Jinja with Threads</a>
<br>
Author:
<a href="http://www.andreas-lochbihler.de">Andreas Lochbihler</a>
</td>
</tr>
<tr>
<td class="entry">
2007-11-06: <a href="entries/MuchAdoAboutTwo.html">Much Ado About Two</a>
<br>
Author:
<a href="http://www21.in.tum.de/~boehmes/">Sascha Böhme</a>
</td>
</tr>
<tr>
<td class="entry">
2007-08-12: <a href="entries/SumSquares.html">Sums of Two and Four Squares</a>
<br>
Author:
Roelof Oosterhuis
</td>
</tr>
<tr>
<td class="entry">
2007-08-12: <a href="entries/Fermat3_4.html">Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples</a>
<br>
Author:
Roelof Oosterhuis
</td>
</tr>
<tr>
<td class="entry">
2007-08-08: <a href="entries/Valuation.html">Fundamental Properties of Valuation Theory and Hensel's Lemma</a>
<br>
Author:
Hidetsune Kobayashi
</td>
</tr>
<tr>
<td class="entry">
2007-08-02: <a href="entries/POPLmark-deBruijn.html">POPLmark Challenge Via de Bruijn Indices</a>
<br>
Author:
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a>
</td>
</tr>
<tr>
<td class="entry">
2007-08-02: <a href="entries/FOL-Fitting.html">First-Order Logic According to Fitting</a>
<br>
Author:
<a href="http://www.in.tum.de/~berghofe">Stefan Berghofer</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2006</td>
</tr>
<tr>
<td class="entry">
2006-09-09: <a href="entries/HotelKeyCards.html">Hotel Key Card System</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2006-08-08: <a href="entries/Abstract-Hoare-Logics.html">Abstract Hoare Logics</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2006-05-22: <a href="entries/Flyspeck-Tame.html">Flyspeck I: Tame Graphs</a>
<br>
Authors:
Gertrud Bauer
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2006-05-15: <a href="entries/CoreC++.html">CoreC++</a>
<br>
Author:
<a href="http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php">Daniel Wasserrab</a>
</td>
</tr>
<tr>
<td class="entry">
2006-03-31: <a href="entries/FeatherweightJava.html">A Theory of Featherweight Java in Isabelle/HOL</a>
<br>
Authors:
<a href="http://www.cs.cornell.edu/~jnfoster/">J. Nathan Foster</a>
and <a href="http://research.microsoft.com/en-us/people/dimitris/">Dimitrios Vytiniotis</a>
</td>
</tr>
<tr>
<td class="entry">
2006-03-15: <a href="entries/ClockSynchInst.html">Instances of Schneider's generalized protocol of clock synchronization</a>
<br>
Author:
<a href="http://www.cs.famaf.unc.edu.ar/~damian/">Damián Barsotti</a>
</td>
</tr>
<tr>
<td class="entry">
2006-03-14: <a href="entries/Cauchy.html">Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality</a>
<br>
Author:
Benjamin Porter
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2005</td>
</tr>
<tr>
<td class="entry">
2005-11-11: <a href="entries/Ordinal.html">Countable Ordinals</a>
<br>
Author:
Brian Huffman
</td>
</tr>
<tr>
<td class="entry">
2005-10-12: <a href="entries/FFT.html">Fast Fourier Transform</a>
<br>
Author:
<a href="http://www21.in.tum.de/~ballarin/">Clemens Ballarin</a>
</td>
</tr>
<tr>
<td class="entry">
2005-06-24: <a href="entries/GenClock.html">Formalization of a Generalized Protocol for Clock Synchronization</a>
<br>
Author:
Alwen Tiu
</td>
</tr>
<tr>
<td class="entry">
2005-06-22: <a href="entries/DiskPaxos.html">Proving the Correctness of Disk Paxos</a>
<br>
Authors:
<a href="http://www.fceia.unr.edu.ar/~mauro/">Mauro Jaskelioff</a>
and <a href="http://www.loria.fr/~merz">Stephan Merz</a>
</td>
</tr>
<tr>
<td class="entry">
2005-06-20: <a href="entries/JiveDataStoreModel.html">Jive Data and Store Model</a>
<br>
Authors:
Nicole Rauch
and Norbert Schirmer
</td>
</tr>
<tr>
<td class="entry">
2005-06-01: <a href="entries/Jinja.html">Jinja is not Java</a>
<br>
Authors:
<a href="http://www.cse.unsw.edu.au/~kleing/">Gerwin Klein</a>
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2005-05-02: <a href="entries/RSAPSS.html">SHA1, RSA, PSS and more</a>
<br>
Authors:
Christina Lindenberg
and Kai Wirt
</td>
</tr>
<tr>
<td class="entry">
2005-04-21: <a href="entries/Category.html">Category Theory to Yoneda's Lemma</a>
<br>
Author:
<a href="http://users.rsise.anu.edu.au/~okeefe/">Greg O'Keefe</a>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<table width="80%" class="entries">
<tbody>
<tr>
<td class="head">2004</td>
</tr>
<tr>
<td class="entry">
2004-12-09: <a href="entries/FileRefinement.html">File Refinement</a>
<br>
Authors:
<a href="http://www.mit.edu/~kkz/">Karen Zee</a>
and <a href="http://lara.epfl.ch/~kuncak/">Viktor Kuncak</a>
</td>
</tr>
<tr>
<td class="entry">
2004-11-19: <a href="entries/Integration.html">Integration theory and random variables</a>
<br>
Author:
<a href="http://www-lti.informatik.rwth-aachen.de/~richter/">Stefan Richter</a>
</td>
</tr>
<tr>
<td class="entry">
2004-09-28: <a href="entries/Verified-Prover.html">A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic</a>
<br>
Author:
Tom Ridge
</td>
</tr>
<tr>
<td class="entry">
2004-09-20: <a href="entries/Ramsey-Infinite.html">Ramsey's theorem, infinitary version</a>
<br>
Author:
Tom Ridge
</td>
</tr>
<tr>
<td class="entry">
2004-09-20: <a href="entries/Completeness.html">Completeness theorem</a>
<br>
Authors:
James Margetson
and Tom Ridge
</td>
</tr>
<tr>
<td class="entry">
2004-07-09: <a href="entries/Compiling-Exceptions-Correctly.html">Compiling Exceptions Correctly</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2004-06-24: <a href="entries/Depth-First-Search.html">Depth First Search</a>
<br>
Authors:
Toshiaki Nishihara
and Yasuhiko Minamide
</td>
</tr>
<tr>
<td class="entry">
2004-05-18: <a href="entries/Group-Ring-Module.html">Groups, Rings and Modules</a>
<br>
Authors:
Hidetsune Kobayashi,
L. Chen
and H. Murao
</td>
</tr>
<tr>
<td class="entry">
2004-04-26: <a href="entries/Topology.html">Topology</a>
<br>
Author:
Stefan Friedrich
</td>
</tr>
<tr>
<td class="entry">
2004-04-26: <a href="entries/Lazy-Lists-II.html">Lazy Lists II</a>
<br>
Author:
Stefan Friedrich
</td>
</tr>
<tr>
<td class="entry">
2004-04-05: <a href="entries/BinarySearchTree.html">Binary Search Trees</a>
<br>
Author:
<a href="http://lara.epfl.ch/~kuncak/">Viktor Kuncak</a>
</td>
</tr>
<tr>
<td class="entry">
2004-03-30: <a href="entries/Functional-Automata.html">Functional Automata</a>
<br>
Author:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2004-03-19: <a href="entries/MiniML.html">Mini ML</a>
<br>
Authors:
Wolfgang Naraschewski
and <a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
</td>
</tr>
<tr>
<td class="entry">
2004-03-19: <a href="entries/AVL-Trees.html">AVL Trees</a>
<br>
Authors:
<a href="http://www21.in.tum.de/~nipkow">Tobias Nipkow</a>
and Cornelia Pusch
</td>
</tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
</body>
</html>
\ No newline at end of file
diff --git a/web/rss.xml b/web/rss.xml
--- a/web/rss.xml
+++ b/web/rss.xml
@@ -1,587 +1,588 @@
<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
<atom:link href="https://www.isa-afp.org/rss.xml" rel="self" type="application/rss+xml" />
<title>Archive of Formal Proofs</title>
<link>https://www.isa-afp.org</link>
<description>
The Archive of Formal Proofs is a collection of proof libraries, examples,
and larger scientific developments, mechanically checked
in the theorem prover Isabelle.
</description>
- <pubDate>10 Apr 2020 00:00:00 +0000</pubDate>
+ <pubDate>27 Apr 2020 00:00:00 +0000</pubDate>
+ <item>
+ <title>Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems</title>
+ <link>https://www.isa-afp.org/entries/Attack_Trees.html</link>
+ <guid>https://www.isa-afp.org/entries/Attack_Trees.html</guid>
+ <dc:creator> Florian Kammueller </dc:creator>
+ <pubDate>27 Apr 2020 00:00:00 +0000</pubDate>
+ <description>
+In this article, we present a proof theory for Attack Trees. Attack
+Trees are a well-established and useful model for the construction of
+attacks on systems since they allow a stepwise exploration of
+high-level attacks in application scenarios. Using the expressiveness of
+Higher Order Logic in Isabelle, we develop a generic
+theory of Attack Trees with a state-based semantics based on Kripke
+structures and CTL. The resulting framework
+allows mechanically supported logical analysis of the meta-theory of the
+proof calculus of Attack Trees, and at the same time the developed
+proof theory enables application to case studies. A central
+correctness and completeness result proved in Isabelle establishes a
+connection between the notion of Attack Tree validity and CTL. The
+application is illustrated on the example of a healthcare IoT system
+and GDPR compliance verification.</description>
+ </item>
+ <item>
+ <title>Authenticated Data Structures As Functors</title>
+ <link>https://www.isa-afp.org/entries/ADS_Functor.html</link>
+ <guid>https://www.isa-afp.org/entries/ADS_Functor.html</guid>
+ <dc:creator> Andreas Lochbihler, Ognjen Marić </dc:creator>
+ <pubDate>16 Apr 2020 00:00:00 +0000</pubDate>
+ <description>
+Authenticated data structures allow several systems to convince each
+other that they are referring to the same data structure, even if each
+of them knows only a part of the data structure. Using inclusion
+proofs, knowledgeable systems can selectively share their knowledge
+with other systems and the latter can verify the authenticity of what
+is being shared. In this article, we show how to modularly define
+authenticated data structures, their inclusion proofs, and operations
+thereon as datatypes in Isabelle/HOL, using a shallow embedding.
+Modularity allows us to construct complicated trees from reusable
+building blocks, which we call Merkle functors. Merkle functors
+include sums, products, and function spaces and are closed under
+composition and least fixpoints. As a practical application, we model
+the hierarchical transactions of &lt;a
+href=&#34;https://www.canton.io&#34;&gt;Canton&lt;/a&gt;, a
+practical interoperability protocol for distributed ledgers, as
+authenticated data structures. This is a first step towards
+formalizing the Canton protocol and verifying its integrity and
+security guarantees.</description>
+ </item>
<item>
<title>Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows</title>
<link>https://www.isa-afp.org/entries/Sliding_Window_Algorithm.html</link>
<guid>https://www.isa-afp.org/entries/Sliding_Window_Algorithm.html</guid>
<dc:creator> Lukas Heimes, Dmitriy Traytel, Joshua Schneider </dc:creator>
<pubDate>10 Apr 2020 00:00:00 +0000</pubDate>
<description>
Basin et al.&#39;s &lt;a
href=&#34;https://doi.org/10.1016/j.ipl.2014.09.009&#34;&gt;sliding
window algorithm (SWA)&lt;/a&gt; is an algorithm for combining the
elements of subsequences of a sequence with an associative operator.
It is greedy and minimizes the number of operator applications. We
formalize the algorithm and verify its functional correctness. We
extend the algorithm with additional operations and provide an
alternative interface to the slide operation that does not require the
entire input sequence.</description>
</item>
<item>
<title>A Comprehensive Framework for Saturation Theorem Proving</title>
<link>https://www.isa-afp.org/entries/Saturation_Framework.html</link>
<guid>https://www.isa-afp.org/entries/Saturation_Framework.html</guid>
<dc:creator> Sophie Tourret </dc:creator>
<pubDate>09 Apr 2020 00:00:00 +0000</pubDate>
<description>
This Isabelle/HOL formalization is the companion of the technical
report “A comprehensive framework for saturation theorem proving”,
itself the companion of the eponymous IJCAR 2020 paper, written by Uwe
Waldmann, Sophie Tourret, Simon Robillard and Jasmin Blanchette. It
verifies a framework for formal refutational completeness proofs of
abstract provers that implement saturation calculi, such as ordered
resolution or superposition, and allows one to model entire prover
architectures in such a way that the static refutational completeness
of a calculus immediately implies the dynamic refutational
completeness of a prover implementing the calculus using a variant of
the given clause loop. The technical report “A comprehensive
framework for saturation theorem proving” is available &lt;a
href=&#34;http://matryoshka.gforge.inria.fr/pubs/satur_report.pdf&#34;&gt;on
the Matryoshka website&lt;/a&gt;. The names of the Isabelle lemmas and
theorems corresponding to the results in the report are indicated in
the margin of the report.</description>
</item>
<item>
<title>Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations</title>
<link>https://www.isa-afp.org/entries/MFODL_Monitor_Optimized.html</link>
<guid>https://www.isa-afp.org/entries/MFODL_Monitor_Optimized.html</guid>
<dc:creator> Thibault Dardinier, Lukas Heimes, Martin Raszyk, Joshua Schneider, Dmitriy Traytel </dc:creator>
<pubDate>09 Apr 2020 00:00:00 +0000</pubDate>
<description>
A monitor is a runtime verification tool that solves the following
problem: Given a stream of time-stamped events and a policy formulated
in a specification language, decide whether the policy is satisfied at
every point in the stream. We verify the correctness of an executable
monitor for specifications given as formulas in metric first-order
dynamic logic (MFODL), which combines the features of metric
first-order temporal logic (MFOTL) and metric dynamic logic. Thus,
MFODL supports real-time constraints, first-order parameters, and
regular expressions. Additionally, the monitor supports aggregation
operations such as count and sum. This formalization, which is
described in a &lt;a
href=&#34;http://people.inf.ethz.ch/trayteld/papers/ijcar20-verimonplus/verimonplus.pdf&#34;&gt;
forthcoming paper at IJCAR 2020&lt;/a&gt;, significantly extends &lt;a
href=&#34;https://www.isa-afp.org/entries/MFOTL_Monitor.html&#34;&gt;previous
work on a verified monitor&lt;/a&gt; for MFOTL. Apart from the
addition of regular expressions and aggregations, we implemented &lt;a
href=&#34;https://www.isa-afp.org/entries/Generic_Join.html&#34;&gt;multi-way
joins&lt;/a&gt; and a specialized sliding window algorithm to further
optimize the monitor.</description>
</item>
<item>
+ <title>Lucas's Theorem</title>
+ <link>https://www.isa-afp.org/entries/Lucas_Theorem.html</link>
+ <guid>https://www.isa-afp.org/entries/Lucas_Theorem.html</guid>
+ <dc:creator> Chelsea Edmonds </dc:creator>
+ <pubDate>07 Apr 2020 00:00:00 +0000</pubDate>
+ <description>
+This work presents a formalisation of a generating function proof for
+Lucas&#39;s theorem. We first outline extensions to the existing
+Formal Power Series (FPS) library, including an equivalence relation
+for coefficients modulo &lt;em&gt;n&lt;/em&gt;, an alternate binomial theorem statement,
+and a formalised proof of the Freshman&#39;s dream (mod &lt;em&gt;p&lt;/em&gt;) lemma.
+The second part of the work presents the formal proof of Lucas&#39;s
+Theorem. Working backwards, the formalisation first proves a
+well-known corollary of the theorem which is easier to formalise, and then
+applies induction to prove the original theorem statement. The proof
+of the corollary aims to provide a good example of a formalised
+generating function equivalence proof using the FPS library. The final
+theorem statement is intended to be integrated into the formalised
+proof of Hilbert&#39;s 10th Problem.</description>
+ </item>
+ <item>
<title>Strong Eventual Consistency of the Collaborative Editing Framework WOOT</title>
<link>https://www.isa-afp.org/entries/WOOT_Strong_Eventual_Consistency.html</link>
<guid>https://www.isa-afp.org/entries/WOOT_Strong_Eventual_Consistency.html</guid>
<dc:creator> Emin Karayel, Edgar Gonzàlez </dc:creator>
<pubDate>25 Mar 2020 00:00:00 +0000</pubDate>
<description>
Commutative Replicated Data Types (CRDTs) are a promising new class of
data structures for large-scale shared mutable content in applications
that only require eventual consistency. The WithOut Operational
Transforms (WOOT) framework is a CRDT for collaborative text editing
introduced by Oster et al. (CSCW 2006) for which the eventual
consistency property was verified only for a bounded model to date. We
contribute a formal proof for WOOT&#39;s strong eventual consistency.</description>
</item>
<item>
<title>Furstenberg's topology and his proof of the infinitude of primes</title>
<link>https://www.isa-afp.org/entries/Furstenberg_Topology.html</link>
<guid>https://www.isa-afp.org/entries/Furstenberg_Topology.html</guid>
<dc:creator> Manuel Eberl </dc:creator>
<pubDate>22 Mar 2020 00:00:00 +0000</pubDate>
<description>
&lt;p&gt;This article gives a formal version of Furstenberg&#39;s
topological proof of the infinitude of primes. He defines a topology
on the integers based on arithmetic progressions (or, equivalently,
residue classes). Using some fairly obvious properties of this
topology, the infinitude of primes is then easily obtained.&lt;/p&gt;
&lt;p&gt;Apart from this, this topology is also fairly ‘nice’ in
general: it is second countable, metrizable, and perfect. All of these
(well-known) facts are formally proven, including an explicit metric
for the topology given by Zulfeqarr.&lt;/p&gt;</description>
</item>
<item>
<title>An Under-Approximate Relational Logic</title>
<link>https://www.isa-afp.org/entries/Relational-Incorrectness-Logic.html</link>
<guid>https://www.isa-afp.org/entries/Relational-Incorrectness-Logic.html</guid>
<dc:creator> Toby Murray </dc:creator>
<pubDate>12 Mar 2020 00:00:00 +0000</pubDate>
<description>
Recently, authors have proposed under-approximate logics for reasoning
about programs. So far, all such logics have been confined to
reasoning about individual program behaviours. Yet there exist many
over-approximate relational logics for reasoning about pairs of
programs and relating their behaviours. We present the first
under-approximate relational logic, for the simple imperative language
IMP. We prove our logic is both sound and complete. Additionally, we
show how reasoning in this logic can be decomposed into non-relational
reasoning in an under-approximate Hoare logic, mirroring Beringer’s
result for over-approximate relational logics. We illustrate the
application of our logic on some small examples in which we provably
demonstrate the presence of insecurity.</description>
</item>
<item>
<title>Hello World</title>
<link>https://www.isa-afp.org/entries/Hello_World.html</link>
<guid>https://www.isa-afp.org/entries/Hello_World.html</guid>
<dc:creator> Cornelius Diekmann, Lars Hupel </dc:creator>
<pubDate>07 Mar 2020 00:00:00 +0000</pubDate>
<description>
In this article, we present a formalization of the well-known
&#34;Hello, World!&#34; code, including a formal framework for
reasoning about IO. Our model is inspired by the handling of IO in
Haskell. We start by formalizing the 🌍 and embrace the IO monad
afterwards. Then we present a sample main :: IO (), followed by its
proof of correctness.</description>
</item>
<item>
<title>Implementing the Goodstein Function in &lambda;-Calculus</title>
<link>https://www.isa-afp.org/entries/Goodstein_Lambda.html</link>
<guid>https://www.isa-afp.org/entries/Goodstein_Lambda.html</guid>
<dc:creator> Bertram Felgenhauer </dc:creator>
<pubDate>21 Feb 2020 00:00:00 +0000</pubDate>
<description>
In this formalization, we develop an implementation of the Goodstein
function G in plain &amp;lambda;-calculus, linked to a concise, self-contained
specification. The implementation works on a Church-encoded
representation of countable ordinals. The initial conversion to
hereditary base 2 is not covered, but the material is sufficient to
compute the particular value G(16), and easily extends to other fixed
arguments.</description>
</item>
<item>
<title>A Generic Framework for Verified Compilers</title>
<link>https://www.isa-afp.org/entries/VeriComp.html</link>
<guid>https://www.isa-afp.org/entries/VeriComp.html</guid>
<dc:creator> Martin Desharnais </dc:creator>
<pubDate>10 Feb 2020 00:00:00 +0000</pubDate>
<description>
This is a generic framework for formalizing compiler transformations.
It leverages Isabelle/HOL’s locales to abstract over concrete
languages and transformations. It states common definitions for
language semantics, program behaviours, forward and backward
simulations, and compilers. We provide generic operations, such as
simulation and compiler composition, and prove general (partial)
correctness theorems, resulting in reusable proof components.</description>
</item>
<item>
<title>Arithmetic progressions and relative primes</title>
<link>https://www.isa-afp.org/entries/Arith_Prog_Rel_Primes.html</link>
<guid>https://www.isa-afp.org/entries/Arith_Prog_Rel_Primes.html</guid>
<dc:creator> José Manuel Rodríguez Caballero </dc:creator>
<pubDate>01 Feb 2020 00:00:00 +0000</pubDate>
<description>
This article provides a formalization of the author&#39;s solution to
the problem “ARITHMETIC PROGRESSIONS” from the
&lt;a href=&#34;https://www.ocf.berkeley.edu/~wwu/riddles/putnam.shtml&#34;&gt;
Putnam exam problems of 2002&lt;/a&gt;. The statement of the problem is
as follows: For which integers &lt;em&gt;n&lt;/em&gt; &gt; 1 does the set of positive
integers less than and relatively prime to &lt;em&gt;n&lt;/em&gt; constitute an
arithmetic progression?</description>
</item>
<item>
<title>A Hierarchy of Algebras for Boolean Subsets</title>
<link>https://www.isa-afp.org/entries/Subset_Boolean_Algebras.html</link>
<guid>https://www.isa-afp.org/entries/Subset_Boolean_Algebras.html</guid>
<dc:creator> Walter Guttmann, Bernhard Möller </dc:creator>
<pubDate>31 Jan 2020 00:00:00 +0000</pubDate>
<description>
We present a collection of axiom systems for the construction of
Boolean subalgebras of larger overall algebras. The subalgebras are
defined as the range of a complement-like operation on a semilattice.
This technique has been used, for example, with the antidomain
operation, dynamic negation and Stone algebras. We present a common
ground for these constructions based on a new equational
axiomatisation of Boolean algebras.</description>
</item>
<item>
<title>Mersenne primes and the Lucas–Lehmer test</title>
<link>https://www.isa-afp.org/entries/Mersenne_Primes.html</link>
<guid>https://www.isa-afp.org/entries/Mersenne_Primes.html</guid>
<dc:creator> Manuel Eberl </dc:creator>
<pubDate>17 Jan 2020 00:00:00 +0000</pubDate>
<description>
&lt;p&gt;This article provides formal proofs of basic properties of
Mersenne numbers, i.e. numbers of the form
2&lt;sup&gt;&lt;em&gt;n&lt;/em&gt;&lt;/sup&gt; - 1, and especially of
Mersenne primes.&lt;/p&gt; &lt;p&gt;In particular, an efficient,
verified, and executable version of the Lucas&amp;ndash;Lehmer test is
developed. This test decides primality for Mersenne numbers in time
polynomial in &lt;em&gt;n&lt;/em&gt;.&lt;/p&gt;</description>
</item>
<item>
<title>Verified Approximation Algorithms</title>
<link>https://www.isa-afp.org/entries/Approximation_Algorithms.html</link>
<guid>https://www.isa-afp.org/entries/Approximation_Algorithms.html</guid>
<dc:creator> Robin Eßmann, Tobias Nipkow, Simon Robillard </dc:creator>
<pubDate>16 Jan 2020 00:00:00 +0000</pubDate>
<description>
We present the first formal verification of approximation algorithms
for NP-complete optimization problems: vertex cover, independent set,
load balancing, and bin packing. The proofs correct incompletenesses
in existing proofs and improve the approximation ratio in one case.</description>
</item>
<item>
<title>Closest Pair of Points Algorithms</title>
<link>https://www.isa-afp.org/entries/Closest_Pair_Points.html</link>
<guid>https://www.isa-afp.org/entries/Closest_Pair_Points.html</guid>
<dc:creator> Martin Rau, Tobias Nipkow </dc:creator>
<pubDate>13 Jan 2020 00:00:00 +0000</pubDate>
<description>
This entry provides two related verified divide-and-conquer algorithms
solving the fundamental &lt;em&gt;Closest Pair of Points&lt;/em&gt;
problem in Computational Geometry. Functional correctness and the
optimal running time of &lt;em&gt;O&lt;/em&gt;(&lt;em&gt;n&lt;/em&gt; log &lt;em&gt;n&lt;/em&gt;) are
proved. Executable code is generated which is empirically competitive
with handwritten reference implementations.</description>
</item>
<item>
<title>Skip Lists</title>
<link>https://www.isa-afp.org/entries/Skip_Lists.html</link>
<guid>https://www.isa-afp.org/entries/Skip_Lists.html</guid>
<dc:creator> Max W. Haslbeck, Manuel Eberl </dc:creator>
<pubDate>09 Jan 2020 00:00:00 +0000</pubDate>
<description>
&lt;p&gt; Skip lists are sorted linked lists enhanced with shortcuts
and are an alternative to binary search trees. A skip list consists
of multiple levels of sorted linked lists where a list on level n is a
subsequence of the list on level n − 1. In the ideal case, elements
are skipped in such a way that a lookup in a skip list takes O(log n)
time. In a randomised skip list the skipped elements are chosen
randomly. &lt;/p&gt; &lt;p&gt; This entry contains formalized proofs
of the textbook results about the expected height and the expected
length of a search path in a randomised skip list. &lt;/p&gt;</description>
</item>
<item>
<title>Bicategories</title>
<link>https://www.isa-afp.org/entries/Bicategory.html</link>
<guid>https://www.isa-afp.org/entries/Bicategory.html</guid>
<dc:creator> Eugene W. Stark </dc:creator>
<pubDate>06 Jan 2020 00:00:00 +0000</pubDate>
<description>
Taking as a starting point the author&#39;s previous work on
developing aspects of category theory in Isabelle/HOL, this article
gives a compatible formalization of the notion of
&#34;bicategory&#34; and develops a framework within which formal
proofs of facts about bicategories can be given. The framework
includes a number of basic results, including the Coherence Theorem,
the Strictness Theorem, pseudofunctors and biequivalence, and facts
about internal equivalences and adjunctions in a bicategory. As a
driving application and demonstration of the utility of the framework,
it is used to give a formal proof of a theorem, due to Carboni,
Kasangian, and Street, that characterizes up to biequivalence the
bicategories of spans in a category with pullbacks. The formalization
effort necessitated the filling-in of many details that were not
evident from the brief presentation in the original paper, as well as
identifying a few minor corrections along the way.</description>
</item>
<item>
<title>The Irrationality of ζ(3)</title>
<link>https://www.isa-afp.org/entries/Zeta_3_Irrational.html</link>
<guid>https://www.isa-afp.org/entries/Zeta_3_Irrational.html</guid>
<dc:creator> Manuel Eberl </dc:creator>
<pubDate>27 Dec 2019 00:00:00 +0000</pubDate>
<description>
&lt;p&gt;This article provides a formalisation of Beukers&#39;s
straightforward analytic proof that ζ(3) is irrational. This was first
proven by Apéry (which is why this result is also often called
‘Apéry&#39;s Theorem’) using a more algebraic approach. This
formalisation follows &lt;a
href=&#34;http://people.math.sc.edu/filaseta/gradcourses/Math785/Math785Notes4.pdf&#34;&gt;Filaseta&#39;s
presentation&lt;/a&gt; of Beukers&#39;s proof.&lt;/p&gt;</description>
</item>
<item>
<title>Formalizing a Seligman-Style Tableau System for Hybrid Logic</title>
<link>https://www.isa-afp.org/entries/Hybrid_Logic.html</link>
<guid>https://www.isa-afp.org/entries/Hybrid_Logic.html</guid>
<dc:creator> Asta Halkjær From </dc:creator>
<pubDate>20 Dec 2019 00:00:00 +0000</pubDate>
<description>
This work is a formalization of soundness and completeness proofs
for a Seligman-style tableau system for hybrid logic. The completeness
result is obtained via a synthetic approach using maximally
consistent sets of tableau blocks. The formalization differs from
the cited work in a few ways. First, to avoid the need to backtrack in
the construction of a tableau, the formalized system has no unnamed
initial segment, and therefore no Name rule. Second, I show that the
full Bridge rule is admissible in the system. Third, I start from rules
restricted to only extend the branch with new formulas, including only
witnessing diamonds that are not already witnessed, and show that
the unrestricted rules are admissible. Similarly, I start from simpler
versions of the @-rules and show the general ones admissible. Finally,
the GoTo rule is restricted using a notion of coins such that each
application consumes a coin and coins are earned through applications of
the remaining rules. I show that if a branch can be closed then it can
be closed starting from a single coin. These restrictions are imposed
to rule out some means of nontermination.</description>
</item>
<item>
<title>The Poincaré-Bendixson Theorem</title>
<link>https://www.isa-afp.org/entries/Poincare_Bendixson.html</link>
<guid>https://www.isa-afp.org/entries/Poincare_Bendixson.html</guid>
<dc:creator> Fabian Immler, Yong Kiam Tan </dc:creator>
<pubDate>18 Dec 2019 00:00:00 +0000</pubDate>
<description>
The Poincaré-Bendixson theorem is a classical result in the study of
(continuous) dynamical systems. Colloquially, it restricts the
possible behaviors of planar dynamical systems: such systems cannot be
chaotic. In practice, it is a useful tool for proving the existence of
(limiting) periodic behavior in planar systems. The theorem is an
interesting and challenging benchmark for formalized mathematics
because proofs in the literature rely on geometric sketches and only
hint at symmetric cases. It also requires a substantial background of
mathematical theories, e.g., the Jordan curve theorem, real analysis,
ordinary differential equations, and limiting (long-term) behavior of
dynamical systems.</description>
</item>
<item>
<title>Poincaré Disc Model</title>
<link>https://www.isa-afp.org/entries/Poincare_Disc.html</link>
<guid>https://www.isa-afp.org/entries/Poincare_Disc.html</guid>
<dc:creator> Danijela Simić, Filip Marić, Pierre Boutry </dc:creator>
<pubDate>16 Dec 2019 00:00:00 +0000</pubDate>
<description>
We describe formalization of the Poincaré disc model of hyperbolic
geometry within the Isabelle/HOL proof assistant. The model is defined
within the extended complex plane (one-dimensional complex projective
space &amp;#8450;P1), formalized in the AFP entry “Complex Geometry”.
Points, lines, congruence of pairs of points, betweenness of triples
of points, circles, and isometries are defined within the model. It is
shown that the model satisfies all of Tarski&#39;s axioms except
Euclid&#39;s axiom. It is shown that it satisfies its negation and
the limiting parallels axiom (which proves it to be a model of
hyperbolic geometry).</description>
</item>
<item>
<title>Complex Geometry</title>
<link>https://www.isa-afp.org/entries/Complex_Geometry.html</link>
<guid>https://www.isa-afp.org/entries/Complex_Geometry.html</guid>
<dc:creator> Filip Marić, Danijela Simić </dc:creator>
<pubDate>16 Dec 2019 00:00:00 +0000</pubDate>
<description>
A formalization of geometry of complex numbers is presented.
Fundamental objects that are investigated are the complex plane
extended by a single infinite point, its objects (points, lines and
circles), and groups of transformations that act on them (e.g.,
inversions and Möbius transformations). Most objects are defined
algebraically, but correspondence with classical geometric definitions
is shown.</description>
</item>
<item>
<title>Gauss Sums and the Pólya–Vinogradov Inequality</title>
<link>https://www.isa-afp.org/entries/Gauss_Sums.html</link>
<guid>https://www.isa-afp.org/entries/Gauss_Sums.html</guid>
<dc:creator> Rodrigo Raya, Manuel Eberl </dc:creator>
<pubDate>10 Dec 2019 00:00:00 +0000</pubDate>
<description>
&lt;p&gt;This article provides a full formalisation of Chapter 8 of
Apostol&#39;s &lt;em&gt;&lt;a
href=&#34;https://www.springer.com/de/book/9780387901633&#34;&gt;Introduction
to Analytic Number Theory&lt;/a&gt;&lt;/em&gt;. Subjects that are
covered are:&lt;/p&gt; &lt;ul&gt; &lt;li&gt;periodic arithmetic
functions and their finite Fourier series&lt;/li&gt;
&lt;li&gt;(generalised) Ramanujan sums&lt;/li&gt; &lt;li&gt;Gauss sums
and separable characters&lt;/li&gt; &lt;li&gt;induced moduli and
primitive characters&lt;/li&gt; &lt;li&gt;the
Pólya&amp;mdash;Vinogradov inequality&lt;/li&gt; &lt;/ul&gt;</description>
</item>
<item>
<title>An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges</title>
<link>https://www.isa-afp.org/entries/Generalized_Counting_Sort.html</link>
<guid>https://www.isa-afp.org/entries/Generalized_Counting_Sort.html</guid>
<dc:creator> Pasquale Noce </dc:creator>
<pubDate>04 Dec 2019 00:00:00 +0000</pubDate>
<description>
Counting sort is a well-known algorithm that sorts objects of any kind
mapped to integer keys, or else to keys in one-to-one correspondence
with some subset of the integers (e.g. alphabet letters). However, it
is suitable for direct use, viz. not just as a subroutine of another
sorting algorithm (e.g. radix sort), only if the key range is not
significantly larger than the number of objects to be sorted.
This paper describes a tail-recursive generalization of counting sort
making use of a bounded number of counters, suitable for direct use in
case of a large, or even infinite key range of any kind, subject to
the only constraint of being a subset of an arbitrary linear order.
After performing a pen-and-paper analysis of how such an algorithm has to
be designed to maximize its efficiency, this paper formalizes the
resulting generalized counting sort (GCsort) algorithm and then
formally proves its correctness properties, namely that (a) the
number of counters is maximized while never exceeding the fixed upper
bound, (b) objects are conserved, (c) objects get sorted, and (d) the
algorithm is stable.</description>
</item>
<item>
<title>Interval Arithmetic on 32-bit Words</title>
<link>https://www.isa-afp.org/entries/Interval_Arithmetic_Word32.html</link>
<guid>https://www.isa-afp.org/entries/Interval_Arithmetic_Word32.html</guid>
<dc:creator> Brandon Bohrer </dc:creator>
<pubDate>27 Nov 2019 00:00:00 +0000</pubDate>
<description>
Interval_Arithmetic implements conservative interval arithmetic
computations, then uses this interval arithmetic to implement a simple
programming language where all terms have 32-bit signed word values,
with explicit infinities for terms outside the representable bounds.
Our target use case is interpreters for languages that must have a
well-understood low-level behavior. We include a formalization of
bounded-length strings which are used for the identifiers of our
language. Bounded-length identifiers are useful in some applications,
for example the &lt;a href=&#34;https://www.isa-afp.org/entries/Differential_Dynamic_Logic.html&#34;&gt;Differential_Dynamic_Logic&lt;/a&gt; article,
where a Euclidean space indexed by identifiers requires that there
are only finitely many identifiers.</description>
</item>
<item>
<title>Zermelo Fraenkel Set Theory in Higher-Order Logic</title>
<link>https://www.isa-afp.org/entries/ZFC_in_HOL.html</link>
<guid>https://www.isa-afp.org/entries/ZFC_in_HOL.html</guid>
<dc:creator> Lawrence C. Paulson </dc:creator>
<pubDate>24 Oct 2019 00:00:00 +0000</pubDate>
<description>
&lt;p&gt;This entry is a new formalisation of ZFC set theory in Isabelle/HOL. It is
logically equivalent to Obua&#39;s HOLZF; the point is to have the closest
possible integration with the rest of Isabelle/HOL, minimising the amount of
new notations and exploiting type classes.&lt;/p&gt;
&lt;p&gt;There is a type &lt;em&gt;V&lt;/em&gt; of sets and a function &lt;em&gt;elts :: V =&amp;gt; V
set&lt;/em&gt; mapping a set to its elements. Classes simply have type &lt;em&gt;V
set&lt;/em&gt;, and a predicate identifies the small classes: those that correspond
to actual sets. Type classes connected with orders and lattices are used to
minimise the amount of new notation for concepts such as the subset relation,
union and intersection. Basic concepts — Cartesian products, disjoint sums,
natural numbers, functions, etc. — are formalised.&lt;/p&gt;
&lt;p&gt;More advanced set-theoretic concepts, such as transfinite induction,
ordinals, cardinals and the transitive closure of a set, are also provided.
The definition of addition and multiplication for general sets (not just
ordinals) follows Kirby.&lt;/p&gt;
&lt;p&gt;The theory provides two type classes with the aim of facilitating
developments that combine &lt;em&gt;V&lt;/em&gt; with other Isabelle/HOL types:
&lt;em&gt;embeddable&lt;/em&gt;, the class of types that can be injected into &lt;em&gt;V&lt;/em&gt;
(including &lt;em&gt;V&lt;/em&gt; itself as well as &lt;em&gt;V*V&lt;/em&gt;, etc.), and
&lt;em&gt;small&lt;/em&gt;, the class of types that correspond to some ZF set.&lt;/p&gt;
extra-history =
Change history:
[2020-01-28]: Generalisation of the &#34;small&#34; predicate and order types to arbitrary sets;
ordinal exponentiation;
introduction of the coercion ord_of_nat :: &#34;nat =&gt; V&#34;;
numerous new lemmas. (revision 6081d5be8d08)</description>
</item>
<item>
<title>Isabelle/C</title>
<link>https://www.isa-afp.org/entries/Isabelle_C.html</link>
<guid>https://www.isa-afp.org/entries/Isabelle_C.html</guid>
<dc:creator> Frédéric Tuong, Burkhart Wolff </dc:creator>
<pubDate>22 Oct 2019 00:00:00 +0000</pubDate>
<description>
We present a framework for C code in C11 syntax deeply integrated into
the Isabelle/PIDE development environment. Our framework provides an
abstract interface for verification back-ends to be plugged-in
independently. Thus, various techniques such as deductive program
verification or white-box testing can be applied to the same source,
which is part of an integrated PIDE document model. Semantic back-ends
are free to choose the supported C fragment and its semantics. In
particular, they can differ on the chosen memory model or the
specification mechanism for framing conditions. Our framework supports
semantic annotations of C sources in the form of comments. Annotations
serve to locally control back-end settings, and can express the term
focus to which an annotation refers. Both the logical and the
syntactic context are available when semantic annotations are
evaluated. As a consequence, a formula in an annotation can refer both
to HOL or C variables. Our approach demonstrates the degree of
maturity and expressive power the Isabelle/PIDE sub-system has
achieved in recent years. Our integration technique employs Lex and
Yacc style grammars to ensure efficient deterministic parsing. This
is the core-module of Isabelle/C; the AFP package for Clean and
Clean_wrapper as well as AutoCorres and AutoCorres_wrapper (available
via git) are applications of this front-end.</description>
</item>
<item>
<title>VerifyThis 2019 -- Polished Isabelle Solutions</title>
<link>https://www.isa-afp.org/entries/VerifyThis2019.html</link>
<guid>https://www.isa-afp.org/entries/VerifyThis2019.html</guid>
<dc:creator> Peter Lammich, Simon Wimmer </dc:creator>
<pubDate>16 Oct 2019 00:00:00 +0000</pubDate>
<description>
VerifyThis 2019 (http://www.pm.inf.ethz.ch/research/verifythis.html)
was a program verification competition associated with ETAPS 2019. It
was the 8th event in the VerifyThis competition series. In this entry,
we present polished and completed versions of our solutions that we
created during the competition.</description>
</item>
- <item>
- <title>Aristotle's Assertoric Syllogistic</title>
- <link>https://www.isa-afp.org/entries/Aristotles_Assertoric_Syllogistic.html</link>
- <guid>https://www.isa-afp.org/entries/Aristotles_Assertoric_Syllogistic.html</guid>
- <dc:creator> Angeliki Koutsoukou-Argyraki </dc:creator>
- <pubDate>08 Oct 2019 00:00:00 +0000</pubDate>
- <description>
-We formalise with Isabelle/HOL some basic elements of Aristotle&#39;s
-assertoric syllogistic following the &lt;a
-href=&#34;https://plato.stanford.edu/entries/aristotle-logic/&#34;&gt;article from the Stanford Encyclopedia of Philosophy by Robin Smith.&lt;/a&gt; To
-this end, we use a set theoretic formulation (covering both individual
-and general predication). In particular, we formalise the deductions
-in the Figures and after that we present Aristotle&#39;s
-metatheoretical observation that all deductions in the Figures can in
-fact be reduced to either Barbara or Celarent. As the formal proofs
-prove to be straightforward, the interest of this entry lies in
-illustrating the functionality of Isabelle and high efficiency of
-Sledgehammer for simple exercises in philosophy.</description>
- </item>
- <item>
- <title>Sigma Protocols and Commitment Schemes</title>
- <link>https://www.isa-afp.org/entries/Sigma_Commit_Crypto.html</link>
- <guid>https://www.isa-afp.org/entries/Sigma_Commit_Crypto.html</guid>
- <dc:creator> David Butler, Andreas Lochbihler </dc:creator>
- <pubDate>07 Oct 2019 00:00:00 +0000</pubDate>
- <description>
-We use CryptHOL to formalise commitment schemes and Sigma-protocols.
-Both are widely used fundamental two party cryptographic primitives.
-Security for commitment schemes is considered using game-based
-definitions whereas the security of Sigma-protocols is considered
-using both the game-based and simulation-based security paradigms. In
-this work, we first define security for both primitives and then prove
-secure multiple case studies: the Schnorr, Chaum-Pedersen and
-Okamoto Sigma-protocols as well as a construction that allows for
-compound (AND and OR statements) Sigma-protocols and the Pedersen and
-Rivest commitment schemes. We also prove that commitment schemes can
-be constructed from Sigma-protocols. We formalise this proof at an
-abstract level, only assuming the existence of a Sigma-protocol;
-consequently, the instantiations of this result for the concrete
-Sigma-protocols we consider come for free.</description>
- </item>
- <item>
- <title>Clean - An Abstract Imperative Programming Language and its Theory</title>
- <link>https://www.isa-afp.org/entries/Clean.html</link>
- <guid>https://www.isa-afp.org/entries/Clean.html</guid>
- <dc:creator> Frédéric Tuong, Burkhart Wolff </dc:creator>
- <pubDate>04 Oct 2019 00:00:00 +0000</pubDate>
- <description>
-Clean is based on a simple, abstract execution model for an imperative
-target language. “Abstract” is understood in contrast to “Concrete
-Semantics”; alternatively, the term “shallow-style embedding” could be
-used. It strives for a type-safe notion of program-variables, an
-incremental construction of the typed state-space, support of
-incremental verification, and open-world extensibility of new type
-definitions being intertwined with the program definitions. Clean is
-based on a “no-frills” state-exception monad with the usual
-definitions of bind and unit for the compositional glue of state-based
-computations. Clean offers conditionals and loops supporting C-like
-control-flow operators such as break and return. The state-space
-construction is based on the extensible record package. Direct
-recursion of procedures is supported. Clean’s design strives for
-extreme simplicity. It is geared towards symbolic execution and proven
-correct verification tools. The underlying libraries of this package,
-however, deliberately restrict themselves to the most elementary
-infrastructure for these tasks. The package is intended to serve as
-demonstrator semantic backend for Isabelle/C, or for the
-test-generation techniques.</description>
- </item>
</channel>
</rss>
diff --git a/web/statistics.html b/web/statistics.html
--- a/web/statistics.html
+++ b/web/statistics.html
@@ -1,303 +1,303 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Archive of Formal Proofs</title>
<link rel="stylesheet" type="text/css" href="front.css">
<link rel="icon" href="images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="rss.xml">
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1><font class="first">S</font>tatistics
</h1>
<p>&nbsp;</p>
<table width="80%" class="descr">
<tbody>
<tr><td>
<h2>Statistics</h2>
<table>
-<tr><td>Number of Articles:</td><td class="statsnumber">527</td></tr>
-<tr><td>Number of Authors:</td><td class="statsnumber">347</td></tr>
-<tr><td>Number of lemmas:</td><td class="statsnumber">~143,000</td></tr>
-<tr><td>Lines of Code:</td><td class="statsnumber">~2,484,800</td></tr>
+<tr><td>Number of Articles:</td><td class="statsnumber">530</td></tr>
+<tr><td>Number of Authors:</td><td class="statsnumber">350</td></tr>
+<tr><td>Number of lemmas:</td><td class="statsnumber">~143,600</td></tr>
+<tr><td>Lines of Code:</td><td class="statsnumber">~2,491,800</td></tr>
</table>
<h4>Most used AFP articles:</h4>
<table id="most_used">
<tr>
<th></th><th>Name</th><th>Used by ? articles</th>
</tr>
<tr><td>1.</td>
<td><a href="entries/List-Index.html">List-Index</a></td>
<td>14</td>
</tr>
<tr><td>2.</td>
<td><a href="entries/Coinductive.html">Coinductive</a></td>
<td>12</td>
</tr>
<td></td>
<td><a href="entries/Collections.html">Collections</a></td>
<td>12</td>
</tr>
<td></td>
<td><a href="entries/Regular-Sets.html">Regular-Sets</a></td>
<td>12</td>
</tr>
<tr><td>3.</td>
<td><a href="entries/Landau_Symbols.html">Landau_Symbols</a></td>
<td>11</td>
</tr>
<tr><td>4.</td>
<td><a href="entries/Show.html">Show</a></td>
<td>10</td>
</tr>
<tr><td>5.</td>
<td><a href="entries/Abstract-Rewriting.html">Abstract-Rewriting</a></td>
<td>9</td>
</tr>
<td></td>
<td><a href="entries/Automatic_Refinement.html">Automatic_Refinement</a></td>
<td>9</td>
</tr>
<td></td>
<td><a href="entries/Deriving.html">Deriving</a></td>
<td>9</td>
</tr>
<tr><td>6.</td>
<td><a href="entries/Jordan_Normal_Form.html">Jordan_Normal_Form</a></td>
<td>8</td>
</tr>
<td></td>
<td><a href="entries/Native_Word.html">Native_Word</a></td>
<td>8</td>
</tr>
</table>
<script>
// DATA
var years = [2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020];
-var no_articles = [14, 22, 29, 37, 52, 64, 86, 103, 128, 151, 208, 253, 326, 396, 455, 511, 527];
-var no_loc = [61000.0, 96800.0, 131400.0, 238900.0, 353800.0, 435900.0, 517100.0, 568100.0, 740400.0, 828100.0, 1038200.0, 1216500.0, 1580000.0, 1829400.0, 2102900.0, 2397100.0, 2484800.0 ];
-var no_authors = [14, 11, 6, 6, 10, 6, 24, 11, 17, 16, 36, 20, 63, 31, 28, 38, 10];
-var no_authors_series = [14, 25, 31, 37, 47, 53, 77, 88, 105, 121, 157, 177, 240, 271, 299, 337, 347];
-var all_articles = [ "MiniML","AVL-Trees","Functional-Automata","BinarySearchTree","Lazy-Lists-II","Topology","Group-Ring-Module","Depth-First-Search","Compiling-Exceptions-Correctly","Completeness","Ramsey-Infinite","Verified-Prover","Integration","FileRefinement","Category","RSAPSS","Jinja","JiveDataStoreModel","DiskPaxos","GenClock","FFT","Ordinal","Cauchy","ClockSynchInst","FeatherweightJava","CoreC++","Flyspeck-Tame","Abstract-Hoare-Logics","HotelKeyCards","FOL-Fitting","POPLmark-deBruijn","Valuation","SumSquares","Fermat3_4","MuchAdoAboutTwo","JinjaThreads","Program-Conflict-Analysis","LinearQuantifierElim","NormByEval","Simpl","BDD","Recursion-Theory-I","SATSolverVerification","FunWithFunctions","ArrowImpossibilityGS","VolpanoSmith","Slicing","Huffman","FunWithTilings","SenSocialChoice","SIFPL","BytecodeLogicJmlTypes","Stream-Fusion","FinFun","CofGroups","SequentInvertibility","Ordinals_and_Cardinals","WorkerWrapper","HRB-Slicing","Perfect-Number-Thm","Collections","Tree-Automata","Presburger-Automata","DPT-SAT-Solver","Coinductive","List-Index","InformationFlowSlicing","InformationFlowSlicing_Inter","Free-Boolean-Algebra","Locally-Nameless-Sigma","Regular-Sets","Robbins-Conjecture","DataRefinementIBP","GraphMarkingIBP","Abstract-Rewriting","Matrix","Category2","Free-Groups","Statecharts","Polynomials","Lam-ml-Normalization","Binomial-Queues","Binomial-Heaps","Finger-Trees","Shivers-CFA","Marriage","Lower_Semicontinuous","RIPEMD-160-SPARK","LightweightJava","List-Infinite","AutoFocus-Stream","Nat-Interval-Logic","Transitive-Closure","General-Triangle","KBPs","Max-Card-Matching","Gauss-Jordan-Elim-Fun","Myhill-Nerode","LatticeProperties","MonoBoolTranAlgebra","PseudoHoops","Efficient-Mergesort","TLA","Markov_Models","Dijkstra_Shortest_Path","Refine_Monadic","Girth_Chromatic","Transitive-Closure-II","Abortable_Linearizable_Modules","Well_Quasi_Orders","Ordinary_Differential_Equations","Inductive_Confidentiality","Stuttering_Equivalence","Separation_Algebra","Circus","Psi_Calculi","CCS","Pi_Calculus","Tycon","PCF","Heard_Of","Impossible_Geometry","Datatype_Order_Generator","Possibilistic_Noninterference","Bondy","Tarskis_Geometry","Open_Induction","Separation_Logic_Imperative_HOL","Sqrt_Babylonian","Kleene_Algebra","Rank_Nullity_Theorem","Ribbon_Proofs","Launchbury","Nominal2","Containers","Graph_Theory","ShortestPath","Sort_Encodings","Koenigsberg_Friendship","Lehmer","Pratt_Certificate","IEEE_Floating_Point","Native_Word","Automatic_Refinement","Decreasing-Diagrams","GoedelGod","FocusStreamsCaseStudies","Coinductive_Languages","Incompleteness","HereditarilyFinite","Tail_Recursive_Functions","CryptoBasedCompositionalProperties","Sturm_Sequences","Featherweight_OCL","KAT_and_DRA","Relation_Algebra","Secondary_Sylow","Regex_Equivalence","Real_Impl","Affine_Arithmetic","Selection_Heap_Sort","Random_Graph_Subgraph_Threshold","Partial_Function_MR","AWN","Probabilistic_Noninterference","GPU_Kernel_PL","Discrete_Summation","HyperCTL","Abstract_Completeness","Bounded_Deducibility_Security","SIFUM_Type_Systems","WHATandWHERE_Security","Strong_Security","ComponentDependencies","Regular_Algebras","Noninterference_CSP","Roy_Floyd_Warshall","Gabow_SCC","CAVA_Automata","CAVA_LTL_Modelchecker","LTL_to_GBA","Promela","Boolean_Expression_Checkers","MSO_Regex_Equivalence","Pop_Refinement","Network_Security_Policy_Verification","Amortized_Complexity","pGCL","CISC-Kernel","Show","Splay_Tree","Skew_Heap","VectorSpace","Special_Function_Bounds","Gauss_Jordan","Priority_Queue_Braun","Jordan_Hoelder","Cayley
_Hamilton","Sturm_Tarski","Imperative_Insertion_Sort","Certification_Monads","XML","RefinementReactive","Density_Compiler","Stream_Fusion_Code","Lifting_Definition_Option","AODV","UPF","UpDown_Scheme","Finite_Automata_HF","QR_Decomposition","Echelon_Form","Call_Arity","Deriving","Consensus_Refined","Trie","ConcurrentIMP","ConcurrentGC","Residuated_Lattices","Vickrey_Clarke_Groves","Probabilistic_System_Zoo","Formula_Derivatives","Dynamic_Tables","Noninterference_Ipurge_Unwinding","Noninterference_Generic_Unwinding","List_Interleaving","Multirelations","Derangements","Hermite","Akra_Bazzi","Landau_Symbols","Case_Labeling","Encodability_Process_Calculi","Rep_Fin_Groups","Noninterference_Inductive_Unwinding","Decreasing-Diagrams-II","Jordan_Normal_Form","LTL_to_DRA","Isabelle_Meta_Model","Parity_Game","Planarity_Certificates","TortoiseHare","Euler_Partition","Ergodic_Theory","Latin_Square","Card_Partitions","Applicative_Lifting","Algebraic_Numbers","Stern_Brocot","Liouville_Numbers","Triangle","Prime_Harmonic_Series","Descartes_Sign_Rule","Card_Number_Partitions","Matrix_Tensor","Knot_Theory","Polynomial_Factorization","Polynomial_Interpolation","Formal_SSA","List_Update","LTL","Cartan_FP","Timed_Automata","PropResPI","KAD","Noninterference_Sequential_Composition","ROBDD","CYK","No_FTL_observers","Groebner_Bases","Bell_Numbers_Spivey","SDS_Impossibility","Randomised_Social_Choice","MFMC_Countable","FLP","Perron_Frobenius","Incredible_Proof_Machine","Posix-Lexing","Card_Equiv_Relations","Tree_Decomposition","Word_Lib","Noninterference_Concurrent_Composition","Algebraic_VCs","Catalan_Numbers","Dependent_SIFUM_Type_Systems","Card_Multisets","Category3","Dependent_SIFUM_Refinement","IP_Addresses","Rewriting_Z","Resolution_FOL","Buildings","DFS_Framework","Pairing_Heap","Surprise_Paradox","Ptolemys_Theorem","Refine_Imperative_HOL","EdmondsKarp_Maxflow","InfPathElimination","Simple_Firewall","Routing","Stirling_Formula","Stone_Algebras","SuperCalc","Iptables_Semantics","Lambda_Free_RPOs","Allen_Calculus","Fisher_Yates","Lp","Chord_Segments","Berlekamp_Zassenhaus","Source_Coding_Theorem","SPARCv8","LOFT","Stable_Matching","Modal_Logics_for_NTS","Deep_Learning","Lambda_Free_KBOs","Nested_Multisets_Ordinals","Separata","Abs_Int_ITP2012","Complx","Paraconsistency","Proof_Strategy_Language","Twelvefold_Way","Concurrent_Ref_Alg","FOL_Harrison","Password_Authentication_Protocol","UPF_Firewall","E_Transcendental","Bertrands_Postulate","Minimal_SSA","Bernoulli","Key_Agreement_Strong_Adversaries","Stone_Relation_Algebras","Abstract_Soundness","Differential_Dynamic_Logic","Menger","Elliptic_Curves_Group_Law","Euler_MacLaurin","Quick_Sort_Cost","Comparison_Sort_Lower_Bound","Random_BSTs","Subresultants","Lazy_Case","Constructor_Funs","LocalLexing","Types_Tableaus_and_Goedels_God","MonoidalCategory","Game_Based_Crypto","Monomorphic_Monad","Probabilistic_While","Monad_Normalisation","CryptHOL","Floyd_Warshall","Security_Protocol_Refinement","Dict_Construction","Optics","Flow_Networks","Prpu_Maxflow","Buffons_Needle","PSemigroupsConvolution","Propositional_Proof_Systems","Stone_Kleene_Relation_Algebras","CRDT","Name_Carrying_Type_Inference","Minkowskis_Theorem","HOLCF-Prelude","Decl_Sem_Fun_PL","DynamicArchitectures","Stewart_Apollonius","LambdaMu","Orbit_Stabiliser","Root_Balanced_Tree","First_Welfare_Theorem","AnselmGod","PLM","Lowe_Ontological_Argument","Dirichlet_Series","Zeta_Function","Linear_Recurrences","Diophantine_Eqns_Lin_Hom","Winding_Number_Eval","Count_Complex_Roots","Buchi_Complementation","Transiti
on_Systems_and_Automata","Kuratowski_Closure_Complement","Hybrid_Multi_Lane_Spatial_Logic","IMAP-CRDT","Stochastic_Matrices","Knuth_Morris_Pratt","BNF_Operations","Dirichlet_L","Mason_Stothers","Median_Of_Medians_Selection","Falling_Factorial_Sum","Taylor_Models","Green","Gromov_Hyperbolicity","Ordered_Resolution_Prover","LLL_Basis_Reduction","Treaps","First_Order_Terms","Error_Function","LLL_Factorization","Hoare_Time","Architectural_Design_Patterns","CakeML","Weight_Balanced_Trees","Fishburn_Impossibility","BNF_CC","VerifyThis2018","WebAssembly","Modular_Assembly_Kit_Security","OpSets","Monad_Memo_DP","AxiomaticCategoryTheory","Irrationality_J_Hancl","Probabilistic_Timed_Automata","Hidden_Markov_Models","Optimal_BST","Partial_Order_Reduction","Projective_Geometry","Localization_Ring","Pell","Neumann_Morgenstern_Utility","DiscretePricing","Minsky_Machines","Simplex","Budan_Fourier","Quaternions","Octonions","Aggregation_Algebras","Prime_Number_Theorem","Signature_Groebner","Symmetric_Polynomials","Pi_Transcendental","Factored_Transition_System_Bounding","Randomised_BSTs","Lambda_Free_EPO","Smooth_Manifolds","Epistemic_Logic","GewirthPGCProof","Generic_Deriving","Matroids","Auto2_HOL","Functional_Ordered_Resolution_Prover","Graph_Saturation","Transformer_Semantics","Order_Lattice_Props","Quantales","Constructive_Cryptography","Auto2_Imperative_HOL","Concurrent_Revisions","Core_DOM","Store_Buffer_Reduction","Higher_Order_Terms","IMP2","Farkas","List_Inversions","UTP","Universal_Turing_Machine","Probabilistic_Prime_Tests","Kruskal","Prime_Distribution_Elementary","Safe_OCL","QHLProver","Transcendence_Series_Hancl_Rucki","Binding_Syntax_Theory","LTL_Master_Theorem","HOL-CSP","Multi_Party_Computation","LambdaAuth","KD_Tree","Differential_Game_Logic","IMP2_Binary_Heap","Groebner_Macaulay","Nullstellensatz","Linear_Inequalities","Priority_Search_Trees","Prim_Dijkstra_Simple","Complete_Non_Orders","MFOTL_Monitor","CakeML_Codegen","FOL_Seq_Calc1","Szpilrajn","TESL_Language","Stellar_Quorums","IMO2019","C2KA_DistributedSystems","Linear_Programming","Laplace_Transform","Adaptive_State_Counting","Jacobson_Basic_Algebra","Fourier","Hybrid_Systems_VCs","Generic_Join","Clean","Sigma_Commit_Crypto","Aristotles_Assertoric_Syllogistic","VerifyThis2019","Isabelle_C","ZFC_in_HOL","Interval_Arithmetic_Word32","Generalized_Counting_Sort","Gauss_Sums","Poincare_Disc","Complex_Geometry","Poincare_Bendixson","Hybrid_Logic","Zeta_3_Irrational","Bicategory","Skip_Lists","Closest_Pair_Points","Approximation_Algorithms","Mersenne_Primes","Subset_Boolean_Algebras","Arith_Prog_Rel_Primes","VeriComp","Goodstein_Lambda","Hello_World","Relational-Incorrectness-Logic","Furstenberg_Topology","WOOT_Strong_Eventual_Consistency","MFODL_Monitor_Optimized","Saturation_Framework","Sliding_Window_Algorithm"];
-var years_loc_articles = [ 2004 , , , , , , , , , , , , , ,2005 , , , , , , , ,2006 , , , , , , ,2007 , , , , , , , ,2008 , , , , , , , , , , , , , , ,2009 , , , , , , , , , , , ,2010 , , , , , , , , , , , , , , , , , , , , , ,2011 , , , , , , , , , , , , , , , , ,2012 , , , , , , , , , , , , , , , , , , , , , , , , ,2013 , , , , , , , , , , , , , , , , , , , , , , ,2014 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2015 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2016 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2017 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2018 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2019 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2020 , , , , , , , , , , , , , , ,];
-var loc_articles = [ "1825","839","1544","1096","1058","2419","44269","205","142","1994","209","1109","3792","506","1141","3769","17203","3119","6430","1145","447","2537","1275","1583","1838","12833","13114","2685","1228","3556","4269","9649","970","2847","1741","79734","4738","3396","2186","31192","10664","6723","30332","180","793","1047","14411","2080","254","2221","5958","3462","799","1540","684","6654","8","2627","27490","330","32412","5025","4380","210","9533","447","2380","3399","611","6311","2042","840","713","1024","5632","1427","4078","2230","6061","22617","1602","1587","3370","2449","2592","260","1620","16","2930","7805","6557","6381","992","125","10136","332","235","1828","1039","1784","4425","303","4461","11857","2834","8583","1043","408","2940","2613","38065","3243","1480","2611","3153","27588","2580","25275","2266","4107","7701","1245","260","5309","73","9729","719","6673","1511","4354","1248","1908","6215","4974","10067","7894","538","3830","4595","202","848","1781","5209","10327","1524","150","5292","706","10774","2248","1458","1958","3067","11485","1860","1190","1219","2175","1144","14861","2215","1964","166","10685","6420","572","590","1698","465","883","4133","2138","1403","2280","1959","2467","220","5432","4431","9390","3999","4463","406","5930","1829","12832","2973","9486","4560","931","645","104","2338","1653","9143","752","2113","875","1727","627","931","1201","1296","7880","1922","90","28055","2879","2796","1116","4863","5259","8842","1169","6178","527","1601","6194","1782","5327","1085","4103","952","2446","1089","1060","2362","468","2074","3761","2148","710","16080","8267","908","1063","21109","9679","8635","3142","9161","690","435","13711","478","898","2716","10063","1162","401","498","495","741","843","3622","4616","6228","4123","8166","12108","3178","518","17583","2876","2411","5496","2453","886","1162","17386","509","702","5036","9700","4287","5331","3813","656","329","1057","8313","3257","2593","553","8478","206","17950","8773","3315","398","2960","12833","9483","373","173","384","18976","2545","6119","3777","1017","1889","4338","9356","20034","4053","3420","319","3204","169","19540","541","14618","2648","7033","7590","3898","3244","4507","855","2289","5004","1349","276","4339","1475","3482","7115","9662","601","1722","852","2194","12220","4116","590","13558","1695","4484","1640","835","694","737","3346","105","68","10492","1127","8000","4135","4711","1200","378","9435","2078","14059","639","2015","3930","4869","398","1531","5554","5614","1991","4205","478","4121","3146","3471","88","480","1261","1877","2193","250","10669","854","7463","5301","3079","2808","8849","5261","2330","5861","945","6519","992","489","809","8891","3133","338","854","493","4585","9457","15962","6074","10249","1818","2288","785","3260","8438","3278","12949","592","841","3383","3642","11559","13548","3734","5742","530","1044","7674","1042","1259","5297","2754","1390","1622","2173","13358","805","10027","2667","541","1271","3955","5318","9770","2765","934","11924","1743","2355","7917","1490","449","685","1811","1132","3248","3578","1644","2983","2218","7221","4968","2767","16973","34365","3259","6022","1900","372","10334","16867","3018","3300","5304","4576","10486","1000","15843","4437","9487","5543","3300","1264","2973","805","10253","2606","5262","472","3365","1869","2599","13350","945","192","4455","527","713","782","2335","2134","9934","2089","3736","3105","2350","3195","3812","176","1726","8509","6605","4830","5723","4561","10314","14529","6402","4407","1907","51409","2450","3948","2
758","1699","3154","944","906","596","375","691","766","2564","11343","3387","744"];
+var no_articles = [14, 22, 29, 37, 52, 64, 86, 103, 128, 151, 208, 253, 326, 396, 455, 511, 530];
+var no_loc = [61000.0, 96800.0, 131600.0, 239100.0, 354000.0, 436100.0, 517300.0, 568300.0, 740600.0, 828300.0, 1039800.0, 1218100.0, 1582400.0, 1832000.0, 2105500.0, 2399800.0, 2491800.0 ];
+var no_authors = [14, 11, 6, 6, 10, 6, 24, 11, 17, 16, 36, 20, 63, 31, 28, 38, 13];
+var no_authors_series = [14, 25, 31, 37, 47, 53, 77, 88, 105, 121, 157, 177, 240, 271, 299, 337, 350];
+var all_articles = [ "MiniML","AVL-Trees","Functional-Automata","BinarySearchTree","Lazy-Lists-II","Topology","Group-Ring-Module","Depth-First-Search","Compiling-Exceptions-Correctly","Completeness","Ramsey-Infinite","Verified-Prover","Integration","FileRefinement","Category","RSAPSS","Jinja","JiveDataStoreModel","DiskPaxos","GenClock","FFT","Ordinal","Cauchy","ClockSynchInst","FeatherweightJava","CoreC++","Flyspeck-Tame","Abstract-Hoare-Logics","HotelKeyCards","FOL-Fitting","POPLmark-deBruijn","Valuation","SumSquares","Fermat3_4","MuchAdoAboutTwo","JinjaThreads","Program-Conflict-Analysis","LinearQuantifierElim","NormByEval","Simpl","BDD","Recursion-Theory-I","SATSolverVerification","FunWithFunctions","ArrowImpossibilityGS","VolpanoSmith","Slicing","Huffman","FunWithTilings","SenSocialChoice","SIFPL","BytecodeLogicJmlTypes","Stream-Fusion","FinFun","CofGroups","SequentInvertibility","Ordinals_and_Cardinals","WorkerWrapper","HRB-Slicing","Perfect-Number-Thm","Collections","Tree-Automata","Presburger-Automata","DPT-SAT-Solver","Coinductive","List-Index","InformationFlowSlicing","InformationFlowSlicing_Inter","Free-Boolean-Algebra","Locally-Nameless-Sigma","Regular-Sets","Robbins-Conjecture","DataRefinementIBP","GraphMarkingIBP","Abstract-Rewriting","Matrix","Category2","Free-Groups","Statecharts","Polynomials","Lam-ml-Normalization","Binomial-Queues","Binomial-Heaps","Finger-Trees","Shivers-CFA","Marriage","Lower_Semicontinuous","RIPEMD-160-SPARK","LightweightJava","List-Infinite","AutoFocus-Stream","Nat-Interval-Logic","Transitive-Closure","General-Triangle","KBPs","Max-Card-Matching","Gauss-Jordan-Elim-Fun","Myhill-Nerode","LatticeProperties","MonoBoolTranAlgebra","PseudoHoops","Efficient-Mergesort","TLA","Markov_Models","Dijkstra_Shortest_Path","Refine_Monadic","Girth_Chromatic","Transitive-Closure-II","Abortable_Linearizable_Modules","Well_Quasi_Orders","Ordinary_Differential_Equations","Inductive_Confidentiality","Stuttering_Equivalence","Separation_Algebra","Circus","Psi_Calculi","CCS","Pi_Calculus","Tycon","PCF","Heard_Of","Impossible_Geometry","Datatype_Order_Generator","Possibilistic_Noninterference","Bondy","Tarskis_Geometry","Open_Induction","Separation_Logic_Imperative_HOL","Sqrt_Babylonian","Kleene_Algebra","Rank_Nullity_Theorem","Ribbon_Proofs","Launchbury","Nominal2","Containers","Graph_Theory","ShortestPath","Sort_Encodings","Koenigsberg_Friendship","Lehmer","Pratt_Certificate","IEEE_Floating_Point","Native_Word","Automatic_Refinement","Decreasing-Diagrams","GoedelGod","FocusStreamsCaseStudies","Coinductive_Languages","Incompleteness","HereditarilyFinite","Tail_Recursive_Functions","CryptoBasedCompositionalProperties","Sturm_Sequences","Featherweight_OCL","KAT_and_DRA","Relation_Algebra","Secondary_Sylow","Regex_Equivalence","Real_Impl","Affine_Arithmetic","Selection_Heap_Sort","Random_Graph_Subgraph_Threshold","Partial_Function_MR","AWN","Probabilistic_Noninterference","GPU_Kernel_PL","Discrete_Summation","HyperCTL","Abstract_Completeness","Bounded_Deducibility_Security","SIFUM_Type_Systems","WHATandWHERE_Security","Strong_Security","ComponentDependencies","Regular_Algebras","Noninterference_CSP","Roy_Floyd_Warshall","Gabow_SCC","CAVA_Automata","CAVA_LTL_Modelchecker","LTL_to_GBA","Promela","Boolean_Expression_Checkers","MSO_Regex_Equivalence","Pop_Refinement","Network_Security_Policy_Verification","Amortized_Complexity","pGCL","CISC-Kernel","Show","Splay_Tree","Skew_Heap","VectorSpace","Special_Function_Bounds","Gauss_Jordan","Priority_Queue_Braun","Jordan_Hoelder","Cayley
_Hamilton","Sturm_Tarski","Imperative_Insertion_Sort","Certification_Monads","XML","RefinementReactive","Density_Compiler","Stream_Fusion_Code","Lifting_Definition_Option","AODV","UPF","UpDown_Scheme","Finite_Automata_HF","QR_Decomposition","Echelon_Form","Call_Arity","Deriving","Consensus_Refined","Trie","ConcurrentIMP","ConcurrentGC","Residuated_Lattices","Vickrey_Clarke_Groves","Probabilistic_System_Zoo","Formula_Derivatives","Dynamic_Tables","Noninterference_Ipurge_Unwinding","Noninterference_Generic_Unwinding","List_Interleaving","Multirelations","Derangements","Hermite","Akra_Bazzi","Landau_Symbols","Case_Labeling","Encodability_Process_Calculi","Rep_Fin_Groups","Noninterference_Inductive_Unwinding","Decreasing-Diagrams-II","Jordan_Normal_Form","LTL_to_DRA","Isabelle_Meta_Model","Parity_Game","Planarity_Certificates","TortoiseHare","Euler_Partition","Ergodic_Theory","Latin_Square","Card_Partitions","Applicative_Lifting","Algebraic_Numbers","Stern_Brocot","Liouville_Numbers","Triangle","Prime_Harmonic_Series","Descartes_Sign_Rule","Card_Number_Partitions","Matrix_Tensor","Knot_Theory","Polynomial_Factorization","Polynomial_Interpolation","Formal_SSA","List_Update","LTL","Cartan_FP","Timed_Automata","PropResPI","KAD","Noninterference_Sequential_Composition","ROBDD","CYK","No_FTL_observers","Groebner_Bases","Bell_Numbers_Spivey","SDS_Impossibility","Randomised_Social_Choice","MFMC_Countable","FLP","Perron_Frobenius","Incredible_Proof_Machine","Posix-Lexing","Card_Equiv_Relations","Tree_Decomposition","Word_Lib","Noninterference_Concurrent_Composition","Algebraic_VCs","Catalan_Numbers","Dependent_SIFUM_Type_Systems","Card_Multisets","Category3","Dependent_SIFUM_Refinement","IP_Addresses","Rewriting_Z","Resolution_FOL","Buildings","DFS_Framework","Pairing_Heap","Surprise_Paradox","Ptolemys_Theorem","Refine_Imperative_HOL","EdmondsKarp_Maxflow","InfPathElimination","Simple_Firewall","Routing","Stirling_Formula","Stone_Algebras","SuperCalc","Iptables_Semantics","Lambda_Free_RPOs","Allen_Calculus","Fisher_Yates","Lp","Chord_Segments","Berlekamp_Zassenhaus","Source_Coding_Theorem","SPARCv8","LOFT","Stable_Matching","Modal_Logics_for_NTS","Deep_Learning","Lambda_Free_KBOs","Nested_Multisets_Ordinals","Separata","Abs_Int_ITP2012","Complx","Paraconsistency","Proof_Strategy_Language","Twelvefold_Way","Concurrent_Ref_Alg","FOL_Harrison","Password_Authentication_Protocol","UPF_Firewall","E_Transcendental","Bertrands_Postulate","Minimal_SSA","Bernoulli","Key_Agreement_Strong_Adversaries","Stone_Relation_Algebras","Abstract_Soundness","Differential_Dynamic_Logic","Menger","Elliptic_Curves_Group_Law","Euler_MacLaurin","Quick_Sort_Cost","Comparison_Sort_Lower_Bound","Random_BSTs","Subresultants","Lazy_Case","Constructor_Funs","LocalLexing","Types_Tableaus_and_Goedels_God","MonoidalCategory","Game_Based_Crypto","Monomorphic_Monad","Probabilistic_While","Monad_Normalisation","CryptHOL","Floyd_Warshall","Security_Protocol_Refinement","Dict_Construction","Optics","Flow_Networks","Prpu_Maxflow","Buffons_Needle","PSemigroupsConvolution","Propositional_Proof_Systems","Stone_Kleene_Relation_Algebras","CRDT","Name_Carrying_Type_Inference","Minkowskis_Theorem","HOLCF-Prelude","Decl_Sem_Fun_PL","DynamicArchitectures","Stewart_Apollonius","LambdaMu","Orbit_Stabiliser","Root_Balanced_Tree","First_Welfare_Theorem","AnselmGod","PLM","Lowe_Ontological_Argument","Dirichlet_Series","Zeta_Function","Linear_Recurrences","Diophantine_Eqns_Lin_Hom","Winding_Number_Eval","Count_Complex_Roots","Buchi_Complementation","Transiti
on_Systems_and_Automata","Kuratowski_Closure_Complement","Hybrid_Multi_Lane_Spatial_Logic","IMAP-CRDT","Stochastic_Matrices","Knuth_Morris_Pratt","BNF_Operations","Dirichlet_L","Mason_Stothers","Median_Of_Medians_Selection","Falling_Factorial_Sum","Taylor_Models","Green","Gromov_Hyperbolicity","Ordered_Resolution_Prover","LLL_Basis_Reduction","Treaps","First_Order_Terms","Error_Function","LLL_Factorization","Hoare_Time","Architectural_Design_Patterns","CakeML","Weight_Balanced_Trees","Fishburn_Impossibility","BNF_CC","VerifyThis2018","WebAssembly","Modular_Assembly_Kit_Security","OpSets","Monad_Memo_DP","AxiomaticCategoryTheory","Irrationality_J_Hancl","Probabilistic_Timed_Automata","Hidden_Markov_Models","Optimal_BST","Partial_Order_Reduction","Projective_Geometry","Localization_Ring","Pell","Neumann_Morgenstern_Utility","DiscretePricing","Minsky_Machines","Simplex","Budan_Fourier","Quaternions","Octonions","Aggregation_Algebras","Prime_Number_Theorem","Signature_Groebner","Symmetric_Polynomials","Pi_Transcendental","Factored_Transition_System_Bounding","Randomised_BSTs","Lambda_Free_EPO","Smooth_Manifolds","Epistemic_Logic","GewirthPGCProof","Generic_Deriving","Matroids","Auto2_HOL","Functional_Ordered_Resolution_Prover","Graph_Saturation","Transformer_Semantics","Order_Lattice_Props","Quantales","Constructive_Cryptography","Auto2_Imperative_HOL","Concurrent_Revisions","Core_DOM","Store_Buffer_Reduction","Higher_Order_Terms","IMP2","Farkas","List_Inversions","UTP","Universal_Turing_Machine","Probabilistic_Prime_Tests","Kruskal","Prime_Distribution_Elementary","Safe_OCL","QHLProver","Transcendence_Series_Hancl_Rucki","Binding_Syntax_Theory","LTL_Master_Theorem","HOL-CSP","Multi_Party_Computation","LambdaAuth","KD_Tree","Differential_Game_Logic","IMP2_Binary_Heap","Groebner_Macaulay","Nullstellensatz","Linear_Inequalities","Priority_Search_Trees","Prim_Dijkstra_Simple","Complete_Non_Orders","MFOTL_Monitor","CakeML_Codegen","FOL_Seq_Calc1","Szpilrajn","TESL_Language","Stellar_Quorums","IMO2019","C2KA_DistributedSystems","Linear_Programming","Laplace_Transform","Adaptive_State_Counting","Jacobson_Basic_Algebra","Fourier","Hybrid_Systems_VCs","Generic_Join","Clean","Sigma_Commit_Crypto","Aristotles_Assertoric_Syllogistic","VerifyThis2019","Isabelle_C","ZFC_in_HOL","Interval_Arithmetic_Word32","Generalized_Counting_Sort","Gauss_Sums","Poincare_Disc","Complex_Geometry","Poincare_Bendixson","Hybrid_Logic","Zeta_3_Irrational","Bicategory","Skip_Lists","Closest_Pair_Points","Approximation_Algorithms","Mersenne_Primes","Subset_Boolean_Algebras","Arith_Prog_Rel_Primes","VeriComp","Goodstein_Lambda","Hello_World","Relational-Incorrectness-Logic","Furstenberg_Topology","WOOT_Strong_Eventual_Consistency","Lucas_Theorem","MFODL_Monitor_Optimized","Saturation_Framework","Sliding_Window_Algorithm","ADS_Functor","Attack_Trees"];
+var years_loc_articles = [ 2004 , , , , , , , , , , , , , ,2005 , , , , , , , ,2006 , , , , , , ,2007 , , , , , , , ,2008 , , , , , , , , , , , , , , ,2009 , , , , , , , , , , , ,2010 , , , , , , , , , , , , , , , , , , , , , ,2011 , , , , , , , , , , , , , , , , ,2012 , , , , , , , , , , , , , , , , , , , , , , , , ,2013 , , , , , , , , , , , , , , , , , , , , , , ,2014 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2015 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2016 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2017 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2018 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2019 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,2020 , , , , , , , , , , , , , , , , , ,];
+var loc_articles = [ "1825","839","1544","1096","1058","2419","44269","205","142","1994","209","1109","3792","506","1141","3769","17203","3119","6430","1145","447","2537","1275","1583","1838","12833","13313","2685","1228","3556","4269","9649","970","2847","1741","79734","4738","3396","2186","31192","10664","6723","30332","180","793","1047","14413","2080","254","2221","5958","3462","799","1540","684","6654","8","2627","27490","330","32412","5025","4380","210","9533","447","2380","3399","611","6311","2042","840","713","1024","5632","1427","4078","2230","6061","22617","1602","1587","3370","2449","2592","260","1620","16","2930","7805","6557","6381","992","125","10136","332","235","1828","1039","1784","4425","303","4461","11857","2834","8583","1043","408","2940","2613","38065","3243","1480","2611","3153","27588","2580","25275","2266","4107","7701","1245","260","5309","73","9729","719","6673","1511","4354","1248","1908","6215","4974","10067","7894","538","3830","4595","202","848","1781","5209","10327","1524","150","5292","706","10774","2248","1458","1958","3067","11485","1860","1190","1219","2175","1144","14861","2215","1964","166","10685","6420","572","590","1698","465","883","4133","2138","1403","2280","1959","2467","220","5432","4431","9390","3999","4463","406","5930","1829","12832","4337","9486","4560","931","645","104","2338","1653","9143","752","2113","875","1727","627","931","1201","1296","7880","1922","90","28055","2879","2796","1116","4863","5259","8842","1169","6178","527","1601","6194","1782","5327","1085","4103","952","2446","1089","1060","2362","468","2074","3761","2177","710","16080","8267","908","1063","21109","9679","8653","3142","9161","690","435","13711","478","898","2716","10063","1162","401","498","495","741","843","3622","4616","6228","4123","8166","12108","3178","518","17583","2876","2411","5496","2453","886","1162","17386","509","702","5036","9700","4287","5331","3813","656","329","1057","8313","3257","2593","553","8478","206","17950","8773","3315","398","2960","12833","9483","373","173","384","18976","2545","6119","3777","1017","2608","4338","9356","20034","4053","3420","319","3204","169","19540","541","14618","2648","7033","7590","3898","3244","4546","855","2289","5004","1349","276","4339","1475","3482","7115","9662","601","1722","852","2194","12220","4116","590","13558","1695","4484","1640","835","694","737","3346","105","68","10492","1127","8000","4135","4711","1200","378","9435","2078","14059","639","2015","3930","4869","398","1531","5554","5614","1991","4205","478","4121","3146","3471","88","480","1261","1877","2193","250","10669","854","7463","5301","3079","2808","8849","5261","2327","6135","945","6519","992","489","809","8891","3133","338","854","493","4585","9457","15962","6179","10249","1818","2288","785","3260","8438","3278","12949","592","841","3383","3642","11559","13548","3734","5742","530","1044","7674","1042","1259","5297","2754","1390","1622","2173","13358","805","10027","2667","541","1271","3955","5318","9770","2765","934","11924","1743","2355","7917","1490","449","685","1811","1132","3178","3578","1644","2983","2218","7221","4968","2767","16973","34365","3259","6022","1900","372","10334","16867","3018","3300","5304","4576","10486","1000","15843","4437","9487","5543","3300","1264","2973","805","10253","2606","5262","472","3365","1869","2599","13350","945","192","4455","527","713","782","2335","2134","9934","2089","3736","3105","2350","3195","3812","176","1726","8509","6605","4830","5723","4561","10314","14529","6402","4407","1907","51409","2450","3936","2
758","1699","3154","944","906","596","375","691","766","2564","332","11343","3076","744","2351","1939"];
</script>
<h4>Growth in number of articles:</h4>
<script src="Chart.js"></script>
<div class="chart">
<canvas id="NumberOfArticles" width="400" height="400"></canvas>
</div>
<script>
var ctx = document.getElementById("NumberOfArticles");
var myChart = new Chart(ctx, {
type: 'bar',
data: {
labels: years,
datasets: [{
label: 'size of the AFP in # of articles',
data: no_articles,
backgroundColor: "rgba(46, 45, 78, 1)"
}],
},
options: {
responsive: true,
maintainAspectRatio: true,
scales: {
yAxes: [{
ticks: {
beginAtZero:true
}
}]
},
}
});
</script>
<h4>Growth in lines of code:</h4>
<div class="chart">
<canvas id="NumberOfLoc" width="400" height="400"></canvas>
</div>
<script>
var ctx = document.getElementById("NumberOfLoc");
var myChart = new Chart(ctx, {
type: 'bar',
data: {
labels: years,
datasets: [{
label: 'size of the AFP in lines of code',
data: no_loc,
backgroundColor: "rgba(101, 99, 136, 1)"
}],
},
options: {
responsive: true,
maintainAspectRatio: true,
scales: {
yAxes: [{
ticks: {
beginAtZero:true
}
}]
},
}
});
</script>
<h4>Growth in number of authors:</h4>
<div class="chart">
<canvas id="NumberOfAuthors" width="400" height="400"></canvas>
</div>
<script>
var ctx = document.getElementById("NumberOfAuthors");
var myChart = new Chart(ctx, {
type: 'bar',
data: {
labels: years,
datasets: [{
label: 'new authors per year',
data: no_authors,
backgroundColor: "rgba(101, 99, 136, 1)"
},
{
label: 'number of authors contributing (cumulative)',
data: no_authors_series,
backgroundColor: "rgba(0, 15, 48, 1)"
}],
},
options: {
responsive: true,
maintainAspectRatio: true,
scales: {
yAxes: [{
ticks: {
beginAtZero:true
}
}]
},
}
});
</script>
<h4>Size of articles:</h4>
<div style="width: 800px" class="chart">
<canvas id="LocArticles" width="800" height="400"></canvas>
</div>
<script>
var ctx = document.getElementById("LocArticles");
var myChart = new Chart(ctx, {
type: 'bar',
data: {
labels: years_loc_articles,
datasets: [{
label: 'loc per article',
data: loc_articles,
backgroundColor: "rgba(101, 99, 136, 1)"
}]
},
options: {
responsive: true,
maintainAspectRatio: true,
scales: {
xAxes: [{
categoryPercentage: 1,
barPercentage: 0.9,
ticks: {
autoSkip: false
}
}],
yAxes: [{
ticks: {
beginAtZero:true
}
}]
},
tooltips: {
callbacks: {
title: function(tooltipItem, data) {
return all_articles[tooltipItem[0].index];
}
}
}
}
});
</script>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
<script src="Chart.js"></script>
</body>
</html>
\ No newline at end of file
diff --git a/web/submitting.html b/web/submitting.html
--- a/web/submitting.html
+++ b/web/submitting.html
@@ -1,153 +1,155 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Archive of Formal Proofs</title>
<link rel="stylesheet" type="text/css" href="front.css">
<link rel="icon" href="images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="rss.xml">
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1><font class="first">S</font>ubmission
<font class="first">G</font>uidelines
</h1>
<p>&nbsp;</p>
<table width="80%" class="descr">
<tbody>
<tr><td>
<p>Please send your submission
<a href="https://ci.isabelle.systems/afp-submission/">via this web page</a>.
</p>
<p><strong>The submission must follow the Isabelle style rules below.</strong>
For additional guidelines on Isabelle proofs, also see this <a href="http://proofcraft.org/blog/isabelle-style.html">guide</a> (feel free to follow all of its recommendations; only the rules below are mandatory).
<strong>Technical details about the submission process and the format of the submission are explained on the submission site.</strong></p>
<ul>
<li>No use of the commands <code>sorry</code> or <code>back</code>.</li>
<li>Instantiations must not use Isabelle-generated names such as
<code>xa</code> &mdash; use Isar, the <code>subgoal</code> command
or <code>rename_tac</code> to avoid such names.</li>
<li>No use of the command <code>smt_oracle</code>.</li>
<li>If your theories contain calls to <code>nitpick</code>,
<code>quickcheck</code>, or <code>nunchaku</code>, those calls must
include the <code>expect</code> parameter. Alternatively, the
<code>expect</code> parameter must be set globally via, e.g.,
<code>nitpick_params</code>.</li>
<li><code>apply</code> scripts should be indented by subgoal as in
the Isabelle distribution. If an <code>apply</code> command is
applied to a state with <code>n+1</code> subgoals, it must be
indented by <code>n</code> spaces relative to the first
<code>apply</code> in the sequence (see the example below this list).</li>
<li>Only named lemmas should carry attributes such as <code>[simp]</code>.</li>
<li>We prefer structured Isar proofs over apply style, but do not
mandate them.</li>
<li>If there are proof steps that take significant time, i.e. longer
than roughly one minute, please add a short comment to that step so
that maintainers know what to expect.</li>
<li>The entry must contain a ROOT file with one session that has the
name of the entry. We strongly encourage precisely one session per
entry, but exceptions can be made. All sessions must be in group
(AFP), and all theory files of the submission must be contained in
at least one session. See also the example <a href="https://bitbucket.org/isa-afp/afp-2018/src/default/thys/Example-Submission/ROOT">ROOT</a> file
in the <a href="entries/Example-Submission.html">Example submission</a>.
</li>
<li>The entry should cite all sources that the theories are based on,
for example textbooks or research articles containing informal versions of the proofs.</li>
</ul>
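<p>To illustrate the indentation rule above, here is a minimal, hypothetical
proof snippet (theory name and lemma are made up for illustration only):
after <code>rule conjI</code> leaves two subgoals, the following
<code>apply</code> command is indented by one extra space until only one
subgoal remains.</p>
<pre>
theory Example_Style
  imports Main
begin

lemma "A ∧ B ⟶ B ∧ A"
  apply (rule impI)
  apply (erule conjE)
  apply (rule conjI)
   apply assumption
  apply assumption
  done

end
</pre>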
+<p>Your submission must contain an abstract to be displayed on the web site &ndash; usually this will be the same as the abstract of your proof document in the <tt>root.tex</tt> file. You can use LaTeX formulae in this web site abstract, either inline formulae in the form <tt>$a+b$</tt> or <tt>\(a+b\)</tt> or display formulae in the form <tt>$$a + b$$</tt> or <tt>\[a + b\]</tt>. Other occurrences of these characters must be escaped (e.g. <tt>\$</tt> or <tt>\\(</tt>). Note that LaTeX in the title of an entry is <em>not</em> allowed. Most basic LaTeX functionality should be supported. For details on what parts of LaTeX are supported, see the <a href="https://docs.mathjax.org/en/v2.7-latest/tex.html">MathJax documentation.</a></p>
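+<p>As a purely hypothetical example, a web-site abstract mixing inline and display formulae with an escaped dollar sign could read:</p>
+<pre>
+We formalise the binomial identity \(\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}\)
+and derive the closed form
+$$\sum_{k=0}^{n} \binom{n}{k} = 2^n.$$
+Checking the proofs costs less than \$1 of compute time.
+</pre>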
+
<p>It is possible and encouraged to build on other archive entries
in your submission. There is a standardised way to
<a href="using.html">refer to other AFP entries</a> in your
theories.</p>
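<p>As a rough sketch (the entry and theory names here are hypothetical; the
page linked above has the authoritative instructions), a submission
<code>Foo</code> building on an AFP entry <code>Other_Entry</code> would
typically list that entry under <code>sessions</code> in its ROOT file and
import its theories by session-qualified name:</p>
<pre>
(* ROOT *)
session Foo (AFP) = HOL +
  sessions
    Other_Entry
  theories
    Foo

(* Foo.thy *)
theory Foo
  imports Other_Entry.Some_Theory
begin

end
</pre>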
<p>Your submission will be refereed and you will receive notification
as soon as possible. If accepted, you must agree to maintain your
archive entry or nominate someone else to maintain it. The Isabelle
development team will assist with maintenance, but it does not have the
resources to fully maintain the complete archive.</p>
<p>If you have questions regarding your submission, please email <a
href="&#109;&#97;&#105;&#108;&#116;&#111;:&#97;&#102;&#112;-&#115;&#117;&#98;&#109;&#105;&#116;&#64;&#105;&#110;&#46;&#116;&#117;&#109;&#46;&#100;&#101;">&#97;&#102;&#112;-&#115;&#117;&#98;&#109;&#105;&#116;&#64;&#105;&#110;&#46;&#116;&#117;&#109;&#46;&#100;&#101;</a>.
If you need help with Isabelle, please use the
<a href="mailto:isabelle-users@cl.cam.ac.uk">isabelle-users@cl.cam.ac.uk</a>
mailing list. It is always a good idea to <a
href="https://lists.cam.ac.uk/mailman/listinfo/cl-isabelle-users">subscribe</a>.</p>
</td></tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
</body>
</html>
\ No newline at end of file
diff --git a/web/topics.html b/web/topics.html
--- a/web/topics.html
+++ b/web/topics.html
@@ -1,871 +1,871 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Archive of Formal Proofs</title>
<link rel="stylesheet" type="text/css" href="front.css">
<link rel="icon" href="images/favicon.ico" type="image/icon">
<link rel="alternate" type="application/rss+xml" title="RSS" href="rss.xml">
</head>
<body class="mathjax_ignore">
<table width="100%">
<tbody>
<tr>
<!-- Navigation -->
<td width="20%" align="center" valign="top">
<p>&nbsp;</p>
<a href="https://www.isa-afp.org/">
<img src="images/isabelle.png" width="100" height="88" border=0>
</a>
<p>&nbsp;</p>
<p>&nbsp;</p>
<table class="nav" width="80%">
<tr>
<td class="nav" width="100%"><a href="index.html">Home</a></td>
</tr>
<tr>
<td class="nav"><a href="about.html">About</a></td>
</tr>
<tr>
<td class="nav"><a href="submitting.html">Submission</a></td>
</tr>
<tr>
<td class="nav"><a href="updating.html">Updating Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="using.html">Using Entries</a></td>
</tr>
<tr>
<td class="nav"><a href="search.html">Search</a></td>
</tr>
<tr>
<td class="nav"><a href="statistics.html">Statistics</a></td>
</tr>
<tr>
<td class="nav"><a href="topics.html">Index</a></td>
</tr>
<tr>
<td class="nav"><a href="download.html">Download</a></td>
</tr>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td>
<!-- Content -->
<td width="80%" valign="top">
<div align="center">
<p>&nbsp;</p>
<h1><font class="first">I</font>ndex by <font class="first">T</font>opic
</h1>
<p>&nbsp;</p>
<table width="80%" class="descr">
<tbody>
<tr>
<td>
- <h2>Computer Science</h2>
+ <h2>Computer science</h2>
<div class="list">
</div>
- <h3>Automata and Formal Languages</h3>
+ <h3>Automata and formal languages</h3>
<div class="list">
<a href="entries/Partial_Order_Reduction.html">Partial_Order_Reduction</a> &nbsp;
<a href="entries/C2KA_DistributedSystems.html">C2KA_DistributedSystems</a> &nbsp;
<a href="entries/Posix-Lexing.html">Posix-Lexing</a> &nbsp;
<a href="entries/LocalLexing.html">LocalLexing</a> &nbsp;
<a href="entries/KBPs.html">KBPs</a> &nbsp;
<a href="entries/Regular-Sets.html">Regular-Sets</a> &nbsp;
<a href="entries/Regex_Equivalence.html">Regex_Equivalence</a> &nbsp;
<a href="entries/MSO_Regex_Equivalence.html">MSO_Regex_Equivalence</a> &nbsp;
<a href="entries/Formula_Derivatives.html">Formula_Derivatives</a> &nbsp;
<a href="entries/Myhill-Nerode.html">Myhill-Nerode</a> &nbsp;
<a href="entries/Universal_Turing_Machine.html">Universal_Turing_Machine</a> &nbsp;
<a href="entries/CYK.html">CYK</a> &nbsp;
<a href="entries/Presburger-Automata.html">Presburger-Automata</a> &nbsp;
<a href="entries/Functional-Automata.html">Functional-Automata</a> &nbsp;
<a href="entries/Statecharts.html">Statecharts</a> &nbsp;
<a href="entries/Stuttering_Equivalence.html">Stuttering_Equivalence</a> &nbsp;
<a href="entries/Coinductive_Languages.html">Coinductive_Languages</a> &nbsp;
<a href="entries/Tree-Automata.html">Tree-Automata</a> &nbsp;
<a href="entries/Kleene_Algebra.html">Kleene_Algebra</a> &nbsp;
<a href="entries/KAT_and_DRA.html">KAT_and_DRA</a> &nbsp;
<a href="entries/KAD.html">KAD</a> &nbsp;
<a href="entries/Regular_Algebras.html">Regular_Algebras</a> &nbsp;
<a href="entries/Markov_Models.html">Markov_Models</a> &nbsp;
<a href="entries/Probabilistic_System_Zoo.html">Probabilistic_System_Zoo</a> &nbsp;
<a href="entries/CAVA_Automata.html">CAVA_Automata</a> &nbsp;
<a href="entries/LTL.html">LTL</a> &nbsp;
<a href="entries/LTL_to_GBA.html">LTL_to_GBA</a> &nbsp;
<a href="entries/CAVA_LTL_Modelchecker.html">CAVA_LTL_Modelchecker</a> &nbsp;
<a href="entries/Probabilistic_Timed_Automata.html">Probabilistic_Timed_Automata</a> &nbsp;
<a href="entries/Finite_Automata_HF.html">Finite_Automata_HF</a> &nbsp;
<a href="entries/LTL_to_DRA.html">LTL_to_DRA</a> &nbsp;
<a href="entries/Timed_Automata.html">Timed_Automata</a> &nbsp;
<a href="entries/Stochastic_Matrices.html">Stochastic_Matrices</a> &nbsp;
<a href="entries/Buchi_Complementation.html">Buchi_Complementation</a> &nbsp;
<a href="entries/Transition_Systems_and_Automata.html">Transition_Systems_and_Automata</a> &nbsp;
<a href="entries/Factored_Transition_System_Bounding.html">Factored_Transition_System_Bounding</a> &nbsp;
<a href="entries/LTL_Master_Theorem.html">LTL_Master_Theorem</a> &nbsp;
<a href="entries/MFOTL_Monitor.html">MFOTL_Monitor</a> &nbsp;
<a href="entries/Adaptive_State_Counting.html">Adaptive_State_Counting</a> &nbsp;
<a href="entries/MFODL_Monitor_Optimized.html">MFODL_Monitor_Optimized</a> &nbsp;
</div>
<h3>Algorithms</h3>
<div class="list">
<a href="entries/Knuth_Morris_Pratt.html">Knuth_Morris_Pratt</a> &nbsp;
<a href="entries/Probabilistic_While.html">Probabilistic_While</a> &nbsp;
<a href="entries/Comparison_Sort_Lower_Bound.html">Comparison_Sort_Lower_Bound</a> &nbsp;
<a href="entries/Quick_Sort_Cost.html">Quick_Sort_Cost</a> &nbsp;
<a href="entries/TortoiseHare.html">TortoiseHare</a> &nbsp;
<a href="entries/Selection_Heap_Sort.html">Selection_Heap_Sort</a> &nbsp;
<a href="entries/VerifyThis2018.html">VerifyThis2018</a> &nbsp;
<a href="entries/CYK.html">CYK</a> &nbsp;
<a href="entries/Boolean_Expression_Checkers.html">Boolean_Expression_Checkers</a> &nbsp;
<a href="entries/Efficient-Mergesort.html">Efficient-Mergesort</a> &nbsp;
<a href="entries/SATSolverVerification.html">SATSolverVerification</a> &nbsp;
<a href="entries/MuchAdoAboutTwo.html">MuchAdoAboutTwo</a> &nbsp;
<a href="entries/First_Order_Terms.html">First_Order_Terms</a> &nbsp;
<a href="entries/Monad_Memo_DP.html">Monad_Memo_DP</a> &nbsp;
<a href="entries/Hidden_Markov_Models.html">Hidden_Markov_Models</a> &nbsp;
<a href="entries/Imperative_Insertion_Sort.html">Imperative_Insertion_Sort</a> &nbsp;
<a href="entries/Formal_SSA.html">Formal_SSA</a> &nbsp;
<a href="entries/ROBDD.html">ROBDD</a> &nbsp;
<a href="entries/Median_Of_Medians_Selection.html">Median_Of_Medians_Selection</a> &nbsp;
<a href="entries/Fisher_Yates.html">Fisher_Yates</a> &nbsp;
<a href="entries/Optimal_BST.html">Optimal_BST</a> &nbsp;
<a href="entries/IMP2.html">IMP2</a> &nbsp;
<a href="entries/Auto2_Imperative_HOL.html">Auto2_Imperative_HOL</a> &nbsp;
<a href="entries/List_Inversions.html">List_Inversions</a> &nbsp;
<a href="entries/IMP2_Binary_Heap.html">IMP2_Binary_Heap</a> &nbsp;
<a href="entries/MFOTL_Monitor.html">MFOTL_Monitor</a> &nbsp;
<a href="entries/Adaptive_State_Counting.html">Adaptive_State_Counting</a> &nbsp;
<a href="entries/Generic_Join.html">Generic_Join</a> &nbsp;
<a href="entries/VerifyThis2019.html">VerifyThis2019</a> &nbsp;
<a href="entries/Generalized_Counting_Sort.html">Generalized_Counting_Sort</a> &nbsp;
<a href="entries/MFODL_Monitor_Optimized.html">MFODL_Monitor_Optimized</a> &nbsp;
<a href="entries/Sliding_Window_Algorithm.html">Sliding_Window_Algorithm</a> &nbsp;
<strong>Graph:</strong>
<a href="entries/DFS_Framework.html">DFS_Framework</a> &nbsp;
<a href="entries/Prpu_Maxflow.html">Prpu_Maxflow</a> &nbsp;
<a href="entries/Floyd_Warshall.html">Floyd_Warshall</a> &nbsp;
<a href="entries/Roy_Floyd_Warshall.html">Roy_Floyd_Warshall</a> &nbsp;
<a href="entries/Dijkstra_Shortest_Path.html">Dijkstra_Shortest_Path</a> &nbsp;
<a href="entries/EdmondsKarp_Maxflow.html">EdmondsKarp_Maxflow</a> &nbsp;
<a href="entries/Depth-First-Search.html">Depth-First-Search</a> &nbsp;
<a href="entries/GraphMarkingIBP.html">GraphMarkingIBP</a> &nbsp;
<a href="entries/Transitive-Closure.html">Transitive-Closure</a> &nbsp;
<a href="entries/Transitive-Closure-II.html">Transitive-Closure-II</a> &nbsp;
<a href="entries/Gabow_SCC.html">Gabow_SCC</a> &nbsp;
<a href="entries/Kruskal.html">Kruskal</a> &nbsp;
<a href="entries/Prim_Dijkstra_Simple.html">Prim_Dijkstra_Simple</a> &nbsp;
<strong>Distributed:</strong>
<a href="entries/DiskPaxos.html">DiskPaxos</a> &nbsp;
<a href="entries/GenClock.html">GenClock</a> &nbsp;
<a href="entries/ClockSynchInst.html">ClockSynchInst</a> &nbsp;
<a href="entries/Heard_Of.html">Heard_Of</a> &nbsp;
<a href="entries/Consensus_Refined.html">Consensus_Refined</a> &nbsp;
<a href="entries/Abortable_Linearizable_Modules.html">Abortable_Linearizable_Modules</a> &nbsp;
<a href="entries/IMAP-CRDT.html">IMAP-CRDT</a> &nbsp;
<a href="entries/CRDT.html">CRDT</a> &nbsp;
<a href="entries/OpSets.html">OpSets</a> &nbsp;
<a href="entries/Stellar_Quorums.html">Stellar_Quorums</a> &nbsp;
<a href="entries/WOOT_Strong_Eventual_Consistency.html">WOOT_Strong_Eventual_Consistency</a> &nbsp;
<strong>Concurrent:</strong>
<a href="entries/ConcurrentGC.html">ConcurrentGC</a> &nbsp;
<strong>Online:</strong>
<a href="entries/List_Update.html">List_Update</a> &nbsp;
<strong>Geometry:</strong>
<a href="entries/Closest_Pair_Points.html">Closest_Pair_Points</a> &nbsp;
<strong>Approximation:</strong>
<a href="entries/Approximation_Algorithms.html">Approximation_Algorithms</a> &nbsp;
<strong>Mathematical:</strong>
<a href="entries/FFT.html">FFT</a> &nbsp;
<a href="entries/Gauss-Jordan-Elim-Fun.html">Gauss-Jordan-Elim-Fun</a> &nbsp;
<a href="entries/UpDown_Scheme.html">UpDown_Scheme</a> &nbsp;
<a href="entries/Polynomials.html">Polynomials</a> &nbsp;
<a href="entries/Gauss_Jordan.html">Gauss_Jordan</a> &nbsp;
<a href="entries/Echelon_Form.html">Echelon_Form</a> &nbsp;
<a href="entries/QR_Decomposition.html">QR_Decomposition</a> &nbsp;
<a href="entries/Hermite.html">Hermite</a> &nbsp;
<a href="entries/Groebner_Bases.html">Groebner_Bases</a> &nbsp;
<a href="entries/Diophantine_Eqns_Lin_Hom.html">Diophantine_Eqns_Lin_Hom</a> &nbsp;
<a href="entries/Taylor_Models.html">Taylor_Models</a> &nbsp;
<a href="entries/LLL_Basis_Reduction.html">LLL_Basis_Reduction</a> &nbsp;
<a href="entries/Signature_Groebner.html">Signature_Groebner</a> &nbsp;
<strong>Optimization:</strong>
<a href="entries/Simplex.html">Simplex</a> &nbsp;
</div>
<h3>Concurrency</h3>
<div class="list">
<a href="entries/FLP.html">FLP</a> &nbsp;
<a href="entries/Concurrent_Ref_Alg.html">Concurrent_Ref_Alg</a> &nbsp;
<a href="entries/Concurrent_Revisions.html">Concurrent_Revisions</a> &nbsp;
<a href="entries/Store_Buffer_Reduction.html">Store_Buffer_Reduction</a> &nbsp;
<a href="entries/TESL_Language.html">TESL_Language</a> &nbsp;
- <strong>Process Calculi:</strong>
+ <strong>Process calculi:</strong>
<a href="entries/Noninterference_Generic_Unwinding.html">Noninterference_Generic_Unwinding</a> &nbsp;
<a href="entries/AODV.html">AODV</a> &nbsp;
<a href="entries/AWN.html">AWN</a> &nbsp;
<a href="entries/CCS.html">CCS</a> &nbsp;
<a href="entries/Pi_Calculus.html">Pi_Calculus</a> &nbsp;
<a href="entries/Psi_Calculi.html">Psi_Calculi</a> &nbsp;
<a href="entries/Encodability_Process_Calculi.html">Encodability_Process_Calculi</a> &nbsp;
<a href="entries/Circus.html">Circus</a> &nbsp;
<a href="entries/Noninterference_Sequential_Composition.html">Noninterference_Sequential_Composition</a> &nbsp;
<a href="entries/Noninterference_Concurrent_Composition.html">Noninterference_Concurrent_Composition</a> &nbsp;
<a href="entries/Modal_Logics_for_NTS.html">Modal_Logics_for_NTS</a> &nbsp;
<a href="entries/HOL-CSP.html">HOL-CSP</a> &nbsp;
</div>
- <h3>Data Structures</h3>
+ <h3>Data structures</h3>
<div class="list">
<a href="entries/Generic_Deriving.html">Generic_Deriving</a> &nbsp;
<a href="entries/Random_BSTs.html">Random_BSTs</a> &nbsp;
<a href="entries/Randomised_BSTs.html">Randomised_BSTs</a> &nbsp;
<a href="entries/List_Interleaving.html">List_Interleaving</a> &nbsp;
<a href="entries/Refine_Imperative_HOL.html">Refine_Imperative_HOL</a> &nbsp;
<a href="entries/Amortized_Complexity.html">Amortized_Complexity</a> &nbsp;
<a href="entries/Dynamic_Tables.html">Dynamic_Tables</a> &nbsp;
<a href="entries/AVL-Trees.html">AVL-Trees</a> &nbsp;
<a href="entries/BDD.html">BDD</a> &nbsp;
<a href="entries/BinarySearchTree.html">BinarySearchTree</a> &nbsp;
<a href="entries/Splay_Tree.html">Splay_Tree</a> &nbsp;
<a href="entries/Root_Balanced_Tree.html">Root_Balanced_Tree</a> &nbsp;
<a href="entries/Skew_Heap.html">Skew_Heap</a> &nbsp;
<a href="entries/Pairing_Heap.html">Pairing_Heap</a> &nbsp;
<a href="entries/Priority_Queue_Braun.html">Priority_Queue_Braun</a> &nbsp;
<a href="entries/Binomial-Queues.html">Binomial-Queues</a> &nbsp;
<a href="entries/Binomial-Heaps.html">Binomial-Heaps</a> &nbsp;
<a href="entries/Finger-Trees.html">Finger-Trees</a> &nbsp;
<a href="entries/Trie.html">Trie</a> &nbsp;
<a href="entries/FinFun.html">FinFun</a> &nbsp;
<a href="entries/Collections.html">Collections</a> &nbsp;
<a href="entries/Containers.html">Containers</a> &nbsp;
<a href="entries/FileRefinement.html">FileRefinement</a> &nbsp;
<a href="entries/Datatype_Order_Generator.html">Datatype_Order_Generator</a> &nbsp;
<a href="entries/Deriving.html">Deriving</a> &nbsp;
<a href="entries/List-Index.html">List-Index</a> &nbsp;
<a href="entries/List-Infinite.html">List-Infinite</a> &nbsp;
<a href="entries/Matrix.html">Matrix</a> &nbsp;
<a href="entries/Matrix_Tensor.html">Matrix_Tensor</a> &nbsp;
<a href="entries/Huffman.html">Huffman</a> &nbsp;
<a href="entries/Lazy-Lists-II.html">Lazy-Lists-II</a> &nbsp;
<a href="entries/IEEE_Floating_Point.html">IEEE_Floating_Point</a> &nbsp;
<a href="entries/Native_Word.html">Native_Word</a> &nbsp;
<a href="entries/XML.html">XML</a> &nbsp;
<a href="entries/ROBDD.html">ROBDD</a> &nbsp;
<a href="entries/IMAP-CRDT.html">IMAP-CRDT</a> &nbsp;
<a href="entries/Word_Lib.html">Word_Lib</a> &nbsp;
<a href="entries/CRDT.html">CRDT</a> &nbsp;
<a href="entries/KD_Tree.html">KD_Tree</a> &nbsp;
<a href="entries/Taylor_Models.html">Taylor_Models</a> &nbsp;
<a href="entries/Treaps.html">Treaps</a> &nbsp;
<a href="entries/Skip_Lists.html">Skip_Lists</a> &nbsp;
<a href="entries/Weight_Balanced_Trees.html">Weight_Balanced_Trees</a> &nbsp;
<a href="entries/OpSets.html">OpSets</a> &nbsp;
<a href="entries/Optimal_BST.html">Optimal_BST</a> &nbsp;
<a href="entries/Core_DOM.html">Core_DOM</a> &nbsp;
<a href="entries/Auto2_Imperative_HOL.html">Auto2_Imperative_HOL</a> &nbsp;
<a href="entries/IMP2_Binary_Heap.html">IMP2_Binary_Heap</a> &nbsp;
<a href="entries/Priority_Search_Trees.html">Priority_Search_Trees</a> &nbsp;
<a href="entries/Interval_Arithmetic_Word32.html">Interval_Arithmetic_Word32</a> &nbsp;
+ <a href="entries/ADS_Functor.html">ADS_Functor</a> &nbsp;
</div>
- <h3>Functional Programming</h3>
+ <h3>Functional programming</h3>
<div class="list">
<a href="entries/Optics.html">Optics</a> &nbsp;
<a href="entries/CryptHOL.html">CryptHOL</a> &nbsp;
<a href="entries/Probabilistic_While.html">Probabilistic_While</a> &nbsp;
<a href="entries/Monad_Normalisation.html">Monad_Normalisation</a> &nbsp;
<a href="entries/Monomorphic_Monad.html">Monomorphic_Monad</a> &nbsp;
<a href="entries/Show.html">Show</a> &nbsp;
<a href="entries/Certification_Monads.html">Certification_Monads</a> &nbsp;
<a href="entries/Partial_Function_MR.html">Partial_Function_MR</a> &nbsp;
<a href="entries/Lifting_Definition_Option.html">Lifting_Definition_Option</a> &nbsp;
<a href="entries/Coinductive.html">Coinductive</a> &nbsp;
<a href="entries/Stream-Fusion.html">Stream-Fusion</a> &nbsp;
<a href="entries/Tycon.html">Tycon</a> &nbsp;
<a href="entries/Monad_Memo_DP.html">Monad_Memo_DP</a> &nbsp;
<a href="entries/XML.html">XML</a> &nbsp;
<a href="entries/Tail_Recursive_Functions.html">Tail_Recursive_Functions</a> &nbsp;
<a href="entries/Stream_Fusion_Code.html">Stream_Fusion_Code</a> &nbsp;
<a href="entries/Applicative_Lifting.html">Applicative_Lifting</a> &nbsp;
<a href="entries/HOLCF-Prelude.html">HOLCF-Prelude</a> &nbsp;
<a href="entries/BNF_CC.html">BNF_CC</a> &nbsp;
<a href="entries/Binding_Syntax_Theory.html">Binding_Syntax_Theory</a> &nbsp;
<a href="entries/Generalized_Counting_Sort.html">Generalized_Counting_Sort</a> &nbsp;
<a href="entries/Hello_World.html">Hello_World</a> &nbsp;
</div>
<h3>Hardware</h3>
<div class="list">
<a href="entries/SPARCv8.html">SPARCv8</a> &nbsp;
</div>
- <h3>Machine Learning</h3>
+ <h3>Machine learning</h3>
<div class="list">
<a href="entries/Deep_Learning.html">Deep_Learning</a> &nbsp;
</div>
<h3>Networks</h3>
<div class="list">
<a href="entries/UPF_Firewall.html">UPF_Firewall</a> &nbsp;
<a href="entries/IP_Addresses.html">IP_Addresses</a> &nbsp;
<a href="entries/Simple_Firewall.html">Simple_Firewall</a> &nbsp;
<a href="entries/Iptables_Semantics.html">Iptables_Semantics</a> &nbsp;
<a href="entries/Routing.html">Routing</a> &nbsp;
<a href="entries/LOFT.html">LOFT</a> &nbsp;
</div>
- <h3>Programming Languages</h3>
+ <h3>Programming languages</h3>
<div class="list">
<a href="entries/Clean.html">Clean</a> &nbsp;
<a href="entries/Decl_Sem_Fun_PL.html">Decl_Sem_Fun_PL</a> &nbsp;
- <strong>Language Definitions:</strong>
+ <strong>Language definitions:</strong>
<a href="entries/CakeML.html">CakeML</a> &nbsp;
<a href="entries/WebAssembly.html">WebAssembly</a> &nbsp;
<a href="entries/pGCL.html">pGCL</a> &nbsp;
<a href="entries/GPU_Kernel_PL.html">GPU_Kernel_PL</a> &nbsp;
<a href="entries/LightweightJava.html">LightweightJava</a> &nbsp;
<a href="entries/CoreC++.html">CoreC++</a> &nbsp;
<a href="entries/FeatherweightJava.html">FeatherweightJava</a> &nbsp;
<a href="entries/Jinja.html">Jinja</a> &nbsp;
<a href="entries/JinjaThreads.html">JinjaThreads</a> &nbsp;
<a href="entries/Locally-Nameless-Sigma.html">Locally-Nameless-Sigma</a> &nbsp;
<a href="entries/AutoFocus-Stream.html">AutoFocus-Stream</a> &nbsp;
<a href="entries/FocusStreamsCaseStudies.html">FocusStreamsCaseStudies</a> &nbsp;
<a href="entries/Isabelle_Meta_Model.html">Isabelle_Meta_Model</a> &nbsp;
<a href="entries/Simpl.html">Simpl</a> &nbsp;
<a href="entries/Complx.html">Complx</a> &nbsp;
<a href="entries/Safe_OCL.html">Safe_OCL</a> &nbsp;
<a href="entries/Isabelle_C.html">Isabelle_C</a> &nbsp;
- <strong>Lambda Calculi:</strong>
+ <strong>Lambda calculi:</strong>
<a href="entries/Higher_Order_Terms.html">Higher_Order_Terms</a> &nbsp;
<a href="entries/Launchbury.html">Launchbury</a> &nbsp;
<a href="entries/PCF.html">PCF</a> &nbsp;
<a href="entries/POPLmark-deBruijn.html">POPLmark-deBruijn</a> &nbsp;
<a href="entries/Lam-ml-Normalization.html">Lam-ml-Normalization</a> &nbsp;
<a href="entries/LambdaMu.html">LambdaMu</a> &nbsp;
<a href="entries/Binding_Syntax_Theory.html">Binding_Syntax_Theory</a> &nbsp;
<a href="entries/LambdaAuth.html">LambdaAuth</a> &nbsp;
- <strong>Type Systems:</strong>
+ <strong>Type systems:</strong>
<a href="entries/Name_Carrying_Type_Inference.html">Name_Carrying_Type_Inference</a> &nbsp;
<a href="entries/MiniML.html">MiniML</a> &nbsp;
<a href="entries/Possibilistic_Noninterference.html">Possibilistic_Noninterference</a> &nbsp;
<a href="entries/SIFUM_Type_Systems.html">SIFUM_Type_Systems</a> &nbsp;
<a href="entries/Dependent_SIFUM_Type_Systems.html">Dependent_SIFUM_Type_Systems</a> &nbsp;
<a href="entries/Strong_Security.html">Strong_Security</a> &nbsp;
<a href="entries/WHATandWHERE_Security.html">WHATandWHERE_Security</a> &nbsp;
<a href="entries/VolpanoSmith.html">VolpanoSmith</a> &nbsp;
<strong>Logics:</strong>
<a href="entries/ConcurrentIMP.html">ConcurrentIMP</a> &nbsp;
<a href="entries/Refine_Monadic.html">Refine_Monadic</a> &nbsp;
<a href="entries/Automatic_Refinement.html">Automatic_Refinement</a> &nbsp;
<a href="entries/MonoBoolTranAlgebra.html">MonoBoolTranAlgebra</a> &nbsp;
<a href="entries/Simpl.html">Simpl</a> &nbsp;
<a href="entries/Separation_Algebra.html">Separation_Algebra</a> &nbsp;
<a href="entries/Separation_Logic_Imperative_HOL.html">Separation_Logic_Imperative_HOL</a> &nbsp;
<a href="entries/Relational-Incorrectness-Logic.html">Relational-Incorrectness-Logic</a> &nbsp;
<a href="entries/Abstract-Hoare-Logics.html">Abstract-Hoare-Logics</a> &nbsp;
<a href="entries/Kleene_Algebra.html">Kleene_Algebra</a> &nbsp;
<a href="entries/KAT_and_DRA.html">KAT_and_DRA</a> &nbsp;
<a href="entries/KAD.html">KAD</a> &nbsp;
<a href="entries/BytecodeLogicJmlTypes.html">BytecodeLogicJmlTypes</a> &nbsp;
<a href="entries/DataRefinementIBP.html">DataRefinementIBP</a> &nbsp;
<a href="entries/RefinementReactive.html">RefinementReactive</a> &nbsp;
<a href="entries/SIFPL.html">SIFPL</a> &nbsp;
<a href="entries/TLA.html">TLA</a> &nbsp;
<a href="entries/Ribbon_Proofs.html">Ribbon_Proofs</a> &nbsp;
<a href="entries/Separata.html">Separata</a> &nbsp;
<a href="entries/Complx.html">Complx</a> &nbsp;
<a href="entries/Differential_Dynamic_Logic.html">Differential_Dynamic_Logic</a> &nbsp;
<a href="entries/Hoare_Time.html">Hoare_Time</a> &nbsp;
<a href="entries/IMP2.html">IMP2</a> &nbsp;
<a href="entries/UTP.html">UTP</a> &nbsp;
<a href="entries/QHLProver.html">QHLProver</a> &nbsp;
<a href="entries/Differential_Game_Logic.html">Differential_Game_Logic</a> &nbsp;
<strong>Compiling:</strong>
<a href="entries/CakeML_Codegen.html">CakeML_Codegen</a> &nbsp;
<a href="entries/Compiling-Exceptions-Correctly.html">Compiling-Exceptions-Correctly</a> &nbsp;
<a href="entries/NormByEval.html">NormByEval</a> &nbsp;
<a href="entries/Density_Compiler.html">Density_Compiler</a> &nbsp;
<a href="entries/VeriComp.html">VeriComp</a> &nbsp;
- <strong>Static Analysis:</strong>
+ <strong>Static analysis:</strong>
<a href="entries/RIPEMD-160-SPARK.html">RIPEMD-160-SPARK</a> &nbsp;
<a href="entries/Program-Conflict-Analysis.html">Program-Conflict-Analysis</a> &nbsp;
<a href="entries/Shivers-CFA.html">Shivers-CFA</a> &nbsp;
<a href="entries/Slicing.html">Slicing</a> &nbsp;
<a href="entries/HRB-Slicing.html">HRB-Slicing</a> &nbsp;
<a href="entries/InfPathElimination.html">InfPathElimination</a> &nbsp;
<a href="entries/Abs_Int_ITP2012.html">Abs_Int_ITP2012</a> &nbsp;
<strong>Transformations:</strong>
<a href="entries/Call_Arity.html">Call_Arity</a> &nbsp;
<a href="entries/Refine_Imperative_HOL.html">Refine_Imperative_HOL</a> &nbsp;
<a href="entries/WorkerWrapper.html">WorkerWrapper</a> &nbsp;
<a href="entries/Monad_Memo_DP.html">Monad_Memo_DP</a> &nbsp;
<a href="entries/Formal_SSA.html">Formal_SSA</a> &nbsp;
<a href="entries/Minimal_SSA.html">Minimal_SSA</a> &nbsp;
<strong>Misc:</strong>
<a href="entries/JiveDataStoreModel.html">JiveDataStoreModel</a> &nbsp;
<a href="entries/Pop_Refinement.html">Pop_Refinement</a> &nbsp;
<a href="entries/Case_Labeling.html">Case_Labeling</a> &nbsp;
</div>
<h3>Security</h3>
<div class="list">
<a href="entries/Multi_Party_Computation.html">Multi_Party_Computation</a> &nbsp;
<a href="entries/Noninterference_Generic_Unwinding.html">Noninterference_Generic_Unwinding</a> &nbsp;
<a href="entries/Noninterference_Ipurge_Unwinding.html">Noninterference_Ipurge_Unwinding</a> &nbsp;
<a href="entries/UPF.html">UPF</a> &nbsp;
<a href="entries/UPF_Firewall.html">UPF_Firewall</a> &nbsp;
<a href="entries/CISC-Kernel.html">CISC-Kernel</a> &nbsp;
<a href="entries/Noninterference_CSP.html">Noninterference_CSP</a> &nbsp;
<a href="entries/Key_Agreement_Strong_Adversaries.html">Key_Agreement_Strong_Adversaries</a> &nbsp;
<a href="entries/Security_Protocol_Refinement.html">Security_Protocol_Refinement</a> &nbsp;
+ <a href="entries/Attack_Trees.html">Attack_Trees</a> &nbsp;
<a href="entries/Inductive_Confidentiality.html">Inductive_Confidentiality</a> &nbsp;
<a href="entries/Possibilistic_Noninterference.html">Possibilistic_Noninterference</a> &nbsp;
<a href="entries/SIFUM_Type_Systems.html">SIFUM_Type_Systems</a> &nbsp;
<a href="entries/Dependent_SIFUM_Type_Systems.html">Dependent_SIFUM_Type_Systems</a> &nbsp;
<a href="entries/Dependent_SIFUM_Refinement.html">Dependent_SIFUM_Refinement</a> &nbsp;
<a href="entries/Relational-Incorrectness-Logic.html">Relational-Incorrectness-Logic</a> &nbsp;
<a href="entries/Strong_Security.html">Strong_Security</a> &nbsp;
<a href="entries/WHATandWHERE_Security.html">WHATandWHERE_Security</a> &nbsp;
<a href="entries/VolpanoSmith.html">VolpanoSmith</a> &nbsp;
<a href="entries/SIFPL.html">SIFPL</a> &nbsp;
<a href="entries/HotelKeyCards.html">HotelKeyCards</a> &nbsp;
<a href="entries/InformationFlowSlicing.html">InformationFlowSlicing</a> &nbsp;
<a href="entries/InformationFlowSlicing_Inter.html">InformationFlowSlicing_Inter</a> &nbsp;
<a href="entries/CryptoBasedCompositionalProperties.html">CryptoBasedCompositionalProperties</a> &nbsp;
<a href="entries/Probabilistic_Noninterference.html">Probabilistic_Noninterference</a> &nbsp;
<a href="entries/HyperCTL.html">HyperCTL</a> &nbsp;
<a href="entries/Bounded_Deducibility_Security.html">Bounded_Deducibility_Security</a> &nbsp;
<a href="entries/Network_Security_Policy_Verification.html">Network_Security_Policy_Verification</a> &nbsp;
<a href="entries/Noninterference_Inductive_Unwinding.html">Noninterference_Inductive_Unwinding</a> &nbsp;
<a href="entries/Password_Authentication_Protocol.html">Password_Authentication_Protocol</a> &nbsp;
<a href="entries/Noninterference_Sequential_Composition.html">Noninterference_Sequential_Composition</a> &nbsp;
<a href="entries/Noninterference_Concurrent_Composition.html">Noninterference_Concurrent_Composition</a> &nbsp;
<a href="entries/SPARCv8.html">SPARCv8</a> &nbsp;
<a href="entries/Modular_Assembly_Kit_Security.html">Modular_Assembly_Kit_Security</a> &nbsp;
<a href="entries/LambdaAuth.html">LambdaAuth</a> &nbsp;
<strong>Cryptography:</strong>
<a href="entries/Game_Based_Crypto.html">Game_Based_Crypto</a> &nbsp;
<a href="entries/Sigma_Commit_Crypto.html">Sigma_Commit_Crypto</a> &nbsp;
<a href="entries/CryptHOL.html">CryptHOL</a> &nbsp;
<a href="entries/Constructive_Cryptography.html">Constructive_Cryptography</a> &nbsp;
<a href="entries/RSAPSS.html">RSAPSS</a> &nbsp;
<a href="entries/Elliptic_Curves_Group_Law.html">Elliptic_Curves_Group_Law</a> &nbsp;
</div>
<h3>Semantics</h3>
<div class="list">
<a href="entries/Launchbury.html">Launchbury</a> &nbsp;
<a href="entries/Clean.html">Clean</a> &nbsp;
<a href="entries/Transformer_Semantics.html">Transformer_Semantics</a> &nbsp;
<a href="entries/HOL-CSP.html">HOL-CSP</a> &nbsp;
<a href="entries/QHLProver.html">QHLProver</a> &nbsp;
<a href="entries/TESL_Language.html">TESL_Language</a> &nbsp;
<a href="entries/Isabelle_C.html">Isabelle_C</a> &nbsp;
</div>
- <h3>System Description Languages</h3>
+ <h3>System description languages</h3>
<div class="list">
<a href="entries/Circus.html">Circus</a> &nbsp;
<a href="entries/ComponentDependencies.html">ComponentDependencies</a> &nbsp;
<a href="entries/Promela.html">Promela</a> &nbsp;
<a href="entries/Featherweight_OCL.html">Featherweight_OCL</a> &nbsp;
<a href="entries/DynamicArchitectures.html">DynamicArchitectures</a> &nbsp;
<a href="entries/Architectural_Design_Patterns.html">Architectural_Design_Patterns</a> &nbsp;
<a href="entries/TESL_Language.html">TESL_Language</a> &nbsp;
</div>
<h2>Logic</h2>
<div class="list">
</div>
<h3>Philosophical aspects</h3>
<div class="list">
<a href="entries/GoedelGod.html">GoedelGod</a> &nbsp;
<a href="entries/Types_Tableaus_and_Goedels_God.html">Types_Tableaus_and_Goedels_God</a> &nbsp;
<a href="entries/GewirthPGCProof.html">GewirthPGCProof</a> &nbsp;
<a href="entries/Lowe_Ontological_Argument.html">Lowe_Ontological_Argument</a> &nbsp;
<a href="entries/AnselmGod.html">AnselmGod</a> &nbsp;
<a href="entries/PLM.html">PLM</a> &nbsp;
<a href="entries/Aristotles_Assertoric_Syllogistic.html">Aristotles_Assertoric_Syllogistic</a> &nbsp;
</div>
<h3>General logic</h3>
<div class="list">
<strong>Classical propositional logic:</strong>
<a href="entries/Free-Boolean-Algebra.html">Free-Boolean-Algebra</a> &nbsp;
<strong>Classical first-order logic:</strong>
<a href="entries/FOL-Fitting.html">FOL-Fitting</a> &nbsp;
<strong>Decidability of theories:</strong>
<a href="entries/MSO_Regex_Equivalence.html">MSO_Regex_Equivalence</a> &nbsp;
<a href="entries/Formula_Derivatives.html">Formula_Derivatives</a> &nbsp;
<a href="entries/Presburger-Automata.html">Presburger-Automata</a> &nbsp;
<a href="entries/LinearQuantifierElim.html">LinearQuantifierElim</a> &nbsp;
- <a href="entries/Nat-Interval-Logic.html">Nat-Interval-Logic</a> &nbsp;
<strong>Mechanization of proofs:</strong>
<a href="entries/Boolean_Expression_Checkers.html">Boolean_Expression_Checkers</a> &nbsp;
<a href="entries/Verified-Prover.html">Verified-Prover</a> &nbsp;
<a href="entries/Sort_Encodings.html">Sort_Encodings</a> &nbsp;
<a href="entries/PropResPI.html">PropResPI</a> &nbsp;
<a href="entries/Resolution_FOL.html">Resolution_FOL</a> &nbsp;
<a href="entries/FOL_Harrison.html">FOL_Harrison</a> &nbsp;
<a href="entries/Ordered_Resolution_Prover.html">Ordered_Resolution_Prover</a> &nbsp;
<a href="entries/Functional_Ordered_Resolution_Prover.html">Functional_Ordered_Resolution_Prover</a> &nbsp;
<a href="entries/Binding_Syntax_Theory.html">Binding_Syntax_Theory</a> &nbsp;
<a href="entries/Saturation_Framework.html">Saturation_Framework</a> &nbsp;
<strong>Lambda calculus:</strong>
<a href="entries/LambdaMu.html">LambdaMu</a> &nbsp;
<strong>Logics of knowledge and belief:</strong>
<a href="entries/Epistemic_Logic.html">Epistemic_Logic</a> &nbsp;
<strong>Temporal logic:</strong>
+ <a href="entries/Nat-Interval-Logic.html">Nat-Interval-Logic</a> &nbsp;
<a href="entries/LTL.html">LTL</a> &nbsp;
<a href="entries/HyperCTL.html">HyperCTL</a> &nbsp;
<a href="entries/Allen_Calculus.html">Allen_Calculus</a> &nbsp;
<a href="entries/MFOTL_Monitor.html">MFOTL_Monitor</a> &nbsp;
<strong>Modal logic:</strong>
<a href="entries/Modal_Logics_for_NTS.html">Modal_Logics_for_NTS</a> &nbsp;
<a href="entries/Differential_Dynamic_Logic.html">Differential_Dynamic_Logic</a> &nbsp;
<a href="entries/Hybrid_Multi_Lane_Spatial_Logic.html">Hybrid_Multi_Lane_Spatial_Logic</a> &nbsp;
<a href="entries/Hybrid_Logic.html">Hybrid_Logic</a> &nbsp;
<a href="entries/MFODL_Monitor_Optimized.html">MFODL_Monitor_Optimized</a> &nbsp;
<strong>Paraconsistent logics:</strong>
<a href="entries/Paraconsistency.html">Paraconsistency</a> &nbsp;
</div>
<h3>Computability</h3>
<div class="list">
<a href="entries/Universal_Turing_Machine.html">Universal_Turing_Machine</a> &nbsp;
<a href="entries/Recursion-Theory-I.html">Recursion-Theory-I</a> &nbsp;
<a href="entries/Minsky_Machines.html">Minsky_Machines</a> &nbsp;
</div>
<h3>Set theory</h3>
<div class="list">
<a href="entries/Ordinal.html">Ordinal</a> &nbsp;
<a href="entries/Ordinals_and_Cardinals.html">Ordinals_and_Cardinals</a> &nbsp;
<a href="entries/HereditarilyFinite.html">HereditarilyFinite</a> &nbsp;
+ <a href="entries/ZFC_in_HOL.html">ZFC_in_HOL</a> &nbsp;
</div>
<h3>Proof theory</h3>
<div class="list">
<a href="entries/Propositional_Proof_Systems.html">Propositional_Proof_Systems</a> &nbsp;
<a href="entries/Completeness.html">Completeness</a> &nbsp;
<a href="entries/SequentInvertibility.html">SequentInvertibility</a> &nbsp;
<a href="entries/Incompleteness.html">Incompleteness</a> &nbsp;
<a href="entries/Abstract_Completeness.html">Abstract_Completeness</a> &nbsp;
<a href="entries/SuperCalc.html">SuperCalc</a> &nbsp;
<a href="entries/Incredible_Proof_Machine.html">Incredible_Proof_Machine</a> &nbsp;
<a href="entries/Surprise_Paradox.html">Surprise_Paradox</a> &nbsp;
<a href="entries/Abstract_Soundness.html">Abstract_Soundness</a> &nbsp;
<a href="entries/FOL_Seq_Calc1.html">FOL_Seq_Calc1</a> &nbsp;
</div>
<h3>Rewriting</h3>
<div class="list">
<a href="entries/CakeML_Codegen.html">CakeML_Codegen</a> &nbsp;
<a href="entries/Monad_Normalisation.html">Monad_Normalisation</a> &nbsp;
<a href="entries/Lambda_Free_RPOs.html">Lambda_Free_RPOs</a> &nbsp;
<a href="entries/Lambda_Free_KBOs.html">Lambda_Free_KBOs</a> &nbsp;
<a href="entries/Lambda_Free_EPO.html">Lambda_Free_EPO</a> &nbsp;
<a href="entries/Nested_Multisets_Ordinals.html">Nested_Multisets_Ordinals</a> &nbsp;
<a href="entries/Abstract-Rewriting.html">Abstract-Rewriting</a> &nbsp;
<a href="entries/First_Order_Terms.html">First_Order_Terms</a> &nbsp;
<a href="entries/Decreasing-Diagrams.html">Decreasing-Diagrams</a> &nbsp;
<a href="entries/Decreasing-Diagrams-II.html">Decreasing-Diagrams-II</a> &nbsp;
<a href="entries/Rewriting_Z.html">Rewriting_Z</a> &nbsp;
<a href="entries/Graph_Saturation.html">Graph_Saturation</a> &nbsp;
<a href="entries/Goodstein_Lambda.html">Goodstein_Lambda</a> &nbsp;
</div>
<h2>Mathematics</h2>
<div class="list">
</div>
<h3>Order</h3>
<div class="list">
<a href="entries/LatticeProperties.html">LatticeProperties</a> &nbsp;
<a href="entries/Stone_Algebras.html">Stone_Algebras</a> &nbsp;
<a href="entries/Allen_Calculus.html">Allen_Calculus</a> &nbsp;
<a href="entries/Order_Lattice_Props.html">Order_Lattice_Props</a> &nbsp;
<a href="entries/Complete_Non_Orders.html">Complete_Non_Orders</a> &nbsp;
<a href="entries/Szpilrajn.html">Szpilrajn</a> &nbsp;
</div>
<h3>Algebra</h3>
<div class="list">
<a href="entries/Optics.html">Optics</a> &nbsp;
<a href="entries/Subresultants.html">Subresultants</a> &nbsp;
<a href="entries/Buildings.html">Buildings</a> &nbsp;
<a href="entries/Algebraic_VCs.html">Algebraic_VCs</a> &nbsp;
<a href="entries/C2KA_DistributedSystems.html">C2KA_DistributedSystems</a> &nbsp;
<a href="entries/Multirelations.html">Multirelations</a> &nbsp;
<a href="entries/Residuated_Lattices.html">Residuated_Lattices</a> &nbsp;
<a href="entries/PseudoHoops.html">PseudoHoops</a> &nbsp;
<a href="entries/Impossible_Geometry.html">Impossible_Geometry</a> &nbsp;
<a href="entries/Gauss-Jordan-Elim-Fun.html">Gauss-Jordan-Elim-Fun</a> &nbsp;
<a href="entries/Matrix_Tensor.html">Matrix_Tensor</a> &nbsp;
<a href="entries/Kleene_Algebra.html">Kleene_Algebra</a> &nbsp;
<a href="entries/KAT_and_DRA.html">KAT_and_DRA</a> &nbsp;
<a href="entries/KAD.html">KAD</a> &nbsp;
<a href="entries/Regular_Algebras.html">Regular_Algebras</a> &nbsp;
<a href="entries/Free-Groups.html">Free-Groups</a> &nbsp;
<a href="entries/CofGroups.html">CofGroups</a> &nbsp;
<a href="entries/Group-Ring-Module.html">Group-Ring-Module</a> &nbsp;
<a href="entries/Robbins-Conjecture.html">Robbins-Conjecture</a> &nbsp;
<a href="entries/Valuation.html">Valuation</a> &nbsp;
<a href="entries/Rank_Nullity_Theorem.html">Rank_Nullity_Theorem</a> &nbsp;
<a href="entries/Polynomials.html">Polynomials</a> &nbsp;
<a href="entries/Relation_Algebra.html">Relation_Algebra</a> &nbsp;
<a href="entries/PSemigroupsConvolution.html">PSemigroupsConvolution</a> &nbsp;
<a href="entries/Secondary_Sylow.html">Secondary_Sylow</a> &nbsp;
<a href="entries/Jordan_Hoelder.html">Jordan_Hoelder</a> &nbsp;
<a href="entries/Cayley_Hamilton.html">Cayley_Hamilton</a> &nbsp;
<a href="entries/VectorSpace.html">VectorSpace</a> &nbsp;
<a href="entries/Echelon_Form.html">Echelon_Form</a> &nbsp;
<a href="entries/QR_Decomposition.html">QR_Decomposition</a> &nbsp;
<a href="entries/Hermite.html">Hermite</a> &nbsp;
<a href="entries/Rep_Fin_Groups.html">Rep_Fin_Groups</a> &nbsp;
<a href="entries/Jordan_Normal_Form.html">Jordan_Normal_Form</a> &nbsp;
<a href="entries/Algebraic_Numbers.html">Algebraic_Numbers</a> &nbsp;
<a href="entries/Polynomial_Interpolation.html">Polynomial_Interpolation</a> &nbsp;
<a href="entries/Polynomial_Factorization.html">Polynomial_Factorization</a> &nbsp;
<a href="entries/Perron_Frobenius.html">Perron_Frobenius</a> &nbsp;
<a href="entries/Stochastic_Matrices.html">Stochastic_Matrices</a> &nbsp;
<a href="entries/Groebner_Bases.html">Groebner_Bases</a> &nbsp;
<a href="entries/Nullstellensatz.html">Nullstellensatz</a> &nbsp;
<a href="entries/Mason_Stothers.html">Mason_Stothers</a> &nbsp;
<a href="entries/Berlekamp_Zassenhaus.html">Berlekamp_Zassenhaus</a> &nbsp;
<a href="entries/Stone_Relation_Algebras.html">Stone_Relation_Algebras</a> &nbsp;
<a href="entries/Stone_Kleene_Relation_Algebras.html">Stone_Kleene_Relation_Algebras</a> &nbsp;
<a href="entries/Orbit_Stabiliser.html">Orbit_Stabiliser</a> &nbsp;
<a href="entries/Dirichlet_L.html">Dirichlet_L</a> &nbsp;
<a href="entries/Symmetric_Polynomials.html">Symmetric_Polynomials</a> &nbsp;
<a href="entries/Taylor_Models.html">Taylor_Models</a> &nbsp;
<a href="entries/LLL_Basis_Reduction.html">LLL_Basis_Reduction</a> &nbsp;
<a href="entries/LLL_Factorization.html">LLL_Factorization</a> &nbsp;
<a href="entries/Localization_Ring.html">Localization_Ring</a> &nbsp;
<a href="entries/Quaternions.html">Quaternions</a> &nbsp;
<a href="entries/Octonions.html">Octonions</a> &nbsp;
<a href="entries/Aggregation_Algebras.html">Aggregation_Algebras</a> &nbsp;
<a href="entries/Signature_Groebner.html">Signature_Groebner</a> &nbsp;
<a href="entries/Quantales.html">Quantales</a> &nbsp;
<a href="entries/Transformer_Semantics.html">Transformer_Semantics</a> &nbsp;
<a href="entries/Farkas.html">Farkas</a> &nbsp;
<a href="entries/Groebner_Macaulay.html">Groebner_Macaulay</a> &nbsp;
<a href="entries/Linear_Inequalities.html">Linear_Inequalities</a> &nbsp;
<a href="entries/Linear_Programming.html">Linear_Programming</a> &nbsp;
<a href="entries/Jacobson_Basic_Algebra.html">Jacobson_Basic_Algebra</a> &nbsp;
<a href="entries/Hybrid_Systems_VCs.html">Hybrid_Systems_VCs</a> &nbsp;
<a href="entries/Subset_Boolean_Algebras.html">Subset_Boolean_Algebras</a> &nbsp;
</div>
<h3>Analysis</h3>
<div class="list">
<a href="entries/Fourier.html">Fourier</a> &nbsp;
<a href="entries/E_Transcendental.html">E_Transcendental</a> &nbsp;
<a href="entries/Liouville_Numbers.html">Liouville_Numbers</a> &nbsp;
<a href="entries/Descartes_Sign_Rule.html">Descartes_Sign_Rule</a> &nbsp;
<a href="entries/Euler_MacLaurin.html">Euler_MacLaurin</a> &nbsp;
<a href="entries/Real_Impl.html">Real_Impl</a> &nbsp;
<a href="entries/Lower_Semicontinuous.html">Lower_Semicontinuous</a> &nbsp;
<a href="entries/Affine_Arithmetic.html">Affine_Arithmetic</a> &nbsp;
<a href="entries/Laplace_Transform.html">Laplace_Transform</a> &nbsp;
<a href="entries/Cauchy.html">Cauchy</a> &nbsp;
<a href="entries/Integration.html">Integration</a> &nbsp;
<a href="entries/Ordinary_Differential_Equations.html">Ordinary_Differential_Equations</a> &nbsp;
<a href="entries/Polynomials.html">Polynomials</a> &nbsp;
<a href="entries/Sqrt_Babylonian.html">Sqrt_Babylonian</a> &nbsp;
<a href="entries/Sturm_Sequences.html">Sturm_Sequences</a> &nbsp;
<a href="entries/Sturm_Tarski.html">Sturm_Tarski</a> &nbsp;
<a href="entries/Special_Function_Bounds.html">Special_Function_Bounds</a> &nbsp;
<a href="entries/Landau_Symbols.html">Landau_Symbols</a> &nbsp;
<a href="entries/Error_Function.html">Error_Function</a> &nbsp;
<a href="entries/Akra_Bazzi.html">Akra_Bazzi</a> &nbsp;
<a href="entries/Zeta_Function.html">Zeta_Function</a> &nbsp;
<a href="entries/Linear_Recurrences.html">Linear_Recurrences</a> &nbsp;
<a href="entries/Cartan_FP.html">Cartan_FP</a> &nbsp;
<a href="entries/Deep_Learning.html">Deep_Learning</a> &nbsp;
<a href="entries/Stirling_Formula.html">Stirling_Formula</a> &nbsp;
<a href="entries/Lp.html">Lp</a> &nbsp;
<a href="entries/Bernoulli.html">Bernoulli</a> &nbsp;
<a href="entries/Winding_Number_Eval.html">Winding_Number_Eval</a> &nbsp;
<a href="entries/Count_Complex_Roots.html">Count_Complex_Roots</a> &nbsp;
<a href="entries/Taylor_Models.html">Taylor_Models</a> &nbsp;
<a href="entries/Green.html">Green</a> &nbsp;
<a href="entries/Irrationality_J_Hancl.html">Irrationality_J_Hancl</a> &nbsp;
<a href="entries/Budan_Fourier.html">Budan_Fourier</a> &nbsp;
<a href="entries/Smooth_Manifolds.html">Smooth_Manifolds</a> &nbsp;
<a href="entries/Transcendence_Series_Hancl_Rucki.html">Transcendence_Series_Hancl_Rucki</a> &nbsp;
<a href="entries/Hybrid_Systems_VCs.html">Hybrid_Systems_VCs</a> &nbsp;
<a href="entries/Poincare_Bendixson.html">Poincare_Bendixson</a> &nbsp;
</div>
- <h3>Probability Theory</h3>
+ <h3>Probability theory</h3>
<div class="list">
<a href="entries/DiscretePricing.html">DiscretePricing</a> &nbsp;
<a href="entries/CryptHOL.html">CryptHOL</a> &nbsp;
<a href="entries/Constructive_Cryptography.html">Constructive_Cryptography</a> &nbsp;
<a href="entries/Probabilistic_While.html">Probabilistic_While</a> &nbsp;
<a href="entries/Markov_Models.html">Markov_Models</a> &nbsp;
<a href="entries/Density_Compiler.html">Density_Compiler</a> &nbsp;
<a href="entries/Probabilistic_Timed_Automata.html">Probabilistic_Timed_Automata</a> &nbsp;
<a href="entries/Hidden_Markov_Models.html">Hidden_Markov_Models</a> &nbsp;
<a href="entries/Random_Graph_Subgraph_Threshold.html">Random_Graph_Subgraph_Threshold</a> &nbsp;
<a href="entries/Ergodic_Theory.html">Ergodic_Theory</a> &nbsp;
<a href="entries/Source_Coding_Theorem.html">Source_Coding_Theorem</a> &nbsp;
<a href="entries/Buffons_Needle.html">Buffons_Needle</a> &nbsp;
</div>
- <h3>Number Theory</h3>
+ <h3>Number theory</h3>
<div class="list">
<a href="entries/Arith_Prog_Rel_Primes.html">Arith_Prog_Rel_Primes</a> &nbsp;
<a href="entries/Pell.html">Pell</a> &nbsp;
<a href="entries/Minkowskis_Theorem.html">Minkowskis_Theorem</a> &nbsp;
<a href="entries/E_Transcendental.html">E_Transcendental</a> &nbsp;
<a href="entries/Pi_Transcendental.html">Pi_Transcendental</a> &nbsp;
<a href="entries/Liouville_Numbers.html">Liouville_Numbers</a> &nbsp;
<a href="entries/Prime_Harmonic_Series.html">Prime_Harmonic_Series</a> &nbsp;
<a href="entries/Fermat3_4.html">Fermat3_4</a> &nbsp;
<a href="entries/Perfect-Number-Thm.html">Perfect-Number-Thm</a> &nbsp;
<a href="entries/SumSquares.html">SumSquares</a> &nbsp;
<a href="entries/Lehmer.html">Lehmer</a> &nbsp;
<a href="entries/Pratt_Certificate.html">Pratt_Certificate</a> &nbsp;
<a href="entries/Dirichlet_Series.html">Dirichlet_Series</a> &nbsp;
<a href="entries/Gauss_Sums.html">Gauss_Sums</a> &nbsp;
<a href="entries/Zeta_Function.html">Zeta_Function</a> &nbsp;
<a href="entries/Stern_Brocot.html">Stern_Brocot</a> &nbsp;
<a href="entries/Bertrands_Postulate.html">Bertrands_Postulate</a> &nbsp;
<a href="entries/Bernoulli.html">Bernoulli</a> &nbsp;
<a href="entries/Diophantine_Eqns_Lin_Hom.html">Diophantine_Eqns_Lin_Hom</a> &nbsp;
<a href="entries/Dirichlet_L.html">Dirichlet_L</a> &nbsp;
<a href="entries/Mersenne_Primes.html">Mersenne_Primes</a> &nbsp;
<a href="entries/Irrationality_J_Hancl.html">Irrationality_J_Hancl</a> &nbsp;
<a href="entries/Prime_Number_Theorem.html">Prime_Number_Theorem</a> &nbsp;
<a href="entries/Probabilistic_Prime_Tests.html">Probabilistic_Prime_Tests</a> &nbsp;
<a href="entries/Prime_Distribution_Elementary.html">Prime_Distribution_Elementary</a> &nbsp;
<a href="entries/Transcendence_Series_Hancl_Rucki.html">Transcendence_Series_Hancl_Rucki</a> &nbsp;
<a href="entries/Zeta_3_Irrational.html">Zeta_3_Irrational</a> &nbsp;
<a href="entries/Furstenberg_Topology.html">Furstenberg_Topology</a> &nbsp;
+ <a href="entries/Lucas_Theorem.html">Lucas_Theorem</a> &nbsp;
</div>
- <h3>Games and Economics</h3>
+ <h3>Games and economics</h3>
<div class="list">
<a href="entries/DiscretePricing.html">DiscretePricing</a> &nbsp;
<a href="entries/ArrowImpossibilityGS.html">ArrowImpossibilityGS</a> &nbsp;
<a href="entries/SenSocialChoice.html">SenSocialChoice</a> &nbsp;
<a href="entries/Vickrey_Clarke_Groves.html">Vickrey_Clarke_Groves</a> &nbsp;
<a href="entries/Parity_Game.html">Parity_Game</a> &nbsp;
<a href="entries/First_Welfare_Theorem.html">First_Welfare_Theorem</a> &nbsp;
<a href="entries/Randomised_Social_Choice.html">Randomised_Social_Choice</a> &nbsp;
<a href="entries/SDS_Impossibility.html">SDS_Impossibility</a> &nbsp;
<a href="entries/Stable_Matching.html">Stable_Matching</a> &nbsp;
<a href="entries/Fishburn_Impossibility.html">Fishburn_Impossibility</a> &nbsp;
<a href="entries/Neumann_Morgenstern_Utility.html">Neumann_Morgenstern_Utility</a> &nbsp;
</div>
<h3>Geometry</h3>
<div class="list">
<a href="entries/Complex_Geometry.html">Complex_Geometry</a> &nbsp;
<a href="entries/Poincare_Disc.html">Poincare_Disc</a> &nbsp;
<a href="entries/Minkowskis_Theorem.html">Minkowskis_Theorem</a> &nbsp;
<a href="entries/Buildings.html">Buildings</a> &nbsp;
<a href="entries/Chord_Segments.html">Chord_Segments</a> &nbsp;
<a href="entries/Triangle.html">Triangle</a> &nbsp;
<a href="entries/Impossible_Geometry.html">Impossible_Geometry</a> &nbsp;
<a href="entries/Tarskis_Geometry.html">Tarskis_Geometry</a> &nbsp;
<a href="entries/General-Triangle.html">General-Triangle</a> &nbsp;
<a href="entries/Nullstellensatz.html">Nullstellensatz</a> &nbsp;
<a href="entries/Ptolemys_Theorem.html">Ptolemys_Theorem</a> &nbsp;
<a href="entries/Buffons_Needle.html">Buffons_Needle</a> &nbsp;
<a href="entries/Stewart_Apollonius.html">Stewart_Apollonius</a> &nbsp;
<a href="entries/Gromov_Hyperbolicity.html">Gromov_Hyperbolicity</a> &nbsp;
<a href="entries/Projective_Geometry.html">Projective_Geometry</a> &nbsp;
<a href="entries/Quaternions.html">Quaternions</a> &nbsp;
<a href="entries/Octonions.html">Octonions</a> &nbsp;
</div>
<h3>Topology</h3>
<div class="list">
<a href="entries/Topology.html">Topology</a> &nbsp;
<a href="entries/Knot_Theory.html">Knot_Theory</a> &nbsp;
<a href="entries/Kuratowski_Closure_Complement.html">Kuratowski_Closure_Complement</a> &nbsp;
<a href="entries/Smooth_Manifolds.html">Smooth_Manifolds</a> &nbsp;
</div>
- <h3>Graph Theory</h3>
+ <h3>Graph theory</h3>
<div class="list">
<a href="entries/Flow_Networks.html">Flow_Networks</a> &nbsp;
<a href="entries/Prpu_Maxflow.html">Prpu_Maxflow</a> &nbsp;
<a href="entries/MFMC_Countable.html">MFMC_Countable</a> &nbsp;
<a href="entries/ShortestPath.html">ShortestPath</a> &nbsp;
<a href="entries/Gabow_SCC.html">Gabow_SCC</a> &nbsp;
<a href="entries/Graph_Theory.html">Graph_Theory</a> &nbsp;
<a href="entries/Planarity_Certificates.html">Planarity_Certificates</a> &nbsp;
<a href="entries/Max-Card-Matching.html">Max-Card-Matching</a> &nbsp;
<a href="entries/Girth_Chromatic.html">Girth_Chromatic</a> &nbsp;
<a href="entries/Random_Graph_Subgraph_Threshold.html">Random_Graph_Subgraph_Threshold</a> &nbsp;
<a href="entries/Flyspeck-Tame.html">Flyspeck-Tame</a> &nbsp;
<a href="entries/Koenigsberg_Friendship.html">Koenigsberg_Friendship</a> &nbsp;
<a href="entries/Tree_Decomposition.html">Tree_Decomposition</a> &nbsp;
<a href="entries/Menger.html">Menger</a> &nbsp;
<a href="entries/Parity_Game.html">Parity_Game</a> &nbsp;
<a href="entries/Factored_Transition_System_Bounding.html">Factored_Transition_System_Bounding</a> &nbsp;
<a href="entries/Graph_Saturation.html">Graph_Saturation</a> &nbsp;
</div>
<h3>Combinatorics</h3>
<div class="list">
<a href="entries/Card_Equiv_Relations.html">Card_Equiv_Relations</a> &nbsp;
<a href="entries/Twelvefold_Way.html">Twelvefold_Way</a> &nbsp;
<a href="entries/Card_Multisets.html">Card_Multisets</a> &nbsp;
<a href="entries/Card_Partitions.html">Card_Partitions</a> &nbsp;
<a href="entries/Card_Number_Partitions.html">Card_Number_Partitions</a> &nbsp;
<a href="entries/Well_Quasi_Orders.html">Well_Quasi_Orders</a> &nbsp;
<a href="entries/Marriage.html">Marriage</a> &nbsp;
<a href="entries/Bondy.html">Bondy</a> &nbsp;
<a href="entries/Ramsey-Infinite.html">Ramsey-Infinite</a> &nbsp;
<a href="entries/Derangements.html">Derangements</a> &nbsp;
<a href="entries/Euler_Partition.html">Euler_Partition</a> &nbsp;
<a href="entries/Discrete_Summation.html">Discrete_Summation</a> &nbsp;
<a href="entries/Open_Induction.html">Open_Induction</a> &nbsp;
<a href="entries/Latin_Square.html">Latin_Square</a> &nbsp;
<a href="entries/Bell_Numbers_Spivey.html">Bell_Numbers_Spivey</a> &nbsp;
<a href="entries/Catalan_Numbers.html">Catalan_Numbers</a> &nbsp;
<a href="entries/Falling_Factorial_Sum.html">Falling_Factorial_Sum</a> &nbsp;
<a href="entries/Matroids.html">Matroids</a> &nbsp;
</div>
- <h3>Category Theory</h3>
+ <h3>Category theory</h3>
<div class="list">
<a href="entries/Category3.html">Category3</a> &nbsp;
<a href="entries/MonoidalCategory.html">MonoidalCategory</a> &nbsp;
<a href="entries/Category.html">Category</a> &nbsp;
<a href="entries/Category2.html">Category2</a> &nbsp;
<a href="entries/AxiomaticCategoryTheory.html">AxiomaticCategoryTheory</a> &nbsp;
<a href="entries/Bicategory.html">Bicategory</a> &nbsp;
</div>
<h3>Physics</h3>
<div class="list">
<a href="entries/No_FTL_observers.html">No_FTL_observers</a> &nbsp;
</div>
- <h3>Set Theory</h3>
- <div class="list">
- <a href="entries/ZFC_in_HOL.html">ZFC_in_HOL</a> &nbsp;
- </div>
<h3>Misc</h3>
<div class="list">
<a href="entries/FunWithFunctions.html">FunWithFunctions</a> &nbsp;
<a href="entries/FunWithTilings.html">FunWithTilings</a> &nbsp;
<a href="entries/IMO2019.html">IMO2019</a> &nbsp;
</div>
<h2>Tools</h2>
<div class="list">
<a href="entries/Monad_Normalisation.html">Monad_Normalisation</a> &nbsp;
<a href="entries/Constructor_Funs.html">Constructor_Funs</a> &nbsp;
<a href="entries/Lazy_Case.html">Lazy_Case</a> &nbsp;
<a href="entries/Dict_Construction.html">Dict_Construction</a> &nbsp;
<a href="entries/Case_Labeling.html">Case_Labeling</a> &nbsp;
<a href="entries/DPT-SAT-Solver.html">DPT-SAT-Solver</a> &nbsp;
<a href="entries/Nominal2.html">Nominal2</a> &nbsp;
<a href="entries/Separata.html">Separata</a> &nbsp;
<a href="entries/Proof_Strategy_Language.html">Proof_Strategy_Language</a> &nbsp;
<a href="entries/Diophantine_Eqns_Lin_Hom.html">Diophantine_Eqns_Lin_Hom</a> &nbsp;
<a href="entries/BNF_Operations.html">BNF_Operations</a> &nbsp;
<a href="entries/BNF_CC.html">BNF_CC</a> &nbsp;
<a href="entries/Auto2_HOL.html">Auto2_HOL</a> &nbsp;
<a href="entries/Isabelle_C.html">Isabelle_C</a> &nbsp;
</div>
</td>
</tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
</body>
</html>
\ No newline at end of file
